Should discussing AI-generated programs be allowed in the Haiku forums?
The New Category for “AI-Assisted” Software? thread discusses this, but is a bit spicy. I want to try an experiment. This thread will be heavily moderated by me.
It is up to you whether you want to participate or not. If you believe I am rejecting a post that is valuable, I recommend putting it in New Category for “AI-Assisted” Software? or similar.
I’ve got a general idea of how I will moderate this thread, but I have never moderated before, let alone tried an experiment like this, so what I will do isn’t set in stone yet. The general principles go like this:
Off-topic posts
Feel free to discuss off-topic subjects as they come up! You can discuss whether you think me moderating this thread in this way is good, whether you think Clair Obscur losing a major gaming award to LLM-generated content is fair, and so on. Just put them in a details tag at the end of your post. If you’re unfamiliar with details tags, the syntax is like this:
[details="Summary"]
This text will be hidden by default.
[/details]
and they look like this:
Summary
This text will be hidden by default.
If a post doesn’t put something in a details tag when I think it should, I will edit it to do so. If a mildly off-topic comment at the end of a post ends up sparking a whole side discussion, I’ll probably move that comment, and all later comments referencing it, into their own details box.
Aggressive posts
If I think the tone of a comment is too aggressive, I will probably deal with it in one of these ways, in order of preference:
Edit it. Keep the original opinion intact but soften the language.
Remove it. Also follow up with the poster about how it can be improved.
Remove it and don’t follow up with the poster. I’ll probably use this sparingly if at all. I want everyone to be able to voice their opinion and make arguments for and against other people’s ideas on this thread! Just not with an aggressive tone.
If I lose energy
All this may be too much for me to moderate. Again, I have never moderated, let alone done an experiment like this. I may lock the thread until I have energy to moderate again. Or, I might switch this thread to being moderated normally. I’ll make a post if I decide to make that switch, so you’ll know the thread is no longer being moderated by me.
Once again, it is up to you if you would like to participate in this thread or not. I hope this yields a more productive conversation, but who knows what will actually happen! This is still a topic where many people have strong and contradictory opinions, so this is unlikely to be perfect. If you are looking for somewhere else to post that isn’t moderated like this thread, I recommend the New Category for “AI-Assisted” Software? thread.
Summary of opinions on AI
I’ve gone through the New Category for “AI-Assisted” Software? thread, and here are the opinions I’ve seen floating around so far. If I’ve missed yours, feel free to post it! Feel free, also, to discuss these opinions and where you stand on them.
How do you absolutely verify which have AI and which don’t? Individuals can be very sneaky. I mean, just look at all the viruses out there in the wild and ask yourself who you trust. And can you trust any tool or any person to catch every potential update?
You don’t need to. Having a policy that such things are not welcome sets a tone and makes a statement on its own. A lot of the time you can’t be sure, but you can sure as heck make it clear that it’s not an AI-friendly environment.
I don’t think it’s possible to verify that with 100% accuracy.
In most cases, it’s totally obvious, like having “Claude” in the contributor list, some specific style of writing, or the author openly admitting they used LLMs.
If the author invests more work to obfuscate it, some LLM programs may slip through, but that doesn’t mean it would be okay in case a rule is made to disallow it.
That you can’t catch 100% doesn’t mean there shouldn’t be a rule against it.
We can still expect most people to contribute in good faith and simply not do it when they know it’s disallowed.
And in case it’s revealed later, the moderation team can take action.
Since we’re summarizing, here’s where I think the various options I listed stand as far as support (having read all the posts on the various threads and done a quick re-skim for sentiment):
Maybe as much as 25% support, though most of those folks have expressed that as “choosing for myself I’d do this” rather than “we should do this”.
Some developers have expressed opposition to this as impractical and an inappropriate imposition on the community.
Maybe 50% support?
Interestingly, there seems to be more support for this among developers than among the users.
Around 75% support, but with most of the 25% opposed having said they want a more extreme option.
Some moderators oppose this, since they think the moderators would also be avoiding this category, leading to a risk of under-moderation.
Very near to 0% support.
I do think the debate is at (or very close to) the point that not much new will be said?
To Horizons and nipos: what I worry about most is the intentions of the people running the AI companies. I don’t trust them to be honest about what their intentions are with AI.
This is a very good question, actually.
What I can say for sure is that it’s not about money, since none of the LLM companies is making a profit and it’s more than unlikely that they will in the foreseeable future.
Maybe it’s about control?
They control the server farms, they control their models, so they can control the opinion that is spread to the public.
Seeing the use for deepfakes, scams, phishing, and false information, that isn’t a good thing.
Also, many big LLM companies have contracts with the military to help in wars, potentially with the plan to automatically kill people in the future.
Whatever their intentions might be, it’s pretty clear that they’re not good for the world.
I’m not clear about which side of the fence those developers are on. Pro or con? If they WANT AI, what are THEIR intentions with AI? As a helper to write “good” code, or a helper to disguise “bad” (evil) code?
I take it that “Very near to 0% support.” means they are against AI being part of the development of code with Haiku.
To be clear, the reason I will be running Haiku and Arca Noae’s version of OS/2 is that there is a much lower chance that AI, and therefore any evil intentions of the AI CEOs, will be part of their development.
Intentions of AI companies
Per Google Search, “Who are the top 5 leaders in AI?”:
Matthew Prince, co-founder and CEO, Cloudflare
Elon Musk, founder, xAI
Sam Altman, CEO, OpenAI
Jensen Huang, CEO, Nvidia
Fidji Simo, CEO of Applications, OpenAI
Mark Zuckerberg, founder and CEO, Meta
I don’t trust ANY of them any further than I could throw all of them individually, let alone on a pallet.
Companies make products to “hook you”. And once they have you hooked, they can do pretty much anything they want and hide it where you can’t see it. That is what I’m MOST afraid of.
I’ve put the discussion about the intentions of AI companies into a details box. It is a meaningful part of the discussion, however, so if you want to read it, go ahead!
This part is already solved and not under discussion currently. LLM generated contributions to Haiku are not allowed.
The ongoing discussions (the public one in the forum, but there is also one on the haikuports mailing list, and a private one for moderators about the forum moderation rules) are about whether we should try to go further than that, such as:
Banning “vibe coded” applications from the forum, or moving them to a dedicated category so people can ignore them
Banning them from being packaged in haikuports
Banning LLM generated posts from the forum (the rule is already in place, but there has been some discussion among the moderation team to clarify its intent)
Defining exactly what we want to ban in terms of LLM-generated or LLM-assisted code. Is it just entirely vibe-coded apps? Is it any and all use of LLMs, including for research and code exploration (which seems impossible to enforce)? Somewhere in between? What about people who use some type of “smart” autocomplete that’s based on an LLM? Is that different from vibe coding? (From the software-quality point of view, I think it is, but in terms of resource usage, independence from closed-source tech, and political associations, I think it’s not.)
And also, for which reasons we should or should not ban things. But maybe that’s less important in the end, and only a way to guide the decisions. And I think everything there was to say about this has already been said.
Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!
Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It’s the people, not the tools, that are the problem.
We’re still in the early days, I suspect, like back in the day when DEC’s founder opined that there’s no reason anyone would want a computer in their home. At the moment it is what it is; clearly there are some downsides, and I think practically no one wants to see a flood of vibe-coded applications and libraries. Already, though, the stuff has made inroads in commonly used applications, and from a practical perspective, if the code is good, it’s good. Both of these points will become increasingly salient: the code generation will get better, and it will be used in more and more applications ported to Haiku.
To me that argues against a “taint” perspective. The problem, as the Ghostty team observed, is not with the tool, and indiscriminately marking all uses of the tool will be less and less satisfactory as a way to deal with the problem.
From a perspective that’s more about principle and less about practicality, I hope Haiku remains a zone where the things we use are created by human hands and minds.
It’s not defensive. As stated by the slopware project:
The last reference is Daniel Stenberg’s (cURL author and core maintainer) post:
I don’t know what a “strict non-ai policy” would mean, but ignoring AI powered tooling in modern software security I would claim would ignore the state of the art tools. Unwise.
All right, I’m done for today. I’ve got to sleep sometime! I think I’ll resume this for one more day. I don’t have the energy for much more than that.
Good night!
Edit: Meh, the original topic is closed. I guess I will call it quits.
Thank you to everyone who participated in both topics, even if it got heated sometimes. Special thanks to @us3r1d and everyone else who tried to keep the discussion civil.
What happens next? As has already been mentioned, the moderators are working on figuring out a new rule/rules.