This thread moves too fast for me to comment on all the things that I could…
That’s the same “reasoning” that says smaller countries don’t need to do anything until the countries with large emissions do. Never mind that some of the big polluters are actually massively investing in renewables, for example (see China).
It’s a typical debating strategy, it seems: First deny anything is wrong until that position becomes untenable, then deny it’s humanity’s fault until that position becomes untenable, then say “OK, you first”, and finally: it’s too late anyway.
In any case, the Haiku project may be a very small player, but it can show other projects - big and small - that we’re standing with them against the tide of AI. Together we may just save some of our computing culture.
Honestly, I find this quite a bit over the top. Using LLMs isn’t forbidden in the first place, so why should he have to answer that question? For what?
If that part of the conversation is annoying, then just let it die out.
I feel like this whole topic has gotten derailed and everybody got off on the wrong foot. There are better ways to handle this and stay on topic, so can we agree that this forum, even if it’s not the best representation of Haiku and everything that was added, is good enough? My personal opinion is that AI is both good and not good. You can use AI for a lot of things, but it is not perfect, and the many issues AI has caused are already bad enough for the environment. I feel it would be better if we all started off on a better foot and got back on topic.
As for the usage of AI, I feel like it could be helpful for beginners, and afterwards they can drop it. The reason it could be good for beginners is that it can help more porting happen and can make it easier for people to get a system running, though I do not like using AI myself. This is a step towards Haiku being closer to the future while still keeping its legacy. I hope everybody has a great day and that we all feel better, because this should not have gotten so heated, in my opinion.
The forum rules don’t totally prohibit LLM usage, but they do prohibit passing off research (at least) done with an LLM as one’s own work. They also specify “Post Only Your Own Stuff”, which further indicates that posting content that isn’t yours without attribution isn’t allowed.
It was off-topic starting from the very first post by the original poster. This conversation is not fun, but it is important to have it and be clear about the path forward, and, if it comes to that, which part of the team has to leave the project: the people who do not want any LLM-generated code in Haiku, or the ones who do want to generate code using LLMs and upstream it. For me, there is no compromise on that part.
I am ok with making compromises on messages posted to the forum. I find it surprising that some people don’t care enough and maybe let an LLM entirely write or at least “reformat” their messages, but I’m not convinced it deserves a ban. People doing this already lose their credibility: if you didn’t care enough about your own message to write it yourself, why would I care to read it? But that on its own does not deserve a ban, I think. Some people seem to not agree with even that.
I would rather not have this discussion left unresolved, taking the easy way of just closing the topic or banning the people who don’t agree with the current rules and are challenging them. This is important to me: everyone has a chance to give their opinion, and then a final decision is made. Once that’s the case, we never need to have the same discussion again. We can write a clear rule (whichever way it goes), and everyone can refer to that and agree that the rule was written after a fair discussion, with a consensus from the developer team (for the code) or maybe from the users or the moderation team here on the forum; if no consensus is reached, it will be through a vote. Then at least everyone knows what to expect, the rules are clear, and we can move on.
I actually wanted to stop myself from becoming also part of this sad thread, but it just hurts too much. A “tutorial” that starts by setting the whole house on fire just to have a discussion with the residents afterwards was quite obviously not a good idea.
What I find worrying is the possible formation of camps here, where two or more groups might end up irreconcilably opposed to one another. Especially in this forum, where the tone is usually so friendly and constructive. Alongside the disadvantages of AI already mentioned, social disruption also seems to be part of the equation.
On that note, I’d like to wish everyone a good day/night for the time being. Peace.
Come on, man. No one on the forum is obligated to answer questions, even if they’re asked by moderators. To me, banning someone for this specific reason is completely unjustified and looks like an abuse of moderator privilege.
I don’t see any reason to allow people to use an LLM to write a forum post. They are of course discrediting themselves, but I for one did not know that he was using an LLM when I first posted in this thread.
Sometimes you should be obligated to answer. It’s perfectly reasonable in this instance. We want to talk to human beings, not bots.
Hi Enrico, and welcome! Apologies for what you see here, we don’t usually have that much drama, or better to say, usually we don’t have drama at all. Apparently, the AI is a controversial topic here on which there’s no consensus yet and it sparks heated debates on both sides. Still, I hope you’ll stick around for a while and discover the nice things about the community and the project.
First off, competent translators are not built on LLMs, and secondly, they could just tell us?
In any case, translators do not make you “understand” text any more than an LLM does on its own: you will lose information and nuance, and gain new information that wasn’t there. At best this is a difficult, error-prone communication style. With an LLM it gets even worse, since it will insert confidently wrong information into the post, and you can then end up arguing with that instead.
Fair enough. So let’s establish a baseline (written by a human, BTW). What is the current state of play?
1. The Haiku project itself - No, no, a thousand times no. Fork the project and develop your own operating system with our blessings.
2. Base libraries & utilities - No. There may be LLM code in vim already, but we don’t need to add more to it. If you insist, then fork the utility and develop it under a new name. Then see (5).
3. User-facing ports on Haikuports - Not entirely up to us. There might be LLM code in there already by the time we start porting it. Still, if you have reason to suspect there is, see (5).
4. Haiku-native user-facing applications on Haikuports - Unclear, but trending negative. Suggestions that LLM-coded apps might be marked as such are controversial.
5. Anything on your private repositories or websites - Go wild. You be you. Establish the Salon des Refusés of Haiku apps. Show the oldtimers how it should be done.

So (1), (2) and (5) are pretty much settled. The real need is for a firm policy on (3) and (4).
It seems clear to me that we do need a third-tier repository below the Haiku and Haikuports ones. One that can host apps that will not qualify for Haikuports inclusion, whether it is because they are LLM-coded or for any other reason. I don’t know, maybe Besly or Fatelk would be prepared to take on that role?
You can just distribute your own applications, or make your own repository in that case. We don’t have to cater to everyone with Haikuports; it was also not meant for that originally.
Then learn English or go to one of the sub-forums for other languages. Reading comprehension in the English-speaking world is already on the floor. Let’s not make it worse by allowing bots.
That’s not the appropriate way to approach this, and it’s very much out of line. If a person wants to use AI to get their message across in English, I say we should let them; that’s fair. It’s better than using a translator that mangles their words or messes up the meaning - speaking from experience. English is a hard language to learn, especially for people who haven’t spoken it all their life. But that’s beside the point, and we’re getting off-topic again.
Reading, I get: “LLM, summarize this conversation for me in [native language]”. LLMs are good at that, but I will still double-check.
But an LLM is not the right tool when you write a response. Just use Google Translate or DeepL.
Bij voorbaat mijn excuses: ik weet het antwoord op deze vraag wel, maar ik spreek de taal niet, dus heb ik Deepl gebruikt om het voor mij te vertalen.
Ich entschuldige mich schon im Voraus: Ich kenne die Antwort auf diese Frage zwar, aber da ich die Sprache nicht spreche, habe ich sie mit Deepl übersetzen lassen.
Je m’excuse d’avance, je connais la réponse à cette question, mais comme je ne parle pas cette langue, j’ai utilisé Deepl pour la traduire.
Yes, I agree. This is also a kind of AI, a kind that predates LLMs. If you know any of the three languages, it probably comes across as a little formal and stilted. But it is better than not being able to join the conversation at all, and the fact that someone took the trouble to write an answer, have it translated, and admits to that makes it more human to me, not less. There is effort behind it.
They should post messages in their preferred language, optionally with a translation below.
I happen to read three languages fluently, and a lot of people here are not native English speakers and will have at least one other language available. So posting in the original language will help get better replies. And people can translate it with their own tool if they need to - maybe not even into English but into another language.