Now count what portion of new power and water use, post-2024, is AI. (And by the way, I care very much about agricultural water use too; it’s part of the reason - ethics being the other part - why I don’t consume animal products.)
Friend, semiconductor companies are saying outright that they’re prioritizing capacity for AI.
Their AI product lines are not profitable or close to it - and Microsoft is basically irrelevant as a model vendor, instead sponsoring OpenAI, who is nowhere near self-sufficient.
It’s been demonstrated, at length, that people have a tendency to “just prompt what you want” instead of doing actual programming once they start doing so.
You’re confusing GPU chips (Nvidia) with SSD/HDD storage. Semiconductor companies prioritizing AI chips for data centers isn’t what’s causing storage inflation; that’s mostly down to geopolitics and wars affecting the supply chain. AI didn’t start the trade wars.
You say Microsoft is ‘irrelevant’ as a vendor, but they own the infrastructure. Google and Microsoft have billions in cash from search and cloud; they are 100% self-sufficient and aren’t going anywhere. They don’t need ‘AI profit’ today to be the most relevant tech companies on earth. And that’s not to mention that their AI models are more advanced than OpenAI’s and bring in more money than OpenAI does.
This is the biggest reach. You can’t ‘just prompt’ a complex software port to Haiku. If you don’t understand the Be API, the AI’s code will just crash and you won’t know why. AI doesn’t replace the programmer; it replaces the boring part of looking up documentation, so we can actually get stuff ported.
My point is that we can heavily moderate AI use to help with porting the games and software that Haiku desperately needs, and that AI is not causing most of the problems y’all say it’s causing.
As a user of Haiku, I am looking for trustworthiness. If there is AI-generated code used in an app or in Haiku itself, I’d want it to have been sufficiently peer reviewed and declared free of technical or legal issues, and I’d want it to be actively maintained so that it stays that way into the foreseeable future.
It seems to me that if code is AI-generated and is difficult for human beings to read, or to maintain so that vulnerabilities can be handily patched as they are discovered, I’d find Haiku to be less attractive.
TBH I don’t think lots of games is what people are looking for in Haiku. Those games are already available on other OSes and consoles, and Haiku doesn’t gain anything from having them.
An OS and compatible apps that I can trust, and that don’t spy on me, is what I would like Haiku to be.
Seagate and WD have publicly stated that their disk manufacturing capacity is back-ordered into 2028 due to datacenter orders.
No “AI companies” are profitable. Companies selling the shovels for the AI gold rush are.
You can produce a surprising amount of what initially appears to be working code through flocks of agents. It will mostly suck, and be unmaintainable, but it will appear to work.
It’s causing lots of problems, and while I don’t care what third parties do, I personally would prefer not to use any software generated by LLMs - and I’d prefer it to stay out of Haiku itself.
Well, we can’t just kick it out of Haiku, because some people are going to use it and some won’t. This is basically a purist-versus-pragmatist debate: either we use AI to expand our software and assist us, or we don’t use it and keep everything human-made. Can we just find a middle ground that fits both sides? My offer is that we heavily moderate AI use to help with porting the games and software that Haiku needs, or we find some other middle ground so that both sides can be happy.
This is like asking if a student who copied their whole test paper (or at least huge portions of it) from another student would like feedback on “their work”. Fundamentally it’s not their work. Maybe they’ll learn something in the critique anyway, but it won’t at all be what they would have learned from a critique of their work.
Possible? No, I don’t think so; all this was possible before, people just didn’t have the time, or didn’t want to do things that way. Now, if you plagiarize other OSes and GPL components and other such things using “AI”, then indeed you can make Haiku do things it couldn’t before. But… you could’ve done that before “AI” too.
If the only way to win at something is immoral, then you are morally obligated to lose.
I think using “AI” is dubious morally, at best. And as I’ve said elsewhere, I’d rather Haiku die than use “AI” in such a fashion.
Well, if AI can’t be used to help code, then that means you have to make coding for Haiku easy, so people can do effective, basic stuff without needing to know a lot of C++/C. Make the learning curve gentle rather than letting people fall off a steep ledge, or update all of Haiku’s documentation and put Haiku’s coding documentation first, if AI is going to be forcibly banned.
That’s not to mention that most of the time people use AI as an easy way out, due to the simple fact that learning C++ is hard, and translating it into Haiku’s conventions is even harder. It would be easier if the documentation were better, or if the code were simplified so that even a teenager could understand it.
No. @dodo75, with the help of AI tools, managed to make functional disk encryption with a user-friendly UI in weeks, something @axeld hadn’t achieved in a decade. @Andrea managed to get EHCI isochronous transfers and USB cameras working, which upstream Haiku developers still haven’t achieved. This is just a fact. Even looking at or copying Linux/BSD code is not that simple, because the driver architecture is different and deep domain knowledge is needed.
As for copyright violation: first, it is the software author’s responsibility, not that of the Haiku developer team or Haiku, Inc. Second, it can be handled on demand. If somebody finds code parts that violate copyright, they can report them to the author, and the author will remove the problem parts. The presumption of innocence still applies.
Again, there are no universal human morals. Somebody may consider this perfectly morally fine. The Haiku team is not a parent or a church, there to enforce morals.
It is fine for you personally to have such moral standards, but please do not enforce them on everyone.
I definitely do not want Haiku to die.
This way of thinking is similar to “it is better for humanity to go extinct than to continue destroying the environment and/or falling into sin”.
This is Haiku’s forum, and we decide what’s acceptable here or not. This is not universal, just for here. People who want to “write” or use generated code should go find another place to do so.
Because we are busy and can’t do everything at once? This falls under “don’t have time”, not “impossible”.
I don’t have any machines without XHCI controllers that I use regularly, so I haven’t looked into isochronous support for EHCI because I don’t need it, and most users don’t seem to either. If there was that significant of a demand for it, well, where’s the upvoted ticket? Where’s the people complaining? “The squeaky wheel gets the grease”, as the saying goes. I suspect axeld was likewise quite busy and not prioritizing disk encryption, if he even had much time at all.
If that were so, it would be extremely strange for humans to make judgements about the morality or immorality of things done in distant countries. But in fact this is completely habitual for humans to do, whether or not one says those judgements out loud. Wars have been waged, whether rightly or wrongly, because one side was very sure the other side’s actions were grossly immoral or unconscionable.
The simple fact of the matter is that humanity acts all the time as if universal morals do exist. Sometimes even while explicitly claiming they don’t, on the basis nobody can agree what they are! But with rare exception, humans simply don’t believe in “might makes right” as the basis of morality, even when we fail to live up to it. (And the exceptions tend to wind up in prison.)
Or, think of it this way. Languages are many and different all over the world. But humans being humans, they all serve the same purpose: to communicate one’s thoughts. If there really wasn’t even a little bit of “universal human morality”, then there would be no words for it with close synonyms across many languages, because there would be no common concept of the thing called “morality”, the way we have common concepts for “earth” or the like. But in fact there are such words that closely correspond. Therefore at minimum the idea of morality is one common and shared across cultures.
There are plenty more groups that enforce behaviors besides those two. People who disrupt the activities of a community, or who are habitually rude, for example, will have some very basic standard of morality “enforced” on them, and be shown the door (and/or made to use it) if they don’t comply.
Anyone who presumes to judge humanity as a whole is either divine or insane. I am not doing any such thing here. In fact I am not even judging Haiku as a whole, but the actions of individual people, myself included, and stating that if the only way to achieve some outcome is to do an immoral thing, then the outcome simply should not be achieved. That’s not the same as saying the project should commit suicide or be killed or anything, but a remark that in the face of one particular evil, it would be better to accept death calmly, rather than fight fire with fire and in the process become evil or complicit ourselves.
I very much don’t want Haiku to die, either. Fortunately I don’t think we’re anywhere near that. But sometimes extreme cases are useful to clarify or elucidate when considering less extreme ones, hence my consideration of the possibility. Were it ever to come to pass, well, I would be disappointed to say the least. But I could very much live with those consequences.
AI tools can save time. For projects made for free, in spare time, fully on enthusiasm, saving time can easily become the borderline between impossible and possible. AI tools also let developers compensate for missing domain knowledge, so developers unfamiliar with driver development become able to develop drivers with AI assistance. There may simply be no developers with the proper domain knowledge, which also makes the borderline between impossible and possible.
For infinite human resources yes, everything is possible. But resources are not infinite.
This development process, as explained on the developer’s GitHub, looks to me like not just “AI” slop but something worthy of being tested and (hopefully) audited by the Haiku community.
As the lack of an encryption app like VeraCrypt, and of Yubikey compatibility, in Haiku has always been at the back of my mind, I think I will wait in anticipation of this new piece of software for Haiku being tested by some of the developers in the community!
—
Development
This project was developed by a human developer in collaboration with Claude AI (Anthropic). Claude assisted with code generation, security review, and test development. All design decisions, architecture choices, and final code approval were made by the human developer.
Oh, now it gets personal. Actually, DriveEncryption was working just fine for years. It doesn’t work anymore because of changes made in Haiku in the meantime. Unfortunately, I found a work-around for my NAS (by letting the host handle the encryption), so I didn’t need to find the time to fix it yet. Plus, I’d like to integrate it into Haiku, which is a bit more work due to the TrueCrypt license I had to adopt back then.
From a pragmatic POV, LLMs can be great tools. However, code quality really is a big issue, and if you don’t use them very carefully, and know your stuff, code quality will decrease considerably. Unfortunately, people tend to be lazy: it takes a lot of energy to review and refactor AI code, so much that one usually won’t do it in the depth it should be done.
Also, for open source projects, reviewing code is always a bottleneck. It’s worth it, because when you do it, usually both parties involved learn from each other. With almost every developer interested in contributing to Haiku long-term, reviewing gets much easier over time, as people get used to each other and each other’s way of working. It creates synergies, and the project and everyone involved benefit from it a lot. It’s like getting a new co-worker who needs to be onboarded until they can work mostly on their own.
However, with AI contributions, there is no such growth. There is no coherent thought. There is no learning. You get a new co-worker with each pull request, sometimes several per request. While the project can certainly benefit in functionality, it drains the energy of the people involved, and it’s really hard to review.
For 3rd party apps, I don’t mind people using AI from a technical POV (from an ethical POV that’s very different, though) - let them, if they want to. Not everyone can program either, and the code quality usually is not that important if you’re just using the tool. For security related applications, that should be a bit different, though.
Due to the widespread use of AI tools nowadays, I still don’t think it makes much sense to mark apps that use AI; I would rather mark those apps whose projects do not allow AI-generated content. There is also a wide variety of ways you may use AI: you could generate the whole app for the most part, or you could just let it find a bug in existing code (and then fix it yourself, if it turns out to be a real one). Again, from a technical POV alone, the latter will not degrade code quality at all, even though it is AI-assisted in some way.
Politicians in western countries are now earmarking nuclear power plants to AI companies. The same amount of power that would go to an entire city will just go to an AI company. This is completely unprecedented.
The explanation you quoted is rather obviously itself generated by AI. Is writing explanations really so difficult? If a human can’t be bothered (or isn’t able) to explain what they themselves did or are doing, but “delegate” even that to “AI”, how can we trust that they really did audit the code carefully?
Note what this doesn’t say: it doesn’t say that the human actually wrote any code at all. This explanation is entirely compatible with the human involved writing no code at all. But of course as I just noted this appears to be itself “AI” generated, so who knows if it’s an accurate summary or not?
So this doesn’t indicate the project isn’t “slop”. Actually it is a further indication that it’s pure “vibecoding”.
“Guilt by association” means someone is deemed guilty when they had no part in guilty actions but just associated or are connected in some way with people who did commit them. Here, the person and project in question directly used “AI”. So there’s no “by association” happening here.
Let’s see: someone posts an application on his own site. He doesn’t set up a Haikuporter recipe, doesn’t ask “hey, how do I get this into Haikuports?” or, God forbid, “how do I sneak this code into the operating system itself?”. None of that. He just put his application up on GitHub, didn’t hide the fact that he used an LLM, and then announced it so that people can look at it. Or not.
And the reaction? “This can’t be merged because …” The mere existence of LLM code in the project is seen to imply that it must be part of some underhanded plot to contaminate the Haiku codebase. Yes, I call that guilt-by-association. You code with Claude, so you must be part of this imaginary cabal that wants to take over Haiku.
There are good, rational reasons to avoid LLMs. We have gone far beyond those. We are now seeing irrational hatred. I am finding myself increasingly defending the pro-LLM camp when I’m quite ambivalent about LLMs myself.
Honestly, the “it took X years for Haiku to get Y” argument is kind of dumb, since it applies to literally everything Haiku is getting now. Add a new API tomorrow; bam, “it took 25 years for Haiku to get this API”.
You are misusing the phrase; that is all that waddlesplash was saying. There is no guilt by association here, because there is no primary, guilty party that they are associated with.
That zeldakatze pointed out that this code can’t be merged when, let’s see, the OP started with “Haiku now has working hardware BFS encryption support” — I don’t really see that as problematic in any way. Nobody is obligated to interpret more into this than they want; you don’t have to guess the author’s intent to construct the argument “they used AI” → “AI usage for merged code is forbidden” → “clearly they don’t actually want to merge it”, when the OP never said anything pro or contra trying to upstream it. The assumption that they wanted to upstream it, given they claimed Haiku got support for a feature rather than that their application has one, is fine…
Sure, pointing out that code is not mergeable is irrational hatred…