If it works under the bundled “PoorMan” app, you’re there!
EVER? You mean, over the next trillion years or so, will AI EVER be used in the development of Haiku?
Sorry for the sarcasm but yes, eventually SOMEONE will use AI to build versions of Haiku but I hope that it doesn’t happen soon.
Why not? Because I PERSONALLY don’t know the motivations of the people building AI or what they hope AI will create. Will they be trapping you into using THEIR product “forever” somehow? Or will there be hidden things in the code that nobody knows about, taking personal information about people and using it for things we never signed off on?
There are MANY paranoid reasons NOT to trust the people at companies building AI. Until those reasons are vetted and we can be ASSURED that AI is not being used in some way that is bad for us as people, I HOPE that the Haiku team doesn’t use AI, and that they are not taken over by people whose motivations run against what is good for us as human beings.
People’s morals are not known until, for some reason, they see the light of day. People didn’t know about Jeffrey Epstein and what he was doing until someone shone a spotlight into his life and into what he and his friends and business partners were doing against the will of the women they were entrapping and peddling as if those women had no worth.
How do you KNOW that the developers of AI products aren’t REPUGNANT people (and I’m putting that VERY lightly) doing who-knows-what with their software? Will someone in that group use it to spy on people and steal personal information? There is a LOT that people can do with YOUR information that you would NEVER know about until it was too late and everything you had in life, your reputation, your net assets, EVERYTHING, was suddenly gone.
Paranoid? Until they are vetted, I am DAMN paranoid about their intentions and motivations. Obviously they hope to make insane amounts of money. But more than that, look at Elon Musk and what money got him: power to f#$*&#$ up a LOT of people’s lives. You NEVER know what the results will be when anyone gains LOTS of money and, because of that, POWER. So BEFORE we start using AI for everything, maybe we should slow down, THINK, and VET the people involved to find out what kind of CHARACTER they have.
Take the former CEO of Uber and the top levels of management there, and what they used THEIR power for. There are many examples of GOOD people, and many examples of people I don’t trust at all, because inside they are HORRIBLE people we just don’t know about YET.
It is the YET part that worries me. Oh, they might say everything the right way so that they seem benevolent (having kind and positive intentions), only to turn out to be a complete #(&#$(&# who will use their power to devastate people’s lives for their own pleasure and gain.
Be CAREFUL what you wish for!
I one BILLION percent agree with you. That’s on the LOW end of how I feel.
Uh, I mean, we can build our own AI or LLM.
It is my sincere hope that this policy stands (and frankly, I’d welcome it getting more hardline against AI.)
Even aside from my moral distaste for abominable intelligence, I feel that one of the reasons that Haiku is such a wonderfully programmer-friendly OS is that the system itself is written by humans, who give a crap about skill and artistry. I have no faith that a slop-coded OS would retain any of that excellence.
It should be a system by humans, for humans.
Uh, I mean, we can build our own AI or LLM.
Good luck with that: training a good LLM requires an insane amount of computation.
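For a rough sense of scale, here is a minimal back-of-envelope sketch (the model size, token count, and GPU throughput are illustrative assumptions, not measurements) using the common ~6·N·D rule of thumb for training FLOPs:

```python
# Back-of-envelope estimate of LLM training compute, using the common
# heuristic that training takes roughly 6 * N * D floating-point
# operations, where N = parameter count and D = training tokens.
# All figures below are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6.0 * params * tokens

def gpu_days(total_flops: float, flops_per_sec: float = 3.0e14) -> float:
    """Days on a single accelerator sustaining ~300 TFLOP/s effective."""
    return total_flops / flops_per_sec / 86_400

# Example: a 7-billion-parameter model trained on 2 trillion tokens.
flops = training_flops(7e9, 2e12)
print(f"{flops:.1e} FLOPs, ~{gpu_days(flops):,.0f} single-GPU days")
```

Even under these generous assumptions, the result is thousands of single-GPU days for a mid-sized model, which is why serious training runs need large clusters rather than hobbyist hardware.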
About usage of AI for software development…
An LLM can never be trusted. Therefore, an LLM must never write production code.
For a prototype, anything goes, if you manage to extract knowledge out of it. But Sabon has a point: you can’t trust AI people. And I observe that you also can’t trust people who lean on AI too much.
I trust Haiku, so I wish it to stay away from suspicious people.
I agree, but there is unfortunately a “stay out of politics” rule in our policies which led to the LLM banning being based solely on the copyright issues. That is the one that put the project itself at risk. The dangers on climate change, the software industry as a whole, civilization, democracy, or anything else are deemed not relevant in that context.
But the end result was achieved: LLMs are not allowed. No need to have that difficult debate this time; everyone on the team seems OK with that decision. Good enough for now…
Predictions are hard, especially when they concern the future.
Will someone run some AI software on Haiku?
Almost certainly yes.
Will Haiku integrate such functionality into the OS at some point?
Maybe; probably.
Will AI be used to help develop Haiku?
Almost certainly; I don’t think we can completely stop this trend.
But AI should be a tool, nothing more.
In many ways this rule is a two-edged sword – but in both cases it tends to cut far more to our advantage than disadvantage. Because “stay out of politics”, in our interpretation, doesn’t mean “don’t do anything that could be perceived as political” (obviously restricting LLM usage is quite a “political” decision, or at least a “cultural” one, in many circles!), but rather, “don’t make decisions for political reasons”, and instead make them on grounds that are related to the project itself.
Obviously all of us being humans have political views, and now and again something happens on the forums that clues one in to the fact that Haiku’s community is quite a diverse bunch, politically speaking, and that there would probably be multiple unpleasant schisms if we did “talk politics” regularly and openly. But the fact that we don’t talk politics doesn’t mean that, internally, people aren’t caring about things for reasons that are “political” (or at least for reasons closely tied to one’s “political beliefs”). It just means that, even if they believe in something strongly, people are forced to frame arguments in terms specific to Haiku, and its values, and goals, and all that.
For “AI” in particular, I don’t think this is a particularly hard argument to make. The copyright one is the most obvious and currently strongest argument for banning LLMs, and one which (IMO) ought to apply across the board to basically anything; but there are plenty of others one could make without bringing “politics” into it. I once made another such argument on the forums here when asked about using “AI” to write the monthly reports (post). I think there are a number of others that could be made based on the project’s philosophies and principles, for that matter.
But of course sometimes “politics” is simply unavoidable. If there was some matter related to the project’s culture that the development team had reached a consensus on, but which wasn’t somehow derivable from the project’s established philosophies and principles, then I suspect we would find some new principle to lay down about whatever that thing was. And what would the process of hashing out that principle be, if not “political”? Where would it come from, if not our own principles and beliefs, and likely the very same ones which, in our ‘personal lives’, feed our political beliefs and actions?
Those sorts of arguments, though, tend to be quite messy and often have rather ugly fallout. And so, when at all possible, it’s best to see if one can find clear “non-political” justifications for the things one wants to do, because more often than not, if it really is a good thing for the project, there will be ways to justify it that aren’t so controversial, or maybe aren’t controversial at all. Or perhaps, in the process of trying to formulate a “political” idea in a “non-political” way, the idea itself is shown to be flawed, and so you wind up finding a better one.
You are posting this message on a forum about a project that exists at all because a bunch of people wanted an operating system that was extremely niche at best to live on. If we were the sort of people who capitulated to the majority, we would have all given up and switched to Linux a long time ago. (Or Windows, depending on which majority we’re talking about.)
Obviously we are a tiny project and are in no way capable of “stopping” trends. But we very much can, have, and do “buck” them. I don’t see any reason we can’t continue to buck this trend, indefinitely.
Finding precedent with a pattern-matching algorithm is useful if you’re looking for a target of innovation. Anything else to do with compute infrastructure is mostly hardware and driver support.
I don’t believe it; all of that can be fixed.
Please explain how you’d solve each of the arguments I mentioned.
I honestly don’t think it can be fixed; instead, I think the technology is flawed by design.
Yes, already so. Any developer using any tool that analyzes, compiles, profiles, debugs, or searches code, a website, or a document is using a form of AI.
The problem is when the output from an AI assistant is not your own work but comes from external resources (books, databases, or knowledge bases). People repost things straight from another source, and it may be mistaken for their own words, handiwork, or code, without proper referencing of the true source. Usually this is done for self-credit, self-enhancement, or profiteering. There is also the inherent human tendency toward laziness: trusting automated tools without careful inspection and review.
Enough for now…