Next round of AI models being released

Dumb GPT-5 Copilot didn’t bother to find out that Falkon, qBittorrent, LMMS, Krita, SuperTux… are already ported.

But the issue here is the excessive fixation on porting and porting software from Linux (with the thousands of dependencies it carries along) instead of writing native software with the local API, achieving full system integration and a lightweight impact on resources, and not repeating the same usability mistakes some Linux DEs’ apps make.

5 Likes

Those are “current” problems. And I haven’t seen them being solved, nor have I seen AI getting even somewhat better at programming. It was and still is trash. It is only able to give you correct code for elementary, trivial problems (of course it can, since it only copies code it found somewhere). And even then, it is very easy to confuse. These are facts that anyone who spends more than five minutes with AI can verify.

There was one time an AI managed to impress me with a smart answer. One in 100+ tests. Then I realized the exact same topic had been discussed and solved on Stack Overflow, which the AI obviously copied. A glorified search engine with a very good parser; that’s exactly what it is. And the parser makes it dangerous, because it makes people think this thing has any kind of intelligence. It does not. It gives crappy answers presented in a convincing way.

2 Likes

I never said AI would replace artists, though. Now that both exist, there will always be demand for each.

  1. I mean, Claude AI is pretty good at programming.
  2. How do you know AI will never get better, though? You guys keep repeating that claim when I ask how you know, but simply repeating it never actually substantiates your claim. As I’ve said earlier, you’re going to have to ground your arguments in actual facts, otherwise, your argument(s) will continue to be floating abstractions. So far, all I’ve seen is argument from incredulity at best – “I haven’t seen it happen yet, so it must be impossible!” despite AI improving significantly over the past few years since its introduction.
1 Like

Claude seems to be somewhat better, even though you can’t trust it. I still get crap when I test it, while Perplexity, or You, or ChatGPT, or whatever else may give a better answer when asked the exact same question. It is more of a hit-and-miss situation than anything resembling “progress”: this AI did well on one question while another AI did better on a different one. I don’t see a clear winner that could justify hopes for a “clever” AI. And I don’t see how people can hope for such a thing, either.

I don’t think a glorified search engine (that’s exactly what it is) can get any better than mediocre, and that only if you are lucky. It just searches the available data and combines it, often with hilarious results well worth the name “BS generator”. Sure, you can make a better search engine over time, using better models. But that means nothing; a search engine is a search engine, you cannot change that. What you can do, however, is call it “intelligence”, market it as such, and sell it for profit.

Every answer given by an AI must be thoroughly verified (some of the more decent AIs clearly say so). This has not changed, and I don’t see how it can. The only way to get a guaranteed correct answer is to know the answer beforehand and help the AI find it. Sometimes it even manages to do that.

The only thing that seems to change is a new generation of “programmers” who think they can do what they can’t because AI will help them. Even here, in this forum, I’ve seen people saying “maybe not now, but in a few years I will be able to use AI to do this and that”. My answer to such people is simple:
How about you learn programming and do it yourself, instead of hoping AI will get better one day and do it for you? How about getting better by using your brain instead of praising a search engine with an IQ of exactly zero? A search engine you just hope, without any real evidence to support your hopes, will do it for you one day?

2 Likes

Basic math knowledge.

The term “AI” here is also a bit inaccurate; “LLM” is a bit more descriptive.

But in general it is input → output, with known sets of inputs and outputs as a basis. This can never work for the substantially different or new problems in IT that you are bound to encounter. And when working with existing problems, the LLM has a huge pool of wrong and bad code examples to draw from.

This technology, as it is, can never work reliably for this use case. It’s a bit like trying to convince me your beer opener is a good screwdriver: I have indeed seen you unscrew some things with it, but that won’t make it a good screwdriver, no matter how much beer opener innovation there is.
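The “input → output with known sets” point can be made concrete with a deliberately crude toy: a bigram table trained on a tiny corpus. This is nothing like a real transformer, but it illustrates the argument above in its simplest form, since every word the toy emits was observed in its training data, and it dead-ends on anything it has never seen.

```python
import random
from collections import defaultdict

# Toy illustration only (not a real LLM): a bigram table that maps each
# word to the words observed directly after it in the training corpus.
corpus = "the port works the port builds the build fails".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` follow-up words by sampling the bigram table."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # a word with no observed successor: dead end
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Every word produced already existed in the corpus; the model cannot
# answer a question its training data never contained.
print(generate("the", 4))
```

Scaling the table up and making the sampling fancier changes the fluency, not the nature of the mapping, which is the post's point about new problems.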

5 Likes

The only thing that seems to change is a new generation of “programmers” who think they can do what they can’t because AI will help them. Even here, in this forum, I’ve seen people saying “maybe not now, but in a few years I will be able to use AI to do this and that”.

This reminds me of Moral lessons from free software and GNU Emacs | Protesilaos Stavrou.

Moving to free software changed my life for the better because it put me
in a course of escaping from /heteronomy/: rule by another. I wanted
/autonomy/: rule by one self. My time buying apps for the Mac was one of
heteronomy not only because I did not control the software, but mostly
due to the mentality that is associated with using tools that you do not
understand: you are always dependent on someone else, you are trapped in
that cycle of powerlessness and victimhood that I alluded to earlier.

It seems like AI is the ultimate road to heteronomy.

2 Likes

As far as I know, none of the companies trying to do that are actually making any profit out of it. So this will only last as long as investors keep pouring money into it with no returns, or until they manage to convince their user base that paying $400 a month is worth the help the AI brings. Meanwhile, studies show that AI makes experienced developers slower at their jobs, and that they only write half-broken code. All of this while emitting tons of CO2 into the atmosphere, consuming a lot of drinkable water to cool the datacenters, and ignoring copyright laws during training.

Even if it were actually good and helpful, it would not be economically viable, and it would result in worsening climate change and killing people. I wouldn’t want my code to get blood-stained by that, personally.

3 Likes

Well…

We could train a “Haiku Guru” LLM on the Haiku Book and plenty of source code using the Haiku API. This would be useful for giving users step-by-step instructions for creating Haiku apps.

Another use of LLMs mentioned above, software porting suggestions, is also helpful. Just train the AI on the full HaikuDepot package list so it can quickly check whether the dependencies already exist, and avoid suggesting already-ported apps.
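A minimal sketch of that “check before suggesting” step, without any LLM involved at all: filter candidate suggestions against a set of packages known to exist. The set here is a hypothetical stand-in; in practice it would be built from the HaikuDepot package list or the haikuports recipe tree.

```python
# Hypothetical stand-in for the real HaikuDepot package list.
already_ported = {"falkon", "qbittorrent", "lmms", "krita", "supertux"}

def filter_suggestions(suggestions):
    """Return only the suggestions that are not already ported."""
    return [app for app in suggestions if app.lower() not in already_ported]

print(filter_suggestions(["Krita", "Inkscape", "SuperTux"]))  # → ['Inkscape']
```

A plain set lookup like this is exact and instant, which is part of why training a model to “know” the package list is overkill for this particular check.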

Maybe we should put effort in actually writing the Haiku book first?

You cannot train an LLM (or human developers) using training material that doesn’t exist…

No, no it is not.

If you are bored and want to port software to Haiku, you go to the haikuports bugtracker, where there is a list of several hundred pieces of software that actual Haiku users need, at least enough to have opened a ticket about it. If you get to the end of that, you send a message to the forum, asking people, “hey, I have fulfilled all the requests, anything else I should try porting?”

And when you get to the end of that, you can go to repology.org and see what other operating systems and distributions are packaging, and go fishing for useful software you may have missed. But if you get to that point, I suggest that you instead take a break from porting thousands and thousands of things, and go enjoy a nice sunny day instead (if the temperature is still suitable for humans to go out, this week it very much isn’t over here).
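For the repology.org step, here is a small sketch assuming Repology’s public v1 API shape (one project per request, returning a JSON list of per-repository package records with a `repo` field). The live request is left commented out since it needs network access; the parsing runs on a trimmed sample of that shape.

```python
import json
from urllib.request import urlopen, Request

def repology_url(project):
    # Repology's public per-project API endpoint.
    return f"https://repology.org/api/v1/project/{project}"

def repos_packaging(entries):
    """Given Repology's per-project JSON (a list of package records),
    return the set of repositories that carry the project."""
    return {e["repo"] for e in entries}

# Trimmed sample of the JSON shape Repology returns for a project:
sample = json.loads('[{"repo": "haikuports", "version": "1.2"},'
                    ' {"repo": "freebsd", "version": "1.3"}]')
print(repos_packaging(sample))  # a set; element order may vary

# Live query (commented out; Repology asks clients to send a User-Agent):
# req = Request(repology_url("qbittorrent"), headers={"User-Agent": "example"})
# entries = json.load(urlopen(req))
```

Checking whether `"haikuports"` appears in that set tells you at once whether a project is already packaged, no generated suggestions needed.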

You do not ask a robot to generate a random list of things to work on.

And that’s assuming that you are bored and ran out of your own things you need ported to actually use Haiku. This is usually how a typical day of Haiku work goes for me. I want to do some task. The tool does not exist or is not ported. So I port it or write it. While doing so I may find 10 other tools in a similar situation, or slightly broken. Or I may hit a bug in Haiku itself, that I try debugging and fixing.

I already do not have time to help other people with the existing port requests and tickets, and now you want to automate that part to have fake requests for software that no one will actually use?

6 Likes

Very good point, to stop calling LLMs “AI”. Of course Sam Altman wouldn’t agree, because he is selling “AI” to investors, rather than LLMs.

3 Likes

Like Amiga Guru - exactly… good find.

It’s just a tool that could be used to help with troubleshooting, knowledge bases, testing code, speeding up acceptance of code, learning coding by porting apps, etc. - and THEN people could code native apps by hand. One use could be to help port Haiku to the RISC-V platform.

But:

  • It spits out garbage - not if it’s sufficiently prompted to focus on a topic.
  • It wastes a lot of energy - engineers will work out a way to reduce energy usage.
  • It’s bad for the global environment - yes, but Haiku R1 comes first :smiley:

Anyway, pro LLMs cost money, so individuals may have to use one for themselves.

I can pretty much guarantee that “I did not bother to write or understand this code, but gave a mathematical model without any fundamental understanding some vague instructions” will not fast-track you through code review.

5 Likes

Why do people keep suggesting that LLM could help with things that are already done?

The RISC-V port works. Kallisti5 has recently set up a haikuports builder and the package repo is being filled in. We will likely consider it for an official beta next time we do one. No AI needed there.

But maybe that’s the only thing LLM can do. Help you find the existing solution to already solved problems that it has in its training database.

5 Likes

And the ARM port?

And how long have people been talking about a book for Haiku - months? Years? It can be done using an LLM, and you can give it specific areas to focus on, like apps, the kernel, the packaging system, sample apps, diagrams, etc. It could write several books in a day. They might contain some mistakes, but at least they provide info to read, and if people spot mistakes they can be fixed. It’s better than writing a book from scratch. E.g.:

Moderator edit: Huge AI outline removed

Wrong info is worse than no info.

No, it is not. For me, writing is easier than reading and checking someone else’s work. Especially if it’s a bot: if I spot something wrong, I cannot tell them, I have to fix it myself. And in the end I spend time reading + time checking + time rewriting, instead of just writing it in the first place.

So, yes, you can generate a lot of text and then spend a lot of time reading and trying to fix it. Or you can start from scratch and write correct text from the start.

7 Likes

Don’t post AI slop, it is effectively spam and it will be removed as such.

10 Likes

Don’t spread FUD about tech developments, and don’t treat people like dirt - it reflects poorly on Haiku.

If you’re using it like a glorified search engine, I suggest you take a serious look at your prompts. If you ask an AI questions the way you do with Google, you will get garbage output.