Haiku BFS encryption and two-factor YubiKey authentication

I am curious to hear what @axeld thinks, since he’s the author of DriveEncryption, a similar tool.

This just can’t work. AI generated code is written without intent. It is like asking us to find poetry in a phonebook. When reading code, I can understand the mind of the person who wrote it, what they were trying to do, what they may have misunderstood, or what they did that I don’t understand.

There is no such thing in LLM generated code. You can think about it in two ways: either it is void of any consistent meaning, or it is an accumulation of little snippets from the LLM training data, and then it’s like trying to understand thousands of people talking at the same time. Either way, I cannot make sense of it and I’m not interested, because code review is a social thing that you can’t do with a machine. I will only have questions that no one can answer. A waste of time I’d rather spend on code that people actually took care to write…

9 Likes

If you don’t want discussions about AI, maybe don’t write your applications and posts with AI?

6 Likes

Aren’t you the guy who was comparing the Haiku Depot potentially labeling LLM-generated apps… to the Holocaust? Why are you even here?

2 Likes

Calling in Sheriff @nephele to restore order and get us back on topic! :rofl:

Yes, it’s me, and I have no problem restating this: I don’t want to be labeled based on the tool I use for fun in my free time, just as I don’t think it’s right to label someone for the car they drive to work or the religion they practice.

I think it’s pointless to keep blabbering in a thread created by someone who took the time to gift the community a program that everyone is free to use or not. But above all, I find it frustrating that none of the ‘big bosses’ has the courage to open a dedicated topic to discuss this, instead of letting every single post by users who freely and peacefully use available tools go off-topic.

1 Like

Software is not poetry, it is engineering. It does not matter what the author was thinking. The only thing that matters is whether it works or not. And that can be demonstrated in practice.

This is ridiculous.

Many Haiku users want more native software, no matter whether AI tools were involved or not. It has been shown in practice that AI tools give significant practical benefits. Many things become possible with AI that nobody was able to achieve in decades of Haiku history.

Please do not put obstacles in the way of users who want native Haiku software written with AI tools. If you don’t want it personally, just ignore it. Do not start flame wars.

5 Likes

Can we not get off topic, and just talk about the app: how it works, whether it’s compatible, and what should be fixed, rather than going on another AI rant? I get it, some people think it should not be allowed and some people want to use AI for their things, but generally this is starting to get annoying, and I’m pretty sure everybody is tired of talking about AI by this point. Also, have y’all noticed that there’s been an unusual drop in forum activity ever since we had the AI discussion?

3 Likes

I agree, let’s get back on topic. This whole thing has become exhausting because some people have such a fundamentalist mindset: it’s either their way or the highway. This is exactly the kind of reasoning that drives people away. If we stop acting like the ‘AI police’ and just evaluate the work for what it is, the entire community benefits.

Ideally, discussion about using AI should be split into another topic, and a notice about the fact that AI tools have been used should be added to this topic’s first post.

1 Like

Other than toxic “backseat moderation” in this topic (seriously, if something is offensive or off-topic, just flag it. No need to start an argument about that), I don’t see any off-topic posts. Discussing the technology the software is based on and its implications for security, for a piece of security software, is definitely on topic. (Though whether forum rules should be changed for marking such topics can move to a new topic.)

Pulkomandy already elaborated on this, but let me give you an example: say you were to ask me to review some code from the curl library… I could probably try to review some of it, maybe comment a bit. But assuming you also did not write curl, each time I’d ask you “why is this like that?” you could only answer “I don’t know”.

With code from an author, I could at least look at the code comments; with AI code I don’t even have that luxury (assuming there are comments, they are unlikely to match the code correctly or explain reasoning, as a thought process is missing…). So unless I were to take the time to do a forensic review, examine the entire codebase and look for cracks, I really can’t review code like that.

Say, if you were to submit a change to haiku, and you would use an API I’d think is a bit ill-suited for the purpose, then I could ask you why you chose to do this, and what I might have used instead. Then either you can explain to me why your choice fits better, and I as a reviewer learn something, or maybe you learn something about a different API you could use.

An “AI” can’t really answer which API would fit better, but it sure can create examples of using both :frowning:

4 Likes

No one said that it’s not good; several folks have said the app is not trustworthy because it’s made with an LLM, and that’s a very different thing.

It may be totally fine; @dodo75 may be a competent subject-matter expert who has carefully reviewed each LLM-generated line. This is extremely unlikely, though, since that would take the same level of expertise and as much time as just writing the thing themself would have taken.

And if that’s the case, there’s no reason to use the LLM.

So the likely scenario when presented with an LLM-assisted project is that the person involved is either a) not sufficiently expert to have written it solo, b) put less time into it than they would have had they written it solo, or c) both.

And any of those scenarios makes the resulting project less trustworthy than one that was produced without the LLM.

It doesn’t mean it’s necessarily “not good”, it just means it’s more likely to be not good. (In LLM-land, everything is a probability, even whether the output is any good.)

And to be fair to @dodo75/Claude it’s almost certainly better than I could do in a week; I doubt I could make anything that even seemed to do drive encryption correctly in that time. :slight_smile:

3 Likes

I feel like LLMs should be used to port more apps and games to Haiku, since they could solve some of the problems caused by Linux ports. Plus, when I tried this encrypted BFS it really helped a lot, and didn’t cause a memory leak like the other one.

ehm, you do realize that your code could not be merged, as LLM-generated code poses a risk of copyright violations, even if we ignore any security concerns?

A pattern I’ve noticed for myself is that, if I’m having a look at a project, one of the first things I’ll check is if there is any hint of AI slop (checking mentions, contributor lists, existence of claude.md/agents.md etc.) and if that’s the case, avoid the project. So having a strict no LLM stance is actually a good thing imho.

Tbh, I just find your whole attitude about discussing stuff borderline despicable. Discussing how a program was designed is just as important a discussion as anything else about a program, especially if security is a concern. You seem to answer a lot of critique with a mixture of ridicule and policing (i.e. labeling something as irrelevant). I personally think that this is a source of toxicity which could damage the community.

It’s relevant, as most of the commit history is littered with LLM output. If you don’t want that discussed, just don’t produce software with an LLM. Otherwise you might have a program where you don’t quite know how it’s designed to behave.

eh, no. Implementing cryptography yourself is usually just bad practice. A quite common piece of advice when writing anything that has to do with cryptography is to never implement these algorithms yourself (one exception being studying them), unless the aim of the project is precisely to implement them (as OpenSSL etc. do). Implementing those algorithms can be the source of many security pitfalls, and nobody will likely ever do as fine-grained a security audit of an embedded encryption scheme as would be necessary. In other words, unless your project specializes in providing common encryption algorithms (like OpenSSL, wolfSSL, and Mbed TLS do), best practice is to use one of those libraries, which are well audited and reviewed. All of those libraries are FOSS as well; you can study, modify, fork them etc. if you want.

I kind of have a feeling that there is just a flood of projects that “do something with AI” lately (both in general and on this forum). I personally find this concerning, as it very much normalizes a technology that, in the current socioeconomic context, is just dangerous. (I think the reasons were mentioned often enough already.)
I personally just don’t want to have to deal with LLM slop if I don’t need to, and would love to see an anti-LLM stance in the context of Haiku.

5 Likes

But LLMs ain’t really hurting nobody, and if somebody wants to use them, they can. Using them for system-level tasks is bad, but having them used for porting apps, software, and all that is very helpful. Also, an anti-LLM stance would sort of defeat the whole point of having software ported to Haiku, which would be very helpful since there aren’t a lot of games or software on Haiku, which is low-key needed. Nobody is really questioning how it’s being put there, and LLMs aren’t really being forced in your face. It would be different with the legal problem of integrating an LLM, as you need a license for that, but overall using it as a tool to port games to Haiku or make software for Haiku is not really gonna hurt Haiku, especially if you train it to understand Haiku.

But in this case, I would say it’s bad for an LLM to be controlling encryption on boot partitions, as one wrong move could break the whole system. Then again, Haiku is very resilient, so I don’t really see the likelihood of that happening. This is just me personally, but outright banning LLMs, when Haiku doesn’t have that much software and really needs more software and more engines for coding and making games, is not really gonna help it in the long run, especially if a lot more people are trying to develop games or start building code for Haiku. It ain’t really hurting the community, and neither is it really being forced in your face. You ain’t really obligated to like AI-assisted software, but then again you would have to realize that you were probably using AI-assisted software well before it became apparent in these past months.

But this is my personal opinion, I hope you have a great day

No, it’s both.

The process of creating software “at scale” is indeed an engineering discipline, a science that’s more hard than soft. But writing code, on the other hand, is an art; even organizing one’s code well is not really a scientific discipline but an artistic one. So software absolutely can be “poetry”.

But whether or not it’s poetry, code is clearly a form of communication. Humans communicate concepts and meaning and all those other good things, whether in prose or in poetry. Machines do not.

Reading code written by another human (or group of humans), I get a sense of what they were thinking about and what they meant, even beyond the mere logic of what the code is actually doing (this is one of the ways you can find bugs: by spotting discrepancies between intent and logic). The same can’t be said for code generated by a machine. But PulkoMandy has already outlined all this in his own posts.

6 Likes

They do. As a very short and incomplete list:

  • power and water usage
  • copyright violations (LLMs would make a perfect tool for hiding plagiarism)
  • a lot of ethical problems
  • being connected to a very unstable economic bubble that is responsible for the astronomical price inflation of storage media, among other things
  • the possibility of enormous price increases for this technology as funding for development dries up, since most companies in that field are not really self-sufficient
  • deskilling and making people dependent on this technology (which is linked to multiple economic uncertainties)

I think that should already be quite enough reasons to see why such models are dangerous.

7 Likes

Actually, agriculture uses more water and power in the U.S. alone; AI only makes up a small portion of that.

Surprisingly, not true. It is one of the reasons, but not the primary one; it’s actually one of the lesser reasons, as AI is relatively new and still hasn’t left a big enough mark to cause that much financial damage to the world economy, which is mainly caused by politics and wars.

Most companies in that field, like Google and Microsoft, are actually self-sufficient. They have been working in that specific industry way longer and make a lot of money from it; even without AI they are very self-sufficient and relevant in the world.

This is partially true: a lot of people have been cut, but that doesn’t mean they have lost their skill, as humans have more experience and AI can only assist us with what we’re building.

Everything else, like the ethical and environmental reasons, depends, but overall I don’t see any bad thing in AI as long as it ain’t causing that much damage to the world. The direction the Haiku forum and community is heading in due to this specific topic has definitely caused some uproar, so I say agree to disagree: AI can be used, but must be heavily moderated and can only be used for specific ports and making software, and nothing system-wide, as an AI cannot understand a whole system.

1 Like