Haiku BFS encryption and two-factor YubiKey authentication

Nice to see encryption tools like this.

If you check his .claude, it looks like he is using Claude to review his code. If he needs to include a label for that, how can we be sure that Copilot hasn’t suggested any changes to someone else’s source code before a merge? Or that someone asked ChatGPT for a suggestion? This anti-AI sentiment has to stop.

8 Likes

Asking Google earlier about something I wasn’t fully clear on gave me a short AI answer (yes, they do that), and it helped me solve the puzzle. Does that mean I support it? No, but I can understand it can be a handy add-on, not a replacement.

These discussions have been weighing on my mood lately too, so I’m all for letting users decide and just marking a topic as somehow AI-involved, no matter how much it is involved. Maybe a vote would be a good way to shut these discussions down and settle this once and for all. (mind the typos) :slight_smile:

5 Likes

Maybe having a filter to ignore posts marked with “AI” could come in handy then. :wink:

3 Likes

That is definitely something that bugs me about it, yeah.

It’s not the main thing, but it definitely makes it hard to share a social space with them. :slight_smile:

3 Likes

We have “Software / Native” and “Software / Ports”; we could maybe have “Software / AI-assisted” or something like that?

7 Likes

Why don’t you just create a dedicated thread instead of cluttering this one with off-topic comments? You’re worse than people who use rubber duckies to write code! :rofl:

And here to provide an example of the difficulty in sharing a social space …

Absent a policy that would involve labeling, discussion of the AI-ness of a piece of software is on-topic for any post about a piece of software.

Especially one that failed to mention genAI use in the original post.

3 Likes

I’m not anti-AI. AI can be very useful for some things. I’m against how some people are using it.

1 Like

You’re still off-topic. Have the guts to open a specific thread and we can discuss it there.

Does your employer ask what car you drive to work? Or whether you prefer religion A or B?

These are all OT rants compared to the time a developer spends building a project and sharing it with the community.

The tool itself looks fine. I don’t really encrypt my disks, so I probably won’t use it.

But it seems every time someone says something about AI here, everyone just starts fighting.

As long as the software wasn’t fully generated by AI, I don’t really have a problem.

Well thank you for casting doubt on the ENTIRE open-source movement. :grinning_face:

But really, people. Does EVERY topic now have to be hijacked by the same old AI arguments? I couldn’t blame @dodo75 if he never wrote for Haiku again. He’s not asking for his app to be included in Haiku, or even in HaikuPorts. Is he now not even allowed to mention it on the Forum?

12 Likes

Honestly, I can only see the legal trouble behind it, and that would only apply if it were integrated; in this scenario it’s not being integrated. It was only used to build a piece of software, so there shouldn’t really be a problem. AI being integrated into the OS versus AI being used as a tool to port something to an OS are entirely different things, because nobody is forcing you to use it. That’s the line people tend to forget in this scenario: it’s being used as a tool, it’s not being integrated.

And from my experience, the app actually runs pretty well. The one issue I want to flag is that it’s not compatible with SD cards or USB drives; you need an actual internal storage device to run it, so that probably needs to be worked on. Overall this is a good thing, and if AI eventually gets accepted, with a whole separate category for AI-assisted apps rather than AI-integrated apps, there would probably be an uptick in HaikuPorts contributions.

1 Like

But the encryption algorithm’s implementation can be seriously flawed and lead to vulnerabilities, depending on the algorithm and how much it relies on, e.g., a CSPRNG, key handling, etc.

Not if the algorithm is implemented incorrectly. And that’s indeed the major source of concern here. Full-disk encryption that doesn’t provide at-rest protection is basically worthless.

Sure, but we can absolutely decide that such software is dangerous or falsely advertised or the like, and on those grounds say that it should not be promoted, and then we can take steps to disallow its promotion here or other places we control.

I didn’t say that there were necessarily more of us who are against AI than not, but that the number who are against is “significant”. The ratio is now 6 to 8, which is far too small a sample to indicate much of anything.

Exactly: it looks good. “AI” is very good at creating things that “look good”. But is it really? If you don’t know how to evaluate an encryption algorithm, then it’s quite easy to mistake something bogus for something fantastic. “Snake-oil salesmen” have made a killing on less.

Now, AES is a real encryption algorithm that is very secure – when implemented properly. I don’t have much confidence (read: basically none) that this is implemented properly. So it may “look good” but in practice that may not mean much at all.

No, “claude” is listed as a “co-author” on many commits in the repository to actual code, not just the READMEs.

Not relevant here, because “Claude” was and is clearly used to generate code. We don’t need to allege any secret use of “AI”: it’s right there in the repo history for all to see.

Note that the “pro-‘AI’” crowd largely has only pragmatic arguments, whether positive (e.g. “it makes writing code faster”) or negative (e.g. “it’s impossible to avoid using software made with AI assistance”), while the “anti-‘AI’” crowd has arguments from principles (replacing humans, ethical concerns, etc.) as well as pragmatic arguments. If and when those principles are demonstrated to be false, then the “anti-‘AI’” crowd will have less of a case. But until they are, all the effort to try and put them down on pragmatic grounds isn’t going to work.

In other words: No, it doesn’t have to stop, and it shouldn’t.

And how exactly have I done that?

When you install software, you are putting your trust in the people who developed that software, and the people who evaluated that software’s trustworthiness. Doubly so for software related to security and encryption. We trust projects like OpenSSL not because they’re free of flaws (the CVEs attest to the opposite), but because they’re carefully developed and audited by lots of people who are very much experts in cryptography, who will prevent many flaws from ever arising or correct them quickly when they are discovered. Likewise when you download Haiku, you are trusting that the developers who build Haiku know a thing or two about how to put an OS together, that it won’t be a total waste of your time.

But you can’t trust an “AI”, because it’s not a person. It’s a machine. And machines don’t have any concept of “doing good” or “making mistakes”. They just do what they’re told; unlike humans, who can and very often do act contrary to what they’re told, especially when what they’re told makes no sense at all, or when they know what others are telling them to be false.

Cryptography, even more than most other fields of computing, is built on trust. Trust in many things, but especially that the people writing the encryption code know what they’re doing in a mathematical sense, and aren’t trying to steal your data (or make it possible for others to steal your data.) But a machine knows nothing of these concerns, because machines are machines, and don’t “know” anything.

7 Likes

Until the forum has a formal policy on this issue, probably; where it’s relevant anyway.

I probably would have just ignored this one if it had acknowledged its LLM use in the post, though, so maybe an informal convention of disclosure would keep things settled until there’s a policy?

I can’t speak for anyone else, but just honest labeling is enough for me.

2 Likes

I think it can be tested by passing the same inputs to a reference implementation of the encryption algorithm and comparing results. If the results are byte-for-byte the same, the implementation is fine.

Another topic is the generation of the initialization vector, but that can also be verified.

3 Likes

That only works for “deterministic encryption”, where the output always has the exact same bytes for the exact same input. Standard AES usage is not like that, and will produce different encrypted output for the same input at different times. So validation is a much more complicated process.

2 Likes

AES itself is a deterministic algorithm. It will produce the same output for the same input, key, and initialization vector.

1 Like

I would find it more constructive if the devs here would point out flaws in the code itself, or file constructive bug reports, instead of bickering and declaring the app no good because it was made using AI.

I agree with Michel here that Waddlesplash pulled the rug out from under open source with his post. The source code of the app can be viewed by everyone, audited by everyone, forked by everyone, etc. Just like Linux. Just like Haiku. Is everyone capable of that? No. But if you are, or are interested in doing it, you can. That’s the backbone of code being open source.

2 Likes

Why should I waste time providing code feedback when there’s not an actual developer who will learn from it?

5 Likes

Let’s ask the OP, @dodo75 :

Would you like feedback on your app, so you could learn from it?