But the encryption algorithm’s implementation can be seriously flawed and lead to vulnerabilities, depending on the algorithm and how much it relies on things like a CSPRNG, key handling, and so on.
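To make that concrete, here’s a minimal sketch (in Python, purely illustrative, not taken from the project in question) of why the CSPRNG detail matters: a key generated with a non-cryptographic RNG looks exactly as random as a proper one, but is predictable to an attacker.

```python
import random   # NOT cryptographically secure
import secrets  # backed by the OS CSPRNG

# Flawed: the Mersenne Twister is predictable; anyone who recovers its
# internal state can reproduce every "random" key it ever produced.
weak_key = random.randbytes(32)

# Correct: secrets draws from the OS CSPRNG (e.g. /dev/urandom).
strong_key = secrets.token_bytes(32)

# Both are 32 bytes of random-looking data -- the flaw is invisible
# just by looking at the output.
print(weak_key.hex())
print(strong_key.hex())
```

And that’s the problem: you can’t spot this kind of mistake by running the software and seeing that it “works”.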
Not if the algorithm is implemented incorrectly. And that’s indeed the major source of concern here. Full-disk encryption that doesn’t provide at-rest protection is basically worthless.
Sure, but we can absolutely decide that such software is dangerous or falsely advertised or the like, and on those grounds say that it should not be promoted, and then we can take steps to disallow its promotion here or other places we control.
I didn’t say that there were necessarily more of us who are against AI than not, but that the number who are against is “significant”. The ratio is now 6 to 8, which is far too small a sample to indicate much of anything.
Exactly: it looks good. “AI” is very good at creating things that “look good”. But is it really? If you don’t know how to evaluate an encryption algorithm, it’s quite easy to be convinced something that looks fantastic is sound when it’s really bogus. “Snake-oil salesmen” have made killings on less.
Now, AES is a real encryption algorithm that is very secure – when implemented properly. I don’t have much confidence (read: basically none) that this is implemented properly. So it may “look good” but in practice that may not mean much at all.
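For the sake of illustration, here’s roughly what “implemented properly” hinges on (a hypothetical sketch using Python’s third-party `cryptography` package, nothing to do with the repository being discussed): AES-GCM itself is sound, but details like never reusing a nonce under the same key are what separate a secure implementation from one that merely “looks good”.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt(plaintext: bytes) -> bytes:
    # A fresh 96-bit nonce per message is essential: reuse one under the
    # same key and an attacker can recover plaintext and forge messages,
    # even though the ciphertext still "looks" perfectly random.
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

print(decrypt(encrypt(b"secret")))
```

None of those failure modes show up in casual testing, which is exactly why “it looks good” tells you almost nothing.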
No, “claude” is listed as a “co-author” on many commits to actual code in the repository, not just the READMEs.
Not relevant here, because “Claude” was and is clearly used to generate code. We don’t need to allege any secret use of “AI”: it’s right there in the repo history for all to see.
Note that the “pro-‘AI’” crowd largely has only pragmatic arguments, whether positive (e.g. “it makes writing code faster”) or negative (e.g. “it’s impossible to avoid using software made with AI assistance”), while the “anti-‘AI’” crowd has arguments from principle (replacing humans, ethical concerns, etc.) as well as pragmatic ones. If and when those principles are demonstrated to be false, the “anti-‘AI’” crowd will have less of a case. But until then, all the effort to try to put them down on pragmatic grounds isn’t going to work.
In other words: No, it doesn’t have to stop, and it shouldn’t.
And how exactly have I done that?
When you install software, you are putting your trust in the people who developed that software, and the people who evaluated that software’s trustworthiness. Doubly so for software related to security and encryption. We trust projects like OpenSSL not because they’re free of flaws (the CVEs attest to the opposite), but because they’re carefully developed and audited by lots of people who are very much experts in cryptography, who will prevent many flaws from ever arising or correct them quickly when they are discovered. Likewise, when you download Haiku, you are trusting that the developers who build Haiku know a thing or two about how to put an OS together, and that it won’t be a total waste of your time.
But you can’t trust an “AI”, because it’s not a person. It’s a machine. And machines don’t have any concept of “doing good” or “making mistakes”. They just do what they’re told; unlike humans, who can and very much do act contrary to what they’re told, especially if what they’re told makes no sense at all, or they know that what they’re being told is false.
Cryptography, even more than most other fields of computing, is built on trust. Trust in many things, but especially that the people writing the encryption code know what they’re doing in a mathematical sense, and aren’t trying to steal your data (or make it possible for others to steal your data). But a machine knows nothing of these concerns, because machines are machines, and don’t “know” anything.