My controversial opinion about AI

The biggest drawback of these LLMs is that the information they provide is not guaranteed to be correct or complete.

Anyway, for research, it is an interesting tool.

I have noticed that in areas where you are more knowledgeable, this tool is of little value, and in areas where you are ignorant, it is a dangerous deceiver!

4 Likes

Yes, and this means AI is useless in (advanced) scientific research.

AI is helpful for searching what has already been done in some area (by humans). It can find, aggregate, explain in simpler wording, and prepare an abstract of already existing knowledge.

AI does not produce new knowledge. AI is not capable of reasoning; it does not pose questions, has no doubts, and is not connected to reality. AI has no common sense, and almost all of the above is necessary to push knowledge forward. AI is only able to present someone else’s reasoning, questions, doubts, and common sense.

If a researcher acted according to the algorithms used by AI, he would be accused of plagiarism. In human academic society this behavior is condemned, but the very same behavior is praised when done by AI.

This would be a perfectly acceptable result… but AI also generates and transmits its own errors and those of others!

2 Likes

Well, my personal opinion is: humanity will accept AI as truly “intelligent” if/when natural intelligence has degraded badly enough. I would not love to live in that world.

And this is the true danger of the path modern AI is taking. In human society, the Law of Conservation of Intelligence holds. It states: “The more intelligent the tools and gadgets used by a human, the less intelligent the human. So the sum of the intelligence of a human and his aids is preserved.”

The name AI is misleading, because there is no real “intelligence” in that thing: “intelligence” is impossible without understanding, and things can’t understand. So “intelligent”, “smart” tools and gadgets exist only in the mouths of the advertisers of those products.
… humans will be ok.

1 Like

If people act like they can make gods or politicians out of computer parts, I’m calling it out even if it is politics and religion both. Bots have no place in either location.

I don’t agree with that. Quite the opposite, in fact. Our brains have a limited capacity to do things. Some things they are very good at, and unmatched by current technologies. Some other things are done much more efficiently by other means, and continuing to use your inefficient brain for it is a waste of time.

By combining humans and computers in a good way, you can be a lot more efficient and productive. With the internet you have access to unlimited information. With computers you don’t need to perform computations by hand. However, you need new skills: using the computer efficiently, and organizing things so you can find them.

Now the AI people are trying to replace one more thing and move it into the computer. But so far they are not quite successful at doing it more efficiently.

Also, when comparing to the “good old days”, you have to remember that there were armies of people doing low-pay jobs to keep things working. “Computers” used to be people doing computations by hand. Secretaries took meeting notes or transcribed audio recordings. If you go a few years further back, it would be their wives taking care of the home so they didn’t have to think about that. So if you compare one modern human augmented by a computer to one human from those days without all this helping staff, you will see that the computer certainly helps each human get a lot more done.

And that even assumes this is a metric worth optimizing. What’s the goal? Augmenting humans to be more productive and profitable? Advancing scientific research? Letting the computers do the boring things while we enjoy a life of leisure? So far, the current LLM things are too unreliable and I think their overall effect is negative. But that’s not the case for some other computer-based technologies.

2 Likes

@PulkoMandy, I do not say AI is useless, and I gave some examples of good uses of it. AI is a tool worth exploring in any meaningful activity, just like the computer. I am saying AI is not intelligent, so the term is misleading.

And so, the usual capability of performing computations by hand tends to vanish.

And so, humans slowly lose the ability to write by hand.

Indeed, with computers many tasks are done more quickly, more exactly, and at higher quality. On the other hand, there are as many jobs that no longer exist because they were replaced by the computer as there are completely new jobs introduced to maintain computers.

Generally speaking, in the days of the Steam Revolution, the prediction was: “Humans will have a 4-day working week and produce sufficient stuff to live.” This did not happen. Similarly, when computers and the internet emerged, the prediction was even braver: “Just 2 hours of work per day will be sufficient.” This also did not happen. This means the quantity of human labor is not much affected by the introduction of smart tools. I agree, however, that the quality of human labor has changed significantly, and humans prefer modern labor, which makes them less capable in certain domains.

3 Likes

So far it seems to actually make things worse. It costs more energy to produce a lower quality result in a lot of cases.

Instead, as a society we chose, with capitalism, to be more productive, to some extent. Here in France, the working week is now only 5 days instead of 6, we have 5 weeks of paid leave (this didn’t exist during the industrial revolution), and the working week is 35 hours (which could be done in 4 or 4.5 days, but usually people will instead have shorter days). But also, productivity per employee is way up. Recently it took a dip due to Covid, and it has not recovered from that yet.

We could surely decide to do 4-day weeks, and indeed some companies are doing it. The result is that their employees are happy about it and even more productive (per week, even taking into account that they work 20% less time) than in other places.

So I don’t think the problem here is that computers and automation make people less efficient. There is a choice to keep working hours somewhat high, either based on the perception (proven false, but still believed) that more hours = more work done, or to keep people busy at work and make sure they don’t start a revolution or something.

5 Likes

Truer words have not been spoken. It’s not intelligence, it’s just polished regurgitation.

1 Like

Based. I don’t think it’s something that should be incorporated into everything, but that’s something the market will sort out.

2 Likes

Okay? That sounds like a problem for the web hosters and AI companies to figure out.

AI is not useless, as it is a collective product of information meant to aid us in tasks and decisions. Very similar in purpose to various consumer products, as well as books, libraries, and schools.

We’ve had AI for quite some time in various consumer products… Products with “auto-sensing” or “ask me” features.

Yet, no “true” intelligence (aka “out-of-the-box” intelligence) - beyond inherited programming…

1 Like

I test different AIs regularly, both ones I’ve already tested before (to see if they are improving) and new ones I was not aware of. Usually I do that when I am developing something, not because I want AI to help me but to see how it performs in “real world” situations. In fact, I have been doing that from time to time for about two years now. Call it curiosity.
I will not mention any particular AI here, because I don’t want to sound like I am attacking or promoting any of them. Also, obviously this is not a survey but rather part of my experience from testing AIs. I will restrict it to just two example cases.

A few days ago, I spent some time finding something I needed about a rather popular C library. Then I asked an AI as well, and it gave me a wrong answer, as is usually the case before guiding it. I gave it a hint, but still got another wrong answer. After a few iterations, I was expecting it to fall into the endless loop, giving the exact same answer as before even as I kept telling it “you are just repeating the same wrong answer”.
That particular AI, though, actually gave up, openly admitted it could not find the correct answer, and asked me what the answer was. I could not resist the temptation: I replied with nonsense, just presented reasonably (a thing AI is notorious for). Call it revenge. :grin:
Guess what: it happily accepted my nonsense and even used it to write an example program. The program was actually well written, but based on my nonsensical answer. It didn’t try to verify my nonsense. Then I told it I was joking, and gave it the real answer. As a small exercise, I also asked it to use that information to solve a side problem concerning computer arithmetic in general. It was not hard to solve, but I can easily think of some students failing to solve it. That AI actually found the solution, and reasoned it out well. I am guessing it used info gathered from StackOverflow or something, since this was a general question you can find info on everywhere. But the fact is, the AI actually solved that little problem.

In another case, just yesterday, I asked yet another new AI to write a C function that should use a rather niche library for timing. Its answer was incorrect because it assumed that library measures time in CPU cycles. I asked “are you sure it’s not milliseconds?”. It said “yes, I am certain”, and added a long “reasoning” text on why. I said “you are wrong, look at the library implementation”, which I had to do just before I asked. It kept saying it was certain, with yet another long text of reasoning. Then I pointed out where exactly it should look. This time it admitted the time returned is indeed in milliseconds. It even pointed out some details I was planning to ask about next. I checked that and it was correct. That actually saved me a few minutes.
In this case the AI failed to find the correct answer to questions about the niche library, apparently because there is not much info about it on the web. It also failed to “think” about where to look in the library’s source files. But in the end, after guiding it, the AI could find and explain the correct answer, and it even saved me some boredom looking at source code.
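To make the units mix-up concrete: the library isn’t named here, so this is a purely illustrative C sketch with a hypothetical get_ticks() standing in for its timer. The point is simply that a value in milliseconds and a value in CPU cycles differ by many orders of magnitude, so code built on the wrong assumption is useless.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the niche library's timer (the real library is
 * not named above). Per its source it returns elapsed MILLISECONDS, not CPU
 * cycles; here it is faked with the C standard clock(). */
static uint64_t get_ticks(void)
{
    return (uint64_t)(clock() * 1000.0 / CLOCKS_PER_SEC);
}

/* Something measurable to time. */
static void busy_work(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;
}

int main(void)
{
    uint64_t start = get_ticks();
    busy_work();
    uint64_t elapsed_ms = get_ticks() - start;

    /* Treating elapsed_ms as CPU cycles (the AI's mistake) would mean
     * dividing by the CPU frequency, giving a result off by a factor of
     * millions. */
    printf("elapsed: %llu ms (%.3f s)\n",
           (unsigned long long)elapsed_ms, elapsed_ms / 1000.0);
    return 0;
}
```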

My general impression is that AI is indeed improving a bit. But you should never forget what it is, and never ever trust it. Never assume its answers are correct; always verify them. A certain AI even openly says so.
In “real-world” situations, where you really look for an answer, you still need to guide it, and with proper guidance plus a pinch of luck it can even save you some time. But overall, I don’t think the time needed to guide AI and verify its answers is worth it. Most of the time, you will find what you are looking for faster without it.

3 Likes

There’s not much left to figure out.
For decades, websites have been using robots.txt files that tell search engine scrapers which content they are allowed to index; some AI scrapers simply ignore them and scrape anything they can.
Website owners noticed that and started blocking the User-Agents of misbehaving AI scrapers, but those bots started pretending to be Chrome to circumvent the blocks.
Website owners noticed that too and started blocking the IP address ranges of those AI scrapers, but the bots started renting proxies on thousands of residential IP addresses (probably through some botnet of insecure IoT devices) to circumvent even that.
Now we’re in a situation where you have to either deploy a proof-of-work challenge (Anubis) that slows down bots but also annoys legitimate users, or hand control to a Big Tech gatekeeper like Cloudflare hoping that they can handle bot detection better, or just close down your nice little hobby website because those AI companies with basically endless investor money have much more power than you.
AI scrapers of at least some companies (like Perplexity) are outright hostile to the open web.
There’s a reason even the Haiku bug tracker and cgit require a proof of work challenge now.
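To give an idea of what the User-Agent blocking stage looks like before the bots start spoofing Chrome, here’s a minimal nginx sketch (inside a server block; the tokens shown are crawler identifiers the vendors publish themselves, and real block lists are far longer):

```
# Refuse requests whose User-Agent matches known AI crawler tokens.
# Bots that spoof a browser User-Agent slip straight through this,
# which is exactly the escalation described above.
if ($http_user_agent ~* "GPTBot|CCBot|Bytespider") {
    return 403;
}
```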

1 Like

Okay? Your point?
Welcome to the future – problems are neither created nor destroyed, they merely change state.

1 Like

My point is that the behaviour of AI companies is hostile to the open internet and therefore they should not be supported.
Do you also buy cheap clothing made by children in Africa, or buy single-use plastic stuff and throw it into the ocean?
I guess not, so why support the awful behaviour of companies in the digital world?
If you don’t see the point in requiring some ethical standards, then scroll on, nothing to see here. For me, that stuff matters.

5 Likes

Exactly how is scraping a website any different than a person visiting all its pages?
I’m just not sold on this idea that AI scrapers are drawing this much bandwidth, unless their scraper is that poorly optimized/written. I mean, search engines have to do the same thing, yet they’ve never been an issue.

Their scrapers being poorly optimized/written is exactly the issue.
Search engine crawlers are not an issue because they respect robots.txt instructions and the crawl delays specified there, and even without special rules, they visit only one page every few seconds.
The crawled page is then cached and only visited again after one or more days (you can also control that in robots.txt).
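For reference, the robots.txt mechanism being ignored here is very simple. A minimal example might look like this (Crawl-delay is a de-facto extension honored by some crawlers rather than part of the original standard, and GPTBot is one publicly documented AI crawler token):

```
# Ask all well-behaved crawlers to pace themselves and skip private areas
User-agent: *
Crawl-delay: 10
Disallow: /private/

# Tell a known AI crawler to stay out entirely - compliance is voluntary
User-agent: GPTBot
Disallow: /
```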
Search engine crawlers are written carefully to avoid overloading webservers, because they need those websites for the search engines to work.
AI scrapers don’t care.
They request exactly the same page over and over again, request thousands of pages every second from different IPs, and basically act like a DDoS attack.
And they don’t care if stuff breaks; if they get blocked, they just move to their botnet on residential IPs.
There are many good articles online about this problem; here’s one on how it especially hurts FOSS infrastructure:

It also gives examples of the scale of the problem, like one crawler downloading 73TB in a month, which cost the website $5000, or blocking AI crawlers reducing traffic from 800GB/day to 200GB/day, saving $1500 per month.

3 Likes

There is plenty of evidence. On my website it was tens of gigabytes a day, for months on end, until I blocked various IP ranges and set up firewall rules to stop them. Now I’m back to <1GB per day, as my website is not content-heavy, just blog posts that don’t need so much space. Most of that traffic is still regular search engine and SEO crawlers, mind you, but the LLM things surely took it to a whole other level.
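For anyone wanting to do the same, the firewall part can be a handful of iptables rules. A sketch, using reserved documentation ranges as placeholders since the real ranges have to come from your own access logs:

```
# Drop all traffic from IP ranges identified as AI scrapers in the logs.
# 203.0.113.0/24 and 2001:db8::/32 are reserved documentation ranges,
# shown here only as placeholders.
iptables  -A INPUT -s 203.0.113.0/24 -j DROP
ip6tables -A INPUT -s 2001:db8::/32  -j DROP
```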

5 Likes