Artificial Intelligence is the current hype, and we have seen the first tools of this kind, such as ChatGPT. Strictly speaking, this is just a subset of AI, one intended to answer questions. Such an ‘AI’ generates text from the content of the internet. We’ve heard a lot about it in the media, and a few have already tested it. What nobody seems to have understood: the main achievement of ‘AI’ is not to know, but to formulate!
This machine does not know because it is not an AI. It is a search engine, a huge amount of text in a database. What the ‘AI’ does is:

1. search this database for content matching the question, and
2. formulate the result as a fluent text.

The second point is the actual achievement; search engines have been around for 20 years.
So again, the ‘AI’ formulates a beautiful text. And we automatically assume that the content is just as beautiful. We trust a beautifully printed book more than a photocopied leaflet. A paperback more than a dime novel. Even if it contains the same novel.
The problem, however, is that the content is only partially trustworthy because it is a search result from a search engine. Just search for a term and look at the Google result, and you’ll know what I mean.
Some time ago I landed on a site created in exactly this way. But how was I supposed to know? It wasn’t stated anywhere, and the layout and prose were almost flawless. Almost, because there was one flaw: the content was mostly wrong. But if you don’t know anything about the subject and find information on such a site, you have to believe it.
I had a whole range of other sources, and this site was the only one telling nonsense. I could establish this because removing this single page from my sources suddenly made the result consistent: this one page claimed something different and thus became untrustworthy by majority decision. Now, the credibility of a text is not something that should be decided by majority vote, so this solution is suboptimal. Above all, it won’t be long before there are more generated pages than genuinely written ones, simply because generating is much cheaper. And then the page that claims something different from all the others will probably have to be regarded as the one that was not generated.
This is the great danger of ‘AI’, that we will be flooded with nonsense which is not recognisable as nonsense. We will live in an arbitrary illusory world in which truth becomes blurred and is replaced by false appearances.
And this problem is well known and already has a name: hallucination, or artificial hallucination. It is also called bullshitting, confabulation or delusion. The topic even has a very long Wikipedia page.
The following examples were certainly generated by AI, although the sites don’t admit it. It is impossible to imagine a human being writing such outrageous nonsense. A huge cave church with frescoes is described, while the Google Maps photos show a narrow, sooty corridor with icons. A church built in a small cave in 1947 becomes a massive building from the Middle Ages. Lava caves suddenly have stalactites and lie in areas with no volcanic activity. The list is long and depressing.