AI hallucinations

The term AI hallucination refers to the phenomenon in which an AI system perceives patterns that do not exist and produces answers that are inaccurate or entirely fabricated. It happens when the model does not ground its answer in the data it actually has, either because it misinterprets the relevant pattern or because it fails to find one at all.

To better understand this phenomenon, we first need to look at the models on which today's AI is based: large language models, or LLMs. These are neural networks built to understand text and generate answers from the data they were trained on. When that data is flawed, or the model fails to find a suitable pattern, the answer it produces is often nonsensical or entirely fictional. In practice there are many examples, ranging from incorrect or completely non-existent information to the Eliza effect (the tendency to attribute human emotions and understanding to artificial intelligence).

The main danger of AI hallucinations lies in the human factor, that is, in the harm that can follow if we rely too heavily on every answer we receive. Such hallucinations can be hard to spot, because the answer often looks extremely convincing. The only real remedy is to check each answer manually or to use systems designed to catch this phenomenon, such as TruthChecker.
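To make the idea of checking answers concrete, here is a minimal sketch in Python (not part of the original article, and not how TruthChecker itself works) of a very crude grounding check: it flags sentences in a model's answer whose vocabulary has little overlap with a set of trusted source passages. The threshold and the overlap scoring are illustrative assumptions only; real fact-checking systems are far more sophisticated.

```python
import re

def sentences(text):
    """Split text into rough sentences (very simple, for illustration only)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(text):
    """Lowercase word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def support_score(sentence, sources):
    """Fraction of the sentence's words that appear in the best-matching source."""
    sent_words = words(sentence)
    if not sent_words:
        return 0.0
    return max(len(sent_words & words(src)) / len(sent_words) for src in sources)

def flag_unsupported(answer, sources, threshold=0.5):
    """Return sentences whose overlap with every source falls below the threshold."""
    return [s for s in sentences(answer) if support_score(s, sources) < threshold]

# Example: two trusted passages and a model answer containing one unsupported claim.
sources = [
    "Large language models are neural networks trained on large amounts of text.",
    "Hallucinations are answers that are not grounded in the model's training data.",
]
answer = (
    "Large language models are neural networks trained on text. "
    "They were invented in 1962 by a team of marine biologists."
)
for claim in flag_unsupported(answer, sources):
    print("Check manually:", claim)
```

Running this prints the fabricated second sentence as something to verify by hand, which is exactly the habit the article recommends.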

How often these hallucinations occur is hard to say. According to data from Vectara, an American AI startup, the phenomenon occurs in roughly 3% to 27% of responses, depending on the model. Various companies have started adding disclaimers warning users that false or fabricated information may appear, and asking them to report suspected errors and verify the information across multiple sources.

Since AI hallucinations cannot be eliminated entirely, there are several ways to reduce their likelihood and increase the reliability of the answers we receive. The first is to improve the quality of the data the AI is trained on. The second is more thorough testing of the model, by experts and users alike, before it is put into use, so that errors are caught in time. Finally, responsible use of such programs, illustrated in the sketch below, can improve the accuracy of the information obtained and thereby reduce hallucinations.
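One common practical pattern along these lines is to constrain the model to answer only from supplied source text and to admit when the answer is not there. The Python sketch below shows only the prompt construction; `call_model` is a hypothetical placeholder for whatever model API is actually used, and the sources and question are illustrative. This reduces, but does not eliminate, hallucinated answers.

```python
def build_grounded_prompt(question, sources):
    """Build a prompt that asks the model to answer only from the supplied sources
    and to say so explicitly when the sources do not contain the answer."""
    numbered = "\n".join(f"[{i + 1}] {src}" for i, src in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, reply exactly: "
        "'The provided sources do not answer this question.'\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def call_model(prompt):
    """Hypothetical stand-in for a real LLM API call; replace with a provider's client."""
    raise NotImplementedError("Plug in the model provider's API client here.")

# Example usage (sources and question are illustrative only):
sources = [
    "Vectara is an American AI startup that publishes data on hallucination rates.",
    "Reported hallucination rates vary between roughly 3% and 27% depending on the model.",
]
prompt = build_grounded_prompt("How often do AI hallucinations occur?", sources)
print(prompt)  # inspect the constrained prompt before sending it to a model
```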

AI technology is a challenge that should be approached carefully, precisely because it is one of the most powerful tools available today. AI hallucinations arise from inherent limitations of large language models. While researchers work to reduce these mistakes and improve the quality of the answers we are given, it is up to us to use AI with caution and to treat the answers we receive with a healthy amount of doubt.

Darinka Bulatović
Women4Cyber Montenegro
