Why do LLMs freak out over the seahorse emoji?
https://vgel.me/posts/seahorse/

I don't know if this is relevant, but after reading the article I asked Grok (yes, I know, xAI, but I find it better to experiment on) and it also hallucinated until it was forced to search and correct itself. So, basically, an AI that doesn't have constant access to the internet will hallucinate answers to any question, and it shouldn't be trusted unless there are sources you can check to see whether it's saying anything coherent.

Of course, people who have used or know about AI are already aware of this; I just found the article's technical explanation interesting and wanted to share my own interpretation.
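On the "look and see" point: the ground truth here is actually checkable against an authoritative source, since Python's stdlib ships the Unicode Character Database. A minimal sketch (the particular names checked are just illustrative picks) showing that the two real emoji resolve while "SEAHORSE" does not exist in Unicode at all:

    import unicodedata

    # Verify a model's claim against the Unicode Character Database.
    # unicodedata.lookup() raises KeyError for undefined character names.
    for name in ["SEAHORSE", "TROPICAL FISH", "SPIRAL SHELL"]:
        try:
            char = unicodedata.lookup(name)
            print(f"{name}: {char} (U+{ord(char):04X})")
        except KeyError:
            print(f"{name}: no such character in Unicode")

Running this prints codepoints for TROPICAL FISH (U+1F420) and SPIRAL SHELL (U+1F41A), and "no such character" for SEAHORSE, which is exactly the fact the models fail to retrieve without searching.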
Hallucinations are inevitable: https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html