Generative AI can seem trustworthy, yet be thoroughly wrong
Language-based AI tools such as ChatGPT, Claude and Copilot often appear confident and articulate. That's precisely why they're also risky: they can present invented claims as facts, and do so without hesitation. The phenomenon is known as hallucination, and it is one of the reasons people are not becoming superfluous, but all the more important.
When AI invents things
These models are capable of producing entire research articles with seemingly credible authors, titles and references. The problem is that much of it can be fiction: content that looks serious but lacks grounding in reality. A well-known example came in 2023, when a lawyer in New York used ChatGPT for legal research. The documents he submitted cited court decisions that had never taken place. The mistake was quickly exposed in court, and the case drew international attention. The incident illustrates a point many people overlook: AI doesn't know when it's wrong.

Why it happens
The reason lies in how language models are built. They have no human-like understanding of the world; they evaluate probability, not truth. According to Martin Jensen, Head of AI and Transformation at TRY, the language model is just the core of a larger system: it generates text based on patterns in huge amounts of data, without assessing whether the content is actually correct.
A modern language model can deliver convincing answers, but it is optimized for the most probable response, not the most correct one. When the training base consists of the entire internet, with its mix of quality content, propaganda, errors and pure invention, the result is also unpredictable.
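To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The continuations and scores are invented, not taken from any real model; the point is only that the model picks the most probable continuation, and nothing in that step checks whether the winner is factually true.

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for continuations of the prompt "The capital of Australia is ..."
continuations = ["Sydney", "Canberra", "Melbourne"]
scores = [2.1, 1.9, 0.5]  # made-up numbers; here the wrong answer scores highest

probs = softmax(scores)
best = continuations[probs.index(max(probs))]

print({c: round(p, 2) for c, p in zip(continuations, probs)})
print("Model output:", best)  # "Sydney": most probable under these scores, but factually wrong
```

Scaled up to a full model, the principle is the same: the output is the statistically likely continuation, not a verified fact.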
This is why AI can present completely fabricated research references with the same authority as real sources. The same goes for analyses, assessments and recommendations: the form is solid, but the foundation can be weak.
Skepticism is also evident among users. Many find it challenging to verify information from AI, and trust in AI-generated content is lower than before. And when reports also show bias in how AI assesses people, the issue is not just factual errors but practical consequences.
Why people are becoming more important
It is important to distinguish between different types of artificial intelligence. Rule-based systems deliver predictable results; hallucination is mainly an issue for open-ended language models used for text and dialog.
Even advanced AI agents need human oversight. Asking AI to quality-assure itself only partly works, and responsibility cannot be automated away.
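The limits of self-checking can be sketched in code. The outline below is hypothetical: generate() is a stand-in for whatever language-model API is used, not a real library call. The draft and the review come from the same kind of probabilistic model, so the review can itself miss or invent problems.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language-model API."""
    raise NotImplementedError("plug in a real model provider here")

def draft_with_self_check(question: str) -> dict:
    """Draft an answer, then ask the same kind of model to review it."""
    draft = generate(question)
    review = generate(
        "Review the answer below and list any claims or references "
        "that cannot be verified:\n\n" + draft
    )
    # The review comes from the same probabilistic mechanism as the draft,
    # so it can reduce risk but does not replace human verification.
    return {"draft": draft, "self_check": review}
```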
Hallucination is not something you can fix with a simple update. It’s part of how language models work. That’s why human judgment is a prerequisite.
The article is based on insights from TRY's report "AI anno 2025 – A deep dive into creativity, ripple effects and profitability".