A new international study has revealed that AI assistants make widespread errors when answering questions about the news, with nearly half of all responses containing at least one major inaccuracy and over 80% showing smaller issues in sourcing or interpretation.
The research, which analysed 3,000 prompts across 14 languages, found that assistants such as ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity often struggle to identify credible sources and to distinguish fact from opinion, as outlined in a recent report.
The most frequent issue involved misattributing or inventing sources, with one AI assistant showing sourcing errors in more than 70% of its responses.
Others produced factual mistakes in around 20% of cases, including false claims about policy decisions and misidentified government leaders, errors that could easily mislead users who rely on these tools for up-to-date information.
An independent review from the European Broadcasting Union echoed these findings, describing the errors as “systemic and language-agnostic”.
What makes this so worrying is that people, especially younger generations, are increasingly turning to AI chatbots for their daily news updates.
Studies show that around 15% of users under 25 already prefer AI assistants over traditional news outlets.
This trend raises concerns that misinformation could spread faster and appear more credible when wrapped in the polished tone of an AI-generated answer, an issue explored further in a recent investigation.
Experts say this problem isn’t just about bad data; it’s about how AI systems are designed. Large language models generate fluent, confident prose, but as recent findings have pointed out, their confidence often masks inaccuracy and creates a false sense of authority.
Researchers warn that this “illusion of truth” effect could erode public trust in both journalism and technology.
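To make that mechanism concrete, here is a toy sketch (my own illustration, not code from the study): a language model scores candidate next words and emits the most probable one, and nothing in that computation consults a fact. The names and scores below are invented.

```python
import math

# Invented scores a model might assign to candidate completions of
# "The country's prime minister is ..." based purely on training-data patterns.
logits = {"Smith": 3.1, "Jones": 2.2, "Patel": 1.9}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {token: math.exp(s) for token, s in scores.items()}
    total = sum(exps.values())
    return {token: e / total for token, e in exps.items()}

probs = softmax(logits)
answer = max(probs, key=probs.get)
print(f"Model answers '{answer}' with {probs[answer]:.0%} confidence")
# The output sounds decisive, but the probability reflects how often a
# pattern appeared in training text, not whether the claim is still true.
```

If stale data happened to rank a former leader highest, the model would assert it with exactly the same polish, which is the false authority the researchers describe.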
From my perspective, this feels like a turning point. I love how fast and accessible these AI tools make information, but let’s be real: we’re not there yet.
They’re like overeager students: bright, articulate, and persuasive, but still capable of confidently giving the wrong answer.
Until these systems can reliably verify their sources, the smartest move for anyone using AI for news is to double-check before sharing.
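For readers who want to automate part of that habit, here is a minimal sketch (assuming Python and the requests library; the URL is a placeholder, not a real citation) that checks whether a source an assistant cites actually resolves. A live link is only a first filter, of course; it says nothing about whether the page supports the claim.

```python
import requests

def source_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL responds with a success status."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        return response.ok
    except requests.RequestException:
        return False

# Placeholder citation pulled from a hypothetical AI answer.
cited_url = "https://example.com/news/some-story"
if source_resolves(cited_url):
    print("Citation resolves; now read it and check it supports the claim.")
else:
    print("Citation is dead or invented; don't share the answer.")
```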
In the end, AI assistants might someday rival human editors, but for now they’re only as trustworthy as the humans keeping an eye on them.


