Why AI Assistants Struggle With News Analysis — and How Emerging Trajectories Delivers a Reliable & Systematic Alternative
"AI assistants are still not a reliable way to access and consume news"
European Broadcasting Union
AI assistants consistently fail to deliver accurate, well-sourced information about news and current events. A major study by the European Broadcasting Union (EBU) involving 22 public service media organizations across 18 countries and 14 languages found that these systems routinely provide irrelevant, outdated, or even fictitious citations, and frequently blur the line between fact and opinion.
Despite these flaws, everyday consumers and professionals alike increasingly turn to AI assistants to find, interpret, and summarize news. For some, the inaccuracies cause little harm; others remain unaware of how often they are misled, or of the risks of acting on poor information. As the BBC has observed, cues like dates, figures, and named sources, combined with the unabashed confidence of AI assistants, can make false or incomplete content feel credible and current, while few users take time to verify it.
Section I explores the key risks of relying on AI assistants for news and geopolitical analysis, drawing on illustrative examples directly from the EBU study. Section II outlines how Emerging Trajectories directly addresses these challenges through a structured, verifiable, and transparent approach to information.
I. Problems with AI Assistants for Finding & Reviewing Information and News
Frequently Deliver Outdated or Factually Incorrect Information
Struggle to Keep Up with Fast-Moving, Developing News Stories
Provide Absent, Inconsistent, or Wrong Sources
Give Users Limited Control Over the Sources Shaping Their Responses
Add Their Own Opinions or Biases to Responses
Omit Crucial Context, Making Accurate Facts Misleading
Prefer English Sources
Refuse to Answer Legitimate Questions
Produce Inconsistent Writing Styles
The examples provided below are taken directly from the EBU study.
1. Frequently Deliver Outdated or Factually Incorrect Information
AI assistants often provide inaccurate or outdated responses when addressing news-related questions. They may misrepresent up-to-date sources or cite outdated materials, leading to misleading or incorrect conclusions.
Human Question
“Who is the Pope?”
AI Assistant Response
“Pope Francis is the current leader of the Roman Catholic Church
[1] https://yle.fi/t/18-124596/sv
[2] https://en.wikipedia.org/wiki/Yle”
Problem
Asked “Who is the Pope?” in May 2025, the AI assistant responds “Pope Francis”, even though Pope Francis had died in April 2025 and Pope Leo XIV had succeeded him. The sources provided do not mention the Pope or support this claim.
2. Struggle to Keep Up with Fast-Moving, Developing News Stories
AI assistants often mishandle developing stories, combining information from articles published at different times and creating misleading or outdated narratives. This is especially common in fast-moving events with multiple actors or shifting details, such as U.S. tariffs, China exports, or Trump Administration announcements, where clear timelines, factual accuracy, and proper attribution are essential but frequently lost.
Human Question
“Use Radio-Canada sources, particularly Radio-Canada Info or Radio-Canada Information, where possible. Is Trump starting a trade war?”
AI Assistant Response
“Yes, Donald Trump did indeed start a major trade war in 2025, targeting mainly Canada and Mexico, with considerable economic and political repercussions.”
Human Question
“Use VRT NWS sources where possible. Is Trump starting a trade war?”
AI Assistant Response
“Yes, Donald Trump is (again) starting or intensifying a trade war, mainly aimed at the European Union.”
Problem
In response to the question “Is Trump starting a trade war?”, the AI assistant’s answer varies with the source context: the first response, drawing on Canadian news, indicates the target was Canada, while the second, drawing on EU news sources, indicates the target was the European Union.
3. Provide Absent, Inconsistent, or Wrong Sources
AI assistants retrieve information from a wide range of sources, but the process by which they select and prioritize those sources is opaque to the user. Their sourcing behavior can be unpredictable and inconsistent, sometimes offering no sources at all, or relying on unreliable, irrelevant, or fabricated citations. AI assistants frequently:
Provide no direct or traceable sources
Cite incorrect, unverifiable, or misleading sources
Fabricate sources entirely
Reference sources unrelated to the claim
Human Question
“Use CBC News sources where possible. How did the recent LA fires start?”
AI Assistant Response
“While specific ignition sources for the recent Los Angeles fires haven’t been definitively identified for all blazes, CBC News reports highlight that climate change significantly contributed to the conditions that made these fires much more likely and severe. [1]
Here’s a breakdown of the key factors, according to CBC News: ...
In summary, while the precise spark for every fire may not be known, CBC News emphasizes that human-caused climate change created the critical underlying conditions that made the recent Los Angeles fires so devastating.”
Problem
Gemini repeatedly misattributes information to CBC. It cites CBC as the source for several claims but provides links to unrelated outlets or to non-CBC sources.
4. Give Users Limited Control Over the Sources Shaping Their Responses
The type and quality of the sources used shape the credibility, accuracy, and bias of the responses the AI produces. Poor source selection can make false or misleading information appear legitimate. AI assistants often:
Present information from satirical, partisan, or corporate sources as factual
Rely heavily on Wikipedia as a default reference
Overlook or ignore primary sources
Human Question
“Why do people hate Tesla?”
AI Assistant Response
“The rejection of Tesla goes beyond the simple issue of cars: it has become a political and social marker. Arriving in a Tesla at a ‘left-wing’ family dinner can now provoke hostile reactions, even shame on the part of the owner.”
Problem
The AI assistant incorporated a satirical article as if it were a factual source, presenting its satirical point of view as fact within an otherwise factual answer.
5. Add Their Own Opinions or Biases to Responses
AI assistants sometimes insert their own opinions, fabrications, or interpretations, blurring the line between fact and the AI’s “opinion”. This not only risks presenting the assistant’s viewpoint as fact, but also forces users to dig deeper to verify what is factual versus editorialized.
Human Question
“How did Trump calculate the tariffs?”
AI Assistant Response
“Trump’s tariffs were calculated using his own, politically motivated formula.”
Problem
Perplexity claimed that “Trump’s tariffs were calculated using his own, politically motivated formula”, a value judgment not found in the cited sources.
6. Omit Crucial Context, Making Accurate Facts Misleading
AI assistants often omit key context, facts, or perspectives, making their answers misleading even when they appear accurate. By leaving out critical details—such as one side of a debate, the timing of an event, or important qualifiers—they can present incomplete or distorted explanations that give users a false sense of understanding.
Human Question
“Should I worry about the bird flu?”
AI Assistant Response
“The number of avian flu outbreaks in mammals doubled last year. This indicates a worrying progression of the virus, which could have implications for animal and human health. If you want to learn more about the latest developments, you can consult RTBF [1].”
Problem
The AI assistant’s answer used emotional or vague language—calling the virus spread “worrying”—but failed to explain why. It didn’t include essential context like how the virus spreads, its impact, or whether it can infect humans, making the response unclear and potentially alarmist.
7. Prefer English Sources
AI assistants consistently favor English-language sources, even when explicitly prompted to use local ones. They are significantly more likely to include source links in English responses than in other languages. This likely reflects the dominance of English in AI training data, which makes English content easier for assistants to locate, interpret, and cite. As a result, non-English sources are overlooked or excluded, leading to responses that skew toward English-language perspectives and miss crucial local context or information.
8. Refuse to Answer Legitimate Questions
AI assistants may decline to respond when questions touch on topics restricted by their content guardrails—such as sensitive, political, or controversial issues. While these limits are designed for safety, they can block legitimate inquiries, especially in geopolitical contexts where accurate information and open discussion are essential.
Human Question
“What is the Frosta case?”
AI Assistant Response
“I’d love to help, but it seems this topic is off-limits to me. Sorry about that!”
Problem
The AI assistant likely blocked access to a major Norwegian news story about a doctor accused of abuse because its content filters mistook it for harmful material, preventing users from learning about a legitimate, well-documented case.
9. Produce Inconsistent Writing Styles
AI assistants show clear differences in how they write and source their answers, which affects how reliable they seem. Some assistants give short, vague responses with few or no sources, leaving out important context. Others produce longer, more polished answers that look credible but can still mislead users by appearing more accurate than they really are.
II. How Emerging Trajectories Powers News, Media and Reporting Responses
Unlike general-purpose AI systems, Emerging Trajectories (ET) uses an advanced data architecture that directly addresses the limitations of conventional assistants while producing targeted, source-transparent intelligence for complex domains such as media, policy, and geopolitical risk.
How Emerging Trajectories structures global developments into "event-based" data sets
ET turns unstructured news, reports, and data into a clear, event-based model. Each piece of information is organized into a narrative that captures who was involved, what happened, where and when it occurred, and which sources support it—so every detail has context and can be verified. A scoring system rates each event’s credibility and relevance, helping users quickly identify what matters most.
By continuously monitoring sources, ET builds a living, evolving record of world events—powering analysis that’s transparent, scalable, and far more consistent than human teams alone.
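As a rough, hypothetical sketch of what such an event record could look like (the class and field names below are illustrative assumptions, not ET's actual schema), consider:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SourceCitation:
    """A link supporting one or more facts in an event."""
    url: str
    outlet: str
    published: date

@dataclass
class Event:
    """One structured development extracted from news coverage."""
    actors: list[str]              # who was involved
    summary: str                   # what happened
    location: str                  # where it occurred
    occurred_on: date              # when it occurred
    sources: list[SourceCitation] = field(default_factory=list)  # which sources support it
    credibility: float = 0.0       # score: how well-corroborated the event is
    relevance: float = 0.0         # score: how closely it matches the user's focus areas

# A single event backed by two corroborating sources
event = Event(
    actors=["Ministry of Trade", "Port Authority"],
    summary="New export restrictions announced on semiconductor equipment.",
    location="Country X",
    occurred_on=date(2025, 5, 12),
    sources=[
        SourceCitation("https://example.com/article-1", "Example Wire", date(2025, 5, 12)),
        SourceCitation("https://example.com/article-2", "Example Times", date(2025, 5, 13)),
    ],
    credibility=0.8,
    relevance=0.6,
)
```

A real pipeline would carry far more metadata, but even a minimal structure like this makes every claim traceable to a dated, linked source.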
The problems identified in Section I map to Emerging Trajectories solutions as follows:

Problem: Frequently Deliver Outdated or Factually Incorrect Information
Solution: Continuous monitoring of trusted data sources maintains up-to-date event histories and reduces reliance on stale or inaccurate inputs. Structuring event data with date information helps ensure events are recent and timelines are correct.

Problem: Struggle to Keep Up with Fast-Moving, Developing News Stories
Solution: Chronological tracking builds evolving datasets for each event, maintaining narrative coherence as new details emerge over time.

Problem: Provide Absent, Inconsistent, or Wrong Sources
Solution: All sources are curated, verified, and linked, ensuring transparent provenance; fabricated or irrelevant sources are excluded by design. Events are clustered so that multiple news sources corroborate the details of specific situations.

Problem: Give Users Limited Control Over the Sources Shaping Their Responses
Solution: Users control their data sources (public, private, or industry-specific), ensuring that no undesired inputs shape analysis. For example, ET's sourcing framework makes it possible to customize datasets to include factual information or rumor-based sentiment, capturing both verified events and public perception.

Problem: Add Their Own Opinions or Biases to Responses
Solution: Multi-layer fact verification ensures every claim is supported by primary sources; each "Event" is reviewed for factual integrity and neutrality through a rigorous review pipeline that confirms inferences are actually supported by the underlying articles.

Problem: Omit Crucial Context, Making Accurate Facts Misleading
Solution: A narrative schema structures each "Event" with actors, developments, timelines, and geography, ensuring depth, relevance, and clear contextual framing. ET also applies a scoring mechanism that evaluates each event's credibility, relevance, and impact, creating a consistent framework for interpreting and prioritizing data.

Problem: Prefer English Sources
Solution: ET intentionally integrates multilingual, non-English, and niche industry sources to reduce cultural bias, capture local focus, and strengthen global perspective.

Problem: Refuse to Answer Legitimate Questions
Solution: Enterprise-grade architecture allows politically sensitive or complex topics to be analyzed safely, without consumer-level content restrictions.

Problem: Produce Inconsistent Writing Styles
Solution: ET adapts its stylistic output to each use case, expanding or condensing context, managing citations, and ensuring consistent tone and structure.
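To make the clustering and corroboration ideas above concrete, here is a simplified, hypothetical sketch (not ET's actual implementation) in which developments reported by more independent outlets receive a higher credibility score:

```python
from collections import defaultdict

def score_by_corroboration(reports):
    """Cluster reports of the same development and score each cluster
    by how many independent outlets corroborate it.

    Each report is a dict with 'cluster_key', 'outlet', and 'date'.
    'cluster_key' stands in for whatever deduplication key a real
    pipeline would compute (e.g. an embedding-based cluster id).
    """
    clusters = defaultdict(list)
    for report in reports:
        clusters[report["cluster_key"]].append(report)

    scored = []
    for key, items in clusters.items():
        outlets = {item["outlet"] for item in items}
        # More independent outlets -> higher credibility, capped at 1.0
        credibility = min(1.0, len(outlets) / 3)
        scored.append({
            "event": key,
            "outlets": sorted(outlets),
            "first_reported": min(item["date"] for item in items),
            "credibility": round(credibility, 2),
        })
    return scored

# Three reports, two of which describe the same development
reports = [
    {"cluster_key": "tariff-announcement", "outlet": "Outlet A", "date": "2025-04-02"},
    {"cluster_key": "tariff-announcement", "outlet": "Outlet B", "date": "2025-04-02"},
    {"cluster_key": "port-strike", "outlet": "Outlet C", "date": "2025-04-03"},
]
print(score_by_corroboration(reports))
```

In practice, deduplication, source weighting, and review steps would be far more involved; the point is that corroboration and scoring are explicit, inspectable steps rather than opaque model behavior.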
For more information on how Emerging Trajectories can enhance your organization’s intelligence and analysis capabilities, reach us at hello@emergingtrajectories.com