A report that circulated on Perplexity AI's news feed, claiming that a SpaceX Starship explosion had occurred on December 21, 2025, stirred up the community. Users quickly flagged the story as fake: no such event took place on that date, or at all.
The issue was first spotted by a Redditor who shared a screenshot of the fake Starship explosion story. The comments quickly filled with similar reports from earlier in the year, including cases where the AI system appeared to restamp older reports with a new date and present them as breaking news for events that never happened, such as a recycled Xbox performance boost story.
Where this news likely came from
According to the Reddit discussions, the phantom explosion story was not invented out of nothing so much as AI-consolidated: the likely result of multiple similar news stories being blended together without a coherent timeline to anchor them.
Perplexity news feed just showed me a fake starship explosion news?
by u/Headdress7 in perplexity_ai
One user shared a news report from January 2025 covering a SpaceX Starship test explosion that had exposed gaps in aviation safety procedures. When the system attempted to generate a concise "news-style" summary, fragments of that coverage were apparently merged into a single narrative and presented under a fresh date, producing a story that was false as a whole.
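It is not public how Perplexity assembles its feed, but the failure mode Redditors describe, old coverage restamped with a fresh date, is the kind of thing a simple date-consistency guard can catch. The sketch below is purely illustrative, assuming a hypothetical pipeline in which each generated summary carries a claimed event date and a list of cited sources; the names `SourceArticle` and `looks_restamped` are inventions for this example, not Perplexity's actual code.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SourceArticle:
    title: str
    published: date  # publication date of the cited source

def looks_restamped(claimed_event_date: date,
                    sources: list[SourceArticle],
                    max_lag: timedelta = timedelta(days=3)) -> bool:
    """Flag a summary whose claimed event date is newer than every
    cited source by more than `max_lag`. Genuine breaking news should
    have at least one source published on or after the event itself."""
    if not sources:
        return True  # no citations at all: treat as unverifiable
    newest = max(s.published for s in sources)
    return claimed_event_date - newest > max_lag

# Dates mirroring the incident described above: sources from
# January 2025, a summary claiming an explosion on December 21, 2025.
sources = [
    SourceArticle("Starship test explosion coverage", date(2025, 1, 17)),
    SourceArticle("Follow-up on aviation safety gaps", date(2025, 1, 20)),
]
print(looks_restamped(date(2025, 12, 21), sources))  # True: likely restamped
```

A check like this would not verify that an event happened, but it would at least have flagged a "breaking" story whose only supporting citations were eleven months old.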
Why this was a problem for readers
The story was not something users searched for or queried; it appeared in a news feed, surrounded by verified headlines. If you read news from Perplexity AI regularly, you would be unlikely to spot a bad timestamp like this one.
AI news systems are optimized for relevance, so a synthesized summary with a strong headline can pass as genuine even when it is false. These models are good at summarising what already exists, but far less reliable at determining whether something actually happened or was merely speculated or rumored.
When that distinction collapses, the output undermines the trust we are asked to place, often blindly, in AI-mediated news.
🚨 BREAKING: NYT sues Perplexity for copyright infringement claims AI scrapes paywalled articles verbatim, hallucinates fake news attributed to them, & competes directly without paying creators. Escalation in publishers vs AI war (after OpenAI suit). Will this force licensing… pic.twitter.com/H1BXRVXYy5
— Manish Balakrishnan (@Iamnotmanish) December 5, 2025
What the authorities have said so far
Four days after the report, there has still been no major public statement on this specific incident, nor on similar earlier cases involving Perplexity. Even so, incidents like these have pushed some AI companies to experiment with clearer source linking, stricter verification checks, and more rigorous news timestamping.
As these systems become more embedded in daily news consumption, moments like this are less about embarrassment and more about learning where human judgment still matters most.