Apple’s new Apple Intelligence feature, designed to summarize and group notifications using generative AI, is under fire after creating misleading headlines about high-profile news stories. The most notable incident involved a false AI-generated notification claiming that murder suspect Luigi Mangione had shot himself—an event that did not occur.
The BBC, whose reporting was misrepresented in the notification, filed a complaint with Apple, highlighting the potential harm of such inaccuracies. The journalism advocacy group Reporters Without Borders (RSF) has since called for Apple to remove the tool entirely, citing the risks it poses to the credibility of media outlets and public trust in reliable news.
The AI-generated notification falsely combined headlines, creating the impression that BBC News had published an article headlined "Luigi Mangione shoots himself." The fabricated line appeared alongside legitimate summaries of unrelated news topics, such as developments in Syria and South Korea.
RSF criticized Apple for deploying a tool it deemed "immature" and unsuitable for handling sensitive news.
"AIs are probability machines, and facts can't be decided by a roll of the dice," said Vincent Berthier, head of RSF's technology and journalism desk.
Apple’s tool has also misrepresented headlines from other publishers. On November 21, an AI-generated summary of New York Times articles incorrectly stated "Netanyahu arrested" in reference to Israeli Prime Minister Benjamin Netanyahu. The actual story reported on an International Criminal Court arrest warrant—not a direct arrest.
These and similar errors highlight ongoing problems with the AI's ability to accurately summarize complex or nuanced news stories.
Despite growing criticism, Apple has yet to publicly comment on the issue. The company has provided users with the option to report inaccuracies in grouped notifications, though it remains unclear how many complaints Apple has received or how the feedback is being used to improve the tool.
The BBC confirmed it raised concerns with Apple about the Mangione incident but has not received a response.
RSF has reiterated its call for Apple to remove the feature, arguing that AI-generated false information undermines both public trust and the credibility of the media outlets to which it is attributed.
"The automated production of false information attributed to a media outlet is a blow to the outlet's credibility and a danger to the public's right to reliable information," RSF stated.
The controversy over Apple Intelligence highlights broader concerns about generative AI tools, particularly when applied to sensitive areas like news reporting. As companies like Apple continue integrating AI into their services, the need for careful oversight and safeguards to prevent misinformation becomes increasingly critical.
For now, the future of Apple Intelligence—and its role in the delivery of news—remains uncertain as pressure mounts for changes or its removal.