Fake image of Pentagon explosion sparks controversy on Twitter

A fake photograph of an explosion at the Pentagon went viral on Twitter on Monday, May 22. Markets even panicked for about ten minutes before the record was set straight. The controversy on social media has once again reignited the debate over artificial intelligence (AI) and the risks posed by this new technology.

The fake photograph, apparently made with a generative AI program (capable of producing text and images from a simple plain-language prompt), forced the US Department of Defense to respond. "We can confirm that this is false information and that the Pentagon was not attacked today," a spokesperson said.

Firefighters serving the area where the building is located (Arlington, Virginia, near Washington) also weighed in on Twitter to say that no explosion or incident had taken place, either at the Pentagon or nearby. The image appears to have caused markets to dip slightly for a few minutes, with the S&P 500 briefly losing ground before recovering.

Earlier today an apparent AI-generated photo showed a fake explosion near the US Pentagon. The news was shared by Russian state-media RT on Twitter, which helped it go viral. It was also tweeted by a verified Twitter account called "BloombergFeed" which has now been suspended.… pic.twitter.com/KN1wOptlRb

"There was a downside to this misinformation when the machines picked it up," noted Briefing.com's Pat O'Hare, referring to automated trading software that is programmed to react to network posts. social. "But the fact that she remained measured in relation to the content of this fake news suggests that others also considered it muddy," he added for AFP.

An account linked to the QAnon conspiracy movement was among the first to relay the fake image, whose source is not known. The incident comes after several fake photographs produced with generative AI were widely circulated to showcase the capabilities of this technology, such as images of the arrest of former US President Donald Trump or of the Pope in a puffer jacket.

Software such as DALL-E 2, Midjourney and Stable Diffusion allows amateurs to create convincing fake images without needing to master editing tools like Photoshop. But while generative AI makes it easier to create false content, the problem of its dissemination and virality, the most dangerous components of disinformation, falls to the platforms, experts regularly point out.

"Users are using these tools to generate content more efficiently than before […] but they're still spreading through social media," said OpenAI (DALL-E, ChatGPT) boss Sam Altman. of a congressional hearing in mid-May.