
Surge in AI-Driven Disinformation Amid Israel-Iran Conflict
An analysis reveals a significant rise in misleading online content as tensions escalate between Israel and Iran, driven in part by artificial intelligence technology.
A significant surge in online disinformation has emerged in the wake of escalating military activities between Israel and Iran. Following Israel's strikes on Iranian targets on June 13, various social media platforms have seen an influx of misleading content, much of it generated using artificial intelligence technologies. BBC Verify has identified numerous AI-crafted videos showcasing Iran's military strength, alongside fabricated footage depicting the impacts of strikes on Israeli sites. Collectively, three of the most viewed false videos have garnered over 100 million views across multiple platforms.
Notably, pro-Israeli accounts have also participated in spreading disinformation, primarily by circulating outdated clips of protests and gatherings in Iran and misleadingly presenting them as evidence of growing dissent against the Iranian government and support for Israel's military actions. Iran's retaliatory missile and drone attacks on Israel have further fueled the wave of online disinformation.
An organization specializing in open-source image analysis has characterized the sheer volume of disinformation as "astonishing." It pointed to "engagement farmers" seeking financial gain from sharing misleading materials deliberately designed to attract viewers. Misleading content has included unrelated footage, recycled videos from previous strikes, and even clips from video games falsely presented as real events.
Some accounts, described as "super-spreaders" of false narratives, have seen their follower numbers grow rapidly. One pro-Iranian account, Daily Iran Military, saw its follower count double from approximately 700,000 to 1.4 million in just six days, despite having no apparent ties to Iranian authorities.
According to Emmanuelle Saliba, Chief Investigative Officer at the analyst group Get Real, this marks the first time generative AI has been deployed at such scale during an active conflict. Accounts commonly share AI-generated visuals to dramatize the Iranian government's military response, such as exaggerated depictions of missile attacks on Tel Aviv. One such image alone accumulated 27 million views.
Claims regarding the destruction of advanced Israeli F-35 fighter jets have drawn particular attention. Despite repeated assertions that some of these jets have been downed, experts have yet to authenticate any footage corroborating them. One video alleged to show a downed Israeli F-35 was found to have originated from a flight simulator video game, prompting TikTok to remove it after a BBC Verify inquiry.
As the discourse evolves, even well-known accounts previously engaged in debates over the Israel-Gaza conflict have begun spreading disinformation, whether for monetization or other purposes. Pro-Israeli accounts, meanwhile, predominantly push narratives of escalating dissent against the Iranian regime, favoring sensationalist content that resonates emotionally with audiences.
Recent developments include the emergence of AI-generated images depicting U.S. B-2 bombers over Tehran in anticipation of further military engagement. Official accounts in both Iran and Israel have mistakenly circulated fake images, illustrating how difficult it has become to verify authenticity amid widespread misinformation.
Users have turned to X (formerly Twitter) and its AI chatbot Grok to assess the validity of suspicious content, but the chatbot's inconsistencies and repeated misidentifications of AI-generated videos as authentic events have further muddied the landscape. TikTok has stated that it actively enforces its community guidelines against misleading content and collaborates with independent fact-checkers on verification.
As misinformation continues to proliferate during this turbulent geopolitical conflict, critical analysis and responsible sharing become increasingly important. Researchers note that the binary, us-versus-them framing inherent in conflict situations encourages rapid disinformation sharing, underscoring the societal challenge of distinguishing credible information from sensationalist narratives.