Govt must act against the flood of digital lies

Ongoing trend offers grim glimpse of the future of political disinformation

An investigation by this newspaper reveals that the digital sphere in the run-up to the February 12 election has been overrun by fake content. Between mid-December and mid-January, the volume of AI-generated disinformation aimed at swaying voters more than tripled. For observers of democracy in the digital age, Bangladesh is becoming a distressing test case. The technology to warp reality, once the domain of state actors with hefty budgets, is now available to anyone with a smartphone and a political grudge. Our investigation identified nearly 100 distinct pieces of AI-generated content in a single month, garnering 1.6 million engagements within the first 24 hours of being posted.

These digital lies also offer a map of the country’s fractured political landscape. The fiercest digital crossfire is, unsurprisingly, between unofficial pro-BNP and pro–Jamaat-e-Islami forces. Pro-Jamaat actors appear to be the most prolific, while the BNP’s digital surrogates often return fire using AI avatars. Meanwhile, remnants of the Awami League have frequently been seen using AI to manufacture sexually compromising images of female politicians and student leaders associated with the interim government.

As separate fact-checking reports illustrate, the appetite for deception does not require high-tech tools. Recently, fact-checkers debunked a widely shared video of President Mohammed Shahabuddin appealing for a fair election. The video was genuine, but it was from 2023. Similarly, fake photocards bearing the logos of news channels are circulating with fabricated reports of violence. This deluge of fake content suggests that the algorithm does not care whether a video was made by a neural network or simply dragged out of a three-year-old archive. It cares only that it is shared, and shared widely. In Bangladesh, where digital literacy has failed to keep pace with digital penetration, the “harm threshold”—the point at which online lies spark real-world violence—is dangerously low.

Against this backdrop, the response from tech giants remains woefully inadequate. While platforms like Facebook possess the tools to detect coordinated inauthentic behaviour, their enforcement in non-Western markets like Bangladesh is lethargic. The authorities must therefore hold tech giants to their own terms of service. Dhaka should immediately establish a high-level, transparent working group with Meta, TikTok and YouTube to demand robust content moderation, particularly for the election period. It must insist that tech companies apply the same speed and rigour to removing harmful content in Bangladesh—specifically deepfakes that incite violence or suppress voting—as they do elsewhere. This should include requiring companies to publish weekly transparency reports specific to Bangladesh, detailing exactly which political advertisements and networks were removed and why.

The authorities must treat disinformation not merely as a digital nuisance, but as a contagion requiring urgent intervention. To stem the tide, the Election Commission should also direct cyber-security agencies to de-platform serial offenders, while simultaneously enlisting independent fact-checkers to build a rapid-response defence. This proactive stance is vital to dismantling viral falsehoods before they ignite a real crisis at this crucial juncture in Bangladesh.