Opinion

Safeguarding voters’ minds against AI

Shoukot Ali

The people of Bangladesh are expecting a free and fair election after a long wait. However, voters face not only deep-rooted political issues but also the manipulation of information online, and AI has become the go-to tool for spreading manipulated content this election season.

Bangladesh’s population is remarkably youthful, with a median age of 26 at the end of 2025. Around 47 percent of the population is also connected to the internet, while 36.3 percent are social media users. These factors can bring both opportunities and threats. With the nationwide spread of high-speed internet connectivity, any news can reach people within moments. During elections, this means that even a single deepfake going viral could potentially distort public perception and fuel unrest.

Globally, there is precedent for elections being influenced by AI. During India's 2024 elections, deepfakes were employed both creatively by campaigns to reach more voters and offensively to discredit opponents. For instance, two widely circulated deepfake videos showed Bollywood actors Aamir Khan and Ranveer Singh attacking Prime Minister Narendra Modi and supporting the opposition parties; both actors denied any involvement. On a more positive note, one political party used AI to dub its leaders' speeches into the languages and dialects of various regions. AI chatbots were also used to clone politicians' voices and make personalised calls to voters. Additionally, some parties "revived" deceased leaders using AI, recreating speeches in their voices to encourage youth support. During Pakistan's February 2024 elections, deepfakes were used both for and against opposition leader and former Prime Minister Imran Khan, who was in prison at the time. His supporters used his image and voice to create deepfakes urging people to vote for his party, while other deepfake videos on Facebook and TikTok sought to undermine his campaign. In one, he appeared to call for an election boycott, creating confusion amongst his supporters.

In the US, AI-generated deepfakes were used for voter suppression during the 2024 presidential election. One such instance was a robocall in January that recreated Joe Biden's voice to urge New Hampshire primary voters to abstain from voting. This led to fines from the Federal Communications Commission (FCC) as well as criminal indictments. Meanwhile, Romania's Constitutional Court annulled the country's 2024 presidential election because of AI-enhanced misinformation and disinformation created by Russian interference. The foreign actors ran campaigns on TikTok and Telegram, flooding the platforms with content featuring fake endorsements of far-right and pro-Russia candidate Călin Georgescu.

It is crucial for the Bangladesh government to develop workable, multi-layered defences that combine technology, communities, and policy for quick impact against AI-generated disinformation during this election season. The interim government could consider establishing a high-powered task force to take down deepfake content in collaboration with the Election Commission (EC), Bangladesh Telecommunication Regulatory Commission (BTRC), Meta, and Google. It could also develop a rapid-response fact-checking unit and help political parties employ their own fact-checkers. The government could also secure commitments from political parties not to create or promote deepfake content and to label AI-generated campaign materials accordingly.

In the long run, Bangladesh should integrate AI literacy into its national curriculum and work with universities and NGOs to run workshops on the subject. To ensure platform accountability, Bangladesh should require Meta and Google to employ Bangla-fluent fact-checkers and produce post-election reports on information transparency. Passing a comprehensive AI policy and relevant laws, with technical assistance from development partners, is also necessary. The government could also join an international election integrity forum in collaboration with the UN to exchange intelligence on deepfakes and other AI-generated disinformation, and build a firewall against foreign meddling in the country.

The deeper question haunting democracy is not whether the EC, government authorities or internet platforms can move fast enough to stop deepfakes, but whether societies can build the resilience needed to withstand such manipulated information. A multi-layered, comprehensive response package—rapid deepfake detection, transparent AI labelling, strong platform accountability, better AI literacy, and international cooperation—can significantly reduce the impact of AI-driven manipulation. 

Evidence from Romania, the US, India, and Pakistan shows that when safeguards exist, AI's influence on election results remains limited. Bangladesh's upcoming election could be the first in which the country proves it can protect voters' minds from artificial deception. The true cost of failure is not just a contested result or post-election unrest, but the slow collapse of public trust in what people see and hear—without which democracy becomes conceptually meaningless.


Shoukot Ali works with the BC Ministry of Children and Family Development in Canada.


Views expressed in this article are the author's own. 

