Tamara Yesmin Toma

Research Officer, Dismislab
GenAI Storms Bangladesh’s Pre-Election Campaigns

Supporter-led Veo 3 videos flood social media, raising concerns over ethics, influence on voters and electoral codes

Just as she returns from the market, arms full of shopping bags, an interviewer stops the woman on the street and asks: “Who’s getting your vote this time?” She says she’s giving it to Bangladesh Jamaat-e-Islami, an Islamic political party. She is not identified in the so-called interview clip, but her vermilion-lined hair part, her attire, and her confidence suggest she’s a middle-class Hindu woman.

In another video, a tired rickshaw puller is approached by a similar street interviewer. The elderly man, clearly worn by years of hard labor, declares that this time he’s voting for the Dari-Palla (two-tray scales), the electoral symbol of Jamaat, because he wants justice and fairness.

At first glance, these videos seem to offer a sweeping portrait of public sentiment across class, religion, and profession. But there’s one problem: none of the people in these videos are real. Every face, every voice, every setting is synthetic, made with AI.

Since mid-June 2025, dozens of such videos have appeared on Facebook feeds, asking users to vote for Jamaat-e-Islami, in what the party described as a supporter-led campaign not officially endorsed by its central command. Together, these videos have garnered millions of views and hundreds of thousands of likes and comments, highlighting their substantial reach and engagement.

What began as Jamaat’s AI-driven campaign soon evolved into a broader trend, with supporters of other parties producing similar content to promote their candidates or undermine opponents, data gathered for this investigation show. Dismislab identified and reviewed 70 such campaign videos posted on Facebook between June 18 and 28 this year. Similar videos were found on TikTok, shared by party activists.

What makes these videos different from earlier uses of AI in politics is their fully synthetic construction. Every face, voice, and setting is generated, mostly with Veo, a text-to-video model developed by Google. Most of the analyzed videos did not disclose that they were AI-generated and appeared without any labels; on Facebook in particular, automated detection systems failed to flag them.

Experts warn that these AI-made campaign videos raise serious ethical concerns, more so because the characters are entirely fictional and rarely labeled as synthetic. Some say this blurs the line between genuine voter sentiment and manufactured messaging while others caution that without strong digital literacy and clear regulations, voters may be easily misled.

Jamaat supporters lead campaign, appeal through identity and emotion

Jamaat supporters appear to have started the AI campaign first, producing videos that span more socio-economic, age, and religious groups than those of any other party. Most clips are eight seconds long, but several have been stitched together into longer, episodic campaign videos. The content shows AI-generated people from all walks of life: fruit sellers, rickshaw pullers, day laborers, professionals, elites, and even Hindu women backing Jamaat, a party long associated with conservative Islamic politics. The goal appears to be rebranding Jamaat as a party accepted across all segments of society.

Many of the videos make strategic use of identity. Hijab-wearing women, sindoor-wearing women, professionals, and working-class figures all declare their support for Jamaat. Some describe Jamaat as the only party supporting families affected by last July’s mass uprising that toppled Sheikh Hasina. One woman states, “Only Jamaat is checking in on us. No one else is. This time, our whole family is determined to vote for Jamaat.”

Religion is a common thread. “Jamaat is selling tickets to heaven,” says one video. Another features a cleric urging the youth to support Jamaat at the ballot box. Others invoke Islamic values more generally with lines like, “We want Allah’s law.”

The videos also frame Jamaat as a refuge for those disillusioned by politics. One character, wearing a Mujib coat, typically associated with the Sheikh Hasina-led Awami League, says he’s seen enough of other parties and now supports Jamaat. Another common narrative portrays young, first-time voters choosing Jamaat as a clean alternative to discredited mainstream parties.

A watermark traced many of these videos back to a page named “Jamaat Shibir Supporters.” Its creators told Dismislab that the AI campaign is not centrally directed but was launched by party supporters and activists on their own.

No official party pages are known to have endorsed the campaign, but several accounts tied to senior Jamaat leadership actively shared the content. Among them is Ishaq Khondoker, a member of Jamaat’s central Majlis-e-Shura who is expected to run for parliament from the Noakhali-4 constituency. His page featured multiple AI-generated campaign clips. The reach extended further through Facebook pages posing as news outlets, such as Hikma 24 News and Islamic Time Media, which embedded their own logos and disseminated the content.

Jamaat’s Assistant Secretary General and Head of the Central Media and Publicity Department, Ahsanul Mahboob Zubair, confirmed that the party is aware of the AI-generated campaign. “Yes, we’ve seen the videos. But these are not part of any organised campaign. We believe local media cells and supporters are behind them,” he told Dismislab. “We do have plans to institutionalize the use of AI going forward. It’s hard to avoid using AI these days. We have a few ideas in mind. Once our election manifesto is finalized, we’ll share more about this in an official capacity.”

From attack to counter-attack

Many of the Jamaat supporter videos targeted the party’s new-found political opponent, the BNP (the two parties had long been allies before the fall of the Hasina government), often labeling the party and its leaders as violent and immoral. In a viral clip with 3.5 million views and 74,000 reactions, a woman replies to the question “What do you think of BNP?” by saying, “BNP is a party that people want to see fall, even before it comes to power.” The clip was later deleted from the page Hikma 24 News. One video linked BNP with corruption, stating: “Our business will thrive when Tarique Rahman becomes PM — we just have to give him a 10% commission.” Another alleges: “BNP terrorists killed a man because he refused to let his daughter marry a BNP leader. BNP has already murdered more than 150 people.” Dismislab found no credible evidence supporting the latter claim.

Jamaat’s wide-ranging campaign triggered a flurry of responses from other actors, including supporters of the BNP, NCP, and AL.

Within a couple of days, BNP supporters launched their own campaign, also seeking votes and targeting rivals. One of their videos featured a young voter saying he would stand for sovereignty. Casting Jamaat as an anti-sovereignty force, he added, “Those who don’t want the country’s independence, how will they protect our freedom if they come to power?”

Another video takes the critique further. When a character suggests Jamaat deserves support as an Islamic party, another replies, “Politics in the name of Islam and politics for Islam are not the same. Don’t vote for merchants of religion.” A third adds, “Politics is now overrun by goons and religious opportunists. Let the youth vote for Dhaner Sheesh.” In another video posted by a BNP supporter, a synthetic character appears in swimwear, sarcastically warning that voting for Jamaat would leave the country “stripped bare”.

A user named Mamun Abdullah launched a campaign attacking both Jamaat and BNP, while promoting the newly formed, student-led National Citizen Party (NCP) and its Shapla (water lily) symbol. One clip dismissed Jamaat as a “religious business,” urging voters to reject parties that blend Islam with politics. At least three videos (1, 2, 3) mocked BNP activists as extortionists at local human-hauler stands. One of his endorsements of the NCP stated, “Shapla means courage and the promise of transformation.”

The NCP faced criticism too. One AI video, posted on a pro-Awami League page, labeled it the “National Cheaters’ Party” and sarcastically advised, “If you want to learn how to deceive people, join NCP.”

For its part, the AL, whose political activities have been suspended by the interim government pending trial for its alleged crimes during the July uprising, started its own campaign with calls to boycott the election, arguing that with no “boat” (its electoral symbol) on the ballot, there is little reason to participate. In another video, a synthetic voter says, “…I’ll vote for Dhaner Sheesh, but I’ll never vote for a party that opposed the Liberation War,” clearly targeting Jamaat for its controversial wartime role.

At one point in the investigation, tracing the origin of the videos became more complicated, as party supporters began downloading and reposting content produced by others, especially when it targeted their political opponents. For example, some AL-aligned Facebook pages shared content (1, 2) from rival campaigns, republishing anti-BNP or anti-Jamaat videos created by others, further muddying the waters.

A video from a pro-AL page mocked Jamaat’s social media tactics: “To be Jamaat-Shibir [Shibir is Jamaat’s student wing], you need 11 fake Facebook IDs, and of course, you have to master rumors and insults.” The same video was also posted by a pro-BNP page, Youth Network. A satirical portrayal implying that BNP members engage in petty extortion and face public backlash for it was posted by Jamaat supporters, but it first appeared on Mamun’s profile, which otherwise promotes the NCP.

Supporters of Islami Andolon Bangladesh also entered the scene, emphasizing Islamic identity and rejection of mainstream parties. One of their AI-generated videos said, “For 52 years we’ve heard all kinds of systems. This time we want Islamic governance.” Another asserted, “They won’t unite with Jamaat, so let’s all attend their June 28 grand rally.”

Beyond party messages, several politicians also jumped in to seize the momentum. In the Jessore-3 constituency, BNP politician Anindya Islam Amit, and in Kushtia-2, Ragib Rouf Chowdhury, received AI-driven campaign support painting them in a positive light. Jamaat supporters extended the tactic to at least 10 of their party leaders (1, 2, 3, 4, 5, 6, 7, 8, 9, 10), portraying them as potential candidates.

Veo 3, the rise of softfakes, and ethical concerns

Dismislab analyzed a total of 70 AI-generated political videos, including reels, which collectively garnered over 23 million views and one million reactions. On average, each video received approximately 328,000 views and 17,000 reactions, highlighting the scale of their reach and engagement.

Some gained more traction than others. For example, one NCP-aligned video by Mamun Abdullah received 1.9 million views on Facebook, and a reel urging people to vote for Jamaat, released by Jamaat-Shibir supporters, was viewed more than 1.2 million times.

The sudden proliferation of seemingly realistic AI-generated election content in Bangladesh coincides with the introduction of Google’s Veo 3 video generation tool, released in May 2025. Earlier AI videos were often easy to spot, with warped faces, awkward voice delivery, and distorted motion. But Veo 3 has largely eliminated these artifacts, producing short, high-quality clips that are visually polished and convincingly lifelike.

The Jamaat Shibir Supporters page, one of the main publishers of these clips, confirmed that the group used Veo 3 to create them. While voice delivery in these videos still retains a slightly mechanical cadence, as observed by Dismislab, it is no longer jarring enough to signal artificiality. The smooth rendering of facial expressions and consistent visual backgrounds further masks their synthetic origin.

Meta’s policy, effective since July 2024, requires labels for AI-generated images, audio, and video. However, none of the 70 reviewed videos featured any “AI-generated” labels. Meta’s labeling relies on two methods: automatic detection or creator’s self-disclosure. In these cases, the creators did not disclose the use of AI, nor did the content appear to trigger automated labeling, leaving viewers unaware of the synthetic nature of the content.

TikTok’s rules are stricter than Meta’s. TikTok requires creators to label all AI-generated content that contains realistic images, audio, or video, either by applying a specific “creator labeled as AI-generated” label or by adding a clear caption, watermark, or sticker. The platform also has systems to automatically label content it identifies as completely generated or significantly edited with AI.

However, 9 of the 26 TikTok videos sampled for this report had no visible AI labeling, while the rest carried a generic notice at the bottom of the video description stating, “This information is AI-generated and may return results that are not relevant,” likely added after being detected by the platform’s systems.

Dr. Rumman Chowdhury, CEO and co-founder of Humane Intelligence and a globally recognized expert on responsible AI, has long warned about the risks of “softfakes,” a term she uses to describe a certain type of AI-generated content.

In a 2024 article in Nature, she defined softfakes as: “…images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate’s campaign team itself.”

Chowdhury notes that even when AI-generated content isn’t overtly malicious, its use by politicians and parties raises serious ethical concerns. She has called for rules covering both the companies that generate AI content and the social-media platforms that distribute it.

The AI-generated election videos analyzed in this report are a close yet distinct example of this form. Unlike deepfakes, which mimic real people, these fully synthetic personas speak directly to voters, often presenting themselves as ordinary citizens, such as garment workers, rickshaw pullers, schoolteachers, and first-time voters, professing support for political parties.

Dismislab sent the sample videos to Dr. Rumman Chowdhury for review. In an email response, she said, “It certainly runs that middle ground of being a deepfake, but not the aggressive, malicious kind. I actually think this is more dangerous than the deepfakes we imagine, because we’re often less on guard.”

Threats and regulatory gaps

In the 2024 national election, Bangladesh saw only a handful of AI-generated disinformation incidents, including deepfake videos falsely showing candidates withdrawing from the race, fabricated speeches, and misattributed statements. While concerning, such cases were sporadic, and much of the disinformation landscape at the time was dominated by manipulated images, misleading videos, and text-based false narratives.

Analyst Liz Carolan has documented the rise of AI videos in South Asian politics. During Pakistan’s last election, for example, while Imran Khan was imprisoned, AI-generated voice clones were used to create emotional speeches in his name. In one fully AI-made video, he was even shown declaring victory, stitched together with old footage.

In neighboring India, ahead of its national election last year, political parties and candidates were estimated to have spent around $50 million on AI-generated content, much of it through a local industry of synthetic-media start-ups.

However, the 2026 pre-election environment in Bangladesh marks a dramatic departure. AI-made videos now appear almost daily, and they are increasingly sophisticated and cheap to produce. According to Fahmidul Haq, a faculty member at New York-based Bard College, “It’s natural that a lot of content will be created on digital media ahead of the elections. But presenting something nearly ‘impossible’ in a natural way will raise ethical concerns. There’s good reason to be concerned about this kind of content, especially because it will become much easier to mislead the general public.”

Tropa Majumdar, an academic and director of a leading advertising agency Expression Limited, said, “These videos are pure fiction; the people don’t even exist. So the entire premise is a lie. That makes it unethical, whether it’s used in commercial marketing or politics.” She argues that while product marketing can use synthetic imagery, political ads falsely imply real support and cross a serious line.

“However, once someone discloses that a video or image was created using AI, it’s harder to stop them. But that means the responsibility shifts to the government, to civil society, and to responsible organizations to educate people,” she added.


In the United States, 25 states have passed legislation addressing the use of AI in elections. Texas and Minnesota prohibit the publication of political deepfakes within a certain number of days prior to an election. The other 23 states require disclosures on media content, similar to those required for political ad sponsorship, stating whether it contains a deepfake. Ahead of the 2024 European Parliament elections, the European Union’s AI Act highlighted the risks AI poses to democratic processes. The Act outlines four risk categories, including the prohibition of manipulative or exploitative AI and the classification of systems that directly influence voting as “high-risk.”

“There must be policies [in Bangladesh]. For example, AI-generated campaign content should be labeled clearly. We need forensic labs to detect synthetic media, be it government-led or private,” urged Haq.

Methodology

Between June 18 and 28, 2025, Dismislab documented all AI-generated political campaign videos that appeared on the writer’s public Facebook feed. Duplicate videos were excluded, yielding a final dataset of 70 unique clips.

Each video was classified by party affiliation, determined through visible language, watermarks, political symbols, or page profiles. The narrative content of each video was analyzed for core themes, including vote appeals, identity framing, and party targeting. Technical indicators were also assessed to identify signs of Veo usage, such as fixed eight-second durations, stitched sequences, facial consistency, and voice cadence.

Party- and symbol-specific keyword searches were also conducted on TikTok to determine whether similar videos or reposts appeared there. For both Facebook and TikTok, researchers reviewed AI labeling on the videos in line with platform policies. All videos included in the analysis were publicly visible at the time of review.