
Unlabeled AI-generated media floods election campaigns as platforms fail to enforce rules
On January 4, a Facebook video showed an elderly man in a busy marketplace saying, “I will vote for the Sheaf of Paddy, Bangladesh Nationalist Party (BNP)’s electoral symbol. If they come to power, they will provide us with family cards, agriculture cards, and health cards. In this old age, I won’t have to work as a day laborer every day. I will be able to live in peace.”
In another Facebook video just a day later, a woman stood in a rural setting and declared, “The Scales (Jamaat-e-Islami’s electoral symbol) is the symbol of justice. Justice means fair judgment and neutrality, where everyone’s rights are ensured equally, and there is no room for bias or injustice. For this reason, I prefer and love the Scales, that is, Jamaat.”
On closer inspection, however, inconsistencies in both videos become clear: roughly every eight seconds, the scene cuts abruptly and a new one begins.
In fact, both videos are AI-generated and are circulating on social media as part of political campaigning ahead of Bangladesh’s 13th National Parliamentary Election on February 12. The use of AI in political campaigns is not prohibited, but neither clip carries an AI label, a violation of Meta’s transparency policy.
Between January 1 and 15, Dismislab documented over 800 AI-generated videos on Facebook, YouTube, and TikTok, with some uploaded across multiple platforms. On Facebook, 60% of this AI content carried no AI disclaimer.
The situation is similar, or worse, on YouTube and TikTok. Of the 21 Facebook pages, seven posted the same videos on YouTube and two on TikTok. A review of 181 videos from the seven YouTube channels shows that 94.48 percent had no AI label, and not a single one of the 50 videos posted on TikTok carried an AI disclaimer.
On Facebook, where 576 AI videos were posted from 21 pages, the “AI info” disclaimer is visible only on the primary Facebook app. When the same video is viewed on Facebook Lite or in a web browser, the label does not appear.
In Bangladesh, where digital literacy is low, many users mistake this AI content for authentic footage. The Bangladesh Election Commission itself has expressed concern over the spread of AI content at least twice, first on December 12 and again on January 20, just two days before official electioneering kicked off.
Experts warn that voters are particularly vulnerable to AI‑generated content because many already lean toward information that confirms their political beliefs. During elections, this confirmation bias intensifies, making people more likely to trust and share AI content that favors their side. Negative propaganda becomes particularly powerful, as voters quickly accept harmful claims about their opponents without verification.
AI-led positive narratives
From supposed police officers to political figures and ordinary citizens, there is no shortage of AI characters in the videos analyzed. In these videos, the AI figures declare their endorsements, building favourable narratives for candidates across the political spectrum.
False endorsement using “official figures” and ordinary citizens is prominent in the AI videos supporting Bangladesh Jamaat-e-Islami. One clip features an AI-generated female police officer in khaki uniform, who states that Jamaat is currently immensely popular in Bangladesh and that various foreign superpowers want to see Jamaat in power. Another fake police officer declares that he personally believes the country would be better off if Jamaat came to power. Researchers found both videos were made with AI.

Characters from minority communities and different religious groups were also used to solicit votes for Jamaat. In one video, an AI-generated woman identifying herself as Hindu says: “We Hindus will vote for Jamaat the most.” She adds that Hindus would be safer if Jamaat came to power, whereas leaders of the “other party” are thugs, extortionists and criminals.
AI characters are also being used to praise Jamaat for forging an electoral alliance that includes the National Citizen Party, led by a frontliner of the July uprising, and the Liberal Democratic Party, headed by Col (retd) Oli Ahmad, a freedom fighter. They depict the party as visionary and selfless, with one video claiming this election marks Jamaat’s “second sacrifice,” since the party gave up seats held by long-term candidates for alliance partners. Another character asserts that Jamaat will contest alongside the freedom fighters of 1971 and the “July warriors” of 2024.
In other examples of AI content, a “survey” video shows a supposed journalist asking seven people about their voting choice, with five strongly favouring Jamaat and two supporting BNP.
On the BNP front, multiple videos use AI-generated children to seek votes for the party. In some, these characters wear attire bearing the Sheaf of Paddy, BNP’s electoral symbol. In one video, a young girl declares, “Tarique Rahman said this country does not belong to any single person; this country belongs to the common people. The Sheaf of Paddy is the symbol of their hope.” Another video features a girl saying, “When I hear Tarique Rahman’s name, hope arises in my mind that this country will become even better.”
AI-generated videos of BNP Chairman Tarique Rahman’s daughter Zaima Rahman are also being used to deliver promotional messages, and even to seek votes (1, 2).
Another video uses the digital likeness of Zaima Rahman to claim that Fatema Begum, Khaleda Zia’s longtime companion and aide, has been legally inducted into the Zia family and accepted by Tarique Rahman as his sister. Fact-checkers debunked this, noting instead that after Khaleda Zia’s death, Fatema Begum continues as Zaima Rahman’s companion.

AI figures attack opponents
Both sides are also using AI videos designed to discredit their opponents. The content that appears to push Jamaat’s agenda frequently portrays the BNP as extortionists and deceivers. These same campaigns attack the BNP’s proposed “Family Card” program as an “inducement” or “trap of deception”. On the other hand, anti-Jamaat videos attempt to brand the party as “anti-state”, depicting its ideology as opposed to Bangladesh’s founding spirit.
One video, for example, features an AI-generated shopkeeper declaring he will not vote for the BNP because people allegedly use the party’s identity to intimidate small businesses. He claims BNP-affiliated individuals eat at his shop without paying, and when he asks for payment, they threaten to shut his business down. The video ends with the shopkeeper vowing to reject those who “usurp the rights of the people” and pledging his vote to Jamaat instead.

In another video, an AI character, a woman wearing vermilion and shakha-pola (traditionally worn by Bengali Hindu women), warns that if BNP representatives arrive offering the “Family Card,” the public should meet them with jhhata (brooms). She adds that voters are now more aware and have decided to support Jamaat rather than be swayed by misleading incentives.
Counter-campaigns targeting Jamaat-e-Islami deploy similar tactics, using AI-generated personas to paint the party as fundamentally “anti-state.” These videos argue that Jamaat’s view contradicts the core ideals of Bangladesh. In a podcast-style video, an AI character asserts that Jamaat is loyal not to Bangladesh, but to Pakistan. Another video claims that after failing to turn Bangladesh into Pakistan in 1971, Jamaat is now attempting to turn it into Afghanistan, an effort the public will resist. One female AI character even states that she fears being followed by a Jamaat supporter “more than a bunch of dogs”, amplifying an atmosphere of alarm around the party.
At the same time, AI-generated disinformation circulates across the spectrum, such as videos falsely accusing BNP leader Mirza Abbas of involvement in the Sharif Osman Hadi murder case or wrongly claiming that Tasnim Jara left the NCP to join the BNP although she is contesting for Dhaka-9 as an independent.

Impact on voters
Dismislab’s analysis and global studies show that AI content can influence voters and shape their opinions. In the comment sections of the two videos cited at the beginning of this report, many users left positive, supportive comments, indicating they believed the videos were real.
On the video promoting Jamaat, one viewer wrote: “Mashallah, little sister, Allah has given you the ability to understand Deen (religion); I wish you well.” Similar comments were observed in the pro-BNP content, where one user wrote, “Thank you uncle. Everyone, come forward. Vote for Sheaf of Paddy to make Desh Nayak (leader of the country) Tarique Rahman the country’s next prime minister.”
Similar positive comments were observed in other videos we analyzed.
Global studies have long established that AI can influence users’ decisions. In a study, Fereniki Panagopoulou, a professor at Panteion University in Greece, found that AI models are now capable of understanding human emotions and linguistic nuances. Consequently, political parties are using “micro-targeting” strategies to craft tailored political narratives for specific voter groups. This approach creates an uneven playing field in electoral competition, as parties with access to advanced AI technology and large datasets can easily influence voter sentiment.
Dismislab spoke with an administrator of a page that creates AI-generated election campaign videos. The admin said that they created these videos because they wish to see an Islamic party in power in Bangladesh. The individual claimed no direct financial motive but noted that the pages do generate income. The same admin manages two other pages included in this research.
Sumon Rahman, Dean of the School of Social Sciences and Head of Media Studies and Journalism at the University of Liberal Arts Bangladesh (ULAB), said, “These types of videos will undoubtedly impact voters. Most people lack AI literacy. There is also the factor of ‘confirmation bias’, which heightens during elections.
“Elections are inherently about taking sides. Therefore, if an AI content aligns with my side, I am naturally inclined to believe it first and then share it. In this context, negative propaganda proves highly effective. When I see something negative about my opponent, I believe it without verification because I want to believe they are bad.”
Platform policies and reality
Video after video was posted across Facebook, YouTube, and TikTok without an AI disclaimer. For example, one video posted on a Facebook page was AI-made yet did not carry the “AI Info” label. The same video appeared on the TikTok account associated with that page, again without an AI label. Another video from a separate page, featuring an AI-generated portrayal of Dr. Zubaida Rahman, wife of BNP Chairman Tarique Rahman, also carried no AI disclaimer. The identical footage was uploaded to YouTube without any mention of its synthetic nature.
Dismislab has identified a significant gap in social media platforms’ global commitments to maintaining electoral transparency. Analysis shows that a single page may contain one video with an AI label while another, equally AI‑generated, remains unflagged. Even when labels are included, technical inconsistencies persist. Meta’s policy states that the company will add prominent labels to high‑risk content if it determines that the material could mislead the public.
In practice, however, Meta’s role in moderating such content during election campaigning appears limited. Moreover, differences in AI labeling between two versions of the same app raise further questions about the platform’s transparency and the reliability of its labeling mechanisms.
No AI label appears on the Facebook web (left) or Lite (middle) versions; however, it’s visible on the main app (right).
An investigation by Mozilla and AI Forensics found a similar gap in TikTok Lite. Their report states, “TikTok Lite lacks basic protections that are afforded to other TikTok users, including content labels for graphic, AI-generated, misinformation, and dangerous acts videos. TikTok Lite users also encounter arbitrarily shortened video descriptions that can easily eliminate crucial context.”
Transparency is also lacking on YouTube, despite clear guidelines requiring creators to disclose the synthetic nature of their content. The guidelines further state that for sensitive topics such as elections, content makers must add prominent labels. However, Dismislab’s investigation found that among the videos reviewed from seven YouTube channels, only 5.52 percent followed these rules. And despite TikTok’s stated strict policy on AI content, Dismislab found no AI labeling in any of the videos reviewed from the two TikTok accounts.
An analysis on the 2024 US elections by the Brennan Center for Justice highlighted the growing risks of AI content in elections and recommended greater transparency and accountability of the social media platforms.
“Social media platforms and AI developers must implement measures to disclose the origins of AI-generated content. Watermarking and other tools that establish content provenance could help voters discern authentic information from manipulated media. Additionally, platforms should reinvest in trust and safety teams, many of which have been significantly downsized leaving gaps in oversight that bad actors are eager to exploit,” it said.
Violation of Bangladesh laws
“Artificial Intelligence shall not be used with dishonest intent in any matter related to the election, including campaigning,” subsection 16(b) of the Code of Conduct for Political Parties and Candidates says.
According to subsection 16(g) of the electoral code, no political party, candidate, or any person acting on behalf of a candidate may create, publish, promote, or share any false, misleading, biased, hateful, obscene, indecent, or defamatory content, whether general, edited, or generated by Artificial Intelligence (AI), on social media or any other medium, with the intent to mislead voters, engage in character assassination, or damage the reputation of any person or candidate, regardless of gender.
To clarify the meaning of “dishonest intent”, Dismislab spoke with Md. Ruhul Amin Mollick, Director (Public Relations) and Information Officer of the Election Commission. He explained that the dissemination of false information is primarily what constitutes “dishonest intent”. He added that, in the context of election campaigning, criticism itself is not a problem as long as it does not involve false or fabricated claims.
Supreme Court lawyer Barrister Jyotirmoy Barua told Dismislab that the current law lacks sufficient clarity. He argued that AI content should be required to carry appropriate disclaimers, and that the Election Commission should have issued explicit directives to that effect.
There are global examples of clear guidelines for AI-generated content in election campaigning. India has taken a very strict stance to ensure digital security and electoral transparency. In preparation for the Bihar Assembly elections, the Election Commission of India (ECI) has mandated strict guidelines for the use of synthetic media in political ads. Political entities are now required to explicitly label AI-generated or digitally altered content with tags such as ‘AI-Generated’ or ‘Synthetic Content.’ These disclosures must be prominently displayed, occupying no less than 10 percent of the visual frame.
Similarly, the European Union (EU) has established a standard through its AI Act. According to this law, the use of artificially created images, audio, or video in any political campaign must include a mandatory disclaimer or label.
In Bangladesh’s case, Jyotirmoy Barua believes it is not yet too late. He said the Election Commission of Bangladesh can still issue a notification making such disclosures mandatory for the remainder of the campaign period.
Methodology
Through daily monitoring, Dismislab identified 21 Facebook pages and profiles where AI-generated videos were regularly posted, and collected all video posts uploaded by these pages and profiles between January 1 and 15, 2026.
Dismislab then analyzed only the AI-created videos among those posts, categorizing the content into election campaigning, general political promotion, and other types. It also analyzed videos uploaded during the same period on the YouTube channels and TikTok accounts associated with these pages or profiles. Finally, the videos were cross-checked to determine how many of these posts actually included an AI label.


