
AI Used to Inflate Crowds in Awami League Rally Video
The use of artificial intelligence (AI) in political campaigns on social media is not new; it has been used to promote rallies, gatherings, and protests. Recently, however, a real Awami League rally with a small crowd was edited with AI to make the crowd appear several times larger. Because the video carried no AI label or disclosure of digital manipulation, many viewers believed it was real. A 2025 study found that AI-generated visual content can influence public perception of mass protests on social media.
AI-edited Rally Video Goes Viral on Facebook
A video showing a massive crowd marching with a banner reading “Sheikh Hasina will return, Bangladesh will smile” recently went viral on Facebook. The caption read: “We will all be martyrs to bring Sheikh Hasina back—Awami League rally.” In the video, a group of people is seen marching under the banner of Dhaka North City Awami League, chanting slogans such as “We will all be martyrs, we will bring Sheikh Hasina back,” “If anything happens to Sheikh Hasina, fire will burn from house to house,” and “If anything happens to Sheikh Hasina, there will be war in Bangladesh.”
The video, said to show a rally organized by affiliated bodies of the Awami League from Dhaka-11 constituency, was shared from a page named “Hossain Saddam.” One user commented: “Sheikh Hasina will return to Bangladesh, Inshallah. Joy Bangla, Joy Bangabandhu, no Razakar or Al-Badr can stop it.” The video was shared over 1,500 times and viewed nearly 400,000 times. However, a Dismislab fact-check found that the video was edited with AI.
Keyframe searches revealed that the same footage had been previously shared in multiple edited versions from different pages and profiles. On April 17, the page using the name “Hossain Saddam,” featuring an image of Bangladesh Chhatra League president Saddam Hossain, posted a 16-second version that was viewed more than 300,000 times. The voiceover in that video said, “This is just the beginning. This is only the trailer, my friend, photos are coming.”
The same day, a longer 31-second version of the edited footage was posted from a Facebook profile named “Nurunnabi Chowdhury Shawon MP.” It featured the song “Mahapraloy, Joy Banglar Joy.” Nurunnabi Chowdhury Shawon is a former Awami League MP from Bhola-3 constituency. The same video was also shared on TikTok from a page named “AppaSociety,” with the song “Joy Bangla, Jitbe Ebar Nouka.”
To verify the video, Dismislab used keyframe searches, finding multiple news reports (1, 2, 3) containing the original footage. All were published on April 15. Channel 24 captioned its video, “Awami League’s flash rally in Dhaka.”
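In practice, a keyframe search means sampling representative frames from a clip and running each frame through a reverse image search engine to find earlier versions of the footage. The snippet below is a minimal sketch of the sampling step in Python using OpenCV; the interval, file names, and function name are illustrative assumptions, not Dismislab's actual tooling.

```python
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame per interval so each can be reverse-image searched."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % step == 0:
            path = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        index += 1
    cap.release()
    return saved

# Each saved frame can then be uploaded to a reverse image search
# service to locate the original footage or earlier shares.
frames = extract_keyframes("viral_rally_video.mp4")
```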
A YouTube channel called “Bangla Affairs” reported: “A massive rally organized by Dhaka North City Awami League, led by organizational secretary Mazhar Anam, was held in front of the TV Center in Rampura, Dhaka, at 7:30 a.m. on Tuesday (April 15).”
The banners, slogans, organizers’ names, and designs matched perfectly between the viral video and the verified news footage. The colors of the participants’ clothing and visible background elements such as tall buildings, an under-construction footbridge, towers, roadside electric poles, and an inverted banner also matched.
However, comparing the genuine footage with the edited videos revealed clear differences. In the original video, the two individuals in red and black were male, but in the viral video they appeared as women, and their bodies morphed unnaturally. Zooming in revealed that many faces in the crowd were blurry or distorted and that their movements appeared unnatural. The buses visible in the original video were clear, with legible names and colors, but in the viral version they were obscured. The slogans in the edited videos also differed from the original, replaced with new audio tracks.
In the original footage, the crowd density was moderate, with visible gaps between participants, and the number of people thinned toward the back. In contrast, the edited video showed a street packed with people and almost no empty space. While the actual rally had fewer than a hundred participants, the AI-edited version appeared to show thousands.
These inconsistencies – gender changes, distorted bodies, blurred faces, missing environmental details, and unnatural crowd additions – indicate that AI was used to manipulate the footage, artificially inflating the number of participants. Further verification with the AI detection tool AIorNot produced the same conclusion.
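Some of these artifacts can also be screened for programmatically. As an illustration (not Dismislab's actual workflow), the snippet below uses OpenCV's bundled Haar cascade face detector and the variance-of-Laplacian sharpness measure to flag unusually blurry faces in an extracted frame; the threshold value is an assumption that would need tuning per video.

```python
import cv2

def blur_score(gray_region) -> float:
    # Variance of the Laplacian: low values indicate little high-frequency
    # detail, i.e. a blurry or smeared region.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def flag_blurry_faces(frame_path: str, threshold: float = 100.0) -> list[dict]:
    """Return detected faces whose sharpness falls below the threshold."""
    frame = cv2.imread(frame_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    flagged = []
    for (x, y, w, h) in faces:
        score = blur_score(gray[y:y + h, x:x + w])
        if score < threshold:
            flagged.append({"box": (x, y, w, h), "sharpness": score})
    return flagged

# Frames from the viral clip would be expected to yield many low-sharpness
# faces, consistent with the distortions observed manually.
print(flag_blurry_faces("keyframe_00000.jpg"))
```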

Study Finds AI-generated Content Can Shape Public Perception
A recent study examined whether AI-generated images can change people’s perceptions of protest crowd sizes. Conducted in 2025 by Stephan Scholz and Nils B. Weidmann of the University of Konstanz, Germany, the study used 814 authentic images from eight protest movements in Chile and Lebanon, with at least 10 photos per event. The researchers used AI-based segmentation and diffusion models to alter the crowd sizes, either enlarging or shrinking them.
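To make the method concrete: diffusion-based inpainting repaints masked regions of a photo according to a text prompt, which is one way a segmented crowd area can be enlarged. The sketch below, using the Hugging Face diffusers library, is a minimal illustration under assumed file names, prompt, and model checkpoint; it is not the researchers' actual pipeline.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# A publicly available inpainting checkpoint (an illustrative choice,
# not the model used in the study).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("protest_photo.jpg").convert("RGB").resize((512, 512))
# White pixels in the mask mark regions to repaint, e.g. empty street
# areas identified by a separate segmentation step.
mask = Image.open("empty_street_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a dense crowd of protesters filling the street",
    image=image,
    mask_image=mask,
).images[0]
result.save("inflated_crowd.jpg")
```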
Participants were then shown protest photos, some real and some AI-altered. The results showed that the manipulation worked: altered images shifted viewers’ perceptions of crowd size.
According to the researchers, “The size of crowds at protests is a highly politicized figure, as it reflects the level of public support for a regime or a specific political issue. Different political actors have varying interests in how the public perceives crowd size. Some may seek to downplay the crowd size to undermine legitimacy, while others exaggerate it to boost the protest.”
They added, “One way to do this is through the spread of misinformation through social media. Manipulated visual content, in particular, can be effective because it still has the reputation of being trustworthy for what happens on the ground, wherefore the content is often difficult to verify. Today, the latest generative models enable image manipulations on a large scale. These AI-generated images have become nearly indistinguishable from authentic images and can be used to either inflate or reduce crowd portrayals on images.”
AI Video Manipulation Worldwide
The use of AI-generated visuals in politics has become the new normal. Recently, a video showing “thousands” protesting against immigration in Australia was found to be AI-generated: faces appeared blurred, identical figures repeated, and trees remained unnaturally still. Media and police later confirmed that the actual rallies in Sydney, Adelaide, and Melbourne were much smaller, and that the location in the viral video did not match any real site.
In August this year, during student protests near Indonesia’s parliament, a video claiming to show “thousands of students” also went viral. AFP Fact Check later confirmed it was AI-generated. The video contained meaningless banner text, distorted faces, and oddly moving vehicles—hallmarks of AI-synthesized visuals.
AI-generated Political Videos in Bangladesh
After the fall of the Awami League government, multiple AI-generated videos calling for rallies in support of the party have circulated on social media in Bangladesh.
Fact-checking organization Rumor Scanner identified several such videos showing alleged Awami League rallies in Dhaka or Chattogram, boat processions on rivers, “Joy Bangla” marches, and anti-Jamaat-Shibir protests. These videos showed clear inconsistencies, such as mismatched lip movements, people merging into one another, sudden body distortions, and warped or unreadable banner text.

Several fake rally videos circulated widely in August and September 2025 from multiple accounts (1, 2). Notably, Google DeepMind released Veo 3 in May 2025, expanding users’ ability to generate videos not only from text but also from images, with optional audio generation or custom soundtracks.
The “Hossain Saddam” video differed slightly from earlier AI-generated rally clips. Previous fake videos showed unreal locations with no match to real geography; in this case, however, the buildings, footbridge, and tower matched the original footage, and the banner text and design were clear and undistorted. This suggests the manipulation was more realistic, staying closer to genuine footage.
Tech Company Policies and Warnings
Connor Leahy, CEO of AI safety firm Conjecture, told Time magazine: “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI.” He added that failing to regulate or penalize such irresponsible behavior could have “terrible consequences” for innocent people worldwide.
Meta has updated its labeling policy for AI-generated and manipulated content. The company said that if Meta detects AI-created images, or if users disclose they used AI tools, the post will be labeled as “Made with AI.” However, if a post is only partially edited with AI (not fully AI-generated), the “AI Info” label will not appear directly on the post but will be accessible through the post menu. Fully AI-generated content will display the label directly.
TikTok requires creators to label any realistic content generated or heavily edited with AI. If a post lacks a label, TikTok may remove, limit, or label it based on potential harm. In May 2024, TikTok announced that it would automatically label AI-generated content uploaded from specific platforms. Previously, the company labeled only content made with its own AI tools and relied on creators to tag other AI-generated content.
The viral Awami League rally videos on Facebook and TikTok had no such labels.
Julia Smakman of the Ada Lovelace Institute said, “Existing technical safeguards implemented by technology companies such as ‘safety classifiers’ are proving insufficient to stop harmful images and videos from being generated.” She added, “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
In 2024, Raquel Miguel, a senior researcher at EU DisinfoLab, said, “Labelling does not address all the risks posed by AI technologies; it should complement other moderation measures and not necessarily prevent other harsher actions against harmful content.” She noted that many companies shift responsibility to users or AI detection tools: “Platforms continue to place the burden of responsibility on users (as in the case of YouTube) or on the AI industry for labelling content. Little effort on self-detection is made, which can leave a loophole for content that the tech industry or users do not identify,” she said.
Disclaimer: The original version of this fact-check report was published in Bengali on Dismislab’s Bengali website on October 8, 2025. The English translation was completed later; however, to maintain time accuracy and avoid any potential misinterpretation, the English version has been published with the original publication date.