A coordinated political bot network on Facebook exposed

Minhaj Aman
Research-Lead, Dismislab

Partho Protim Das
Engagement Editor, Dismislab

Abrar Ifaz
Research Officer, Dismislab

On June 21 this year, an article titled “Is There an Invisible Code on Every Page Printed by a Color Printer?” appeared on the website and Facebook page of Bangladesh’s online news outlet bdnews24.com. A technical story like this would hardly draw the attention of political enthusiasts, but it caught the eye of a Dismislab researcher, not for the article itself but for a comment under the Facebook post:

“This election will be transparent. People will be able to vote freely in the upcoming election. The BNP is scared to participate because they can’t rig the vote this time.”

The researcher was taken aback by the comments that followed. Even months after the election, users were still making remarks critical of the opposition party and its leaders, expressing hopes for a successful and fair election, and insisting that the election be held under the current government. But why would anyone make such comments so long after the election, and on a completely unrelated, non-political post at that?

This question led Dismislab to uncover a bot network comprising 1,369 Facebook accounts, responsible for more than 21,000 coordinated comments, all in favor of the then-ruling party, Bangladesh Awami League, across various Facebook pages.

The network became active just before the 12th Parliamentary Election and continued using these comments even afterward, particularly targeting pages linked to established media outlets and the opposition, especially the Bangladesh Nationalist Party (BNP). 

The research began by collecting every comment from that bdnews24.com Facebook post. A subsequent Google search identified 196 additional posts with similar comments, resulting in a database of around 35,000 comments. Among these, over 21,000 were traced to the bot accounts, all derived from just 474 unique comments that were repeatedly posted across different pages. To ensure accuracy, the final dataset focused on comments that were posted more than 10 times. Given the scope, the actual number of bot accounts and comments might be even higher.

The bot network typically focused on political posts, especially those containing specific keywords. However, it sometimes made errors, particularly with terms whose Bangla spellings coincidentally contain the letter sequence corresponding to “EC” (Election Commission). In the case of bdnews24.com, for instance, the article about color printers mentioned the “Machine Identification Code (MIC)”; transliterated into Bangla, “MIC” contains the same letters as “EC,” which led the bots to flood the post with hundreds of comments. The study revealed several such bot blunders.

The final database provided insights into the political keywords and comments the bot network used. Analysis of each account linked to the network revealed that most profiles shared common traits: locked or private profiles, activation before the election, missing or stolen profile pictures, very few or no friends, and following two specific pages. These characteristics are consistent with the conventional definition of political bots.

Meta describes Coordinated Inauthentic Behavior (CIB) as coordinated actions aimed at manipulating public discourse to achieve strategic goals, often using fake accounts to mislead others. The activities of this bot network appear at least partially, if not entirely, computational: it selected posts based on specific keywords and then left pre-written comments, with little human involvement.

Dr. Naeemul Hassan, an Assistant Professor at the Philip Merrill College of Journalism and the College of Information Studies at the University of Maryland, noted, “The findings of Dismislab’s investigation suggest the use of computational tools.” However, it is important to recognize the limitations of this analysis. While the repetitive and coordinated nature of the comments points to computational methods, the investigation lacks direct evidence of automation, such as a detailed analysis of bot behavior over time or specific insights into any algorithmic processes involved.

Profile patterns

Dismislab identified 1,369 accounts in this research. Most of these profiles (77%) bore female names with striking similarities. For example, 24% of the female-named profiles had the last name “Akter” (spelled Akter or Aktar), with names such as Diya Akter, Riya Akter, Liza Akter, Lima Akter, Lisa Akter, Misa Akter, and the list goes on.

Male profiles frequently used “Ahmed” and “Islam” as last names. A common trait of both male and female accounts was that 90% of the names consisted of two words; sometimes a single name was split into two, such as Ri Pa, Mi Na, Li Za, Zu Thi, and Lam Ya.

Fact-checking organization Snopes and cybersecurity firms Cloudflare and McAfee (1, 2, 3) suggest looking for specific traits to identify fake Facebook profiles, including restricted privacy settings, minimal information in the “About” section, very few or too many posts, profile pictures taken from elsewhere on the internet, and usually a low friend count.

In line with these criteria, Dismislab reviewed the privacy settings, friend counts, posting tendencies, identity information, and profile pictures of the 1,369 profiles. It was found that 247 profiles were locked. Of the remaining 1,122 accounts, 70% used profile pictures taken from elsewhere on the internet. In some cases, the same picture was used across multiple profiles.

For example, a profile picture not found elsewhere online was used by eight accounts: Fahad Islam, Rajib Ahmed, Mamun Ahmed, Ahmed Tahsin, Riyad Khan, Tanzim Ahmed, Jahangir Alom, and Ariyan Islam. The picture was uploaded on all profiles on the same day, November 30, 2023. Similar findings were observed for 19 other profile pictures, which were used on multiple profiles despite not being found elsewhere online. A total of 111 accounts used these 20 profile pictures.
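Dismislab spotted these reused pictures through manual review and reverse image searches. At scale, one way to flag the same photo across many profiles is perceptual hashing. A minimal sketch, assuming the profile pictures have been downloaded locally; the account names, file paths, and the choice of the Python imagehash library are illustrative, not part of Dismislab’s method:

    from collections import defaultdict

    import imagehash       # pip install imagehash
    from PIL import Image  # pip install Pillow

    # Hypothetical mapping of account name -> downloaded profile picture
    pictures = {
        "Fahad Islam": "pics/fahad_islam.jpg",
        "Rajib Ahmed": "pics/rajib_ahmed.jpg",
        "Mamun Ahmed": "pics/mamun_ahmed.jpg",
        # ... one entry per unlocked profile
    }

    groups = defaultdict(list)
    for account, path in pictures.items():
        # Perceptual hashes survive re-compression and resizing, so
        # re-uploads of the same photo land in the same bucket.
        key = str(imagehash.phash(Image.open(path)))
        groups[key].append(account)

    for key, accounts in groups.items():
        if len(accounts) > 1:
            print(len(accounts), "accounts share one picture:", accounts)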

Female-named accounts likewise shared profile pictures. McAfee notes that fake accounts often use pictures of attractive women or handsome men as a lure. This trend held in this network as well, where pictures of Bangladeshi actresses and models like Ashna Habib Bhabna, Tama Mirza, and Naila Nayem, and South Indian actress Malavika Menon were used (the archived links lead to fake profiles using these actresses’ pictures).

Among profiles that were not locked, 85% had no identity information in the About section, and 93% had no public posts other than profile pictures and cover photos. Nearly half (49%) did not display friend counts, and 45% had fewer than 200 friends. Only seven accounts posted regularly, often sharing similar types of posts.

Interconnected bot network and coordinated behavior

This research identified 21,221 political comments, but only 474 were unique. The bots circulated these 474 comments across different posts repeatedly. The analysis clearly shows the interconnection among the bots (see the visualization below). For instance, a profile named Riya Akter made 138 comments on different posts over the last six months. On May 18, long after the election, one of her comments was: “Bangladesh is an independent, sovereign country. An independent Election Commission (EC) has been formed here by law. The upcoming national election will be neutral, I hope.” This comment was posted on 96 posts by 109 bot accounts, including Diya Akter, Raisa, Rafia Akter, Nahid, and Nipa.

Another comment, “Only those who have no faith in the people are afraid. That’s why BNP is scared to participate in the election,” was the most frequent, appearing 244 times. It was made by 217 bot profiles across various posts. On a sports news post on Channel i’s Facebook page, eight accounts, including Arian Munna, Nusrat Faria, and Ratna Akter, made the same comment.
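The structure behind such a visualization can be modeled as a bipartite graph linking accounts to the unique comments they reuse; projecting that graph onto accounts makes the coordination clusters explicit. A minimal sketch with hypothetical rows, using the Python networkx library (the comment IDs are invented for illustration):

    import networkx as nx
    from networkx.algorithms import bipartite

    # Hypothetical observations: one (account, unique_comment_id) pair
    # per comment instance found in the dataset
    records = [
        ("Riya Akter", "c017"), ("Diya Akter", "c017"), ("Raisa", "c017"),
        ("Rafia Akter", "c017"), ("Nahid", "c017"), ("Nipa", "c017"),
        ("Riya Akter", "c201"), ("Nahid", "c201"), ("Ratna Akter", "c201"),
    ]

    G = nx.Graph()
    for account, comment_id in records:
        G.add_node(account, kind="account")
        G.add_node(comment_id, kind="comment")
        G.add_edge(account, comment_id)

    # Two accounts become neighbors if they posted the same pre-written
    # comment; dense clusters in this projection indicate coordination.
    accounts = {n for n, d in G.nodes(data=True) if d["kind"] == "account"}
    coordination = bipartite.projected_graph(G, accounts)
    print(sorted(coordination.edges()))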

There are numerous examples where multiple bot accounts posted identical comments with the same spelling errors. Meta also mentioned such linguistic errors in the case of a Chinese CIB network in 2022.

Another indication of the profiles’ interconnectedness is their tendency to like two specific pages. Of the profiles that were not locked, 70% followed Banglar Khobor, the Awami League Media Cell, or both. Since the fall of the Sheikh Hasina government, however, the Awami League Media Cell page is no longer available.

Analysis of comment timing shows that 88% of the comments on a given post were made within a two-hour window. For example, on a news link about BNP Chairperson Khaleda Zia’s treatment on Prothom Alo’s Facebook page, 151 bot comments were made between 11:45 AM and 1:40 PM, a span of 1 hour and 55 minutes.
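Such bursts are straightforward to test for once a post’s comment timestamps are collected. A minimal sketch with hypothetical timestamps:

    from datetime import datetime, timedelta

    # Hypothetical timestamps of bot comments under a single post
    times = sorted([
        datetime(2024, 2, 4, 11, 45),
        datetime(2024, 2, 4, 12, 31),
        datetime(2024, 2, 4, 13, 40),
    ])

    span = times[-1] - times[0]
    print(len(times), "comments over", span)
    if span <= timedelta(hours=2):
        print("burst pattern: consistent with scheduled, coordinated posting")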

Meta has taken action against various pages and profiles in Bangladesh for CIB—removing nine pages and six profiles before the 2018 election and 98 pages and 50 profiles in 2024.

Bot narratives and targets

The bot network mainly targeted 42 Facebook pages, which were primarily related to various media outlets and political opponents of the Awami League. Of these, 68% were top-tier and well-known media pages, and 31% were BNP or BNP-affiliated pages. The network also targeted the page of the political party Ganasanghati Andolan.

Content analysis shows that 86% of the 474 unique bot comments were critical of the BNP and its leaders or attacked them. Examples include: “BNP is a terrorist organization. They should be punished,” “BNP is conspiring to plunder Bangladesh by embezzling and smuggling money and creating Hawa Bhaban,” and “BNP is the main player in the international conspiracy against Bangladesh.” The remaining 14% of the comments praised the government and the Election Commission and called for peaceful or fair elections.

Among the news articles where comments were made, 36% were related to the BNP, while the rest touched on topics like the Upazila elections, the National Parliamentary Election, and the Election Commission. The bot network was particularly consistent in targeting posts that featured key terms such as BNP, Nationalist Party, Election Commission, EC, Khaleda Zia, Awami League, and Sheikh Hasina, ensuring coordinated comments on any news items that included these words.

Where bots went wrong

Bangladesh’s 12th National Parliamentary Election was held on January 7 this year, and the Awami League formed the government for a fourth consecutive term. Yet bot comments posted between February and June continued to use phrases like “the upcoming 12th National Parliamentary Election” or “as the time for the 12th National Parliamentary Election approaches.” This indicates that the comments were created before the election but were posted indiscriminately even afterward.

For example, the comment “The upcoming 12th National Election must be free, fair, and participatory so that other countries and people find this election credible. The Election Commission is working towards this goal”—was made 123 times across 49 posts. All these posts were published on Facebook after January 7. In at least 15 unique comments, the context was pre-election, but they were posted post-election, indicating a lack of human oversight in the process.

Another example of the bots’ blunders is their posting of political comments on irrelevant news items. At least 61 such instances were found (about one-third of the total posts) where comments criticizing the opposition and praising the then-government were made on unrelated news, such as a bank vault robbery in Bogura, the death of Iranian President Ibrahim Raisi, the ICC requesting a change of venue for a cricket match in Pakistan, or news about an actress being hospitalized due to illness.

It may seem puzzling at first why these political comments appear under unrelated posts. However, a closer analysis of the text accompanying these posts reveals a common pattern. When certain terms—like ICU, ICC, Raisi, IFIC, OIC, ICT, Dialysis, or Crisis—are written in Bangla script, they contain the sequence of letters that correspond to “EC” in Bangla. For instance, the bdnews24.com post about the color printer, which sparked this research, included the term “MIC” (Machine Identification Code), which also contains this letter sequence when transliterated into Bangla.

“EC” is an abbreviation for the Election Commission. However, the bot network jumped on any post containing the term “EC,” whether relevant or not, and posted coordinated political comments there. This suggests that the network used computational tools based on keyword detection to select which posts to comment on.
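How such misfires arise is easy to reconstruct. The sketch below applies a naive substring match for ইসি, the Bangla abbreviation of “EC”; the matching logic is inferred from the observed behavior, not recovered code, and the sample texts are illustrative:

    # "EC" in Bangla script; also a substring of the Bangla spellings
    # of MIC, ICU, ICC, Raisi, and similar terms
    KEYWORD = "ইসি"

    posts = {
        "EC announces poll schedule": "ইসি ভোটের তফসিল ঘোষণা করেছে",
        "Color printer article": "প্রতিটি পাতায় এমআইসি কোড থাকে",
        "Raisi obituary": "প্রেসিডেন্ট রাইসির মৃত্যু",
        "Hospital report": "অভিনেত্রী আইসিইউতে ভর্তি",
    }

    for title, text in posts.items():
        # A raw substring check with no word-boundary test matches all
        # four posts, reproducing the misfires described above.
        if KEYWORD in text:
            print("would comment on:", title)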

In 2022, Meta identified a Russian CIB network where similar bot comments were found under irrelevant posts. The social media company also mentioned a similar Israeli network in its Adversarial Threat Report for the first quarter of 2024.

Computational propaganda and democracy

Dr. Naeemul Hassan of the University of Maryland believes that the bot network uncovered by Dismislab is an example of computational propaganda. After independently reviewing the study’s results, he said, “These activities are mainly carried out by using modern technology, with the coordinated efforts of both humans and computational tools.”

Not all such propaganda activities are necessarily carried out with technology alone. As Samuel C. Woolley and Philip N. Howard noted in their book Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (2018), “when it comes to effective use of computational propaganda, the most powerful forms will involve both algorithmic distribution and human curation—software bots and human trolls working together.”

Since the 2016 US presidential election, examples of computational propaganda and political bots have been observed in countries such as Brazil, Russia, Ukraine, Poland, and the Philippines. The Oxford Internet Institute’s 2020 Global Inventory of Organized Social Media Manipulation report mentioned that they documented incidents of computational propaganda and political disinformation on social media in 81 countries that year.

Research has shown that the proliferation of bot networks can have a profound impact on democratic processes and the integrity of information. According to another study published by Woolley and Howard in the Journal of Democracy (2016), such networks are capable of distorting public discourse by amplifying certain viewpoints while suppressing others. This manipulation can create a false sense of consensus, polarize public opinion, and ultimately undermine trust in democratic institutions. 

Naeemul Hassan said, “In the future, these technologies will become even more advanced with the use of artificial intelligence (AI). For example, it will no longer be necessary to use the same profile picture across multiple fake accounts, as AI will be able to create many profile pictures instantly, making it difficult to identify coordinated misinformation campaigns. Therefore, it is essential to raise public awareness about these issues from now on.”


Methodology

This study by Dismislab was sparked by the discovery of an irrelevant political comment under a Facebook post by Bangladesh’s online news outlet, bdnews24.com. To investigate where else such comments had surfaced, the initial 109 comments from that bdnews24 post were individually searched on Google, leading to the identification of 196 additional Facebook posts where similar comments had appeared.

Including the bdnews24 post, a total of 197 Facebook posts were scraped using Apify, an online tool that automated the collection of 35,930 comments from these posts. To ensure the analysis focused on bot-generated comments, we filtered the data to include only those comments that appeared more than ten times, resulting in a dataset of 21,221 comments.
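The frequency filter itself is simple. A minimal sketch, assuming the scraped data has been flattened into (account, comment text) pairs; the sample rows are hypothetical, and on the full dataset this step yields the 21,221-comment dataset described above:

    from collections import Counter

    # Hypothetical shape of the scraped data; the real corpus held
    # 35,930 rows from 197 posts
    scraped = [
        ("Riya Akter", "Only those who have no faith in the people are afraid..."),
        ("Ratna Akter", "Only those who have no faith in the people are afraid..."),
        ("Arian Munna", "A comment that appears only once."),
    ]

    counts = Counter(text for _account, text in scraped)

    # Keep only comments repeated more than ten times, the threshold
    # used to separate likely bot comments from ordinary ones
    bot_like = {text for text, n in counts.items() if n > 10}
    filtered = [(a, t) for a, t in scraped if t in bot_like]
    print(len(bot_like), "unique repeated comments,", len(filtered), "instances")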

From this refined dataset, we identified 474 unique comments, each appearing more than ten times. These comments became the primary focus of our study, as they were most likely to be part of coordinated, bot-driven activity. From the posts carrying these comments, we identified 42 distinct Facebook pages that were frequently targeted. These pages were then classified by type into two main categories: media and opposition party.

The analysis extended to the characteristics of the 1,369 Facebook profiles responsible for these comments, examining factors such as the number of friends, information in the “About” section, the timing of their first post, whether their profiles were locked, and whether their profile pictures could be found elsewhere on the internet.

Limitations: This investigation focused on specific Facebook posts under bdnews24.com and similar posts identified via Google searches, which may not capture the full extent of bot activity within the network. The reliance on reverse searching comments could have missed relevant posts or comments that did not match exactly. Automated data collection using Apify might have limitations, especially if content was removed or altered after scraping. Moreover, the focus on Bangla language content could limit the applicability of the findings to other languages, and the results are specific to Facebook, which may differ from bot activities on other social media platforms.