
Pro-Awami League network discredits media and critics with review bombing
A network of more than 1,000 Facebook accounts has been running coordinated review attacks on journalists, newsrooms, political rivals and cultural institutions, using Facebook’s review tool to discredit critics and suppress reporting. These accounts repeatedly post negative reviews on pages critical of the Awami League (AL), lowering those pages’ public ratings on Facebook.
Dismislab analyzed 62,529 review posts across 1,118 pages to trace how this network operates, who it targets, and how the same language appears hundreds of times in rapid bursts across the social media platform.
Between January and October 2025, the network targeted at least 721 pages, mainly those of newsrooms, activists, political rivals and cultural institutions, accusing them of spreading “false information”, “terrorism” or “social harm”. The same accounts also posted positive reviews on dozens of pro-AL pages, on a number of Indian media pages and on the Facebook profile of a Bharatiya Janata Party (BJP) politician, boosting one side while discrediting the other.
The review-bombing consistently followed criticism. News outlet Dhaka Stream was hit hours after publishing critical stories about former prime minister and AL president Sheikh Hasina. Drik, an independent media organisation, faced attacks after posting images from its exhibition on the July-August uprising that toppled Hasina. Dismislab itself was targeted immediately after reporting on the online harassment of Dr Tasnim Zara, a leader of the newly formed National Citizen Party (NCP). The pattern was equally consistent: two- to three-hour bursts of identical comments, coordinated timing, and the same texts repeated across pages.
Media analysts say such reviews can distort public perception, creating what one editor described as “an online mob” in which first impressions override facts and discredit the targeted page.
Review-bombing on the Dismislab page
On Facebook, review-bombing generally means flooding a page’s recommendation or review section with negative comments or ratings to damage its reputation. Individual criticism is common—people may disagree with a report or dislike a publisher. But when many reviews arrive in a short time, repeating the same wording across multiple accounts, the pattern signals coordination.
The review-bombing of Dismislab began on October 2, 2025, shortly after it published a report on the online harassment of Dr Tasnim Zara. The first review came from the profile Chakrapati Arpita (later changed to Orpita Chakroborthy), who wrote: “I do not thank ‘Dismislab,’ one of Bangladesh’s leading organizations working in favor of rumor and information-terrorism, for this responsible role.” (Translated verbatim from Bangla)
Within minutes, other reviewers repeated the same text. By the morning of October 3, between 7:00 and 9:00 a.m., activity peaked, with 48 reviews appearing within two hours—sometimes only seconds apart, many carrying hashtags such as #StopPropaganda or #ProtectHistory.
Over three days, 60 profiles posted negative reviews, each containing a comment. But when analyzed, those comments turned out to be only 23 original messages reused across accounts. One of the most common lines, posted by 11 profiles, read: “not just talk – they incite damage: fake news, vandalism narratives, and social harm.” Dismislab later traced these same 23 comment scripts across 315 other pages, appearing 1,473 times.
To identify who carried out the attack, Dismislab reviewed the 60 accounts behind the posts. The names varied. Some profiles appeared under personal identities, such as Sheikh Rasel, Asim Sutradhar, Neel, Nira Nira, Emad Bin Rohan or Haidar Ahmad. Others used explicitly political titles, including BNP মানে বিনোদন (BNP Means Entertainment), NCP = National Chandabaz Party, 1971 to 2024: Rajakar-Lalbadar, or Sheikh Rasel 71/75.
At least 19 accounts were named after foods or condiments: Roti Burger, Tropic Mayonnaise, Sosej, Sos Salad, Mushrooms Soup, Chicken Slice, Hot & Spicy, Mayo Best, Black Pepar, Swiss Bear Cheese and others. Thirteen of these accounts were created over just two days – May 31 and June 1, 2025. Together, they posted more than 3,000 reviews on different pages, most using the same small set of repeated texts seen in the attack.

The investigation found that 36 of the 60 accounts that attacked Dismislab also left positive reviews on a Facebook page called Rog Porichorja Kendro. The page identifies itself as satire or parody, but during the investigation its posts regularly expressed support for the AL and mocked opposing groups. One anniversary post described how the tone had shifted: what started as humour had, after August 5, 2024, moved toward actively teaching followers how to “fight online with memes.” Laughter, it suggested, was a tool for confrontation.
60 profiles lead to a network of over 1,000 attackers
| Metric measured | Threshold applied | Result (accounts) | Interpretation |
| --- | --- | --- | --- |
| Page reviewing patterns | 23 unique review phrases | 278 | Core reviewer set |
| Comment frequency | ≥10 comments/user | ~600 | High-activity accounts |
| Comment frequency | ≥6 comments/user | ~1,150 | Medium-activity group |
| Copy-pasted text repetition | Text repeated ≥10 times | 1,019 | Coordinated or templated messaging |
When researchers began mapping activity linked to Dismislab, the initial footprint appeared small. A cluster of 23 repeated review comments revealed an initial ring of 278 accounts – notable, though not large enough to indicate the network’s scale or depth. This prompted a broader analysis.
Using the full dataset of 62,529 comments, drawn from 1,118 pages reviewed by the 60 primary actors, the research team shifted from language patterns to behavioural frequency. Comments were first sorted by account-level activity, and users posting fewer than ten comments were excluded. Under this stricter threshold, approximately 600 accounts remained, demonstrating persistent engagement. When the threshold was relaxed to include accounts posting at least six comments, the estimated network widened to roughly 1,150 accounts, indicating a second layer of moderately active participants.
To assess possible coordination, the dataset was then filtered to retain only comment texts that appeared ten times or more, removing all instances below this frequency. This produced a set of 1,019 accounts that had copy-pasted identical messages—a volume unlikely to emerge from spontaneous, unconnected activity.
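For readers who want to reproduce this kind of filtering on their own data, a minimal sketch in Python with pandas is shown below. It assumes a hypothetical table with one row per review comment and columns named account_id and comment_text; the column names, file name and thresholds mirror the description above for illustration and are not Dismislab’s actual pipeline.

```python
import pandas as pd

# Hypothetical input: one row per review comment, with columns
# "account_id" and "comment_text" (names assumed for illustration).
reviews = pd.read_csv("review_comments.csv")

# Behavioural-frequency filter: count comments per account and keep
# accounts above the activity thresholds described above.
per_account = reviews.groupby("account_id").size()
high_activity = per_account[per_account >= 10]   # persistent accounts (~600 in this study)
medium_activity = per_account[per_account >= 6]  # moderately active accounts (~1,150)

# Copy-paste filter: keep only comment texts that appear ten times or
# more, then count how many distinct accounts posted any of them.
text_counts = reviews["comment_text"].value_counts()
repeated_texts = text_counts[text_counts >= 10].index
coordinated = reviews[reviews["comment_text"].isin(repeated_texts)]

print(len(high_activity), len(medium_activity), coordinated["account_id"].nunique())
```

As the article notes, different cut-offs would produce different network sizes; the thresholds here simply reproduce the ones reported above.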
The extended group posted more than 13,000 review comments. About 94% of these comments pushed a pro-AL message, and roughly four out of five were negative attacks aimed at journalists, media outlets, activists, civil society groups and political opponents. Around 16% praised pages aligned with the party or supportive of its narratives.
One comment alone – “this page spreading false information and promote terrorism. meta should remove it” – appeared 930 times, posted by 215 reviewers between March and October this year. Others accused pages of spreading hate speech, propaganda, lies, or of “glorifying traitors”, often with calls to remove the page altogether.
Data show that review-bombing activity linked to pro-AL accounts became visible in January 2025, with roughly 358 negative comments recorded that month. The volume rose steadily through the year. By September, the monthly count had increased nearly seven-fold to around 2,465, and the trend continued in October. While this report was being prepared, several accounts in the dataset became unavailable or appeared to have been removed from Facebook.
The dataset also revealed 24 users posting anti-AL reviews across pages, along with 612 neutral or unrelated comments traced to other pages.
Who gets targeted by the network
Dhaka Stream offered one of the clearest early signals of how the network operates. On September 17, just hours after the outlet published a report alleging that Sheikh Hasina had forcibly purchased land from a Hindu family, its Facebook page received 30 negative reviews between 5:55 and 6:36 p.m. All carried the same tone and wording, arriving in a tight wave. The outlet later disabled its review section.
When asked about the reason, Uchhash Khan, the deputy manager of social media marketing at the news outlet, said that a wave of negative reviews began appearing on their page immediately after they published a report on the land owned by Sheikh Hasina. Describing the reaction as a form of online harassment, he said, “It wasn’t just that people accused our page of spreading fake news or propaganda; many of our colleagues had their personal photos and family information circulated along with hostile reviews. To prevent this from continuing, we decided to disable the review feature.”

Drik experienced something similar. The award-winning independent media organization saw two rounds of coordinated review-bombing: first on April 19-20, during its Press Photo Contest exhibition featuring images from the July-August uprising, and again on October 9-10, shortly after it posted updates about managing director Shahidul Alam’s participation in the Gaza Freedom Flotilla and his detention. In both instances, the reviews arrived in clusters, using near-identical language. Shahidul Alam has long been a critic of Sheikh Hasina and played a prominent role in the July-August uprising that ousted her. Back in 2018, Alam was arrested for “spreading propaganda” against the Hasina government.
News outlets including Jamuna TV, Dainik Amar Desh, Star News and SA TV News were also hit. One review posted on Jamuna TV’s page accused the TV channel of “promoting militants” and being involved in “3,000+ police killings”. The same comment appeared 30 times between 2:44 and 3:30 a.m. on February 3.
Prothom Alo’s lifestyle page Haal Fashion was also targeted. Between 9:18 and 11:20 p.m. on October 2, twenty-four identical Bangla-language reviews arrived, mocking the paper’s editor and calling for a boycott. Similar waves appeared earlier against SA TV and Star TV in late July and late August.
Political actors formed another large group of victims. BNP Media Cell received 41 negative reviews on January 7 and 25 more on August 5. Pages associated with Jamaat-e-Islami and Chhatra Shibir were also targeted, as were six pages linked to Rashtra Sangskar Andolon party and its publishing network, Rashtrochinta Prokashoni, on August 31.
Political figures such as Bobby Hajjaj (National Democratic Movement), Imran Imon (National Citizen Party), Rashed Khan, Faruk Hasan (Gono Odhikar Parishad), adviser Mahfuj Alam, public commentators Salimullah Khan, Mostofa Feroz, Zahed Ur Rahman and journalist Khaled Muhiuddin appeared repeatedly in the review records. Most had, at some point, criticised the Awami League or spoken publicly about its abuses.

State institutions were not immune. Bangladesh Shilpakala Academy, Press Institute of Bangladesh, Bangladesh Police and the Bangladesh Investment Development Authority were all targeted with coordinated negative reviews. Organizations connected to Chief Adviser Dr. Muhammad Yunus, including Grameen Foundation, and the association of families of victims of enforced disappearances, “Mayer Dak,” were attacked in the same way.
Publishing houses and online bookshops—including Oitijjhya, Ahsan Publication, Rokomari, Book Café and BiddanBD—saw the same pattern of clustered negative reviews. Pathshala South Asian Media Institute and several cultural spaces also appeared in the dataset. On one Rokomari-affiliated page that promotes political and historical books, there were 30 reviews with the same comment: “don’t fall for the looks. The books are poorly written and badly printed.”
The official Facebook pages of the U.S. Embassy Dhaka and the Pakistan High Commission Bangladesh also received negative reviews from the same actors, using slang and accusing the embassies of “promoting terrorism.”
Researchers manually reviewed the posting behaviour of each of the 1,019 accounts by visiting their public profiles, examining shared posts and engagement patterns to infer political alignment and verify indicators of support for the AL.
The same network also posts praise
Many of the same profiles that attacked media outlets and critics also posted positive recommendations on favorable pages, often in the same copy-paste style. In total, 97 unique positive review comments appeared 2,086 times across 165 pages.
These review-comments mostly promoted the ousted AL leadership or echoed its political narratives, praising party figures, criticizing the interim government or opposing groups, and using familiar liberation-war slogans such as “Joy Bangla, Joy Bangabandhu”. In several cases, reviews described these pages as “patriotic”, “defenders of Bangladesh”, or part of a “movement for justice”.

The same accounts reviewed pages negatively or positively depending on whom they intended to boost or discredit. The profile Chakrapati Arpita, later changed to Orpita Chakroborthy, which criticized Dhaka-based news outlets, also posted favorable reviews on ABP Ananda and Republic Bangla, both Kolkata-based news outlets. On ABP Ananda, Orpita wrote: “It is an unbiased page that always brings us the right news. He is a fighter journalist and finds true news through many hurdles which really benefits us a lot.”
On the page of Zee 24 Ghanta, another India-based outlet, six profiles – including “Sheikh Rasel”, “Sosej” and “Sreya Islam” – posted the same review on July 17. It read: “A strong page for all news and truth — Zee 24 Ghanta.” These same profiles had previously left negative reviews on Bangladeshi media pages.
Some of the same accounts posted supportive reviews on the Facebook page of West Bengal BJP leader Rajat Mukherjee. One comment, used repeatedly, described him as a “true friend of Bangladesh” and praised his stance against “anti-state propaganda”.
A smaller mirror network on the other side
On a smaller scale, opposition-aligned pages and profiles also used review-bombing tactics. The investigation identified seven negative review messages, each used more than 10 times, that were posted 250 times on 33 pages by 24 profiles opposing the Awami League. Some of these comments described the party using labels such as “গুজব লীগ” (“rumour league”) or “নাস্তিক লীগ” (“atheist league”).
These profiles also posted positive reviews on certain pages, warning them about what they described as “bot league” review-bombing. On pages like Abu Saeed, Mughda Shesh Hoyni Juddho and “ইনকিলাব জিন্দাবাদ – Inquilab Jindabad,” positive reviews included messages such as: “great page but dismiss review option so that bot league couldnt give fake reviews” and “turn off the review option so that crying bot bahini won’t mess up the page reach.”

Here, too, the timing was compressed. On July 17, ninety-nine negative reviews were posted on twenty-one AL-supporting Facebook pages. On June 20, sixty negative reviews were posted using the same playbook: on the page Bengal Empire 71, fourteen negative reviews appeared between 3:05 and 4:55 p.m. that day, while forty-nine reviews were posted across seven pages in the same two-hour period. On July 17, sixty-eight negative reviews were posted across twelve pages in two bursts – one in the afternoon between 3:09 and 3:35 p.m., and another in the evening between 7:00 and 10:00 p.m.
This mirror network appears significantly smaller and less coordinated, but its full scale requires further research, since this dataset was built by tracing attacks against Dismislab.
Impact and why review-bombing matters
Coordinated review activity can influence how a Facebook page is perceived by ordinary users. When dozens of similar reviews arrive within hours, the platform’s interface displays them as authentic public sentiment. Pages under attack often see a sudden drop in their “recommended” rating, making them appear less credible to new visitors. In several cases observed in this dataset, pages later disabled the review feature entirely, losing a tool that normally helps build trust.
Another Bangladeshi fact-checking group, FactWatch, has also experienced similar coordinated review attacks. Its founding editor, Dr. Suman Rahman, who is also dean of the School of Social Sciences at ULAB, told Dismislab that sudden waves of identical negative reviews can create “an online mob,” discouraging ordinary users from engaging with a page at all.
He said the intention behind such reviews is clear: “The very purpose is to discredit. First impressions matter, and many users form opinions without checking the facts.” Dr. Rahman noted that the impact depends heavily on media literacy: “People with stronger media literacy may ignore it, but a large portion of users don’t. They see a negative review and immediately form a judgment.”
Meta’s Community Feedback Policy states that reviews must reflect real experiences and prohibits coordinated manipulation of ratings or recommendations. The company also restricts reviews that include misinformation, offensive feedback, harassment or attempts to artificially lower a page’s score.
Under Meta’s Coordinated Inauthentic Behaviour (CIB) framework, networks that use multiple accounts to mislead audiences about identity, intent or activity may be subject to enforcement. However, in several of the pages analyzed, negative reviews remained visible despite being repetitive, near-identical in wording and posted within minutes of one another. Whether Meta classifies this as policy-violating behavior depends on intent, coordination proof and reporting volume. The visibility of the reviews in this dataset suggests that the review tool currently leaves space for manipulation.
Methodology
Dismislab began by collecting all reviews posted on its own Facebook page in October. Sixty profiles were identified as having left negative reviews. Researchers then opened the “reviews given” history for each of these 60 profiles to document where else they had posted. This generated a dataset covering every page each profile had reviewed, the text used in each review, and the frequency with which those texts appeared; the total number of review comments made by these accounts was also counted.
In the second stage, Dismislab retrieved reviews from those same pages to build a larger network sample. Only pages where the review feature remained publicly visible were included. From these pages, all review comments were extracted and indexed by text pattern, repetition rate and posting account. Comments were then grouped by uniqueness – first counting distinct review scripts, then separating those that appeared ten times or more.
Generic greetings and non-meaningful repetitions were removed to ensure the dataset reflected only substantive review content. The final dataset was then used to measure repetition, overlap between accounts, and the spread of identical review scripts across Facebook.
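As a rough illustration of this grouping step, the sketch below shows how review comments could be normalized, stripped of generic greetings and counted by distinct script in Python with pandas. The file name, column names and greeting list are assumptions for illustration only; the actual pipeline may differ.

```python
import pandas as pd

# Hypothetical export of page reviews; file and column names are assumed.
reviews = pd.read_csv("page_reviews.csv")

# Normalize text so trivially different copies of the same script match.
reviews["script"] = (
    reviews["comment_text"]
    .str.lower()
    .str.strip()
    .str.replace(r"\s+", " ", regex=True)
)

# Drop generic greetings and other non-substantive repetitions
# (illustrative list only).
GENERIC = {"nice page", "good", "thanks", "ধন্যবাদ"}
substantive = reviews[~reviews["script"].isin(GENERIC)]

# Count distinct review scripts, then isolate those repeated ten or more times.
script_counts = substantive["script"].value_counts()
repeated_scripts = script_counts[script_counts >= 10]

print(f"{script_counts.size} distinct scripts, "
      f"{repeated_scripts.size} repeated ten or more times")
```

Because this kind of matching is exact after normalization, paraphrased versions of the same message would not be grouped together, a limitation noted below.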
This analysis is based on publicly available comments and reflects only the observable portion of network activity, meaning silent viewers, private interactions and removed posts remain outside the scope of measurement. Network size estimates are sensitive to threshold choices – such as ≥6 or ≥10 comments per account, and comment-text repetitions of ten or more – and different cut-offs would produce different scales. Copy-paste detection relies on exact textual matches, so paraphrased or slightly modified versions of the same message may be undercounted. High-frequency participation and repeated messaging indicate behavioural patterns, but cannot independently confirm coordination, intent or affiliation without additional temporal or cross-platform evidence.



