Partho Protim Das

Engagement editor, Dismislab
101 lies that resurface (almost) every year

There is a Bengali proverb that says, “When ten people say so, even God becomes a ghost,” closely echoing Joseph Goebbels, head of Nazi Germany’s Propaganda Ministry, who asserted, “Repeat a lie often enough and it becomes the truth.”

This adage may seem even more relevant in the digital age, when one looks at the trend of recurrent misinformation: lies that repeat over time. Dismislab analyzed more than 2,000 fact-check reports published by seven Bangla fact-checking sites over a one-year period and identified 101 pieces of misinformation that persist on social media, resurfacing almost every year even after being debunked. Some of these lies have circulated on different platforms for 12 to 15 years and encompass religious hatred, populist rhetoric, and harmful health disinformation, among others.

This story was first published on 28 February in Dismislab’s monthly newsletter, “Navigating Narratives” (February edition).

The analysis finds that, despite debunking by fact-checking organizations, such misinformation persists, finding validation when mainstream media grants it space, often evading editorial scrutiny. It endures on social media even after being flagged by third-party fact-checkers (in the case of Facebook), as platforms’ automated systems often struggle to detect such misinformation when it resurfaces.

Trends and effects

Of the 101 instances of recurrent misinformation found in the analysis, four have persisted for more than 10 years, and about 33 have been circulating for 5 to 10 years. While politics is the most common topic across all types of misinformation, the trends differ: in the recurrent segment, most false or misleading claims concern health and science, while in the non-recurrent segment, religion, sports, and disasters are the most common topics after politics.

In politics, false information about the top leaders of the two main parties in Bangladesh, especially Sheikh Hasina in the case of the Awami League (AL) and Khaleda Zia and Tarique Rahman in the case of the Bangladesh Nationalist Party (BNP), tends to recur frequently. 

For example, two contradictory false claims about Sheikh Hasina – that she has been elected the world’s third most honest head of state and that she is the world’s worst prime minister – have been circulating on social media every year since 2017-18, despite multiple fact-check reports debunking them. Another false claim that BNP Acting Chairman Tarique Rahman had been hospitalized in London after a mob beating first appeared on social media before the 2018 parliamentary elections and resurfaced by the end of 2023, before another parliamentary election. Elections and political protests play an influential role in such recurrence.

In the health domain, several harmful claims, including cancer cure tips, have been found to resurface despite fact-checking: for example, that women should not eat coconut or cucumber or drink cold water during menstruation; that avoiding sugar and drinking lemon juice in hot water can cure cancer; and that drinking hot water in which coconut pieces have been boiled destroys cancer cells.

Every year since 2019, another false health claim has been made about dengue fever: “Dengue mosquitoes usually bite below the knee, so applying coconut oil on the lower part of the knee acts as an antidote.” However, the doctor who was falsely quoted in the fake news told the fact-checkers that “Coconut oil does not help at all in preventing dengue. In this case, you (the patient) need rational treatment.”

Various studies have shown that repeating the same information over and over increases people’s trust and acceptance of it. A study titled “The Effects of Repetition Frequency on the Illusory Truth Effect” states, “Why do beliefs in myths, misinformation and fake news persist, despite having been clearly disproven? One contributing factor is likely the fact that people have been exposed to this information repeatedly,” something known as the illusory truth effect. 

In a study on the role of the illusory truth effect in the spread of misinformation, 260 people were shown a mix of new and previously seen information and asked which they would share on social media. The results showed that participants were more willing to share the information they had seen before.

Inconsistency in detection: Role of platforms in question

One way to control recurrent misinformation is through social media platforms’ internal systems. Facebook employs various approaches to detect misinformation, including community reporting, its AI-based systems, and third-party fact-checking organizations. After a fact-checking organization flags false or misleading information, the platform adds a warning label to it with a link to the fact-check report. However, the study found weaknesses and inconsistencies in how these measures are applied to prevent recurrent misinformation.

On January 31, 2023, Boom Bangladesh, Facebook’s third-party fact-checking partner, debunked a fake photo claiming that Bangladeshi Prime Minister Sheikh Hasina wore vermilion during her visit to India. Applying vermilion to the forehead is a long-standing tradition among Hindu women in South Asia. The fake picture, carrying a religio-political connotation, claimed that while Sheikh Hasina wears vermilion like Hindu women in India, she uses Islam in politics in her own country. Facebook has since put a warning label on almost every instance of the fake photo posted since 2016, and labels were seen on similar disinformation published after January 2023. This indicates that Facebook was able to detect the photo when it resurfaced after the initial flagging.

However, when it comes to detection and labeling, this is not the case with all misinformation posted on Facebook. For example, in 2018, a fake video of Begum Khaleda Zia, chairperson of Bangladesh’s main opposition party, the Bangladesh Nationalist Party, went viral. In the video, which was cut and spliced together from an old speech of hers, Khaleda Zia can be heard making various comments about her family and political movement. Three separate fact-checking organizations published at least four fact-check reports about this manipulated video in 2021, 2022, and 2023. However, it resurfaces every year on social media, and the study found no Facebook warning label on at least 80 identical fake videos posted before and after the fact-check reports were released. More concerning still, this fake video circulated on Facebook as an advertisement in December 2023, ahead of the 12th National Assembly elections.

Facebook says it not only adds warning labels but also scales the work of human experts using artificial intelligence: “Our AI tools both flag likely problems for review and automatically find new instances of previously identified misinformation.”

However, this analysis finds that when a third-party fact-checker flags a fake story, Facebook predominantly adds warning labels to instances of the same misinformation published in the past. Yet it is unclear how efficiently it can detect and label misinformation once it is reposted months or years after it was initially flagged as false. This study identified more than a dozen cases where previously labeled misinformation went undetected when it resurfaced.

In Bangladesh, a long-held myth about the soft drink company Pepsi is that its name stands for “Pay Every Penny to Save Israel.” This claim, which carries a religious connotation, has been debunked multiple times by various fact-checking organizations. However, Rumor Scanner reports that it has surfaced on social media every year since 2009, persisting for at least 15 years. In November 2023, FactWatch, a third-party fact-checking organization, debunked it and flagged it as misinformation. Despite this, the fake news continued to circulate in 2024 without any warning labels.

Similarly, a health hoax circulating on social media since 2017 involves a claim that doctors administer ‘cow injections’ to deceased patients in a hospital’s intensive care unit (ICU), making them appear alive to inflate medical bills. In December 2021, FactWatch published a fact-check report debunking the claim and Facebook added a label. However, one can still find many identical posts on the platform lacking warnings, as the fake news resurfaced in 2022, 2023, and 2024.

A Bangladeshi fact-checker, who worked with two third-party fact-checking organizations, explained the process: Fact-checkers first identify and report on a particular misinformation and subsequently, enter Facebook’s system to ‘rate’ the related posts. Facebook typically adds warning labels to those. In some cases, the platform’s AI system then identifies other similar posts on that topic and prompts third-party fact-checkers to select misinforming posts from the list. This rating process often detects past posts with the same misinformation, prompting Facebook to include warning labels.

However, according to the fact-checker, “this automated system cannot detect all fake news posted on the same topic in the past, and sometimes some posts are missed.” Moreover, the rating in the second stage is not mandatory: fact-checkers may perform this step or skip it depending on time constraints, and it applies to past posts only. As for a previously rated or flagged post resurfacing, two fact-checkers working with separate third-party fact-checking organizations said they are unsure whether the automated system can detect it and automatically add a warning label, as there are not enough precedents to recall.
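Facebook has not disclosed how its matching actually works, but the gap the fact-checkers describe is easy to illustrate. The sketch below, a hypothetical example rather than Facebook’s system, contrasts an exact hash fingerprint, which a one-word edit defeats, with word-shingle Jaccard similarity, a standard near-duplicate technique that still scores a lightly altered repost as a close match. All names and the 0.5 threshold are illustrative.

```python
import hashlib

def exact_fingerprint(text: str) -> str:
    """Exact hash: any one-character change yields a different fingerprint."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def shingles(text: str, n: int = 3) -> set:
    """Overlapping word n-grams, a common basis for near-duplicate detection."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

flagged = "Dengue mosquitoes usually bite below the knee so coconut oil acts as an antidote"
repost  = "Dengue mosquitoes usually bite below the knee and coconut oil acts as an antidote"

# A single changed word ("so" -> "and") breaks the exact fingerprint...
print(exact_fingerprint(flagged) == exact_fingerprint(repost))  # False
# ...but the repost still shares most shingles with the flagged original.
print(jaccard(flagged, repost) > 0.5)  # True
```

Real systems face a harder version of this problem (paraphrased Bangla text, text baked into images and video), which is consistent with the fact-checkers’ report that reposts slip through.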

Both fact-checkers said that Facebook’s automated detection system must improve in Bangla to better tackle recurrent misinformation. One added, “there should be more scrutiny on the bias of its detection systems as well.”

The case of YouTube is even worse. How YouTube tackles misinformation on its platform is so vague and inadequate that independent fact-checkers around the world have labeled it a ‘major conduit of online disinformation’. About 80 groups said in that joint letter that it “is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others”.

When it finds its way into the mainstream media

For the past 14 years, a common and popular piece of misinformation circulating on social media has claimed that “Bangla has been selected as the sweetest and most melodious language in the world in a UNESCO survey.” The claim first surfaced on Twitter in 2010 and was subsequently reported by the Times of India. Since then, it has been published in various media outlets in Bangladesh and has found its way into the speeches or posts of at least three Bangladeshi ministers.

In the years since, screenshots of news articles have been used to spread this fake news. It often gains traction on February 21, International Mother Language Day, a national day in Bangladesh that commemorates the martyrs who sacrificed their lives during the 1952 language movement. There are at least 10 videos on YouTube carrying this misinformation, sometimes presented as news, educational content, or quizzes.

Since 2019, another piece of misinformation about the Bangla language has been circulating in the media and on social platforms. It claims that Bengali has been recognized as the second official language of London. However, according to Rumor Scanner, there is no official announcement to support this claim, and the research by the charity City Lit that gave rise to it was deemed flawed. Despite this, numerous media outlets, both large and small, published the misinformation as news without proper verification. In 2019, it was also shared by a then minister.

A fabricated quote attributed to Pakistani politician Maulana Fazlur Rahman illustrates how a fake story can circulate on social media for years after a media report. The claim is that he said earthquakes occur because women wear jeans. In reality, the statement was published as satire on a Pakistani site in 2015 and found its way to Bangladesh through some Indian media outlets. That same year, the Daily Kaler Kantho published a news report citing the fictional quote, which then spread on social media. Now, whenever a significant earthquake occurs anywhere in the world, the fake news resurfaces, often accompanied by a screenshot of the Kaler Kantho report.

According to the fact-check data, 17 percent of recurrent misinformation is sourced from traditional media. In an article published on the website of Princeton University’s Princeton Election Consortium, Sam Wang wrote, “The risk that people will adopt this false belief is made worse by the fact that ostensibly credible news media sources do not always engage in fact checking.” Any successful strategy to debunk a false claim, according to him, should include a replacement statement that is true, giving readers and viewers something accurate with which to replace the incorrect information.