Misinformation on YouTube: High profits, low moderation

Dismislab | Official Desk

Summary

YouTube, a dominant platform in Bangladesh, significantly influences news consumption and entertainment, but concerns persist about its role in spreading and monetizing misinformation. A study by Dismislab, Digitally Right’s disinformation research unit, identified 700 unique Bangla misinformation videos that had been fact-checked by independent organizations and were still present on YouTube as of March 2024. About 30% of these misinformation videos, excluding Shorts, displayed advertisements, generating profit for the platform and posing reputational risks for the advertisers. These ads appeared on 165 videos, which accumulated 37 million views and carried ads from 83 different brands, one-third of them foreign companies targeting the Bangladeshi audience. 16.5% of the channels posting these videos were YouTube-verified; some were known media outlets, but most were content creators across genres such as entertainment, education, and sports, often pretending to be news providers.

Misinformation primarily centered on political (25%), religious, sports, and disaster-related topics, with some channels repeatedly spreading false information. Researchers reported all 700 videos to YouTube, but only a fraction (25 out of 700) received action, such as removal or age restrictions, highlighting gaps in YouTube’s enforcement of its own policies. Advertisers and experts interviewed for this study expressed frustration over ad placements on misinformation content, emphasizing the urgent need for YouTube to enhance its moderation capabilities and provide better transparency and control options.

Author: Tohidul Islam Raso; Data Analysis: Partho Protim Das; Research Assistants: Minhaj Aman, Ahamed Yaseer Abrar Ifaz, Steve Salgra Rema, Fatema Tabasum, Neeti Chakma; Reviewer: Md. Pizuar Hossain.


Introduction

YouTube is one of the most popular social media platforms in Bangladesh, with more than 36 million monthly users as of April 2024. From news to entertainment, education to health, and political activism to travel blogging, YouTube videos have become an integral part of life and a major source of information for many. The large user base and a growing consumer economy make the platform a destination for advertising from local and international companies. According to Data Reportal, YouTube ads reached 43.4 percent of Bangladesh’s internet users in January 2024.

While there are other revenue sources such as subscriptions, memberships, and chats, advertising is the primary source of revenue for YouTube, which also allows users to monetize their video content and earn from it. YouTube and other social media platforms have transformed the content creation market in Bangladesh, and content creation has become a viable career option for many.

“No company has done more to create the online attention economy we’re all living in today,” writes Mark Bergen at the beginning of his book “Like, Comment, Subscribe,” which details the history of YouTube. The more content is watched, the more both the creators and the platform earn. In this attention economy, the race to convert attention into profit often allows misinformation and disinformation to survive and thrive on the internet.

YouTube has long been criticized by fact-checkers and researchers for its insufficient role in combating misinformation. In a 2022 letter, 80 fact-checking groups said, “YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves. Current measures are proving insufficient.”

A recent study by the Center for Countering Digital Hate (CCDH) revealed that YouTube is generating millions of dollars in revenue from ads on channels that spread climate change misinformation, as content creators employ new tactics to evade the platform’s misinformation policies and monetize their misleading content. Similarly, previous research by Dismislab revealed that numerous fake news channels, often verified and spreading false information, are monetized on YouTube and creators turned fake videos of celebrity deaths into a lucrative business, exploiting the public’s fascination with celebrities to generate profit.

This research further investigates how YouTube allows the monetization of misinformation in Bangladesh while also benefiting financially from it. It examines how the platform moderates misinformation content after community reporting and highlights the concerns of brands whose ads are displayed alongside false and misleading content, with inadequate remedies in place.

Methodology

To determine what counts as misinformation, this research relied on fact-checks produced by independent fact-checking organizations. Researchers analyzed 2,042 fact-check articles published between January 1, 2023, and September 30, 2023, on seven fact-checking websites covering Bangladesh: Dismislab, Rumor Scanner, Boom, Newschecker, Fact Crescendo, Fact-Watch, and AFP Fact Check. The titles of false or misleading claims from these articles were searched on YouTube to identify corresponding videos. Where multiple results were found for a specific claim, only the first instance was considered for further analysis. This search was carried out between November and December 2023.
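
The first-instance rule can be illustrated with a short, hypothetical sketch. The snippet below is not the team’s tooling; it is a minimal illustration that assumes a YouTube Data API v3 key (the API_KEY placeholder) and an invented claim title, and it keeps only the first video returned for each claim, mirroring the rule described above.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a YouTube Data API v3 key
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def first_match(claim_title):
    """Search YouTube for a fact-checked claim title and keep only the first result."""
    params = {
        "part": "snippet",
        "q": claim_title,
        "type": "video",
        "maxResults": 1,
        "key": API_KEY,
    }
    response = requests.get(SEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    items = response.json().get("items", [])
    if not items:
        return None  # no corresponding video found for this claim
    item = items[0]
    return {
        "claim": claim_title,
        "video_id": item["id"]["videoId"],
        "video_title": item["snippet"]["title"],
        "channel": item["snippet"]["channelTitle"],
    }

# Hypothetical usage with an invented claim title:
# print(first_match("Old flood footage passed off as a recent cyclone"))
```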

Each video was then reported to YouTube using six different user IDs between January and February 2024. During submission, reports were categorized as misinformation in YouTube’s user reporting system, with a short description of the misinformation and links to fact-check reports provided in the additional details section. After reporting, the team waited four more months before analyzing the actions shown in the “Report History,” where YouTube tells users what action has been taken on the content they reported.

To assess whether these misinformation videos displayed advertisements, a team of researchers watched each video twice between February and March 2024. The team documented each ad shown during the review, including the associated company or brand name.

Six key informants representing advertising brands, digital marketing agencies, and issue experts were interviewed to better understand the challenges and needs in addressing the monetization of misinformation on YouTube.

Scholarly writings, relevant platform policies, reports from various organizations, and news articles were also studied as part of extensive desk research. These resources provided valuable insights into policy updates, critical discourses regarding YouTube’s policy implementation, and existing knowledge about monetization on YouTube.

Findings

This section presents an analysis of the findings, organized into two sub-sections covering trends in misinformation by type of channel, content, and topic.

Type of content and channels

– 700 unique pieces of misinformation content, already fact-checked by independent fact-checkers, remained live and thriving on YouTube, generating views and engagement until March 2024.

– Of the 700 fact-checked misinformation items, 79.7 percent (558) were posted as regular videos and the remaining 20.3 percent (142) as short-form videos known as Shorts.

– The long videos garnered about 149 million views, averaging around 267,000 per video, while the 142 Shorts received 212,052 reactions in total.

– These videos were posted by 541 YouTube channels, and 16.5 percent (89) of those channels are verified by YouTube. While some of the channels represent mainstream media outlets, most are content creators of various kinds, spanning entertainment, education, and sports, and often pretending to be news providers.

– Among the channels analyzed, 64 were found to have spread more than one misinformation video.

– One channel (Sabai Sikhi) posted as many as 9 misinformation videos, and those videos were removed either by YouTube or the creator following a report published by Dismislab in March 2024.

Misinformation by topic

– 60 percent of the total videos analyzed spread misinformation across four topics: politics, religion, sports, and disasters. 

– One in four (25 percent) misinformation videos was about politics, the highest share among all topics, mainly triggered by the 12th parliamentary election of Bangladesh, held on January 7, 2024.

– Major political misinformation narratives included false claims about elections, implying doubts about their fairness and suggesting undue influence by the United States (US) in election processes; misleading information regarding the new US visa policy announced in May 2023, falsely alleging sanctions on Bangladeshi government figures; rumors of a military coup and takeover, falsely asserting imminent military control over the country; and various misleading narratives surrounding the deaths of different public figures.

– Of the top 20 channels that produced and shared the highest number of political misinformation videos, 18 are dubious in nature, often posing as news channels and impersonating mainstream outlets.

– Similar content creators are the top contributors to religious misinformation, which makes up the second-highest share at about 15 percent of the total.

– YouTube channels of mainstream media outlets were the leading spreaders of sports- and disaster-related misinformation.

Profiting from misinformation

YouTube removed the display of monetization status from channel page code on November 17, 2023, limiting the ability of researchers and creators to examine who is admitted into the YouTube Partner Program to monetize content. There are tools that check whether a video is monetized by scanning the video page’s source code for “yt_ad”, “value”: “1” to determine whether it plays ads, but they often produce inconsistent results. To address this limitation, the research team took a manual approach, watching each sample video twice and documenting whether it displayed any advertisements.
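
As a concrete illustration of the flag-scanning approach described above, and not a reproduction of any specific tool, the sketch below fetches a video page and looks for the “yt_ad” marker in its source. As noted, this signal is unreliable, so a negative result does not mean a video never shows ads.

```python
import requests

def appears_monetized(video_url):
    """Rough heuristic: look for the "yt_ad" flag in a YouTube video page's source.

    This mirrors the marker described above; such checks often produce
    inconsistent results, which is why the study fell back on manually
    watching each video.
    """
    headers = {"User-Agent": "Mozilla/5.0"}  # a desktop user agent avoids some consent redirects
    html = requests.get(video_url, headers=headers, timeout=30).text
    position = html.find('"yt_ad"')
    if position == -1:
        return False  # flag not present in the page source
    # Check whether the flag's value is "1" in the surrounding config snippet.
    return '"1"' in html[position:position + 80]

# Hypothetical usage with any public video URL:
# print(appears_monetized("https://www.youtube.com/watch?v=<video_id>"))
```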

There are various ad formats on YouTube, including skippable in-stream ads, non-skippable in-stream ads, in-feed video ads, bumper ads, outstream ads, and masthead ads. The research team documented only in-stream ads on videos, excluding Shorts, since Shorts do not typically show in-stream ads the way longer YouTube videos do. Numerous factors, including demand and ad auctions, determine when an ad is shown in a video, so a misinformation video on which researchers saw no ads could still display ads at another time. The true extent of the monetization of misinformation on YouTube could therefore be much higher than what this research found.
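
To make the likely undercount concrete, here is a small back-of-the-envelope illustration under a simplifying assumption that is not from the study: if a monetized video serves an ad independently on each view with probability p, the chance that two viewings both miss the ad is (1 − p)².

```python
# Illustration only: assumes ads are served independently on each view with probability p.
for p in (0.3, 0.5, 0.7):
    missed_both = (1 - p) ** 2
    print(f"ad shown {p:.0%} of the time -> chance two viewings see no ad: {missed_both:.0%}")
```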

Ads displayed on misinformation

– Out of the sample misinformation videos (excluding Shorts) found on YouTube, about 30 percent (165 videos) displayed advertisements from different companies or organizations.

– It is hard to tell whether the creators earned from these ads, but YouTube, as the platform, certainly profited from ads shown on misinformation.

– A total of 189 advertisements were displayed across these videos, with a few videos showing more than one ad when researchers watched them.

– The 165 misinformation videos with ads accumulated 37 million views as of December 2023, when these ads were documented, averaging more than 224,000 views per video.

Brands with ads on misinformation

– Ads of 83 different brands were seen in videos containing false or misleading information, and one-third of those are from foreign companies targeting a Bangladeshi audience.

– Brands with ads in misinformation videos are mostly from the gaming, telecommunications, e-commerce, and consumer goods sectors.

– The highest number of ads on misinformation videos came from the gaming app Hero Wars, followed by Robi Axiata Limited, a Bangladeshi mobile network operator, and Sting Energy Drink, a PepsiCo product, each of which appeared in 16 videos.

– An advertisement for a betting site called “22bet” was seen in two misinformation videos, even though, according to YouTube’s own policy, Bangladesh is not on the approved list of countries where gambling ads are allowed.

– Several ads from Hero Wars, Flight Expert, and Robi were seen in political disinformation videos, including those spreading false narratives like US sanctions on the Bangladeshi Prime Minister and her son’s arrest in the US.

How brands suffer

YouTube has a troubled history regarding ad placements for brands. In 2017, major consumer brands pulled their ads from the platform in protest over their ads appearing next to offensive content, including videos posted by terrorist-affiliated groups, an incident popularly known as the YouTube Adpocalypse. Similarly, in 2018, CNN revealed that ads from more than 300 companies and organizations were running on YouTube channels promoting extreme content, “including white nationalist and Nazi ideologies, pedophilia, conspiracy theories, and North Korean propaganda.”

Since the Adpocalypse, YouTube has implemented widespread changes to its content moderation and monetization policies. However, advertisers interviewed for this research question the effectiveness of these measures today, as their ads continue to appear on misinformation content. When CNN published its investigation, many companies stated they were unaware that their ads had been placed on such videos, raising concerns about the algorithms used for ad placement. These concerns remain among brands, leading to the range of challenges identified in this research.

– One of the major challenges identified is that brands have limited control over where YouTube places their ads. “It is never possible to see specifically in which video the advertisement is running… you cannot see in which specific video on that channel your ads will appear,” said a key informant representing a large fintech company in Bangladesh. 

– “If an advertisement runs on misinformation videos then it definitely portrays support for such content” said another executive representing a top consumer product brand in Bangladesh. “I think YouTube is adapting its policy as the world changes. But I [also] think YouTube should serve ads on authentic information only.”

– YouTube has introduced content exclusion settings that give advertisers, in its own words, additional control to help them exclude types of content that may not fit their brand or business. “While content exclusions are done to the best of our ability, we can’t guarantee that all related content will be excluded,” the platform says. Moreover, “these settings are also often hidden a few clicks away, with the default setting being some AI powered magic model – which platforms are heavily promoting as most effective,” as a tech accountability advocate noted in her interview.

– Google, which owns YouTube, claims in its Video Ad Safety Promise that it automatically excludes ads from appearing on content with profanity, nudity, terrorism, and other sensitive subjects. However, the promise does not specifically address misinformation or disinformation, nor does Google allow advertisers to proactively and directly exclude misinformation content through its content exclusion settings.

– Advertisers can only see where their ad was placed after it has run on a video or channel and then report or exclude it from the campaign. “Advertising in misinformation videos is not desirable at all and if it ever happens, we report it to Google and get it removed,” said a brand executive interviewed. However, for companies that run ads at scale, it is time and resource-consuming to regularly monitor if their ad was placed on misinformation content and report it for action, he added.

– Brands can select specific YouTube videos or channels for their ads but, according to advertisers and digital marketing experts, face challenges such as limited reach, high competition and costs, and the dynamic nature of YouTube’s content, which makes consistent monitoring difficult. To reach a larger audience, they prefer to rely on audience targeting and leave ad placement to the algorithms. According to a key informant, rather than leaving it up to the advertisers, “YouTube AI needs to be strengthened so that it can detect misinformation or fake videos on their own… [and] it works more effectively.”

Problematic moderation of misinformation

YouTube does not take action against all types of misinformation but specifically targets “certain types of misinformation that can cause real-world harm, technically manipulated content, or content interfering with democratic processes,” as outlined in its policies. The platform also has specific policies for medical and election misinformation and guidelines against fake engagement, impersonation, spam, deceptive practices, and scams, all relevant in its effort to tackle misinformation and disinformation.

YouTube detects or identifies content violations through three main methods: automated machine learning, the Priority Flagger Program, and community reporting. While machine learning scans content for potential violations, the Priority Flagger Program depends on government agencies and NGOs flagging content. Users can report videos by selecting from 11 categories, including misinformation.

Once content is flagged, YouTube uses automated systems and human reviewers – either or both – to review the content and decide whether to take any action. “When our systems have a high degree of confidence that content is violative, they may make an automated decision,” YouTube said in a help center post. In the majority of cases, systems simply flag content for evaluation by a trained human reviewer. “When a human reviewer checks potentially violative content, it means a trained human evaluates the content and makes a decision based on the relevant policy or law,” it added.

Problems with moderation and policies

– YouTube’s policies have limitations as they are often vague and inadequate. For instance, during the 2024 Global Fact Summit, a panel of experts questioned what constitutes “real-world, egregious harm” as many disinformation narratives do not directly cause physical harm but instead undermine trust in institutions and democracy and target individuals in various ways. According to one expert, in such cases, “YouTube would do absolutely nothing.”

– The policies include examples but often state that violations are not limited to those examples, without detailing what is not allowed, making the moderation process opaque. This ambiguity creates confusion among both content creators and users about what content is acceptable, potentially allowing harmful misinformation to persist. In a 2022 letter to the platform, fact-checkers demanded YouTube “publish its full moderation policy regarding disinformation and misinformation, including the use of artificial intelligence and the data that powers it.”

– It is not that all misinformation must be removed, but users should be alerted to potential falsehoods. Other platforms like Facebook and Twitter label misinformation based on user reports or third-party fact-checking organizations; YouTube does not do this widely. Instead, it takes action against certain types of misinformation only when they violate its policies, which can mean removal, restrictions, or warnings, leaving a range of false or misleading narratives unaddressed. This limits users’ ability to discern false or misleading information while allowing creators and YouTube to profit from it. “Beyond removing content for legal compliance, YouTube’s focus should be on providing context and offering debunks, clearly superimposed on videos or as additional video content,” stated the letter from 80 fact-checking groups.

– YouTube’s automated systems often fail to detect a range of misinformation that violates its policies. Moreover, these systems are unable to consistently detect the same misinformation content on other channels, even after it has been restricted or removed following community reports, as outlined in the following sections. Past studies have identified serious problems in YouTube’s content moderation. For example, in 2022, researchers found 719 videos from 27 channels that misinformed audiences about the COVID-19 pandemic; 24 of those channels were successfully monetized through the YouTube Partner Program (YPP), effectively evading the platform’s content moderation.

YouTube’s response to community reporting

Given the ambiguity in its policies and the limitations in how YouTube moderates misinformation content, the research team reported all 700 sample misinformation videos through the platform’s reporting mechanism between February and March 2024. Over the following four months (March to June), the team observed what actions YouTube took against the reported content. While many of the videos explicitly violate policies and others fall into gray areas, the reporting was carried out mainly to understand how YouTube reviews user-reported content.

Inadequate actions:

– Out of 700 reported videos, 32 are missing from the users’ report history, possibly because they were flagged by multiple users and were already under review by YouTube. Among these 32 videos, 21 are no longer available, 3 were voluntarily removed by the uploader, and 8 are still accessible on YouTube.

– Of the 668 reported items that appear in the users’ report history, YouTube took action on only 25, of which five are Shorts and the rest are regular videos.

– 22 videos were removed entirely, and 19 associated channels were terminated after reporting. In one case, the video was removed for violating YouTube’s policy on harassment and bullying.

– 3 videos had viewer age restrictions applied: two based on community guidelines and one at the request of the uploader, as seen in the report history of the users who reported them.

– When YouTube took action on a reported video, other videos with the same false or misleading claims were left unaffected. Of the 25 videos that were restricted or removed, in 17 cases the platform failed to address multiple unreported (by researchers) videos carrying the same misinformation.

Inconsistencies and inactions:

As discussed in earlier sections, YouTube’s misinformation policy is often vague, framed broadly around “egregious harm,” and prone to overlooking the subtler yet significant societal impacts of misinformation. In certain cases, it offers a few examples of violations and leaves users to avoid posting content that “might” violate its policies. Due to these limitations, this research adopted a narrow approach, focusing on three specific examples from YouTube’s policies: videos manipulated to falsely show a government official’s death, old footage misrepresented as current events, and deceptive use of titles, thumbnails, and descriptions. By concentrating on these clear-cut cases, the research aimed to evaluate YouTube’s moderation of misinformation, albeit in a limited way.

Even with this narrow approach, the research finds: 

– At least eight videos directly violate YouTube’s misinformation policy, as well as its spam, deceptive practices, and scams policies, by containing false claims on different issues, yet YouTube did not take any action.

– At least three videos were technically manipulated to make false claims about the deaths of public officials, including the prime minister and, in an unrelated video, a cabinet minister. All were made by blending footage and pictures from different incidents and remain available even after reporting.

– In two instances, content manipulated using technology to show the military carrying out attacks on protesters was reported, but the platform did not take any action on those videos.

– Three videos violated YouTube’s Spam, Deceptive Practices, and Scams policies by using misleading titles, thumbnails, and descriptions to deceive users, yet they remained available after reporting.

The need to demonetize misinformation

The scope of monetizing or profiting from misinformation extends beyond what a platform’s policies allow. As a tech accountability advocate pointed out in her interview, “In theory, the requirements for ad revenue sharing should be stricter than the standards for content moderation. It’s one thing not to remove misinformation, but a whole different thing to reward that content financially.”

Over the last few years, the advertising industry, ad-tech watchdogs, and NGOs have negotiated brand safety and suitability standards with platforms to define what content should not be associated with advertising. Misinformation is among the types of content that advertisers have clearly stated they do not want to be associated with.

In June 2022, The Global Alliance for Responsible Media (GARM), an industry-first effort that unites marketers, media agencies, media platforms, industry associations, and advertising technology solutions providers, updated its “Brand Safety Floor + Suitability Framework” to include misinformation as a category of content not appropriate for any advertising support. It defines misinformation as “the presence of verifiably false or willfully misleading content that is directly connected to user or societal harm” and states, “Platforms will leverage their community standards and monetization policies to uphold the GARM brand safety floor.”

However, despite YouTube being a member of this alliance, there is no visible reflection of this update in Google’s Video Ad Safety Promise. It neither includes misinformation in its content exclusions for video campaigns nor mentions misinformation in its advertiser-friendly content guidelines, which set the standard for which videos are eligible for ads. Moreover, while platforms like Facebook and Instagram bar ads from appearing alongside labeled misinformation, YouTube lacks such a labeling feature entirely.

The United Nations (UN) has recently published its Global Principles for Information Integrity, pushing platforms to address the harm caused by online misinformation and its monetization. It urges platforms to “take measures to address content that violates platform community standards and undermines human rights, such as limiting algorithmic amplification, labeling, and demonetization” and emphasizes that platforms should “establish, publicize, and enforce clear and robust policies on advertising and the monetization of content.”

“By rewarding misinformation, even when it’s not immediately harmful, YouTube is setting harmful expectations around content monetization – incentivizing volume over quality – and subsidizing the development of the skills and technical infrastructure which is fueling the deterioration of our information environment,” said an expert interviewed.

The biggest victims of the monetization of misinformation are ultimately the brands that pay for these ads. According to a 2020 study by the Interactive Advertising Bureau, “the majority of U.S. consumers (81%) find it annoying when a brand appears next to low-quality content,” with 52% feeling less favorably toward a brand that does this. The report adds that “62% will stop using the brand altogether if its ads appear adjacent to low-quality content.”

According to the tech accountability advocate, “What this research shows is that YouTube might be charging their advertising clients for defective ad placements – which do not meet agreed standards and carry reputational risk.”

However, the impact of monetizing or profiting from misinformation on the information ecosystem is multifaceted. It not only poses risks to brand safety and helps misinformation actors fundraise, but also harms trusted information providers and independent media.

“While mainstream media spend hours fact-checking to produce accurate content, a misinformation video can get millions of views and earn hefty money during that time,” according to an expert who manages digital operations at a leading newspaper in Bangladesh. “Sadly, a large portion of the audience believes these videos to be true. As a result, they no longer visit mainstream media’s YouTube channels, causing financial losses for the mainstream outlets,” he said.

The UN principles also recommend that companies “advertise with media outlets and platforms that bolster information integrity, including public interest journalism, through methods such as inclusion and exclusion lists, ad verification tools, and manual vetting.” However, this research finds that many advertisers have a limited understanding of how exclusion works and how to respond if their ads appear on misinformation. This highlights the need for greater investment in raising the awareness and capacity of advertisers and digital marketing agencies to combat the monetization of misinformation and protect their brands from the risk of misplacement, particularly in countries like Bangladesh.

Conclusion

In the grand marketplace of attention that is YouTube, the stakes are high, and the cost of inaction is even higher. This is particularly true for countries like Bangladesh and for non-English languages like Bangla, which are often underrepresented and ignored in the global discourse on ad safety and the monetization of misinformation. The lack of tech accountability organizations, adequate evidence-based research, and academic interest in these areas exacerbates the problem.

This research aims to document the nature of the problem from the perspective of a non-English content market. By highlighting the extent of misinformation and its monetization on YouTube in Bangladesh, it seeks to contribute to a broader understanding among relevant stakeholders, including platforms, civil society, and advertisers. The goal is to inspire essential discussions and effective actions to mitigate the risks associated with incentivizing misinformation through advertisements.

While the study intentionally refrains from offering specific recommendations, it aligns with existing efforts and frameworks that address ad safety and the monetization of harmful content, including those of ad-tech watchdogs and alliances like GARM. By raising awareness, it underscores the urgent need for robust, transparent, and consistent content moderation practices. For YouTube, this entails refining its policies and algorithms for effective moderation, increasing advertiser control, and committing to greater transparency and accountability. For advertisers, it means adopting a proactive stance to ensure their brands do not inadvertently support harmful content.

As misinformation continues to proliferate, driven by the same economic incentives that fuel legitimate content creation, the need for accountability in YouTube’s ad ecosystem becomes ever more critical. Only through vigilant, collective efforts can we hope to safeguard the integrity of our information environment and uphold the principles of truth and transparency in the digital age.