AI-generated Trump arrest images spark outrage over US democracy

Tamara Yesmin Toma

Researcher, Dismislab

Recently, multiple images depicting former US President Donald Trump being detained by law enforcement officers have been circulating on social media in Bangladesh. These images (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) are accompanied by claims that Trump was forcibly arrested by US law enforcement, fueling criticism of American democracy and law enforcement. However, Dismislab has verified that the images were created using artificial intelligence (AI) tools.

One Facebook post sharing these photos reads, “A glimpse of democracy in the United States! Take a look at how the country’s law enforcement handles a former President! How could they still want to teach democracy? The rest of the world views America as the worst nation that promotes terrorism. The country that suppresses democracy wears a democratic mask, which the international community is already aware of. America has an innate tendency to interfere in the domestic affairs of other countries. But the democracy they preach about is not present in their own country.”

Another post states, “America’s democracy! They apprehended a former President with police force! In Bangladesh, a convicted criminal like Begum Khaleda Zia, who has been given special privileges from the Prime Minister, now resides in her home under Section 401 of the Criminal Procedure Code. Yet they come to teach democracy in our country. Truly astonishing hypocrisy.”

Another individual stated, “#Imperialist #America, before you start talking about our democracy, take a moment to reflect on your own behavior! Your lectures about our democracy will never be accepted by the dignified people of this country.”

Both posts feature the same images: one shows Trump being forcibly restrained by several police officers, and the other shows him falling to the ground, surrounded by law enforcement personnel.

However, investigation revealed that these images are artificial creations, generated with the AI image generator Midjourney, developed by the San Francisco-based independent research lab Midjourney, Inc. Notably, the same AI technology has been used to create viral images of other public figures, including Pope Francis wearing a puffer jacket and fake images of Elon Musk with a robot wife.

The person who created and posted these AI-generated images of Trump is Eliot Higgins, founder of the open-source investigative platform Bellingcat. Higgins said he created the pictures as a joke and did not expect them to gain traction. He prompted Midjourney to create images of “Donald Trump falling during arrest”.

He then tweeted the pictures with a caption that read, “Making pictures of Trump getting arrested while waiting for Trump’s arrest.” Despite the caption, the images sparked widespread speculation and generated considerable buzz.

Within moments, news of Donald Trump’s potential arrest in Washington, D.C. began to spread, fueled by these provocative visuals. Multiple media outlets (1, 2, 3) fact-checked the images and concluded that they were indeed AI-generated.

Later, in an exclusive interview with The Washington Post, Eliot Higgins revealed, “I was just mucking about. I thought maybe five people would retweet it.”

In recent times, there has been a growing trend of spreading disinformation through AI-generated images across various political landscapes in different countries.

Just last May, an image went viral depicting former Prime Minister of Pakistan Imran Khan supposedly imprisoned just a day after his arrest. Fact-checking revealed that the image was created using the AI software Midjourney. Within the same month, several more images that circulated during a protest in support of Imran Khan were also reportedly AI-generated, according to multiple media sources.

Beyond still images, deepfake videos have also been spreading across social media recently. In a noteworthy incident this past January, a deepfake video featuring President Joe Biden emerged. The video used voice simulation technology to make it appear as if Biden was calling for an attack on the transgender community, and it was widely disseminated across social media platforms.

In the wake of these incidents, experts believe that artificially generated disinformation poses a significant threat during elections.

At a congressional hearing in Washington, Sam Altman, CEO of OpenAI, the company that developed ChatGPT, stated that the models behind the most recent generation of AI technology could manipulate consumers.

Prof. Michael Wooldridge, head of foundation AI research at the UK’s Alan Turing Institute, told the Guardian that his biggest concern about the technology was AI-driven disinformation.

He also said, “Right now in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale.”