EXCLUSIVE: Brands pull ads from Twitter after they appear next to child pornography accounts

Sept 28 (Reuters) – Some major advertisers, including Dyson, Mazda, Forbes and PBS Kids, have halted their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.

Brands ranging from The Walt Disney Company (DIS.N) and NBCUniversal (CMCSA.O) to The Coca-Cola Company (KO.N) and a children's hospital were among more than 30 advertisers whose ads appeared on the profile pages of Twitter accounts promoting links to exploitative material, according to a Reuters review of accounts identified in new research on online child sexual abuse from cybersecurity group Ghost Data.

A Reuters review found that some of the tweets included keywords related to “rape” and “teenagers” and appeared alongside promoted tweets from advertisers. In one example, a tweet promoting footwear and accessories brand Cole Haan appeared next to a tweet in which a user said he was “trading teen/kids content”.


“We’re terrified,” Cole Haan head of brand David Maddox told Reuters after being notified that the company’s ads had appeared alongside such tweets. “Either Twitter will fix this, or we will fix it by any means possible, including not buying Twitter ads.”

In another example, a user tweeted for “Yung girls ONLY, NO Boys” content, which was immediately followed by a tweet promoted by Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not respond to multiple requests for comment.

In a statement, Twitter spokeswoman Celeste Carswell said the company "has zero tolerance for child sexual exploitation" and is investing more resources in child safety, including hiring for new positions to write policy and implement solutions.

She added that Twitter is working closely with its customers and advertising partners to investigate and take steps to prevent the situation from happening again.

Twitter's challenges in identifying child abuse content were first reported in an investigation by technology news site The Verge in late August. The emerging pushback from advertisers, who are critical to Twitter's revenue stream, is reported here for the first time.

Like all social media platforms, Twitter prohibits depictions of child sexual exploitation, which is illegal in most countries. But it does allow adult content in general and is home to a thriving exchange of porn, which makes up about 13% of all content on Twitter, according to an internal company document seen by Reuters.

Twitter declined to comment on the amount of adult content on the platform.

Ghost Data identified more than 500 accounts that publicly shared or requested child sexual abuse material during a 20-day period this month. Twitter failed to remove more than 70% of those accounts during the study period, according to the group, which shared its findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in full, but it reviewed dozens of accounts that remained online and were requesting material for “13+” and “young nudes”.

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 others remained on the site the next day, according to Ghost Data and a Reuters review.

Twitter's Carswell said on Tuesday that after Reuters on Monday shared the full list of more than 500 accounts identified by Ghost Data, the company reviewed them and permanently suspended them for violating its rules.

In an email to advertisers on Wednesday morning, before this story was published, Twitter said it had "discovered that ads were running within profiles that were involved in publicly selling or soliciting child sexual abuse material."

Andrea Stroppa, founder of Ghost Data, said the study was an attempt to assess Twitter's ability to remove the material. He said he personally funded the research after receiving a tip about the subject.

Twitter’s transparency reports on its website show that it suspended more than 1 million accounts last year for child sexual exploitation.

It submitted about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded nonprofit that facilitates information sharing with law enforcement, according to that organization's annual report.

A Forbes spokesperson said: “Twitter needs to fix this issue ASAP, and until they do, we will stop any other paid activity on Twitter.”

“There is no place for this type of online content,” a spokesman for the automaker Mazda USA said in a statement to Reuters, adding that in response the company is now preventing its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content "reprehensible" and said the company was "redoubling our efforts to make sure the digital platforms on which we advertise, and the media buyers we use, step up their efforts to prevent such errors from recurring."

A spokesperson for Coca-Cola, whose promoted tweet appeared on an account tracked by the researchers, said the company did not condone the material being associated with its brand and said "any breach of these standards is unacceptable and taken very seriously."

NBCUniversal said it has asked Twitter to remove ads associated with inappropriate content.

Code words

Twitter is not alone in grappling with moderation failures related to children's online safety. Child safety advocates say the number of known child sexual abuse images has soared from thousands to tens of millions in recent years, as predators have used social networks, including Meta's Facebook and Instagram, to groom victims and exchange explicit images.

For the accounts identified by Ghost Data, nearly all of the child sexual abuse material sellers marketed the material on Twitter, then instructed buyers to reach them on messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud services such as New Zealand-based Mega and US-based Dropbox, according to the group's report.

A Discord spokesperson said the company has banned one server and one user for violating its rules against sharing links or content that sexualizes children.

Mega said a link referenced in the Ghost Data report was created in early August and deleted shortly afterwards by the user, whom it declined to identify. Mega said it permanently closed the user's account two days later.

Dropbox and Telegram said they use a variety of tools to moderate content but did not provide additional details on how they responded to the report.

The advertiser backlash poses a risk to Twitter's business, which earns more than 90% of its revenue by selling digital ad placements to brands seeking to market products to the service's 237 million daily active users.

Twitter is also battling in court with Tesla CEO and billionaire Elon Musk, who is trying to back out of a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.

A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, noting that the company had a backlog of cases to review for possible reporting to law enforcement.

The report was prepared by an internal team to provide an overview of the state of child exploitation material on Twitter and to obtain legal advice on proposed strategies.

"Recent reporting about Twitter provides an outdated, moment-in-time glance at just one aspect of our work in this space, and is not an accurate reflection of where we are today," Carswell said.

Traffickers often use code words such as "cp" for child pornography and are "intentionally as vague as possible" to avoid detection, according to the internal documents. The documents said that the more Twitter cracks down on certain keywords, the more users are prompted to use obfuscated text, which "tends to be harder for [Twitter] to automate against."

Ghost Data's Stroppa said such tricks complicate efforts to track down the material, but noted that his small team of five researchers, with no access to Twitter's internal resources, was able to find hundreds of accounts within 20 days.

Twitter did not respond to a request for further comment.


(Reporting by Sheila Dang in New York and Katie Paul in Palo Alto; additional reporting by Dawn Chmielewski in Los Angeles; Editing by Kenneth Li and Edward Tobin)

Our Standards: The Thomson Reuters Trust Principles.
