Deepfake Porn App Clothoff Claims to be Donating to Help AI Victims

Warning: This article discusses non-consensual sexually explicit content and Child Sexual Abuse Material (CSAM).

Clothoff, one of the most notorious apps for non-consensual deepfake pornographic material, claims that it is donating funds to “support those affected by AI”, highlighting its collaboration with an organisation named ASU Label that says it aims to “protect your rights in the age of AI”.

But it is unclear who exactly is behind ASU Label and why – given its stated aims – it would choose to work with an organisation like Clothoff that runs a “free undress AI and clothes remover” app.

ASU Label’s website contains no information about the individuals or other organisations involved with it.

Bellingcat first noticed a reference to this organisation in December 2024, when the following lines were added to some of Clothoff’s sites: “We are working with Asulable and donating funds to support those affected by AI. If you have experienced problems related to AI, please visit asulable.com or contact them at team@asulabel.com.” 

This paragraph appeared on several websites in Clothoff’s network, including its main site, Clothoff.io, which went offline in December 2024. Clothoff still operates multiple websites with similar domain names in its network. 

Screenshot from the bottom of a Clothoff page about working with “Asulable”. The red box, added by Bellingcat, obscures the name and address of a company mentioned on Clothoff’s websites

The website asulable.com could not be found. However, asulabel.com – the domain of the contact email address mentioned – goes to a site for an organisation that calls itself AisafeUse Label (ASU) or ASU Label. The ASU Label website also features a logo in its top left-hand corner that matches the one on Clothoff’s website, further indicating that this is the organisation Clothoff was referring to. 

Front page of Asulabel.com, with the same logo as featured on Clothoff’s websites

According to DomainTools, a tool that displays the domain registration information of websites, ASU Label’s domain was registered on Oct. 15, 2024. The Internet Archive’s Wayback Machine, a popular web archive, captured ASU Label’s site for the first time on Nov. 13, and Clothoff’s first mention of ASU Label was archived the following month. 

Whois record for asulabel.com. Source: DomainTools

ASU Label said their mission was “to assist individuals who have suffered due to the unsafe use of neural networks”. 

Nowhere on ASU Label’s website do they state how exactly they help those affected by any type of AI or link to resources for victims. When asked to provide specifics on what they do, ASU Label said they provide “direct support to victims” and “assist individuals” but did not specify what this support or assistance looked like. 

ASU Label told Bellingcat that it was registered as a non-profit organisation, but did not say where it was registered. Bellingcat’s searches on several international databases of non-profits and non-governmental organisations for “ASU”, “AisafeUse Label” and “ASU Label” did not return any relevant matches, although such databases may not be equally comprehensive or updated in every country. 

When asked for any evidence that they are a registered charity or non-profit, to clarify what country they operate from, or any evidence at all to prove that they are a legitimate organisation, ASU Label said their team had made a “collective decision not to disclose our legal documents” as “in recent times, we have encountered numerous adversaries whose sole intent is to hinder us from fulfilling our mission”. 

They did not specify who these adversaries were or how these encounters took place.  

ASU Label did not answer questions about the harms of deepfake pornography, which Clothoff’s platform creates. Nor did it address questions about who is behind ASU Label, beyond saying that they were founded by “a group of professionals from the fields of AI, law, and public advocacy”, or reveal any other organisations it works with to achieve its aims.

The organisation said it was not owned or managed by anyone, including Clothoff. “We are a team of like-minded individuals focused on charity,” it said.

Clothoff’s website lists a contact email for ASU Label, which is how Bellingcat reached them, but this email address does not appear on ASU Label’s own site and does not come up anywhere else in a Google search. ASU Label’s website does not include any way to contact them except a pop-up contact form for those wanting to become a member of the organisation or for those affected by AI. 

Since Bellingcat could not find any link or reference to ASU Label beyond its own website and the mention on Clothoff’s sites, we asked both organisations about their association with each other.

Clothoff did not respond to Bellingcat’s request for any evidence of donations to ASU Label, but said that it collaborates with ASU Label “from time to time”. 

“Occasionally, they approach us with requests for direct assistance to individuals or proposals for joint research initiatives. Whenever possible, we support their efforts, assist those affected, or provide analytical insights,” it said.

ASU Label also confirmed that they were collaborating with Clothoff: “In addition to donations, this organisation regularly participates in our research activities, providing analytical insights on improving legal frameworks in different regions to uphold human rights. For instance, we have recently been conducting a joint study on the spread of deepfakes in Japan.” 

Bellingcat could not find any record of ASU Label in the national non-profit database of Japan, but it is unclear if they are registered as a non-profit elsewhere. 

‘Attempts to Ban This Progress Are Futile’

In response to Bellingcat’s questions, Clothoff said it was “an adult-oriented platform designed for safe, consensual exploration of intimate desires” and “strictly prohibits illegal use”. 

This description contradicts the reality of the platform. Clothoff, like other “nudifying” apps, allows users to “undress” photos of anyone using AI without their consent. Women are more likely to be victims of deepfake porn, and victims have testified about harms including extreme psychological distress, in-person stalking and harassment, and reputational ruin. There have also been cases in the US and elsewhere of classmates using Clothoff to create and share non-consensual images of minors.

The Clothoff press team told Bellingcat that it believed that “there are more serious problems in the world than pictures on the internet”. 

“In time, society may even approach them with humour – playful April Fools’ jokes, for instance – turning potential tension into lighthearted interaction.”

However, in the UK, the US and a growing list of other countries, the content that Clothoff produces could be illegal. For example, in the US the Take It Down Act, which criminalises non-consensual intimate images, recently passed in the Senate. A new law to be introduced in the UK – the first of its kind in the world – will also make it “illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison”.

Clothoff told Bellingcat that “AI evolution is inevitable” and “attempts to ban this progress are futile”.  

The app is secretive about its ownership, and none of its multiple sites contain any indication of the people who own or run them. During the investigation for this story, we reached out to a software development company whose name and address were listed in the footer of Clothoff’s websites without any other explanation, which normally implies that the company owns, manages or is otherwise closely affiliated with these websites. 

Bellingcat had a video call with the CEO of that company, who appeared genuinely surprised to hear that the company was listed on the websites and said it had no relationship or prior communication with Clothoff. The CEO said he subsequently contacted Clothoff to ask for the company’s name to be removed from its sites and shared a screenshot of Clothoff’s response: “Good day! We have removed your company’s address from the site. The confirmation is attached.”

What followed was a screenshot of the page with yet another company name and address – this time an AI-focused investment company – listed in the same place. This is at least the fourth company that Bellingcat has noticed in the footer of Clothoff’s websites since 2023. We have chosen not to name these companies as there is no evidence to indicate they own or operate Clothoff. 

When asked about the string of companies they listed on their websites, Clothoff stated that “our holding company oversees multiple businesses”, but did not confirm or deny any official relationship with the business entities they listed on their sites despite being pressed multiple times for a response. 

Clothoff said its holding company was owned by “a group of engineer-enthusiasts” but that it could not disclose their identities “due to non-disclosure agreements”.

A previous Bellingcat investigation linked several companies to Clothoff, while a Guardian investigation revealed other names tied to the deepfake porn app, including a brother and sister in Belarus.

AI-Generated Help for AI Victims?

ASU Label’s website lists several types of harm from AI, including the spread of misinformation, job displacement, bias in decision-making, and unsafe advice. In an article describing what deepfakes are, it also mentions as an example “celebrity deepfakes, where people’s faces were superimposed onto adult content, leading to reputational damage”. 

But their claim that they are actively collaborating on research with Clothoff, an organisation known for non-consensual deepfake pornography, appears to directly contradict their stated goals of helping victims of AI harm and safeguarding the rights of individuals.

Interestingly, several AI detection tools indicate that ASU Label’s text could itself be AI-generated. We ran the front page text through three such tools – GPTZero, Quillbot and ZeroGPT – all of which rated the text as having a 90 to 100 percent probability of being AI-generated. Other pages we checked, such as ASU Label’s articles on AI harms and spotting deepfakes, scored between 75 and 100 percent across these three tools. 

When asked about this, ASU Label did not deny using AI and said they “see no issue in utilising such tools for structuring our website”.

But in an article in December, the organisation warned about “AI-based deception”.

“AI-generated visuals, deepfakes, and even AI-written articles can spread false information or create misleading narratives,” it said in the article. “These manipulations are often indistinguishable from real content.”

Screenshot of the front page of ASU Label (top); Screenshot of results from GPTZero, showing the same text rated as having a 100 percent probability of being AI-generated (bottom)

Main image: Merel Zoet/Bellingcat

If you have been affected by image-based sexual abuse, you can find an international list of resources for survivors and victims here. Established organisations you can reach out to for support include the Cyber Civil Rights Initiative in the US and the Revenge Porn Helpline in the UK.  