AI 'Nudify' Sites Rake in Millions, Relying on Big Tech Infrastructure

3:02 PM   |   14 July 2025

The Disturbing Rise of AI 'Nudify' Websites: Millions in Revenue, Millions of Victims

For years, the online world has grappled with the proliferation of AI-powered tools capable of generating explicit imagery. Among the most insidious are the so-called "nudify" or "undress" websites and applications, which let users upload a photograph of a person and, with a few clicks, use artificial intelligence to fabricate a "nude" or sexually explicit image of the subject. The output is almost always nonconsensual, constitutes a severe form of digital abuse and harassment, and disproportionately targets women and girls. Despite growing awareness and initial efforts by lawmakers and technology companies to curb their spread, these harmful services continue to thrive. New research sheds stark light on the scale of the problem: millions of people access these sites every month, and the operators behind them may be generating millions of dollars annually.

A Lucrative and Harmful Ecosystem

An in-depth analysis conducted by Indicator, a publication dedicated to investigating digital deception, examined 85 prominent nudify and undress websites. The findings paint a disturbing picture of a sophisticated and profitable ecosystem. According to the research, these 85 sites collectively averaged 18.5 million visitors per month over a recent six-month period. Based on calculations involving subscription costs, estimated conversion rates, and traffic directed to payment providers, the researchers conservatively estimate that just 18 of these websites generated between $2.6 million and $18.4 million in the past six months. Annualized, the upper end of that range works out to roughly $36 million per year for this subset of sites alone, and total revenue across the entire ecosystem, including transactions that occur off-site (for example, via platforms like Telegram), is likely higher still.
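To make that estimation method concrete, here is a minimal back-of-the-envelope sketch in Python. Every input below (traffic, conversion rate, subscription price) is an illustrative assumption for a single hypothetical site, not Indicator's actual data:

    # Rough revenue estimate for one site: visitors x conversion rate x price.
    # All inputs are hypothetical placeholders, not Indicator's real figures.
    monthly_visitors = 1_000_000       # assumed monthly traffic
    conversion_rate = 0.005            # assumed share of visitors who subscribe
    subscription_price_usd = 20.0      # assumed monthly subscription price

    paying_users = monthly_visitors * conversion_rate          # 5,000 users
    monthly_revenue = paying_users * subscription_price_usd    # $100,000
    six_month_revenue = monthly_revenue * 6                    # $600,000

    print(f"Estimated six-month revenue: ${six_month_revenue:,.0f}")

Repeating this calculation across sites and varying the assumed conversion rate and price between plausible lower and upper bounds is what produces a range such as $2.6 million to $18.4 million rather than a single figure.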

Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, described the nudifier ecosystem as a "lucrative business." He argues that this situation has been allowed to persist, in part, due to what he characterizes as "Silicon Valley’s laissez-faire approach to generative AI." Mantzarlis contends that tech companies should have ceased providing services to these platforms once their primary use case became clear: sexual harassment and the creation of nonconsensual imagery. The legal landscape is slowly catching up, with the creation or sharing of explicit deepfakes becoming increasingly illegal in various jurisdictions.

The Uncomfortable Reliance on Big Tech

Perhaps one of the most striking revelations from the Indicator research is the extent to which these harmful websites rely on the infrastructure and services provided by some of the world's largest technology companies. The analysis found that:

  • Amazon Web Services (AWS) and Cloudflare provide hosting or content delivery network (CDN) services for 62 of the 85 websites studied. These services are fundamental to keeping the websites online and accessible to a global audience.
  • Google's single sign-on system has been utilized on 54 of the websites. This system allows users to quickly create accounts using their existing Google credentials, lowering the barrier to entry for potential users.

Beyond these core services, the nudify websites also integrate other offerings from mainstream companies, including payment processing, domain name registration, and webmaster tools. This reliance is a critical dependency: if service providers withdrew access, the operations of these illicit businesses could be significantly disrupted.

Representatives from some of the implicated tech companies have responded to these findings. Ryan Walsh, a spokesperson for Amazon Web Services, stated that AWS has clear terms of service requiring customers to comply with "applicable" laws. He added that AWS acts quickly to review reports of potential violations and takes steps to disable prohibited content, encouraging people to report issues to their safety teams.

Similarly, Google spokesperson Karl Ryan acknowledged that "Some of these sites violate our terms." He stated that Google's teams are taking action to address these violations and are working on longer-term solutions. Ryan noted that Google's sign-in system requires developers to agree to policies prohibiting illegal content and content that harasses others. At the time of the original reporting, Cloudflare had not publicly responded to requests for comment.

The fact that these harmful sites can operate and scale using services from reputable, publicly traded companies raises significant questions about the effectiveness of existing terms of service, content moderation policies, and enforcement mechanisms. While these companies have policies against illegal and harmful content, the Indicator research suggests that enforcement against these specific types of services has been insufficient to prevent their widespread operation and profitability.

From Deepfakes to Nudifiers: An Evolution of Abuse

The emergence and proliferation of nudify and undress websites are not isolated phenomena. They represent an evolution of the tools and techniques first used to create explicit "deepfakes," which emerged in late 2017 and gained widespread attention in the years that followed. Initially, creating a convincing deepfake required significant technical skill and computational resources. As AI technology advanced and became more accessible, the process was simplified, leading to user-friendly apps and websites designed specifically to generate nonconsensual explicit imagery from ordinary photographs.

Investigative reporting, particularly by outlets like Bellingcat and 404 Media, has revealed a network of interconnected companies operating globally to provide this technology and profit from it. These operations often function like typical online businesses, selling credits or subscriptions to users who want to generate images. The recent wave of powerful generative AI image models has further supercharged these capabilities, making the output more realistic and easier to produce.

The Devastating Human Cost

The technical and financial aspects of the nudifier ecosystem, while important for understanding its scale, pale in comparison to the devastating impact on victims. The creation and distribution of nonconsensual explicit images constitute a severe violation of privacy and a form of sexual harassment and abuse. Images are often stolen from social media profiles or other online sources without the subject's knowledge or consent.

The consequences for victims are profound and long-lasting. They can experience severe psychological distress, including anxiety, depression, and trauma. Their reputations can be irrevocably damaged, affecting their personal relationships, professional lives, and sense of safety. In a particularly disturbing trend, these tools have been used in schools as a new form of cyberbullying and abuse, with teenage boys creating and sharing fabricated images of their classmates. This form of abuse is not only emotionally scarring but also incredibly difficult to combat, as the images can spread rapidly online and are notoriously hard to scrub from the internet entirely.

The experience of victims highlights the urgent need for effective mechanisms to prevent the creation and distribution of these images and to support those who have been targeted. The ease with which these images can be generated and shared underscores the failure of existing systems to protect individuals from this specific form of digital harm.

Tactics of Evasion and the Challenge for Enforcement

As awareness of the harm caused by nudifier sites has grown and some enforcement actions have been taken, the operators of these platforms have adapted their tactics to evade detection and continue their operations. The Indicator report notes that while many developer accounts using single sign-on systems from companies like Google, Apple, and Discord were disabled following previous reporting, the use of Google's sign-in persists on many sites.

To circumvent detection, website creators have reportedly turned to "intermediary sites": during registration for a sign-on system, these sites present a different, innocuous-looking URL to the tech provider, obscuring the true nature of the service that will actually use it. This tactic demonstrates a deliberate effort to bypass safety measures and continue operating under the radar.

Furthermore, the nudifier industry is adopting marketing strategies common in the adult content space. This includes using paid affiliate and referral programs to drive traffic and customer acquisition. Some sites have even engaged in sponsored content with adult entertainers to promote their services. This indicates a strategic effort to normalize and integrate these harmful tools into existing online ecosystems, making them harder to isolate and shut down.

Investigative researcher Santiago Lakatos noted that the behavior of nudifier operators strongly suggests a desire to build and entrench themselves within a niche of the adult industry. He warned that they would likely continue attempting to intermingle their operations with adult content spaces, a trend that requires countering not only by mainstream tech companies but also by the adult industry itself.

A Slow but Growing Regulatory Response

While the response to abusive deepfakes and nudifiers has been criticized as slow since their initial appearance, there has been some recent momentum on the legal and regulatory fronts. These actions, though potentially impactful, face the challenge of keeping pace with the rapidly evolving technology and the adaptive nature of the illicit businesses.

  • In the United States, the city attorney of San Francisco has taken legal action, suing 16 services involved in generating nonconsensual images. This represents a direct legal challenge to the operators of these platforms.
  • Microsoft has publicly identified developers it claims have evaded its AI guardrails, specifically those involved in creating celebrity deepfakes. This highlights efforts by some tech companies to identify and call out malicious actors using their platforms.
  • Meta, the parent company of Facebook and Instagram, has filed a lawsuit against a company allegedly operating a nudify app that repeatedly advertised on Meta's platforms. This legal action targets the business entities behind these apps.
  • On the legislative front, the controversial Take It Down Act was signed into law in the US in May. This law places requirements on tech companies to remove nonconsensual image abuse quickly upon notification, aiming to provide victims with a faster path to recourse.
  • In the United Kingdom, the government is making it a criminal offence to create sexually explicit deepfakes, establishing a clear legal prohibition on producing this type of content.

These actions represent steps towards holding operators accountable and compelling platforms to take more decisive action. However, the scale of the problem, as highlighted by the Indicator research, suggests that current efforts may only "chip away" at the edges of the industry. More comprehensive and coordinated crackdowns are needed to significantly slow the growth and reach of these harmful services.

The Path Forward: Stricter Enforcement and Collaboration

The Indicator research underscores a critical point: the ability of nudifier sites to reach millions of users and generate substantial revenue is directly linked to their access to mainstream tech infrastructure and services. While these services may eventually migrate to less regulated corners of the internet if major platforms crack down, making them harder to discover, access, and use is a crucial step in reducing their audience and profitability.

Henry Ajder, an expert on AI and deepfakes who has tracked the growth of the nudification ecosystem since 2020, emphasizes that these apps have evolved from low-quality side projects into a "cottage industry of professionalized illicit businesses with millions of users." He argues that meaningful progress will only occur when businesses that facilitate the "perverse customer journey" of these apps take targeted action. This includes not just the AI developers but also hosting providers, payment processors, domain registrars, and advertising platforms.

Stricter enforcement of existing terms of service is paramount. Tech companies need to be more proactive in identifying and terminating services provided to platforms whose primary function is the creation and distribution of nonconsensual explicit imagery. This requires investing in sophisticated detection mechanisms and dedicating resources to investigating reports of abuse.

Furthermore, there is a need for greater collaboration across the tech industry and with law enforcement and victim support organizations. Sharing information about known illicit sites, developing industry-wide best practices for identifying and blocking harmful content, and working with authorities to pursue legal action against operators are all necessary components of a comprehensive strategy.

While it may be impossible to entirely eradicate this toxic byproduct of the generative AI era, the Indicator research provides compelling evidence that its scale and profitability are directly tied to its integration with mainstream online infrastructure. By disrupting these connections and making it significantly harder for these sites to operate and reach users, the scope of this harmful industry can be drastically reduced, offering some measure of protection to potential victims.