Study: Websites using AI to undress individuals soar in usage
As the darker side of artificial intelligence (AI) becomes more prevalent, websites that create and disseminate synthetic non-consensual intimate imagery (NCII) have received over 24 million unique visitors as of September this year.
NCII, also known as “undressing” imagery, involves manipulating existing photos and video footage of real individuals to make them appear nude without their consent.
These were the findings of intelligence company Graphika’s latest report, which identified 34 synthetic NCII providers and utilised data from web traffic analysis firm Similarweb. The report also revealed that the volume of referral link spam for these services has increased by more than 2000% on platforms such as Reddit and X since the beginning of 2023.
NCII providers use social media platforms to market their services and drive traffic to affiliate links, with some being more covert in their practices by masquerading as AI art services or web3 photo galleries. Synthetic NCII services also leverage influencer marketing to promote their products.
While many of the accounts engaged in this activity show signs of automation and have previously engaged in similar spam-like behaviours, some also appear to be authentic users, the report said.
All the services identified offer incentives that give users additional “credits” to generate more images when someone uses their referral link.
Using data provided by Meltwater, Graphika measured the number of comments and posts on Reddit and X containing referral links to 34 websites and 52 Telegram channels providing synthetic NCII services. These totalled 1,280 in 2022 compared to over 32,100 so far this year, representing a 2,408% increase in volume year-on-year.
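For reference, the reported year-on-year percentage follows directly from the two volumes cited above; this is simply a check of the figure, not an additional finding:

\[
\frac{32{,}100 - 1{,}280}{1{,}280} \times 100\% \approx 2{,}408\%
\]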
The surge in these services can be attributed primarily to the increasing capability and accessibility of open-source AI image diffusion models, which allow a larger number of providers to easily and cheaply create photorealistic NCII at scale.
As synthetic NCII services grow in scale and accessibility, they may lead to instances of online harm including targeted harassment campaigns, sextortion and the generation of child sexual abuse material.
Additionally, many synthetic NCII services are monetised and operate on a freemium model, offering users a small number of free generations while placing additional generations and enhanced services behind a paywall. Prices range from USD1.99 for one credit to USD299 for API access and other added features, according to the report.
News of the increased use of synthetic NCII services comes as AI-generated deepfakes rise across the region. Deepfake usage in APAC has grown by an average of 1530% from last year, posing a threat to cyber security if the technology is misused, according to a separate study.
It also found that the Philippines saw the largest increase in deepfakes at 4500%, while Hong Kong experienced a 1300% increase, and Malaysia and Singapore saw increases of 1000% and 500% respectively.