Get Your Biometric Hack Here: Just $5

The rise of AI has given cybercriminals access to deepfake images, synthetic identities, cloned voices and even biometric datasets for as little as US$5. The industry is fueled by technology developers who specialize in creating deepfake solutions and selling them to large-scale scam enterprises, according to new research from cybersecurity firm Group-IB.

Between January and August 2025, for instance, the company recorded 8,065 biometric injection attacks in which AI-generated deepfake images were used in attempts to bypass one financial institution’s liveness checks during digital Know Your Customer (KYC) verification for loan applications, according to its Weaponized AI white paper.

Attacks such as this stem from the rising Deepfake-as-a-Service industry: a technology developer will charge a cyberattacker between $10 and $50 to create a deepfake image, while a ready-to-use synthetic identity sells for up to $15. Tools like these give even unskilled criminals a way to bypass KYC checks.

The Deepfake-as-a-Service model of cybercriminal business was noted by iProov last June in a report on the threat actor “Grey Nickel.”

Some firms, including China-based Haotian AI and Chenxin AI, also rent face-swapping software to criminals for between $1,000 and $10,000, allowing their operations to scale.

The technology developers advertise their wares to cyberattackers through online channels such as Telegram or the dark web. Between 2022 and September 2025, Group-IB collected more than 300 such posts referencing keywords such as “deepfake” and “KYC.”

“Cybercrime’s business models are evolving,” Dmitry Volkov, Group-IB CEO and co-founder, writes in the report’s preface. “The effect is profound: cybercrime is getting cheaper, faster, and more scalable.”

Cyberattackers also have an arsenal of other AI tools: AI-enhanced malware, chatbots built on manipulated large language models (LLMs), known as dark LLMs, and voice impersonation services.

Like deepfake imagery, voice cloning is available through off-the-shelf services costing less than US$10 a month, lowering the barrier to entry for scammers. Voice impersonation scams have now evolved to include scam call center platforms that use generative AI to scale and optimize their operations.

Group-IB says that the white paper is based on the company’s research, including real-world cybercrime investigations, as well as insights gathered from its Threat Intelligence and Fraud Protection solutions. The company uses proprietary tools to trawl through dark web forums, dedicated leak sites (DLS) and underground marketplaces.

The Singapore-based cybersecurity firm suggests that companies introduce layered defenses combining biometric verification, device and session analysis, and behavioral risk scoring, since traditional verification methods such as voice recognition, document checks and transaction monitoring can be undermined by deepfakes and synthetic identities.

“Deepfake voices, synthetic media, and AI-personalized lures exploit trust in ways that traditional safeguards cannot catch,” the report notes. “Behavioral biometrics, device fingerprinting, and AI-driven risk scoring are essential to re-establish trust in identity and transactions.”
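As a rough illustration of the layered approach the report describes, the Python sketch below combines per-layer risk signals into a single weighted score that routes a session to pass, step-up verification or manual review. The signal names, weights and thresholds here are illustrative assumptions for this sketch, not values or code from Group-IB’s paper.

```python
from dataclasses import dataclass

# Illustrative weights and threshold -- assumptions for this sketch,
# not values from the Group-IB report.
WEIGHTS = {"biometric": 0.5, "device": 0.3, "behavior": 0.2}
REVIEW_THRESHOLD = 0.6

@dataclass
class SessionSignals:
    """Normalized risk scores in [0, 1] from each defensive layer."""
    biometric_risk: float  # e.g. liveness/deepfake detector output
    device_risk: float     # e.g. virtual-camera or emulator indicators
    behavior_risk: float   # e.g. anomalous typing or navigation patterns

def layered_risk_score(s: SessionSignals) -> float:
    """Combine per-layer risk into a single weighted score."""
    return (WEIGHTS["biometric"] * s.biometric_risk
            + WEIGHTS["device"] * s.device_risk
            + WEIGHTS["behavior"] * s.behavior_risk)

def decide(s: SessionSignals) -> str:
    """Route the session: pass, step-up verification, or manual review."""
    score = layered_risk_score(s)
    if score >= REVIEW_THRESHOLD:
        return "manual_review"
    if score >= REVIEW_THRESHOLD / 2:
        return "step_up_verification"
    return "pass"

if __name__ == "__main__":
    # A session whose selfie passes liveness but whose device looks like
    # a virtual camera feed -- the pattern of an injection attack.
    suspicious = SessionSignals(biometric_risk=0.2, device_risk=0.9,
                                behavior_risk=0.6)
    print(decide(suspicious))  # -> "step_up_verification"
```

Even in this toy version, the value of layering is visible: a deepfake that slips past the liveness layer alone still raises the combined score once device and behavioral signals are weighed in.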

The report also warns, however, that AI poses a difficult challenge for cybersecurity companies because these attacks are hard to trace; cybercriminals often rely on legitimate tools to generate synthetic voices.

“Unlike traditional malware, which often carries recognizable code signatures or infrastructure footprints, AI-generated artifacts, whether phishing lures or deepfake audio, can be produced on demand and leave little forensic trace,” says the paper.
