
AI-Powered Vishing Platform Exposed by Researchers

Summary

– A vishing-as-a-service platform called p1bot is misusing ElevenLabs’ AI text-to-speech technology to power automated “press 1” phone scams.
– In these scams, fraudsters spoof trusted institutions’ numbers and use AI-generated voice messages to scare victims into pressing a key, which connects them to a live scammer.
– The p1bot platform streamlines attacks by offering a subscription-based dashboard where operators can spoof numbers, generate voices, place calls, and manage interactions.
– Researchers discovered the platform’s details because its client-side JavaScript was not obfuscated, revealing its integrations and embedded credentials.
– ElevenLabs responded to the abuse report by investigating and banning the accounts, demonstrating how vendor-researcher cooperation can disrupt such criminal activity.

Security researchers have uncovered a sophisticated vishing-as-a-service platform that leverages advanced AI voice technology to automate fraudulent phone scams. Known as P1 (also referred to as p1bot), this subscription-based service is reportedly misusing text-to-speech capabilities from AI voice company ElevenLabs, enabling criminals to conduct highly convincing “press 1” scams with minimal technical skill. This development highlights a troubling trend in which commercial AI tools are weaponized to lower the barrier for large-scale social engineering attacks.

In a typical “press 1” scam, fraudsters impersonate trusted institutions like banks. They use spoofed phone numbers to call potential victims, playing a pre-recorded message that warns of account compromises or fraudulent transactions. The message instructs the listener to press “1” on their keypad to speak with a representative. Those who comply are connected to a live scammer who poses as an employee, aiming to extract sensitive personal or financial information. The P1 platform automates the initial, critical stage of this deception by using AI-generated voices that sound remarkably human, eliminating the need for a scammer to be on the line from the start.

According to Mirage Security CEO Ross Lazerowitz, this platform represents a significant evolution from previous tools. While open-source kits and academic projects have demonstrated similar concepts, P1 is a polished, commercial product. ElevenLabs’ voice technology is deeply integrated as a core feature, complete with a curated catalog of voices in English, French, and Spanish. This seamless integration creates a streamlined workflow designed for anyone willing to pay the subscription fee, effectively democratizing access to powerful vishing capabilities.

The operational process for using P1 is straightforward. Would-be scammers register through a Telegram bot and pay a monthly fee of $399 via a cryptocurrency gateway. They then gain access to a web-based dashboard that functions as a browser softphone. This interface allows operators to spoof caller IDs, generate AI voice prompts, place calls over the internet via WebRTC, and capture the dual-tone multi-frequency (DTMF) signals generated when a victim presses phone keys. Crucially, operators can create and save fake interactive voice response (IVR) clips using the platform’s “Generate TTS” page, playing them during live calls to mimic legitimate automated banking or customer service systems.

Mirage Security’s investigation was aided by the platform’s lack of even basic code obfuscation. The client-side JavaScript was neither minified nor access-restricted, allowing researchers to analyze the application’s logic, API integrations, and even embedded credentials without ever creating an account. Diagnostic logs left active in the production version also suggested the platform may have been developed with the assistance of AI coding tools. This analysis revealed that criminals are not building novel AI systems but are instead subscribing to the same commercial services available to legitimate users and developers.
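Finding embedded credentials in unobfuscated client-side JavaScript is a routine research technique: scan the bundle for secret-shaped strings. A minimal sketch of that approach is below; the regex patterns and the sample snippet are illustrative assumptions, not drawn from the actual P1 code.

```python
import re

# Heuristic patterns for common credential shapes in client-side bundles.
# These are illustrative; real secret scanners ship far larger rule sets.
PATTERNS = {
    "assigned_secret": re.compile(
        r"""(?i)(api[_-]?key|token|secret)\s*[:=]\s*["']([A-Za-z0-9_\-]{16,})["']"""
    ),
    "bearer_token": re.compile(r"""Bearer\s+[A-Za-z0-9\-_.]{20,}"""),
}

def scan_js(source: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for likely embedded secrets."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((label, match.group(0)))
    return hits

# Hypothetical snippet resembling an unobfuscated config object:
sample = 'const config = { apiKey: "sk_live_abcdef0123456789abcd" };'
for rule, text in scan_js(sample):
    print(rule, "->", text)
```

Production tools such as dedicated secret scanners add entropy checks and provider-specific rules to cut false positives, but even simple pattern matching like this is enough when, as here, credentials sit in plain view.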

Lazerowitz emphasized that the situation also demonstrates a path for effective countermeasures. Upon receiving the findings, ElevenLabs’ security team acted swiftly, investigating the abusive accounts and taking action. The company has built traceability features into its platform and actively monitors for misuse, banning accounts and reporting them to authorities when necessary. This incident shows that collaboration between security researchers and technology vendors can lead to meaningful disruption of these criminal operations. While ElevenLabs declined to comment on this specific case, their established policies indicate a commitment to combating the malicious use of their technology.

(Source: Help Net Security)

Topics

vishing scams, AI voice technology, ElevenLabs misuse, “press 1” scams, p1bot platform, cybersecurity research, spoofed phone numbers, WebRTC technology, cryptocurrency payments, Telegram integration