Humanoid Robot Hacked via Bluetooth, Data Sent to China

▼ Summary
– The Unitree G1 robot can be exploited via Bluetooth to gain root access, allowing attackers to inject commands and take control using a universal hardcoded encryption key.
– Weak encryption in the robot’s configuration files uses predictable algorithms and a shared key, enabling anyone to decrypt and access sensitive data across all units.
– The robot automatically transmits data including audio, video, and sensor information to servers in China without user consent, violating privacy regulations like GDPR.
– Multiple communication systems are unsecured, with unencrypted local traffic and disabled TLS checks, creating various attack paths for espionage or network intrusion.
– Researchers demonstrated the robot can function as a surveillance tool or launch cyber attacks, highlighting the need for adaptive security and regulatory oversight in robotics.

A recent security analysis of the Unitree G1 humanoid robot has uncovered multiple vulnerabilities that could allow unauthorized individuals to take control of the device and use it for espionage or cyberattacks. Researchers found that the robot’s Bluetooth setup process can be exploited by anyone within range, granting them root access without the owner’s knowledge.
The issue stems from the way the robot handles its initial configuration using Bluetooth Low Energy (BLE). During Wi-Fi setup, the G1 uses BLE to receive network credentials, but this channel does not properly validate incoming data. All Unitree G1 units, along with other models from the same manufacturer, share the same hardcoded AES encryption key, making it possible for attackers to inject commands and achieve remote code execution with elevated privileges. According to the research team, exploitation requires only Bluetooth proximity and knowledge of this universal key. The vulnerability appears across multiple firmware versions, and once access is gained, an attacker can maintain control by altering login details or creating new remote accounts.
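The danger of a fleet-wide hardcoded key can be sketched in a few lines. The toy cipher below is deliberately not AES (Python's standard library ships none), and the key and credential payload are invented; the point is only that once the shared secret is extracted from any one unit's firmware, every unit's provisioning traffic can be read, and forged, with it.

```python
import hashlib

# Hypothetical stand-in for the fleet-wide key baked into the firmware.
HARDCODED_KEY = b"unitree-shared-key"

def keystream(key: bytes, n: int) -> bytes:
    # Toy hash-counter keystream (NOT AES), for illustration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR is its own inverse

# One robot encrypts its Wi-Fi credentials during BLE provisioning...
packet = encrypt(HARDCODED_KEY, b'{"ssid":"lab","psk":"secret"}')
# ...and anyone holding the extracted key reads them, on every unit alike.
print(decrypt(HARDCODED_KEY, packet))
```

The same symmetry lets an attacker encrypt an injected command that the robot will accept as legitimate, which is the root of the command-injection path the researchers describe.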
Further investigation revealed serious weaknesses in the encryption protecting the robot’s configuration files. A two-layer security system is in place, but both layers are fundamentally flawed. The outer layer uses the Blowfish algorithm in a basic, repetitive mode that is widely considered insecure. Because all G1 units share the same 128-bit encryption key, which the researchers extracted directly from the robot’s own software, decrypting one device means every other device can be decrypted as well. The inner layer relies on a Linear Congruential Generator (LCG), a predictable mathematical sequence often used for simple random number generation. Although the exact seed value for each robot is not publicly known, the 32-bit seed space is small enough to make brute-force attacks practical. Together, these flaws allow anyone to decrypt configuration files containing service settings, process names, and network information, leaving the entire fleet exposed to reverse engineering.
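The brute-force claim is easy to make concrete. The sketch below uses the classic ANSI C LCG constants, not Unitree's (which were not published), and shrinks the search space from 32 bits to 16 so it finishes in well under a second; the real attack has the same shape, just 65,536 times more work.

```python
# Scaled-down brute force of an LCG seed. Constants are the ANSI C
# rand() parameters (illustrative, not Unitree's actual generator).
A, C, M = 1103515245, 12345, 2**31

def lcg_bytes(seed: int, n: int) -> bytes:
    # Derive n keystream bytes from the generator, one byte per step.
    x, out = seed, bytearray()
    for _ in range(n):
        x = (A * x + C) % M
        out.append((x >> 16) & 0xFF)
    return bytes(out)

def brute_force(observed: bytes, space: int):
    # Try every candidate seed until one reproduces the observed keystream.
    for seed in range(space):
        if lcg_bytes(seed, len(observed)) == observed:
            return seed
    return None

secret_seed = 0x3A7F                      # "unknown" per-device seed
observed = lcg_bytes(secret_seed, 8)      # keystream recovered from a file
print(hex(brute_force(observed, 2**16)))  # seed recovered by exhaustion
```

A 32-bit space is about 4.3 billion candidates, which commodity hardware exhausts in minutes to hours, which is why the researchers judge the inner layer ineffective despite the per-device seed.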
Perhaps even more alarming is the robot’s data transmission behavior. Network traffic analysis confirmed that the G1 continuously sends information to servers located in China. This data includes battery status, joint torque, motion state, and sensor inputs from cameras, microphones, and other internal services. Every five minutes, the robot transmits JSON packets to two specific IP addresses on port 17883. If the connection drops, it automatically reestablishes contact. A separate process maintains a persistent WebSocket session with a third server, using an SSL channel that does not verify certificates, enabling ongoing exchange of text or audio data. Users are not informed about these data transfers, and there are no visible indicators or consent options. In Europe, this practice likely violates GDPR Articles 6 and 13, while in the United States, it conflicts with California privacy laws that require an opt-out mechanism for data tracking.
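The disabled certificate check described here corresponds to a client TLS configuration like the following sketch. The context setup itself is standard Python; the endpoint name is a placeholder, not one of the actual servers. With verification off, any machine that can intercept the connection can impersonate the telemetry server.

```python
import ssl

# Reconstruction of the insecure client configuration the analysis describes:
# certificate verification switched off entirely.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE   # accept any certificate, even self-signed

# A WebSocket client built on this context, e.g. (hypothetical endpoint):
#   websockets.connect("wss://telemetry.example:443", ssl=ctx)
# will complete the handshake with an attacker-controlled server just as
# readily as with the real one.
```

Note that `check_hostname` has to be cleared first: Python refuses to set `CERT_NONE` while hostname checking is still enabled, a guardrail this configuration deliberately steps around.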
Internally, the robot relies on several communication protocols, some of which are inadequately secured. Systems like DDS and RTPS manage messages between sensors and actuators, while MQTT and WebRTC connect to cloud services for updates and remote operation. Researchers noted that DDS traffic is sent without encryption, allowing anyone on the same local network to eavesdrop. The WebRTC client also has TLS certificate checks disabled, making it possible for attackers to impersonate legitimate services. Combined with the Bluetooth vulnerability and weak file encryption, these design flaws create multiple pathways for attackers to move laterally within the system.
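The consequence of unencrypted local traffic is simple to demonstrate. The sketch below is not RTPS (the DDS wire protocol); it merely shows, over loopback, that a plaintext telemetry datagram is readable by any listener on the segment with no key material at all. The port and payload are invented.

```python
import json, socket, threading

PORT = 47400  # arbitrary demo port, not a real DDS port mapping

# "Attacker" socket: bound before the send so the datagram cannot be lost.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", PORT))

captured = []
def eavesdrop():
    data, _ = listener.recvfrom(4096)
    captured.append(json.loads(data))  # plaintext parses directly

t = threading.Thread(target=eavesdrop)
t.start()

# "Robot" socket: sends telemetry in the clear, as unencrypted DDS would.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(json.dumps({"joint_torque": 1.7}).encode(), ("127.0.0.1", PORT))
sender.close()

t.join()
listener.close()
print(captured[0])  # → {'joint_torque': 1.7}, with nothing to decrypt
```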
To illustrate the risks, researchers presented two realistic scenarios. In the first, the G1 functions as a covert surveillance tool. From the moment it powers on, the robot automatically connects to telemetry servers and begins transmitting audio from its microphones, video from its cameras, and spatial data from LIDAR and GPS modules. This capability could be exploited for unauthorized monitoring, facility mapping, or corporate espionage: a robot placed in an office or laboratory could silently gather sensitive information and send it overseas.
In the second scenario, the robot was used as an attack platform. Researchers installed a Cybersecurity AI (CAI) framework on the device, which performed reconnaissance, vulnerability scanning, and attack planning. The CAI identified open communication channels and confirmed that it could inject commands through the Bluetooth flaw. It explored MQTT and WebRTC pathways and found methods to manipulate over-the-air update systems. Although the team halted actual attack execution for ethical reasons, the experiment confirmed that a compromised robot could transition from data collection to launching intrusions against other networked systems.
The study concludes that humanoid robots like the G1 represent a unique cybersecurity threat due to their dual potential as surveillance devices and attack vectors. Researchers are urging the robotics industry to adopt a new approach to security, moving beyond static defenses and manual audits. They recommend adaptive security systems powered by Cybersecurity AI that can automatically detect and respond to threats. As one expert noted, these findings highlight a future in which data-hungry robots enter homes, factories, and public spaces, posing risks to privacy and fundamental rights unless verifiable corrections and regulatory oversight are implemented without delay.
(Source: HelpNet Security)