AI Toy Leaked 50,000 Kids’ Chats to Any Gmail User

▼ Summary
– Security researchers discovered that Bondu’s web portal, intended for parents and staff, allowed anyone with a Gmail account to access children’s private chat transcripts and personal data without hacking.
– The exposed data included children’s names, birth dates, family details, and intimate conversation histories from over 50,000 chats with the AI-enabled stuffed dinosaur toy.
– Upon being alerted, Bondu quickly took down the portal and implemented proper security fixes, claiming no evidence of access beyond the researchers.
– The company stated it has communicated with users, strengthened its security protocols, and hired a firm to monitor its systems going forward.
– The incident serves as a broader warning about the privacy risks of AI chat toys for kids, which store detailed conversational histories to inform interactions.
A security researcher's brief look at a popular new AI toy revealed a shocking privacy breach that exposed tens of thousands of children's intimate conversations to the open internet. The incident highlights the critical vulnerabilities that can emerge when rapidly developed AI products are rushed to market without rigorous security safeguards, putting sensitive user data at risk.
A neighbor’s casual question about a pre-ordered Bondu stuffed dinosaur led researcher Joseph Thacker to examine the toy’s web portal. Designed for parental oversight and company monitoring, the portal could be accessed by anyone with a Google account. Within minutes, Thacker and fellow researcher Joel Margolis found they could view an extensive trove of private data belonging to the toy’s young users.
The exposed information was deeply personal. The researchers accessed children’s full names, birth dates, and the names of family members. They saw the “objectives” parents had set for their children and, most alarmingly, detailed summaries and complete transcripts of every prior chat between a child and their AI companion. These conversations revealed pet names for the toy, favorite snacks, dance moves, and other intimate details shared in what was meant to be a private, imaginative space. The company later confirmed that over 50,000 chat transcripts were accessible through this unprotected console, representing nearly every conversation the toys had ever had.
“It felt pretty intrusive and really weird to know these things,” Thacker remarked, describing the experience as a massive violation of children’s privacy. The researchers emphasized they conducted no hacking; the data was openly available to anyone with a Gmail address who knew where to look.
Upon being alerted, Bondu quickly took the console offline and relaunched it the following day with proper authentication. In a statement, CEO Fateen Anam Rafid said security fixes were completed within hours, followed by a broader review. The company said it found no evidence of access beyond the researchers involved and has committed to strengthening its systems, including hiring an external security firm. The researchers confirmed they retained no sensitive data, sharing only limited evidence with the media to verify their findings.
While this specific leak has been closed, the episode serves as a stark warning. The researchers’ access to Bondu’s backend revealed how detailed profiles are built and stored to inform the AI’s interactions, creating a rich target for exposure. The case underscores the pressing need for robust security to be a foundational component of any connected toy for children, especially one that uses artificial intelligence to foster personal conversation.
(Source: Wired)
