
Google AI Bug Bounty Offers $30K Rewards

Summary

– Google launched a new bug reward program on Monday specifically targeting vulnerabilities in its AI products.
– The program defines AI bugs as attacks that use large language models or generative AI to cause harm or exploit security loopholes, with a focus on rogue actions.
– Bug hunters have earned over $430,000 in the past two years for identifying AI abuse avenues in Google’s products.
– Google clarified that AI content issues like hate speech generation should be reported via product feedback channels, not the bug program.
– The program offers up to $20,000 for finding bugs in flagship products, with potential bonuses increasing rewards to $30,000.

Google has launched a significant new bug bounty program focused exclusively on uncovering security flaws within its artificial intelligence systems, offering rewards as high as $30,000 for successful discoveries. This initiative clearly defines what qualifies as an AI vulnerability, specifically targeting issues where large language models or generative AI are manipulated to cause harm or exploit security weaknesses.

The company has provided concrete examples of the types of exploits it is seeking. These include scenarios like an indirectly injected prompt that could command a Google Home device to unlock a door, or a data exfiltration attack where a prompt injection summarizes a user’s entire email inbox and sends that information directly to an attacker. The program prioritizes identifying “rogue actions” that allow unauthorized modification of user accounts or data, potentially compromising security or performing unwanted activities. One previously exposed flaw, for instance, allowed a manipulated Google Calendar event to open smart shutters and switch off lights without permission.

Over the past two years, even before this formal program, Google has paid out more than $430,000 to security researchers who identified potential avenues for AI feature abuse in its products. The company emphasizes that simply making its Gemini model produce incorrect or “hallucinated” information is not sufficient for a reward. Problems related to AI-generated content, such as the production of hate speech or copyrighted material, should be reported through the product’s own feedback channel. This allows Google’s AI safety teams to diagnose the underlying model behavior and implement comprehensive, long-term safety improvements.

Alongside the bounty announcement, Google also introduced an AI agent named CodeMender, designed to automatically patch vulnerable code. The company reports that this tool, following human researcher vetting, has already been used to apply 72 security fixes to various open-source projects.

The top-tier reward of $20,000 is reserved for discovering rogue actions in Google’s most prominent products, including Search, Gemini Apps, and core Workspace applications like Gmail and Drive. The final payout can be increased through multipliers based on report quality and a novelty bonus, potentially reaching the maximum reward of $30,000. For bugs found in other Google products such as Jules or NotebookLM, or for less severe abuses like stealing secret model parameters, the reward amounts are correspondingly lower.

(Source: The Verge)
