Topic: model extraction

  • Google: Attackers Made 100,000+ Attempts to Clone Gemini AI

    Google reported more than 100,000 attempts to extract its Gemini AI's capabilities, attributing them to actors seeking to train cheaper, competing models, a practice it calls "model extraction." The underlying technique, "distillation," lets such actors bypass the high costs of original AI train...

  • Google: Hackers Use Gemini AI for Every Attack Phase

    State-sponsored hacking groups from China, Iran, North Korea, and Russia are using Google's Gemini AI to conduct reconnaissance, craft phishing messages, write malicious code, and plan sophisticated attacks. These actors integrate AI into core workflows, such as automating vulnerability analysis,...

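As background on the distillation technique mentioned in the first item: the core idea is to train a cheaper "student" model to mimic a "teacher" model's temperature-softened output distribution, which is why large-scale querying of a model's responses can recover much of its behavior. The sketch below is a minimal, hypothetical illustration of the standard distillation loss; all names and logit values are assumptions for illustration, not anything from Google's report.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution,
    # exposing the teacher's relative preferences across classes.
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions.
    # Minimizing this loss trains the student to reproduce the teacher's
    # output behavior without access to the teacher's weights or data.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])   # hypothetical teacher logits for one query
aligned = np.array([3.8, 1.1, 0.4])   # student that already mimics the teacher
uniform = np.array([0.0, 0.0, 0.0])   # untrained student (uniform output)

# The loss is near zero for the well-aligned student and larger for the
# untrained one; gradient descent on this loss drives the student toward
# the teacher's responses.
loss_aligned = distillation_loss(teacher, aligned)
loss_uniform = distillation_loss(teacher, uniform)
```

In a real extraction attempt the teacher logits would come from repeated API queries, which is why the report frames the 100,000+ requests as a cloning effort rather than ordinary usage.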