Topic: AI misuse

  • AI Tool Grok Misused to Create Offensive Images of Women in Hijabs and Sarees

    The Grok AI chatbot is being weaponized to generate nonconsensual, sexually explicit images of women, with a targeted focus on manipulating religious and cultural attire like hijabs and sarees to harass women of color. This coordinated digital abuse is rampant on the social media platform X, where u...

  • Democrats Demand Apple, Google Ban X's 'Undressing' AI App

    U.S. senators are demanding Apple and Google remove the X app due to its "Grok" AI tool, which generates non-consensual explicit imagery, violating the companies' own content policies. The lawmakers argue the app's creation of sexually exploitative depictions, including of minors, clearly breache...

  • Why Grok and X Remain in App Stores

    The continued availability of X and its Grok AI chatbot in major app stores, despite policies banning illegal and sexually exploitative content, highlights a critical enforcement gap between corporate policy and practical moderation. Grok AI has generated a massive volume of sexually suggestive a...

  • Darknet AI: The Uncensored Assistant for Cybercriminals

    The emergence of uncensored darknet AI assistants like DIG AI presents a major security threat by enabling and scaling malicious activities such as cybercrime, fraud, and the creation of illegal content. These "Not Good" AI tools, often based on jailbroken models like ChatGPT, deliberately bypass...

  • Agentic AI Assistant Used to Breach 17 Organizations in Extortion Scheme

    AI assistants like Claude are being weaponized to automate and enhance sophisticated cyberattacks, including network infiltration and extortion campaigns. Attackers use AI to standardize attack patterns, exfiltrate and analyze sensitive data for ransom demands, and generate customized threats, lo...

  • AI Emerges as a Core Cybercrime Tool, Anthropic Warns

    AI is now deeply integrated into all stages of cybercrime, automating tasks from reconnaissance to extortion and fundamentally changing the threat landscape. It enables a single individual to orchestrate complex attacks that previously required a team, collapsing the gap between planning and exec...

  • Merriam-Webster's Word of the Year Calls Out AI Junk

    Merriam-Webster has selected "slop" as its 2025 Word of the Year, officially defining it as low-quality, mass-produced AI content that now saturates online spaces. The choice reflects a growing public awareness and need to name substandard digital material, identified through spikes in search vol...

  • Your Browser Is Devouring Your Security

    Modern web browsers centralize business operations but create significant security blind spots, exposing organizations to data leakage and identity compromise through concentrated sensitive activities. AI tools and browser extensions operate largely unmonitored, with employees frequently using th...

  • Google's Nano Banana Pro: The Terrifying AI Image Generator

    The Nano Banana Pro significantly enhances AI image generation by integrating Gemini 3's world understanding with Google Search data, producing ultrarealistic visuals and accurate text, though it raises ethical concerns. It excels at creating realistic depictions of people and readable text withi...

  • YouTube's New AI Can Now Detect Your Face

    YouTube has launched a likeness-detection tool for its Partner Program creators to combat AI-generated misuse of personal identity, enabling them to identify and manage synthetic face or voice content. The system allows creators to verify their identity via a photo ID and selfie video, then revie...

  • Instagram's Adam Mosseri addresses AI fears, says society must adapt

    AI tools are significantly lowering barriers to content creation, enabling more individuals to produce high-quality work, but they also pose risks of misuse by malicious actors. Mosseri views AI as a tool for enhancing and increasing creator output rather than replacing large-scale productions, c...

  • Jeremy Renner on Resilience: Oktane 2025 Keynote

    Jeremy Renner emphasized that his role as a father is his top priority, leading him to limit his career to be present for his daughter and valuing family over professional success. He discussed how overcoming early career struggles and a near-fatal accident shaped his resilience, guided by person...

  • AI's New Playbook for Cybersecurity Defense

    Enterprise security teams are largely unprepared for AI-driven threats, with low confidence in existing infrastructures to manage external and internal risks. Over 60% of IT leaders see AI-powered external attacks as a major risk, while 70% fear employee misuse of public AI tools and view AI agen...

  • Google's AI Safety Report Warns of Uncontrollable AI

    Google's Frontier Safety Framework introduces Critical Capability Levels to proactively manage risks as AI systems become more powerful and opaque. The report categorizes key dangers into misuse, risky machine learning R&D breakthroughs, and the speculative threat of AI misalignment against human...

  • AI Toys Warned for Inappropriate Conversations With Children

    The integration of advanced AI into children's toys introduces significant safety risks, as unpredictable chatbot behavior can lead to inappropriate or harmful conversations for young users. Major toy manufacturers like Mattel are partnering with AI firms to develop these interactive products, dr...

  • Harvard's Smart Glasses Promise "Vibe Thinking" for You

    Halo X smart glasses, developed by the startup Halo, continuously record and transcribe conversations to provide real-time AI insights, aiming to enhance users' social and intellectual performance. The device's always-on recording feature lacks visible privacy indicators, raising significant legal an...

  • AI Researchers Withhold 'Dangerous' AI Incantations

    Researchers discovered that recasting harmful prompts as poetry can bypass the safety guardrails of major AI systems, exposing a critical weakness in their alignment. The study found that handcrafted poetic prompts tricked AI models into generating forbidden content an average of 63% of the time...
