Topic: jailbreak attacks
-
Garak: Open-Source AI Security Scanner for LLMs
Garak is an open-source security scanner designed to identify vulnerabilities in large language models, such as unexpected outputs, sensitive data leaks, or responses to malicious prompts. It tests for weaknesses including prompt injection attacks, model jailbreaks, factual inaccuracies, and toxi...
Read More » -
Unmasking AI's Hidden Prompt Injection Threat
Modern LLMs have developed sophisticated defenses that neutralize hidden prompt injections, ensuring AI systems process information with integrity and prioritize legitimate user instructions over covert manipulation. Technical countermeasures like stricter system prompts, user input sandboxing, a...
Read More »