Topic: model compatibility

  • Garak: Open-Source AI Security Scanner for LLMs

    Garak is an open-source security scanner designed to identify vulnerabilities in large language models, such as unexpected outputs, sensitive data leaks, or responses to malicious prompts. It tests for weaknesses including prompt injection attacks, model jailbreaks, factual inaccuracies, and toxic content generation.
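
    Garak runs as a command-line tool: you point it at a model backend and select which probe families to run. Below is a minimal usage sketch based on garak's documented CLI flags (--model_type, --model_name, --probes); exact probe names and options may vary between releases, so verify against the current documentation.

        # Install the scanner (assumes the PyPI package name "garak")
        pip install garak

        # Probe a local Hugging Face model with prompt-injection tests
        python -m garak --model_type huggingface --model_name gpt2 --probes promptinject

        # The same probe suite can target a hosted API, e.g. an OpenAI model
        python -m garak --model_type openai --model_name gpt-3.5-turbo --probes promptinject

    The --model_type flag is what provides model compatibility: it selects the generator backend (Hugging Face, OpenAI, and others), so the same probes run unchanged against different model providers.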
