Hackers Exploit Critical LiteLLM Pre-Auth SQLi Flaw

▼ Summary
– Hackers are exploiting a critical SQL injection vulnerability (CVE-2026-42208) in the LiteLLM open-source LLM gateway to steal sensitive data without authentication.
– The flaw allows attackers to read and modify the proxy’s database, potentially gaining unauthorized access to managed credentials and API keys.
– A fix was released in LiteLLM version 1.83.7, which replaced string concatenation with parameterized queries.
– Active exploitation began approximately 36 hours after public disclosure, with targeted requests to specific database tables containing secrets, indicating precise attacker knowledge.
– Exposed LiteLLM instances still running vulnerable versions should be treated as potentially compromised, and all stored credentials should be rotated.

Hackers are actively exploiting a critical pre-authentication SQL injection vulnerability in LiteLLM, a popular open-source proxy gateway for large language models (LLMs), to steal sensitive credentials and API keys. The flaw, identified as CVE-2026-42208, allows attackers to bypass authentication entirely.
The vulnerability resides in LiteLLM’s proxy API key verification process. By sending a specially crafted `Authorization` header to any LLM API route, an unauthenticated attacker can inject malicious SQL queries into the proxy’s database. This enables both reading and modifying stored data. According to the project maintainers, exploitation could lead to “unauthorised access to the proxy and the credentials it manages,” including API keys, master keys, environment variables, and configuration secrets.
The fix was released in LiteLLM version 1.83.7, which replaces vulnerable string concatenation with parameterized queries. Users still running older versions are strongly urged to upgrade immediately.
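To illustrate the class of bug and its fix in general terms, the sketch below contrasts string concatenation with a parameterized query. It is a hypothetical example using SQLite; the table and column names are invented for illustration and are not LiteLLM's actual schema or code.

```python
import sqlite3

# Hypothetical key-lookup table; names are illustrative, not LiteLLM's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-good', 'alice')")

def lookup_vulnerable(token: str):
    # String concatenation: attacker-controlled `token` becomes part of the SQL.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def lookup_fixed(token: str):
    # Parameterized query: the driver binds `token` as data, never as SQL.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (token,)).fetchall()

payload = "' OR '1'='1"  # classic injection: makes the WHERE clause always true
print(lookup_vulnerable(payload))  # matches every row despite an invalid key
print(lookup_fixed(payload))       # matches nothing: payload treated as a literal
```

The parameterized version never interpolates user input into the SQL text, which is exactly the style of change described for 1.83.7.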
LiteLLM acts as a middleware layer, allowing developers to call multiple AI models (from OpenAI, Anthropic, Bedrock, and others) through a single unified API. With over 45,000 GitHub stars and 7,600 forks, it is widely adopted in LLM application development. This is not the first security incident targeting the project; recently, a supply-chain attack by the TeamPCP group used malicious PyPI packages to deploy an infostealer, harvesting credentials and tokens from infected systems.
According to researchers at Sysdig, a cloud security firm, active exploitation of CVE-2026-42208 began approximately 36 hours after the vulnerability was publicly disclosed on April 24. The attacks were notably targeted and deliberate.
The researchers observed threat actors sending crafted requests to the `/chat/completions` endpoint with a malicious `Authorization: Bearer` header. These requests queried specific database tables containing API keys, provider credentials (OpenAI, Anthropic, Bedrock), environment data, and configuration files. Notably, the attackers avoided probing benign tables. “The operator went straight to where the secrets live,” Sysdig noted, indicating prior knowledge of the database schema.
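Operators hunting for this activity in their own access logs could look for SQL metacharacters inside `Authorization: Bearer` values. The snippet below is a minimal, hypothetical sketch: the log format and indicator patterns are assumptions for illustration, not Sysdig's detection logic or LiteLLM's actual log schema.

```python
import re

# Hypothetical access-log lines; the format is illustrative only.
log_lines = [
    'POST /chat/completions authorization="Bearer sk-abc123"',
    'POST /chat/completions authorization="Bearer x\' UNION SELECT token FROM keys--"',
]

# Crude indicator: SQL quote characters, comment markers, or keywords
# appearing inside a Bearer token, where they never belong.
suspicious = re.compile(r'Bearer [^"]*(\'|--|\bUNION\b|\bSELECT\b)', re.IGNORECASE)

hits = [line for line in log_lines if suspicious.search(line)]
for line in hits:
    print("possible injection attempt:", line)
```

A match is only a heuristic signal, but legitimate LiteLLM virtual keys should never contain quotes or SQL keywords, so any hit warrants investigation.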
In a second phase, the attacker switched IP addresses, likely for evasion, and reran the same SQL injection attempts. This time, they used fewer, more precise payloads, focusing on the correct table names and structures identified earlier.
While 36 hours is not as fast as the exploitation of a recent flaw in Marimo, Sysdig emphasized that these attacks were highly specific and targeted. The researchers warn that any internet-exposed LiteLLM instance still running a vulnerable version should be considered potentially compromised. All virtual API keys, master keys, and provider credentials stored in such instances should be rotated immediately.
For users who cannot upgrade to version 1.83.7, the maintainers recommend a temporary workaround: set `disable_error_logs: true` under `general_settings` in the proxy configuration. This blocks the path through which malicious input reaches the vulnerable query, reducing the risk of exploitation.
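Assuming LiteLLM's standard YAML proxy configuration, the workaround would look roughly like this; verify the exact key name against the documentation for your installed version:

```yaml
# config.yaml — temporary mitigation when upgrading to 1.83.7 is not possible.
# Sketch based on LiteLLM's standard YAML config layout; key name as
# recommended by the maintainers, to be confirmed against your version's docs.
general_settings:
  disable_error_logs: true
```

This is a stopgap, not a fix: upgrading and rotating all stored credentials remains the recommended response.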
(Source: BleepingComputer)




