Speech & Audio

GitHub patches Copilot Chat flaw that could leak secrets • The Register

GitHub patches Copilot Chat flaw that could leak secrets • The Register


GitHub’s Copilot Chat, the chatbot meant to help developers code faster, could be helping attackers to steal code instead.

Researcher Omer Mayraz of Legit Security disclosed a critical vulnerability, dubbed CamoLeak, that could be used to trick Copilot Chat into exfiltrating secrets, private source code, and even descriptions of unpublished vulnerabilities from repositories. The flaw was scored 9.6 on the CVSS scale in the disclosure.

The root cause is simple. Copilot Chat runs with the permissions of the signed-in user and ingests contextual text that humans might not see. Mayraz demonstrated how an attacker can hide malicious prompts in GitHub’s “invisible” markdown comments inside pull requests or issues – content that doesn’t render in the standard web UI but is still parsed by the chatbot. When a maintainer later asks Copilot to review or summarize the change, the assistant can obediently follow the buried instructions, searching the repo for API keys, tokens, source files, or other juicy material.

Exfiltrating the data required a crafty workaround. GitHub’s Content Security Policy (CSP) and its image-proxy service, Camo, are supposed to stop arbitrary outbound requests, but Mayraz “created a dictionary of all letters and symbols in the alphabet” and pre-generated corresponding Camo-proxied image URLs, effectively mapping each character to a distinct, legitimate Camo URL.

The poisoned prompt then instructed Copilot to render the discovered secret as a sequence of 1×1 pixel images. By observing which image endpoints were fetched and in what order, an attacker could reconstruct the secret character by character. That “pixel alphabet” turns innocuous image loads into a covert data channel with little visible trace.

Mayraz’s proof-of-concept pulled AWS keys, security tokens, and even the description of an undisclosed zero-day vulnerability stored inside a private issue on a private organization’s repo – the exact sort of unpublished exploit notes red teams and security researchers keep under wraps. In short, the bug could be used to steal not just credentials but unreleased bug details that attackers could weaponize.

Legit Security reported the flaw via HackerOne, and GitHub moved quickly to blunt the channel. According to the disclosure, GitHub disabled image rendering in Copilot Chat on August 14 and blocked the use of Camo to leak “sensitive victim user content,” closing the precise exfiltration route while a longer-term remediation is developed.

Microsoft-owned GitHub did not immediately respond to The Register’s questions. 

CamoLeak is a textbook demonstration of how adding AI into developer workflows expands the attack surface. Anything an assistant can read and act on becomes an input channel attackers can manipulate, including hidden comments that human reviewers never expect to be dangerous.

Turns out the assistant that finds your bugs is also pretty good at nicking your keys. ®

GitHub patches Copilot Chat flaw that could leak secrets • The Register

Source link