A security researcher has exposed a critical vulnerability in Anthropic’s Claude AI, allowing attackers to steal user data by exploiting the platform's own File API.
The flaw enables attackers to use hidden commands to hijack Claude’s Code Interpreter, tricking the AI into sending sensitive data, such as chat histories, directly to an attacker.
Attackers can exfiltrate user data via a chained exploit that abuses the platform's own File API.
Initially, Anthropic dismissed the report on October 25, but later acknowledged a “process hiccup” and reversed its decision on October 30.
Author's summary: Critical vulnerability in Claude AI allows data theft.