Hacker plants false memories in ChatGPT to steal user data in perpetuity

siteadmin September 24, 2024

Security researcher Johann Rehberger exposed a vulnerability in OpenAI’s ChatGPT that allowed attackers to store false information in a user’s long-term memory settings. The flaw, dismissed by OpenAI as a safety issue, was demonstrated by Rehberger via a proof-of-concept exploit which uses the vulnerability to extract all user input. Following this, OpenAI issued a partial fix. Despite this, untrusted content can still perform injections that store malicious long-term information, warned Rehberger.

Source: arstechnica.com - Read more