ChatGPT creates mutating malware that evades detection by EDR

siteadmin June 6, 2023

ChatGPT and other large language models (LLMs) can generate code that changes with each call, posing a cybersecurity risk because such mutating code can evade endpoint detection and response (EDR) tools. A developer at cybersecurity firm GitGuardian says the malware these models currently produce is not a major cause for concern, but that as the models evolve, they may create malware that only other AI systems can detect. Bypassing content filters through “prompt engineering” allows the creation of potentially malicious code, and experts suggest it is only a matter of time before content filters can no longer be relied upon.