ChatGPT, Laden With Hacker-Inserted Malware, Is Writing Code for Companies
Companies that use generative AI tools such as ChatGPT for software development face a growing risk from hacker-inserted malware and design errors. These tools learn to code largely from open-source repositories, some of which contain malicious or defective code that the models can reproduce in their output. To mitigate this risk, companies should inspect AI-generated code thoroughly, combining static behavioural scans with Software Composition Analysis (SCA) to check dependencies and surface potential flaws before deployment.
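As a minimal illustration of the kind of automated check such a review might include, the sketch below compares a project's declared dependencies against a blocklist of suspicious package names. The blocklist and package names here are hypothetical typosquat examples; real SCA tools such as `pip-audit` or OWASP Dependency-Check instead query curated vulnerability databases (e.g. OSV, the NVD).

```python
# Minimal sketch of a Software Composition Analysis (SCA) style check.
# KNOWN_MALICIOUS is a hypothetical, hard-coded blocklist; production
# SCA tools query curated vulnerability databases instead.

KNOWN_MALICIOUS = {"requessts", "urlib3", "colourama"}  # hypothetical typosquats

def parse_requirements(text: str) -> list[str]:
    """Extract bare package names from a requirements.txt-style string."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Strip version specifiers like ==, >=, <=, ~=
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            if sep in line:
                line = line.split(sep, 1)[0]
                break
        names.append(line.strip().lower())
    return names

def flag_suspicious(requirements: str) -> list[str]:
    """Return any declared dependencies that appear on the blocklist."""
    return [n for n in parse_requirements(requirements) if n in KNOWN_MALICIOUS]

if __name__ == "__main__":
    reqs = "requests==2.31.0\nurlib3>=1.0\n# pinned deps\nnumpy"
    print(flag_suspicious(reqs))  # the typosquatted 'urlib3' is flagged
```

A check like this catches only known-bad names; it complements, rather than replaces, the static scans of the generated code itself that the article recommends.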