Hacker Tricks ChatGPT to Get Details for Making Homemade Bombs
A hacker known as Amadon has reportedly bypassed the safety protocols of OpenAI’s AI chatbot ChatGPT, generating instructions for creating homemade explosives. The method, known as “jailbreaking,” highlights the security vulnerabilities and potential abuse of AI systems. The incident has prompted calls for stronger security protocols, ethical AI development, and public awareness to prevent misuse and ensure user safety.
Source: gbhackers.com
- Read more