Effects of Artificial Intelligence on the Landscape of Cyber Security

Morgan Phisher May 19, 2024

Hey fellow Bay Area folks, let’s chat about how Artificial Intelligence (AI) – the technology you’ve commonly heard to be revolutionizing industries, could potentially increase cyber threats. It’s not all rainbows and unicorns when it comes to innovation. The downside? Well, with AI becoming more accessible, we might see a rise in hacking attempts and scary scenarios where private AI models are misused. Sound like an episode of Black Mirror? It’s not far off.

A group of cybersecurity whizzes recently forecasted that as wrongdoers get more skilled with sophisticated AI algorithms, we could see a pick-up in zero-day exploits (those nasty, unpredictable security vulnerabilities) and deepfake scams (manipulated media content). They’ve felt that the more advanced threats aren’t entirely AI’s making just yet, but there’ll likely be a push in this direction.

One point of focus has been an emerging threat, quaintly called “prompt hacking”. Imagine this: someone feeds an AI model a prompt that veers it off its path, tricking it into performing tasks it wasn’t designed for. This could be exploited by both well-meaning users (maybe trying to test for vulnerabilities, like digital white hats) and those with more sinister motives. Think of it as a jailbreaker worming their way around the AI’s safety measures. Evidence of this is already apparent, with prompt injections topping the tier list of security vulnerabilities for AI applications.

Now, as germs give rise to stronger antibiotics and viruses trigger advanced vaccines, this ‘prompt hacking’ has led AI providers to constantly level up their safeguards. However, it’s a delicate balance as more safeguards may mean less user-friendly programs. Plus, if the AI models they’re seeking to protect are being turned against them, we could be looking at a never ending tech tussle.

Indeed, we must acknowledge that those with harmful intentions may already have or be working on high-level technologies, comparable or even superior to the ones ethical AI providers have. So, it’s basically an arms race – ethical versus unscrupulous applications of AI.

Consider Google’s Gemini; even the most novel AI models can be susceptible to prompt hacking. It’s akin to being the whack-a-mole champion, yet finding there’s always one more mole popping up!

There’s another threat we need to be aware of, folks. Certain creators have started building private AI modules with no safeguards, making it easy for those with ill intent to use them. Without protection, these could potentially empower inexperienced cyber criminals to carry out convincing phishing attacks or create malware. Terrifying right?

The report also suggests that ‘easy-to-use’ generation AI tools may lead to more security exploits in the future. These tools might enable threat actors to create sophisticated attacks more rapidly than traditionally possible. AI can also be the shield however, with many organizations seeing benefits of AI in detecting zero-day attacks and threats. Could this be silver lining?

Looking ahead, the report talks about a possible “rapid increase of zero-day exploits appearing in the wild”. Attackers may find innovative ways to further automate their cyber-attack processes. They’ve already noted a surge in intrusion activity recently, and warn of another potential increase in attacks.

You thought that was all? Nope. We also have something called “deepfakes” – realistic AI-generated fake content, which could soon be just a keystroke away. Ever received a weird call that sounds uncannily like your boss asking for sensitive information? That’s a voice deepfake. And they’re just the start.

A report even indicated that attempts at deepfake fraud skyrocketed recently, with face-swapping apps leading the way. An example of this involved a finance worker who was duped into transferring a chunk of change to a scammer posing as a senior officer on a video conference call.

The takeaway? Just as ethical providers carousel guardrails to limit abuse, the bad guys will find a way around them. Sure, the race between good and evil is nothing new. But as technology continues to evolve rapidly, we’ll need to stay alert to the cybersecurity risks that come hand in hand. In the Bay Area, and across the globe, it’s always the survival of not just the fittest but the most prepared!

by Morgan Phisher | HEAL Security