
Where AI has (and hasn't) changed the game for cybersecurity

While AI-powered exploit detection and malware will be disruptive, the principles of defense remain the same. Learn what AI has changed for cybersecurity.

Apr 17, 2025 • 4 Minute Read

  • Cybersecurity
  • AI & Data

For the last 12 months, AI has dominated the conversation in cybersecurity. In a field filled with professional critical thinkers, a natural question is: how much have the basics of cybersecurity actually changed, and how different will they be in the near future?

Do we have to throw out the playbook, or are the techniques we’re using still relevant for 2025?

The answer to that question isn’t binary. In some ways, yes, things have changed due to the advent of AI. However, it’s not like you have to retire everything you’ve learned.

1. AI has changed how one-day vulnerabilities are exploited

In 2024, researchers published a paper showing that OpenAI’s GPT-4 large language model could autonomously exploit vulnerabilities in real-world systems. Given only a CVE advisory describing the flaw, it could exploit 87% of the vulnerabilities tested.

The moment this news came out, it was clear that we weren’t in Kansas anymore. With just the code and the CVE advisory, you could automate a large part of exploit development. And since then, the industry has been hard at work building progressively more powerful models.

This signalled a massive paradigm shift, because you can’t patch a vulnerability in the two hours it might take an AI agent to weaponize it. What you can do is control the blast radius. So the solution has become a return to other well-known best practices: fine-grained segmentation, zero trust, and just-in-time authentication before access to network resources. You assume that, yes, attackers will be able to breach you, and good detections and tight segmentation reduce the damage they can do.
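The blast-radius idea can be sketched in a few lines. This is a hypothetical, deny-by-default segmentation policy check, not code from any specific product; the `Rule` type, segment names, and `is_allowed` function are all illustrative assumptions:

```python
# Minimal sketch of deny-by-default microsegmentation (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_segment: str  # e.g. "web-tier"
    dst_segment: str  # e.g. "db-tier"
    port: int

# Only explicitly listed flows are permitted; everything else is denied,
# which limits the blast radius if any one segment is compromised.
ALLOWED_FLOWS = {
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Deny by default: a flow is allowed only if a rule matches exactly."""
    return Rule(src, dst, port) in ALLOWED_FLOWS

# A compromised web server cannot reach the database directly:
print(is_allowed("web-tier", "db-tier", 5432))  # False
print(is_allowed("app-tier", "db-tier", 5432))  # True
```

The design choice that matters here is the default: an attacker who lands on the web tier can only move along flows someone deliberately allowed, rather than anywhere the network happens to route.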

2. AI has added new tools to the defense arsenal

Some genuinely good tools have appeared on the market in the last twelve months. This is the “fighting-fire-with-fire” approach, and it seems like a natural response to the problem.

One example: there are now tools that use AI to map your systems and see who can talk to whom, which is most valuable if you run them before the bad guys get into your system. They can apply very fine microsegmentation, and when they notice unusual access to a resource you’ve never touched before, they ask you for phishing-resistant multifactor authentication. Under the zero-trust model, your activity is continuously monitored to determine whether your behavior is normal or abnormal.
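To make the behavioral-monitoring idea concrete, here is a minimal sketch of the step-up pattern: a per-user baseline of normally accessed resources, with first-time access triggering stronger authentication. The `check_access` function and its return values are assumptions for illustration, not a real product’s API:

```python
# Illustrative sketch of behavior-based step-up authentication.
from collections import defaultdict

# Baseline of resources each user normally accesses (learned over time).
access_baseline: dict[str, set[str]] = defaultdict(set)

def check_access(user: str, resource: str) -> str:
    """Return the action a zero-trust gateway might take for this request."""
    if resource in access_baseline[user]:
        return "allow"  # matches the user's normal behavior
    # Unusual access: require phishing-resistant MFA before allowing,
    # then record the resource as part of the now-verified baseline.
    access_baseline[user].add(resource)
    return "step-up-mfa"

print(check_access("alice", "payroll-db"))  # step-up-mfa (first access)
print(check_access("alice", "payroll-db"))  # allow (now in baseline)
```

A real system would score many more signals (time of day, device posture, location) rather than a simple set lookup, but the shape is the same: deviation from baseline raises the authentication bar instead of blocking outright.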

3. AI has not changed the endpoints you’re dealing with

There are still inputs, outputs, and algorithms; many of the concepts are the same, and so is testing. While AI has introduced new challenges, like understanding how LLMs work and how to secure them, these fundamentals haven’t changed.

4. AI has also not changed the actual types of attacks, just the sophistication and volume

In the past, you could rely on spelling mistakes in phishing emails as a sign of a potential scam. AI has removed this historic tell, but the attack itself remains the same: an email trying to get you to click on something, or a deepfake voicemail or video trying to convince a human to do something that harms security.

Likewise, AI-powered malware might hide in memory or use polymorphic code, but these are all things that security experts have seen before.

Even though it’s easier for adversaries to conduct attacks at scale, the actual methods haven’t changed, which means many existing defense strategies remain valid; they just have to cope with the increased volume and sophistication.

5. AI has lowered the barrier to entry

Back in 2023, I talked about how we’re going to have attackers at every level, from script kiddies to nation states, using AI to perfect and improve their attacks. In 2023, only 21% of hackers believed AI technologies would enhance the value of hacking. One year later, that figure had risen to 71%, and 77% reported using generative AI solutions.

6. For companies that have adopted AI, it’s changed the attack footprint

In Pluralsight’s 2025 AI Skills Report, 86% of organizations said they were either formally deploying AI-related tech and tools or planning to. Naturally, adopting AI increases your attack surface: vulnerabilities in AI models, data poisoning, model inversion attacks, increased complexity, and so on.

7. AI has changed some of the fundamentals you need to know

I’ve talked about this recently in my article “Does everyone in cybersecurity need to be an AI expert?”, but one way AI has changed things is that baseline knowledge of AI is now a must.

You don’t need to be an expert (unless you want to be), but you can’t really do your job as a cybersecurity professional without understanding the basics of AI and how it has changed the landscape, for better or worse. And you can’t talk to your colleagues about the risks and controls needed in an AI world if you don’t understand how they, and the organization, use AI.

Conclusion: The cybersecurity playbook isn’t dead, just tweaked

The first step to adapting to all the changes that AI has brought is to make sure you have an understanding of what you’re working with. From there, you can work it into your everyday role, adapt accordingly to the present, and keep on top of any future changes that occur.

If you’re interested in learning more about the basics of AI, check out Pluralsight’s Artificial Intelligence: Foundations or Generative AI for Security Professionals learning paths.

John Elliott


John Elliott is a respected cybersecurity, payments, risk, and privacy specialist. He helps organizations balance risk and regulation with business needs. He was a member of the technical working groups of the PCI Security Standards Council and actively contributed to the development of many PCI standards, including PCI DSS. John is particularly interested in how organizations and regulators assess trust in the cybersecurity and privacy posture between relying parties. A passionate and innovative communicator, he frequently presents at conferences, online, and in boardrooms.
