AI (Artificial Intelligence) systems and technologies have been all over our industry for the past year or so, ever since OpenAI released the initial public version of ChatGPT in late 2022. There is plenty of hype around the possibilities, with both excitement and skepticism depending on who is talking about the tech. However, there do seem to be some places where the technology is working well, and security is one of them.
There is an article describing how Microsoft is using AI to help spot ransomware, which seemed to run rampant a few years ago. It's still around, though fewer exploits seem to be publicized. That might be because systems are better protected, because there are fewer attacks (unlikely), or because more organizations are getting better at covering up their incidents. They might be better prepared to restore from backups, or quicker to pay a ransom.
In any case, Microsoft is exploring machine learning (ML, a subset of AI) to detect patterns and behaviors that can indicate a ransomware campaign is starting on a system. Combing through activity logs for unusual behavior is something ML might do much better, or at least faster, than humans.
I certainly know that if I were writing queries to examine my own activity on systems, deciding whether this week's activity is "regular" and matches last week's patterns would be hard. Rules that demand exact matches of activity patterns tend to be too tightly written and generate lots of false positives; if we loosen the parameters too much, we miss potential attacks. What's needed is a fuzzy view of the pattern, something ML is good at. After all, we need to look at all activity from all users and determine whether Steve's activity this week is different from last week's, and at the same time whether Grant's activity is unusual enough to be a sign that his account is compromised.
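The per-user comparison above can be sketched in a few lines. This is only an illustration of the idea, not anything Microsoft actually runs: the user names, weekly login counts, and threshold are all hypothetical, and a simple z-score stands in for a real ML model.

```python
# Flag a user's weekly activity as anomalous only when it deviates from
# that user's own baseline by more than a tunable threshold (z-score).
# All data below is hypothetical, for illustration only.
from statistics import mean, stdev

def is_anomalous(history, this_week, threshold=3.0):
    """Return True if this week's event count is a statistical outlier
    relative to the user's own past weekly counts."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:                  # perfectly regular history:
        return this_week != mu      # any change at all is unusual
    return abs(this_week - mu) / sigma > threshold

# Hypothetical weekly login counts per user.
baselines = {
    "steve": [40, 42, 38, 41, 39],  # steady activity
    "grant": [12, 10, 11, 13, 12],  # steady, lower volume
}
this_week = {"steve": 43, "grant": 95}

for user, history in baselines.items():
    flagged = is_anomalous(history, this_week[user])
    print(user, "anomalous" if flagged else "normal")
# steve's small bump stays within his normal variation; grant's spike
# is far outside his baseline and gets flagged.
```

The `threshold` parameter is exactly the tight-versus-loose tradeoff: lower it and normal week-to-week variation starts triggering false positives; raise it and a careful attacker slips under the line.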
Some humans are very good at spotting patterns in activity, but only at a limited scale. We get tired, our minds wander, and we can't focus solely on scanning log files for patterns. We get bored and distracted, and we start to make mistakes. AIs don't get tired, and while they might miss some anomalous activity and will certainly report plenty of false positives, humans can focus on that subset of reports and partner with AIs to do a better job of securing our systems.
I lean toward the idea that AI technology will help us better spot malicious activity in the tremendous amount of data we capture about our networked systems when humans attempt to hack us. What I'm not sure about is how well criminal actors will use AI tech to further disguise their activity. I can certainly see a future where AI bots battle each other at blinding speed while humans watch and hope the defenders manage to outwit their attacking AI opponents.