AI’s Great, but It Still Takes Humans to Enforce Cybersecurity

When it comes to protecting computers and information systems from cyberattack, artificial intelligence and machine learning can help, but they’re no cure-all for a growing problem.

Notwithstanding the current excitement over AI and its increasing ability to best humans on numerous fronts, it’s no magic bullet for shoring up cybersecurity, says Randy Watkins, chief technology officer with CRITICALSTART.

AI excels at managing massive amounts of data, including alerts about possible security breaches. The problem lies in how it interprets that information.

Alerts arrive in a steady stream and queue up in the order received. From there, each must be prioritized and assessed for the level of threat it represents. Human analysts, with deep knowledge of and experience with the business, are good at placing each alert in its proper context. Machines, not so much. An AI-driven system can detect anomalous user activity, but it’s less effective at determining whether the event involves malicious intent.
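
To make the triage step concrete, here’s a rough sketch of a priority queue, not any vendor’s actual pipeline; the severities, sources, and field names are invented for illustration:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical alert record: only `priority` takes part in ordering.
@dataclass(order=True)
class Alert:
    priority: int                       # lower number = more urgent
    source: str = field(compare=False)
    detail: str = field(compare=False)

queue: list[Alert] = []
heapq.heappush(queue, Alert(3, "endpoint-42", "odd login time"))
heapq.heappush(queue, Alert(1, "dc-01", "possible privilege escalation"))
heapq.heappush(queue, Alert(2, "mail-gw", "suspicious attachment"))

# Work the most urgent alert first, not the oldest.
while queue:
    alert = heapq.heappop(queue)
    print(f"[P{alert.priority}] {alert.source}: {alert.detail}")
```

Ranking by severity rather than arrival order is the easy part; assigning the right severity in the first place is where business context, and hence human judgment, comes in.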

“I am not a naysayer of everything AI,” Watkins says, “but AI and machine learning don’t have the capability to apply an abundance of reason to what they’re doing.”

Machines aren’t especially good at minimizing false positives. Take Microsoft’s PowerShell, a popular framework for task automation. A machine can’t accurately determine whether a given user of that tool should be executing a particular command at a particular time, so the anomaly it flags may or may not be the result of a malicious attack.
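
A minimal sketch, assuming a naive rule-based detector, shows why false positives pile up. The event fields and the business-hours window here are hypothetical:

```python
from datetime import datetime

# Naive, illustrative rule: flag any PowerShell run outside business hours.
def is_anomalous(event: dict) -> bool:
    if event["process"].lower() != "powershell.exe":
        return False
    hour = event["timestamp"].hour
    return hour < 8 or hour >= 18    # outside an assumed 8:00-18:00 window

# A sysadmin running a scheduled 2 a.m. maintenance script trips the rule
# just as surely as an attacker would: a classic false positive.
event = {"process": "powershell.exe",
         "user": "sysadmin",
         "timestamp": datetime(2020, 3, 30, 2, 15)}
print(is_anomalous(event))   # True, even though the intent is benign
```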

The term “machine learning” implies that the system gets better with experience, but Watkins says that ability is limited. Training the algorithm to respond in the proper manner requires feeding in large numbers of previous examples, both good and bad. And it still doesn’t solve the problem of false negatives — actual attacks that the system misses. “You have to be able to strip back the outliers that are going to skew your data,” Watkins says.
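
Stripping outliers before training can be sketched in a few lines. The approach below, one common choice used purely for illustration, scores values against the median absolute deviation so a single extreme reading can’t inflate the baseline:

```python
import statistics

# Illustrative pre-training filter: drop values whose modified z-score,
# based on the median absolute deviation (MAD), exceeds a cutoff.
def strip_outliers(values: list[float], max_score: float = 3.5) -> list[float]:
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return values                  # no spread to measure against
    # 0.6745 rescales the MAD so the score is comparable to a z-score
    return [v for v in values if abs(0.6745 * (v - med) / mad) <= max_score]

logons_per_hour = [4, 5, 3, 6, 4, 5, 250, 4]    # 250 would skew the baseline
print(strip_outliers(logons_per_hour))           # [4, 5, 3, 6, 4, 5, 4]
```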

Figuring out whether an event is malicious doesn’t always come down to a yes-or-no answer. For one thing, companies must decide how sensitive they want the system to be. Should it raise the alarm for 100% of seemingly anomalous events? How about 80%? Make it too sensitive, and you’re inundated with alerts and potential system shutdowns. Make it too lax, and breaches are likely to slip by undetected.
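
A toy example makes the sensitivity trade-off concrete. Suppose each event carries an anomaly score between 0 and 1, and the alert threshold is the knob the company turns; the scores are invented:

```python
# Each event carries a made-up anomaly score in [0, 1]; the threshold
# decides how many of them become alerts.
scores = [0.95, 0.80, 0.62, 0.40, 0.15, 0.91, 0.55, 0.30]

for threshold in (0.2, 0.5, 0.8):
    alerts = [s for s in scores if s >= threshold]
    print(f"threshold={threshold}: {len(alerts)} of {len(scores)} events alerted")

# threshold=0.2: 7 of 8 events alerted  <- analysts drown in noise
# threshold=0.5: 5 of 8 events alerted
# threshold=0.8: 3 of 8 events alerted  <- quieter, but real attacks may
#                                          hide among the suppressed events
```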

“When you introduce more variables, you require additional data sets, more context about the subject and the behavior [of the system],” Watkins notes. “Once you start to introduce those questions, the machine falls apart.”

Effective detection of cyberattacks depends on cumulative risk scoring, something that humans do well. “Every time we look at an event, we’re deciding whether it’s suspicious,” Watkins says. “But you can also apply reason and previous knowledge about security that algorithms don’t have.

“A machine can crawl through tremendous amounts of data quickly,” he continues. “But give it an abstract concept like least privilege and apply it to the alert set — is it going to recognize a privilege escalation? There’s a lot of benign activity that looks malicious.”
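
Cumulative risk scoring, in sketch form, looks something like the following: individually minor events accumulate toward a score that warrants human review. The event types, weights, and threshold are hypothetical:

```python
# Invented event weights and review threshold, purely for illustration.
EVENT_WEIGHTS = {
    "off_hours_login": 10,
    "powershell_exec": 15,
    "privilege_escalation": 40,
}
REVIEW_THRESHOLD = 60

session = ["off_hours_login", "powershell_exec", "privilege_escalation"]

risk = 0
for event in session:
    risk += EVENT_WEIGHTS[event]
    print(f"{event}: cumulative risk = {risk}")

# No single event clears the bar, but together they reach 65.
if risk >= REVIEW_THRESHOLD:
    print("Escalate to a human analyst")
```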

There’s no doubt that machine learning will evolve, even as cyber thieves come up with new ways of avoiding detection. Microsoft has made strides toward improving the sophistication of automated detection systems, as has Palo Alto Networks, a global leader in cybersecurity. “But at the end of the day,” Watkins says, “you still need a human to say, ‘Yes, knock this domain controller offline.’” After all, companies strive constantly to minimize the cost of system downtime caused by erroneous alerts.

That said, there aren’t enough human experts to fill the need for cybersecurity across all sectors. “There’s definitely a lack of talent in the industry,” says Watkins. Hence the turn toward outside support, in the form of managed detection and response (MDR).

The talent shortage isn’t new. “It has existed since security has existed,” says Watkins. Only in the last 10 years have companies and universities begun to awaken to the need for better training and education of future cybersecurity experts.

Both humans and machines have a ways to go if they’re to collaborate in securing vital systems against the ever-growing threat of cyberattack. “We started at zero when we needed to be at 60,” Watkins says. “Now we need to be at 90, and we’re at 60.”

Featured in SupplyChainBrain | March 30, 2020
