In cybersecurity, humans will always have a role to play. Security incidents involve critical decisions that require human analysis. CRITICALSTART’s Jordan Mauriello and Michael Balboni, President of Redland Strategies and former Senator, Assemblyman, and advisor to Homeland Security, have some thoughts on the role of humans and where machines fit into cybersecurity.
—
Hey guys, Jordan Mauriello with CRITICALSTART here, Senior Vice President of Managed Services. Today I have with me Michael Balboni, President of Redland Strategies, former Senator, Assemblyman, advisor to Homeland Security. Honored to have him here with us today. We’ve been doing some awesome discussions about things that we’re doing at CRITICALSTART and working with Redland Strategies.
Today we wanted to take an opportunity just to talk to Michael about some general cybersecurity issues. He’s a major influencer in our community. I know many of you already know who he is, and he’s had a major impact even on some of the legislation that we’ve seen around our industry as well. We want to take the time to get some thoughts from him on the direction the industry is going and the impact that some of the changes we see in cyber in general are having on national defense, the role of the Senate and Congress, and where that’s going from a legislative perspective.
We’re going to open up and have a nice, fun conversation here about some of these issues. Thank you so much for being with us, Michael.
Thanks for having me Jordan, and thanks for your service to the country in the military.
Thank you very much, sir. I appreciate your support.
—
You definitely see this massive difference in organizations and capability, and there’s no standard, set level of maturity. We have some customers that we walk in and we can’t believe how mature they are. Great leadership, and they really understand the problems, they know the industry. Generally, what we see there are people who have the right experience, who come from the backgrounds we’d expect, with the technical understanding and knowledge.
A lot of times they’re from government cybersecurity roles, and they have the understanding of how to identify a threat and begin to protect against it. But we don’t have a standard that we’re committing to either, even though we have some great ones. The NIST Cybersecurity Framework is fantastic, but how many organizations are actually committing to it and doing that today? Very, very few.
It’s amazing how many corporations I’ve seen, from a $25 million to a $100 million capitalization, that don’t even have a Chief Information Security Officer. It’s their IT guy and their physical security guy, and never the twain shall meet. It’s still that divide. Obviously, one of the key requirements under NIST is to have a Chief Information Security Officer: one person who’s responsible for not only setting up the different security architectures, but then monitoring the network, devising a response protocol, and being able to call out to different vendors so that you can bring people in and say, “First of all, let’s do a penetration test on my network.” Really, really crucial. Let’s have a vulnerability assessment, periodically. Let’s have training of staff. We always say that defense should be totally, completely automated. Not always. There is a human interface. It’s very important.
It depends upon where in the kill chain — that is, the Lockheed Martin-developed set of steps that an attacker has to take to weaponize something and insert it into a network. Where along the kill chain can a human interface come in, recognize the threat, and stop it? We’ve kind of lost sight of that. We want to say, as they say, it’s the Terminator kind of thing, machine versus machine, and there should be no humans involved. Well, no. That’s not exactly correct. Humans have a role to play. It’s just perhaps further down the line: when a threat has been identified and there is a resolution to that threat, then the human can play a role in that.
Yeah, I completely agree. I think the importance of artificial intelligence and machine learning has been massively overplayed in our space right now. They’ve become buzzwords, and everybody wants to throw them into their technology and say they have ML or AI. The reality of it is that it’s still a very limited technology set. You cannot do causation with AI or ML today.
Explain that to me. What do you mean by causation?
AI and ML cannot answer why something happened for you. When you get to what should be the human interface point in security, maybe you’re using some sort of analytic engine to distill down data. You get an incident that requires analysis and you want to ask, “Why did this happen? What’s the root cause of this incident, and can I prove it’s known good or known bad?” That requires a human. It’s a critical decision-making factor that requires human analytics, and the machine can’t do that for us yet. Nobody in the world can do that with AI yet.
Now that’s not to say it won’t. In fact, one of the fathers of modern machine learning and AI, Judea Pearl, recently wrote a great book that I highly recommend, called “The Book of Why.” He’s talking specifically about this problem: we can’t do causation yet with AI and ML. We still need to understand that it’s use case-specific, that it applies to certain things, that it can do detection for us, that it can give us maybe some visibility mechanisms we didn’t have before. But at the end of it, when you’re answering, “Why did that happen, and what do I need to do to respond?” — that’s a human, and it still needs to be a human.