2 minute read time

Humans and Machines Podcast: Insider Threat AI, in the Real World, featuring Jim Fitzsimmons

by   in Cybersecurity

Despite all the buzz around Artificial Intelligence (AI) and Machine Learning (ML), neither is a silver bullet. As with all tools, AI has areas where it struggles, but also use cases that are uniquely amenable to AI treatment. One such area is insider threat, where the bad actor is already inside your network: an authorized user, with authorized access, running authorized programs, yet still creating risk and damage for your organization. 

Insider threat is amenable to AI, but even here, AI alone is not a complete solution. I was excited to talk with Jim Fitzsimmons from Control Risks about insider threat AI in the real world in the latest episode of our Humans and Machines Podcast. Jim has spent over 25 years implementing security systems all over the world, and so was uniquely qualified to chat with me about what it really takes – above and beyond cool technology – to operationalize an insider threat AI system. 

Jim describes the unique characteristics of insider threat on the episode: “The insider has knowledge of what the information is, where it is, who might be interested in it, its value, and who has access to it.” These characteristics, along with the massive amount of data generated around insider movement and behaviors, make it the perfect environment for AI and anomaly detection. 

From a risk management perspective, the anomaly detection approach makes a lot of sense. Jim, however, makes the point that collecting that data, and figuring out what 'normal' looks like so that 'abnormal' can be detected, requires tooling like User and Entity Behavior Analytics (UEBA) to be automated and effective at scale. 
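To make the idea concrete, here is a minimal sketch of the baseline-then-deviate logic that underlies UEBA-style anomaly detection. This is an illustrative toy, not how any real UEBA product works: production systems model many behavioral features per user and entity, not a single count, and the function names and data here are hypothetical.

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Build a per-user baseline (mean, std dev) from historical daily event counts."""
    return mean(history), stdev(history)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag activity deviating more than `threshold` standard deviations from baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical data: files accessed per day by one user over two weeks.
history = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13, 12, 14, 16, 13]
baseline = fit_baseline(history)

print(is_anomalous(14, baseline))   # a typical day -> not flagged
print(is_anomalous(240, baseline))  # sudden mass file access -> flagged
```

Even this toy illustrates Jim's point: the math only works once you have collected enough history to define 'normal' for each user, which is exactly the data-gathering problem UEBA tooling automates.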

However, UEBA is only part of the story, and it is only one component of an insider threat program. AI can raise the red flag, but that is only the beginning. Jim and I talk about what happens after the AI detects a potential bad actor, when the investigation begins. Jim describes a structured approach for the investigation team, where we ask questions like:

  • Do we know what’s important?
  • Do we know what the risk is?
  • Do we know what systems may be impacted?
  • What are we worried about?
  • What are the conditions that would trigger an investigation? 

“Investigatory skills are very different than technology skills,” points out Jim. He advocates a very structured, methodical, and risk-based approach on the investigative side of the insider threat process. We even touch on the ethical, regulatory, and compliance aspects of insider threat and technology regulation. 

It was great to chat with Jim, and a reminder that, as big a fan as I am of math and AI, it is never just math and AI in the real world. It's people, process, and technology, ultimately all working together, that make any program successful. 

Links and Show Notes 


Join our Community | What is Artificial Intelligence? | What is Machine Learning? | What is an Insider Threat?