Humans and Machines Podcast: Top 3 Cybersecurity AI Areas and Challenges with Dr. Nur Zincir-Heywood

in CyberRes

In 2022’s first episode of Humans and Machines, I had the pleasure of reviewing the state of cybersecurity AI research with Dr. Nur Zincir-Heywood from Dalhousie University, Canada. I could think of no one better than Nur to help us list the top three highlights and challenges from 2021: Nur is a leading cybersecurity AI researcher. She has published over 200 fully reviewed papers and has received multiple best paper awards. Nur also serves as an Associate Editor of the IEEE Transactions on Network and Service Management and the Wiley International Journal of Network Management, and is even a tech columnist for the CBC Information Morning radio show!

Top 3 Exciting Trends

Nur listed the following three most exciting trends from 2021 in AI research for cyber: 

  1. Increasing appetite to explore AI for cyber;
  2. More opportunities for AI to make a difference; and
  3. Willingness from the AI research community to take on cybersecurity. 

The increasing appetite to explore AI approaches such as machine learning in cybersecurity wasn’t always the case. The typical CISO is skeptical by nature and hesitant to have data analyzed by a black box or outside of their network – and rightly so! However, I have seen a transformation across the industry, aided by early and growing successful detections using machine learning methods, often in air-gapped environments that do not require data transmission outside the network. Now, AI-based technologies appear as budget line items for the SOC! 

At the same time, the rapid pace of innovation in AI research is creating more opportunities for AI to make a difference. There has been a true renaissance in AI research in the past decade, particularly in high volume and messy data, which gives academia a powerful toolkit to make a real-world difference. 

This ability to make a real difference is behind the willingness of more and more university research labs to take on cybersecurity. Nur’s lab at Dalhousie University was prescient and has been researching cybersecurity AI for over twenty years! But now, I am seeing more cybersecurity AI labs and research and even courses at universities and colleges worldwide. 

Together, these three trends are creating a perfect storm for cybersecurity AI research and a ton of opportunity for the community. 

Top 3 Challenges

Nur lists the following three challenges for the cybersecurity AI research community: 

  1. The data is messy and problematic;
  2. The tools are inaccessible; and
  3. The results are difficult to explain. 

Nur’s #1 challenge is (sadly) familiar to any data scientist: obtaining real-world cybersecurity data that is clean and usable is typically the tallest pole in the tent for any AI initiative. Often ground truth is missing, datasets are full of noise, and there are seldom any labels. Ground truth, low noise, and good labels are all technical requirements for good machine learning, so missing any of these requires creative workarounds to pull off a successful AI project. 
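One common workaround for the missing-labels problem is to fall back on unsupervised methods. As a hypothetical sketch (the episode does not prescribe any specific technique), an anomaly detector such as scikit-learn’s IsolationForest can flag unusual records without any ground-truth labels at all; the synthetic "telemetry" below is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic, unlabeled "network telemetry": mostly typical traffic,
# plus a few injected outliers -- no ground-truth labels required.
rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 4))
X = np.vstack([normal, outliers])

# IsolationForest learns what "typical" looks like and flags the rest.
model = IsolationForest(contamination=0.02, random_state=0)
preds = model.fit_predict(X)  # +1 = inlier, -1 = flagged anomaly

n_flagged = int((preds == -1).sum())
print(n_flagged)
```

The trade-off, of course, is that an unsupervised detector can only say "this looks unusual," not "this is an attack" — which feeds directly into the explainability challenge below.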

Nur and I also discussed the challenges of bias, a particular type of noise that can creep into a data set, and the associated ethical challenges with bias. Dealing with biased data sets is currently a considerable research effort within the AI community. 

Nur states that we have all these powerful tools, but that doesn’t mean that anyone can use them. I completely agree with this concern! The AI tools often take us into what Drew Conway describes as the ‘Danger Zone’ in his Venn Diagram of Data Science. In this Danger Zone, we have someone who combines cybersecurity expertise (‘Substantive Expertise’) with strong development ability (‘Hacking Skills’) but lacks ‘Math & Statistics Knowledge.’ This is dangerous in the context of cybersecurity, mainly because you end up with an AI system that might look good in a lab environment but generates noise and false positives when deployed in the field.

 Drew Conway’s ‘Danger Zone’ in his Venn Diagram of Data Science

Of course, the third challenge – the explainability of AI model output – is a significant factor in this. Many of the best AI models require very specialized machine learning skills to interpret. Without explainable AI, it isn’t easy to trust the underlying system, never mind act on its output. There has been good progress in the field of explainable AI, but it remains an open challenge within the research community. 
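To make the idea concrete, here is a minimal sketch of one widely used explainability technique, permutation importance, which was not discussed in the episode and is shown purely as an illustration: shuffle each input feature in turn and measure how much the model’s accuracy drops, revealing which features the model actually relies on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only feature 0 carries signal; features 1-3 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy:
# a large drop means the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top_feature = int(np.argmax(result.importances_mean))
print(top_feature)
```

Even a simple ranking like this gives an analyst a reason to trust (or question) an alert, rather than taking the model’s word for it.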

Looking Ahead

Cybersecurity AI research is, in my opinion, one of the most exciting areas of development today. The technology is advancing at a tremendous pace, and that’s fantastic because, with attacks also growing at an unprecedented rate, the stakes are higher than ever before. After reviewing the past year with Nur, I remain as hopeful and excited as I’ve ever been for AI progress to help address some of the most critical problems in the world. 

Happy New Year! 
