13 min read time

Insider Threat: A Problem too Advanced for Machines Alone


When you think of cookies, those warm, gooey, freshly baked sugar delivery mechanisms, the next thing that goes through your mind is probably a glass of milk, not corporate espionage and data exfiltration. But for Crumbl Cookies, it might be.

Known for their soft pink branding and the weekly availability of just six of their 200+ exotic flavors (such as Eggo waffle, key lime pie, pumpkin roll, and this author's favorite, Lucky Charms), Crumbl Cookies has grown to over 500 locations across all 50 US states in just five years, according to their CEO's tweet. A massive feat for a company that almost exclusively sells giant, calorie-dense cookies.

With explosive growth, it was only a matter of time until a competitor showed up. Enter Dirty Dough. As of this writing, Dirty Dough has five locations, one in Arizona and four in Utah. Literally 1/100th the size of Crumbl, yet with on-point marketing and potentially more nefarious tactics, it has shown up ready to play.

After an initial lawsuit filed by Crumbl claiming trademark infringement and a counter marketing campaign by Dirty Dough, the CEO of Crumbl is now claiming that Dirty Dough stole trade secrets from the company's internal database. Jason McGowan, Co-Founder and CEO of Crumbl Cookies, shared the following on LinkedIn on August 29th:


“Dirty Dough has stolen trade secrets from Crumbl’s internal database. An ex-employee has turned over at least 643.7MB of information that Dirty Dough had in their possession:

  • 66 Crumbl recipes
  • Building schematics
  • Processes
  • Store-level statistics
  • Cookie calendar
  • Training videos
  • And other proprietary information

We have confirmed through voicemails and other proof that Dirty Dough planned to leverage these materials to develop their copycat concept.”

We may be tempted to laugh at the admittedly absurd-sounding complaint that "Dirty Dough has stolen 66 cookie recipes." With a name like Dirty Dough, you almost expect it. But picture a similar situation at any other company: proprietary source code copied, client or financial information sent to a competitor, designs and prototypes whisked away in the night.

Risks like this are known as insider threats, and they are becoming all too common for companies of every shape and size.

The Growing Threat and Cost of Insiders

Insider threats are individuals who have access to your digital or physical organization and are able to cause harm with that access. Employees, contractors, janitors, or an outsider who has guessed the company Dropbox password are all potential insider threats. It is generally accepted that organizations deal with three types of insider threat, each with its own challenges and average costs. The three types of insider threats are:

  • Malicious Insiders
  • Careless (Negligent) Insiders
  • Mole or Credential Thief

The malicious insider is the type of person we typically associate with insider threats. They are often an employee who has something against the company and wants to harm their employer or profit personally. Malicious insiders may offer to sell data to a competitor, provide someone access to the system, or simply destroy records before they quit. It is the intent that matters: a malicious insider is intent on harming the company, and because of that, they will likely try to hide their true intent. According to the 2022 Ponemon Cost of Insider Threats Global Report, malicious insiders (criminal insiders in their report) account for 26% of insider threat attacks and cost an average of $648k to mitigate.

As the name implies, careless insiders are employees with access to the company who don't realize that their actions are putting the organization at risk. Using weak passwords, opening suspicious email attachments, and falling for phishing campaigns are common goofs of a careless insider, making them prime targets for outside threat actors. If you have ever watched professional social engineers at work, you will see just how easily a careless insider can be manipulated into sharing privileged information. Careless insiders are the most common type of insider threat, coming in at 56% of all insider attacks according to the Ponemon report, with an average cost of $484k per event.

Last of the insider threats is something we like to call a mole. These are external threat actors who have gained access to your system. They may have manipulated a careless insider into giving up their password or paid a malicious insider to grant them access. It's also possible the mole has found vulnerabilities in your network, accessed the system, and established a foothold where they can study your network, collect data, and exfiltrate information, hopefully without being detected. MITRE ATT&CK framework, anyone? These attacks pose a greater risk to the company than either of the previous types. While the least common, at around 18% of insider attacks, each mole or credential theft attack costs companies an average of $804k to resolve.

While we hope that companies rarely have to deal with insider threats, research has shown that 67% of companies deal with more than 20 insider threat incidents a year, costing an average of $15.4M. On top of that, the average time to contain an incident is around 84 days. If your company hasn't had any insider threats this year, count yourself lucky, or maybe see if you are missing something.

How Do We Stop Insider Threats?

It is important to pause here and remember that stopping insider threats is not an easy task. For one, how does a SOC determine if someone is an insider threat or a normal employee? Activities that may be normal under some circumstances could be the start of an insider threat when initiated by a different user. Take, for example, a user who moves 422GB of data to an external hard drive. Are they stealing copious amounts of company data or backing up 4K video files and renders? When a user accesses the company accounting software, are they running quarterly reports or taking screenshots to send to competitors?

Such is the nature of insider threats. While establishing a robust set of rules is a great start, chances are something will be missed, or your threat hunters will be swamped with false alerts. Neither of these is acceptable. The answer to dealing with insider threats lies in a relationship between humans and machines.

Engaging Human Resources

Insider threat prevention requires support from employees across the organization. When addressing how to detect insider threats, the Cybersecurity & Infrastructure Security Agency (CISA) of the US Government first talks about the individual's role:

“An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and co-workers. People within the organization will often understand an individual’s life events and related stressors, and may be able to put concerning behaviors into context.”

Research by psychologists has shown that malicious insiders exhibit human indicators of a heightened risk of attack. In an academic article titled "Application of the Critical-Path Method to Evaluate Insider Risks," the author lays out four indicators that, when observed over time, suggest a high risk of an insider threat. They are personal predisposition (physical or mental health issues, previous rule violations, strong ties to competitors), stressors (personal, professional, or financial), concerning behavior (financial, interpersonal, security), and problematic organizational response (inattention, no assessments or investigations, summary dismissal).

Even though clear risk indicators are known, employees hesitate to report risks that seem too personal or intimate:

“…research suggested that participants are reluctant to report these behaviors because they cannot see a link between that type of behavior and security; in other words, they are unlikely to be convinced of the security relevance of personal problems. This finding indicates that security awareness programs need to work harder to get coworkers and managers to understand the links between psychological issues and stressors and IP theft risk.”

Behavioral Risk Indicators of Malicious Insider Theft of Intellectual Property: Misreading the Writing on the Wall

With the indicators in hand, the first step to using humans in the fight against insider threats is promoting a healthy culture at the team and company level. Teams that don't work well together, have a boss who manages poorly, or are generally disconnected may be at a higher risk of insider threats. Along with standard security training (much of which will help reduce careless insiders), promote connection and relationship building between employees whenever possible.

When managers hear about a personal struggle or notice a decrease in effort, an increase in policy breaking, or other concerning trends, they should keep an extra eye out for an insider threat. Encourage team members to reach out if they see anything fishy, and have managers contact security if they are worried an insider may become a threat to the company. With the right security tools, checking whether someone is a risk should be simple.

Company culture also comes through the visibility of the security team. A malicious insider's attack may be thwarted when they realize the security team is active and watching. More on this in a bit.

For a more detailed list of ideas on how to manage insider threats, visit CISA.

Getting the Right Tool in Place

Despite what we might hope, good management, strong company culture, and practical training won’t stop every insider threat attack. Even if we were able to stop all malicious and careless attacks, threats from outsiders are still likely. This is where technology comes into play.

Traditional security tools are known for using advanced rule sets to stop security threats. Rules might include:

  • Block access to users not connected via the VPN
  • Only this user group has access to this file system
  • If a user fails their log-in attempt 20 times in 5 seconds, do this…
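That last rule could be sketched as a simple hard-coded check. The event stream, usernames, and thresholds below are hypothetical, but the shape is typical of rule-based detection:

```python
from collections import deque
from datetime import datetime, timedelta

def make_failed_login_rule(max_failures=20, window=timedelta(seconds=5)):
    """Return a rule that fires when a user exceeds max_failures
    failed logins inside a sliding time window."""
    recent = {}  # user -> deque of failed-login timestamps

    def check(user, timestamp):
        q = recent.setdefault(user, deque())
        q.append(timestamp)
        # Drop failures that have aged out of the window.
        while q and timestamp - q[0] > window:
            q.popleft()
        return len(q) >= max_failures  # True -> raise the alert

    return check

rule = make_failed_login_rule()
t0 = datetime(2022, 9, 1, 2, 0, 0)
# 20 failed attempts, 200ms apart, all inside the 5-second window:
alerts = [rule("alice", t0 + timedelta(milliseconds=200 * i)) for i in range(20)]
print(alerts[-1])  # True: the 20th failure trips the rule
```

Notice that everything here is fixed in advance: the rule only ever catches the exact pattern it was written for.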

Hard-coded rules are great when you have well-defined and unchanging security requirements but fall short against novel attack methods and vectors. An insider threat may trigger multiple rules during their attack, but due to misconfigured or outdated thresholds, analysts may see hundreds of false positives every day and have long since stopped investigating. False positives and alert fatigue are real problems in an industry already strapped for talent. Thankfully, with the rise of machine learning and artificial intelligence, there are tools to help security teams detect what rules alone can't.

User Entity Behavior Analytics (UEBA) is that answer. UEBA is a type of solution that uses machine learning to process raw data events and detect anomalies. Unlike security rules, UEBA doesn't know what it is looking for when it is given the data; it is just looking for anomalies. Even when it finds an anomaly, a UEBA tool doesn't know if the anomaly is a threat or just strange behavior. For example, if a user logs in to their work machine at 8am every morning and logs off at 5:15pm every night for a month, a login event at 2am could be considered an anomaly and would be flagged for review. There could be any number of practical reasons why this employee logged in at 2am. Regardless, it's suspicious and needs to be reviewed.
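A minimal sketch of that 2am example, assuming a made-up login history and a simple standard-deviation baseline (real UEBA products use far richer models, this just shows the "distance from your own normal" idea):

```python
import statistics

# Hypothetical history: login hours observed for one user over a month.
login_hours = [8.0, 8.1, 7.9, 8.2, 8.0, 7.8, 8.1, 8.0, 7.9, 8.2,
               8.1, 8.0, 7.9, 8.0, 8.1, 8.2, 7.8, 8.0, 8.1, 7.9]

def anomaly_score(hour, history):
    """Distance from this user's own baseline, in standard deviations.
    The score only says 'unusual', not 'malicious' -- a human still decides."""
    mean = statistics.mean(history)
    std = statistics.stdev(history) or 1.0  # guard against a zero spread
    return abs(hour - mean) / std

print(anomaly_score(8.0, login_hours))  # near zero: routine login
print(anomaly_score(2.0, login_hours))  # far from baseline: flag for review
```

The same 2am login would score near zero for a night-shift worker, because the baseline is unique to each user.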

This is where UEBA tools sometimes struggle. Since the tool doesn't know whether an event is a security threat, a human has to be brought in to review the anomaly, and too many low-level events tagged for review is no better than a system throwing false positives. To prevent this, UEBA solutions don't just focus on individual anomalies; they zoom out and look for anomalies at the user and company level. While we could explain away our 2am login as an out-of-country conference or insomnia, if the same user then runs a program that no one in the company has ever run, the probability of them being an attacker or insider threat goes way up.

Higher probability, or a higher risk score as it is often called, is how UEBA solutions enable threat hunters to detect unknown or hidden threats in their system. The higher the risk score, the more likely it is that a threat hunter will find an active attack they can stop. No more digging through false positives or hypothesis searching: when done right, UEBA tools find the anomalies, assign useful risk scores, and help threat hunters stop attacks.
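The rolled-up risk score idea can be illustrated like this. The anomaly names and weights below are invented for the example, not taken from any real product; the point is that one odd login stays low while a cluster of unrelated anomalies rises to the top of the hunter's queue:

```python
# Hypothetical anomaly weights -- illustrative only.
ANOMALY_WEIGHTS = {
    "off_hours_login": 10,
    "rare_program_run": 40,   # a program no one in the company has run
    "bulk_data_transfer": 30,
    "new_external_email": 20,
}

def user_risk_score(anomalies):
    """Roll a user's individual anomalies up into one score, capped at 100."""
    return min(100, sum(ANOMALY_WEIGHTS.get(a, 5) for a in anomalies))

# A lone 2am login ranks low; the same user then running an unseen
# program and moving data in bulk jumps toward the top of the queue.
print(user_risk_score(["off_hours_login"]))                    # 10
print(user_risk_score(["off_hours_login", "rare_program_run",
                       "bulk_data_transfer"]))                 # 80
```

Sorting users by a score like this is what lets a threat hunter start with the account most worth investigating instead of wading through every individual alert.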

UEBA tools pick up where our human defenses leave off. A mole or credential thief will almost always show up with a high risk score in a UEBA tool because they are doing things most employees will never do. External attackers will scan for vulnerabilities, run new programs on the system, or log in at times or from places no other employee has logged in from. As for the employees who do become insider threats, at some point they will do something anomalous: access a new folder, visit a strange website, or send an email to a personal account, and UEBA will be there to catch them.

How Insider Threats Can be Stopped

With some thoughts about stopping insider threats at the human level and a discussion of how UEBA solutions can detect them at the technology level, we can look at how best to stop all three types of insider threats:

  • Careless or negligent insiders:
    • Focus on effective training,
    • Improve company culture,
    • Establish basic IT rules.
  • Malicious or criminal insiders:
    • Have managers alert IT if someone seems unusually distant or disengaged,
    • Be aware of hardships that can make employees susceptible,
    • Implement a UEBA solution.
  • Moles or credential thieves:
    • Invest in a robust UEBA solution,
    • Review rules and tighten up security wherever possible.

While what I’ve said may make sense in theory, it is even more compelling when you have seen it in action, as we have.

Real World Success

We have seen firsthand how a company’s insider threat security posture changes with the right mix of technology and people. Our threat hunting team recently received an email from a CISO using our ArcSight Intelligence product. ArcSight Intelligence is a User Entity Behavior Analytics (UEBA) tool built on unsupervised machine learning and designed to establish a “unique normal” for every entity (user, machine, etc.) in your organization. By understanding what normal looks like for each entity, Intelligence can detect anomalous behavior and send high quality leads to threat hunters for follow up.

The customer had recently purchased Intelligence and integrated it into the security team’s workflow. Whenever an anomaly was detected on a user’s account or machine, the security team would reach out to the individual. With quick phone calls, emails, and instant messages, the security team would let the individual know they had noticed an anomaly and ask questions to see if the event was expected.

These quick and simple conversations shifted the insider threat culture of the company. Employees realized that the security team wasn’t some group hidden away in a dark room with monitors blanketing the walls, silently watching traffic and sending out phishing tests. It became clear that security was monitoring everything and would call them out if something suspicious was going on. And when this became apparent, something even crazier happened: employees started messaging the security team.

Managers were contacting security to let them know they were onboarding a new consultant and would be sending data to an external email address. VPs, SVPs, and C-suite executives were emailing security to let them know about upcoming trips, unique working hours, or even new programs they were installing, just so security would be aware and not worry when an anomaly was detected.

Our customer was shocked to see such a massive culture change from simply pairing an advanced security tool with a human touch. While this CISO may not be able to stop every insider threat, the time to detect and time to resolve have been greatly reduced, and the overall security posture of the company has been heightened.

Join Us in Highlighting National Insider Threat Awareness Month

September is National Insider Threat Awareness Month, sponsored by The National Counterintelligence and Security Center. To learn more about the serious risks posed by insider threats and how to recognize and report anomalous or threatening activities to enable early intervention, visit the National Insider Threat Awareness Month website.

More Resources:


Join our Community | ArcSight User Discussion Forum | ArcSight Idea Exchange | What is an Insider Threat?