
The Human Element: Mitigating Risks with Behavior Analytics and Unsupervised Machine Learning


The 2023 Verizon Data Breach Investigations Report (DBIR) was recently released, providing invaluable insights into the evolving landscape of cybersecurity. Among the many interesting findings and unique tidbits shared in the 2023 DBIR, two stood out as markers of a concerning trend. 

Acknowledgment 

Before we delve into the findings, a huge shoutout goes to the ArcSight Intelligence team. This year marks the 9th consecutive year of ArcSight Intelligence's contribution to the report. Collaboration among industry experts, organizations, and researchers to address constantly evolving threats is fundamental to improving our collective security. You can read the full report at www.verizon.com/dbir. 

“To err is human…” – Alexander Pope 

The Human Element in Breaches 

According to the DBIR, an astounding 74% of breaches involve the human element. This encompasses a wide range of factors, including social engineering attacks, human errors, and account/system misuse. 

Pretexting on the Rise 

A significant revelation from the DBIR is the alarming increase in pretexting, which accounts for 50% of all social engineering attacks, doubling from the previous year. Pretexting involves the use of fabricated identities or scenarios to deceive individuals into divulging sensitive information or granting unauthorized access. 

These statistics may not come as a surprise. Targeting the human element of an organization often yields results for persistent threat actors willing to invest the time and effort to craft convincing pretexting campaigns. However, the rise of human element attacks in the last year may be just the beginning. The data cutoff for the 2023 DBIR was October 31st of the previous year. This is significant because a few months later, ChatGPT and generative AI took off, opening the door to faster, more advanced attacks. 

In the case of pretexting, generative AI could make such attacks more effective and harder to detect. Attackers can use generative AI tools to create sophisticated, highly convincing personas, emails, or even voice recordings that mimic legitimate individuals or organizations. Generated content can be tailored to manipulate emotions, exploit psychological vulnerabilities, and increase the success rate of pretexting attempts. Because generative AI can produce deceptive content that closely resembles genuine communication, it is difficult for individuals and security systems to distinguish legitimate requests from malicious intent. It is reasonable to expect that the 2024 report will show an even greater increase in human element attacks, resulting in major breaches. 

A Wolf in Sheep’s Clothing 

Attacks centered on the human element are notoriously difficult for organizations to detect. Once a human is compromised, the attacker can masquerade as a legitimate user, hiding malicious intent within legitimate business operations: a wolf in sheep's clothing, if you will. While rules and thresholds may catch a subset of suspicious activity from a compromised account, human element attacks, often categorized as insider threats, remain in the environment undetected for an average of three months. This gives an attacker ample time to collect, stage, and exfiltrate proprietary company data. 

Leverage the Power of Machine Learning 

This is where user and entity behavior analytics (UEBA) comes in. Behavior analytics focuses on monitoring user behavior patterns, establishing baselines of normal activity, and identifying anomalies. By continuously analyzing factors such as login times, resource access patterns, and data transfers, behavior analytics can detect deviations that may indicate a threat. For example, if an employee suddenly exhibits unusual behavior, such as accessing unauthorized resources or transferring data outside of their regular working hours, the system can trigger an alert, as in the sketch below. This proactive approach enables security teams to investigate and respond promptly, reducing the risk of breaches associated with the human element. 
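To make the baseline-and-deviation idea concrete, here is a minimal sketch in Python. It is not how ArcSight Intelligence models behavior; the Event fields, the working-hours window, and the z-score threshold are illustrative assumptions only.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Event:
    user: str
    hour: int        # hour of day the activity occurred (0-23)
    bytes_out: int   # volume of data transferred

def baseline(history: list[Event]) -> tuple[float, float]:
    """Summarize a user's historical transfer volumes as mean and std dev."""
    volumes = [e.bytes_out for e in history]
    return mean(volumes), stdev(volumes)

def is_anomalous(event: Event, history: list[Event],
                 work_hours: range = range(8, 18), z_threshold: float = 3.0) -> bool:
    """Flag activity outside working hours or far above the user's
    normal transfer volume (simple z-score test against the baseline)."""
    mu, sigma = baseline(history)
    off_hours = event.hour not in work_hours
    unusual_volume = sigma > 0 and (event.bytes_out - mu) / sigma > z_threshold
    return off_hours or unusual_volume

# Example: a 2 a.m. transfer far above this user's baseline triggers an alert.
history = [Event("jdoe", h, v) for h, v in [(9, 5_000), (10, 7_500), (14, 6_200), (16, 5_800)]]
print(is_anomalous(Event("jdoe", 2, 250_000), history))  # True
```

A production system would, of course, learn these baselines per user and per peer group rather than hard-coding a single threshold, but the core idea of comparing new activity against learned norms is the same.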

A robust UEBA solution should be built on unsupervised machine learning, which provides advanced capabilities for detecting and preventing pretexting attacks. These techniques learn from vast amounts of data to uncover hidden patterns and relationships, enabling the identification of anomalies in communication patterns and request scenarios, as the example below illustrates. By continuously learning from evolving attack patterns, unsupervised machine learning enhances an organization's ability to combat pretexting and reduces the likelihood of falling victim to social engineering attacks. 
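As a generic illustration of the unsupervised approach, an anomaly detector such as scikit-learn's IsolationForest can be trained on unlabeled behavioral features and will flag days that deviate from the learned norm. The feature set, values, and contamination rate below are hypothetical and do not reflect ArcSight Intelligence's proprietary models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user, per-day feature vectors:
# [logins, distinct hosts accessed, MB transferred, off-hours events]
rng = np.random.default_rng(42)
normal_days = rng.normal(loc=[20, 5, 50, 1], scale=[5, 2, 15, 1], size=(500, 4))

# Train only on unlabeled historical activity -- no attack labels required.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal_days)

# A day with few logins but heavy data movement to many hosts at odd hours.
suspicious_day = np.array([[3, 40, 900, 12]])
print(model.predict(suspicious_day))            # [-1] -> flagged as an outlier
print(model.decision_function(suspicious_day))  # more negative = more anomalous
```

The point is not the specific algorithm: because the model learns what "normal" looks like on its own, it can surface novel behavior that no predefined rule or signature anticipated.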

Incorporating behavior analytics into a cybersecurity strategy empowers organizations to proactively address the human side of cybersecurity. By leveraging these technologies, organizations can detect and respond to anomalies, mitigate the risks associated with social engineering attacks and human errors, and strengthen their overall security posture. 

Behavior Analytics from ArcSight Intelligence 

To further explore the capabilities of behavior analytics and learn more about ArcSight Intelligence's advanced solutions, please visit https://www.microfocus.com/en-us/cyberres/secops/arcsight-intelligence. Discover how these innovative technologies can help protect your organization from evolving cyber threats and safeguard your valuable assets. 
