
How U.S. Federal Agencies Can Prepare for Data Discovery and Classification


The May 2021 Cybersecurity Executive Order requires U.S. Federal agencies to “Evaluate the types and sensitivity of agency’s unclassified data, with prioritization of the unclassified data considered by the agency to be the most sensitive and under the greatest threat, and appropriate processing and storage solutions for those data.” 

Section 3(a) of the executive order highlights how the Federal government needs to refresh its approach to cybersecurity by increasing its visibility into threats while also safeguarding privacy and civil liberties. In addition to issuing a call to action, the section charges Federal employees with seeking out security best practices that “centralize and streamline access to cybersecurity data” in order to protect that data from bad actors. 

Current state

Public sector agencies hold a vast amount of diverse data, including Controlled Unclassified Information (CUI), which makes them a prime target for cybercriminals and nation-state actors. CUI is an umbrella term that encompasses many different markings used to identify information that is not classified but should still be protected. Some examples you may be familiar with include:  

  • Personally Identifiable Information (PII) 
  • Sensitive Personally Identifiable Information (SPII) 
  • Proprietary Business Information (PBI) or currently known within EPA as Confidential Business Information (CBI) 
  • Unclassified Controlled Technical Information (UCTI) 
  • Sensitive but Unclassified (SBU) 
  • For Official Use Only (FOUO) 
  • Law Enforcement Sensitive (LES), and others. 

For most agencies, data has proliferated and network boundaries have expanded over time. Agencies have taken on greater responsibilities but may lack sufficient staff to adequately govern the data under their control. This is especially true for agencies still relying on manual, legacy methods. 

The Federal Edition of the 2021 Thales Data Threat Report showed that most agencies don’t have a solid grasp of what data they have or where it is located. In fact, just over one-fourth (28%) of Federal respondents reported full knowledge of where their data is stored, and just one-third (33%) claimed to be able to fully classify their data. With inadequate data discovery and classification methods, agencies can miss or misclassify data under their purview. This can ultimately lead to a false sense of security, non-compliance, and data breaches. 

The IBM Cost of a Data Breach Report 2020 found that breaches in the public sector cost an average of $1.6M each. Although that is not the highest figure across industries, every dollar spent is taxpayer money that could be put to better use.

Agencies need to modernize their data protection and discovery toolsets

Agencies cannot protect sensitive data if they do not know where it is. It is essential to dedicate time and resources to discover and classify data in order to apply relevant measures to protect it. 

With the executive order, Federal agencies should feel encouraged to reassess the role of agency-wide data discovery and classification tools. But why should agencies that already hold licenses covering legacy environments or specific subsets of their estate concern themselves with agency-wide tools? The reason is simple: improving overall data protection.

To address gaps in data discovery and classification, agencies will be seeking out policy-driven, automated, and repeatable solutions. These solutions will need the ability to search for data across a multitude of sources: structured, unstructured, and even cloud data warehouses (CDWs). 
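To make the idea of a policy-driven, repeatable scan concrete, here is a minimal Python sketch of pattern-based discovery over unstructured files. The `PATTERNS` grammars and labels are illustrative assumptions, not the markings or detection logic of any real product; a production tool would use vetted, far more sophisticated classifiers.

```python
import re
from pathlib import Path

# Illustrative CUI detection grammars (assumptions for this sketch only).
PATTERNS = {
    "PII/SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PII/Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_file(path: Path) -> set:
    """Return the set of labels whose patterns match the file's text."""
    text = path.read_text(errors="ignore")
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def scan(root: Path) -> dict:
    """Walk a directory tree and map each flagged file to its labels.

    Running this on a schedule, against a versioned pattern set, is what
    makes the process automated and repeatable rather than ad hoc.
    """
    return {
        str(p): labels
        for p in root.rglob("*")
        if p.is_file() and (labels := classify_file(p))
    }
```

The key design point is that the policy (the pattern set) is data, separate from the scanning engine, so it can be updated and re-run across the whole estate without code changes.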

How can CyberRes help?

CyberRes can help agencies adopt new practices, with modern technologies that supercharge data discovery and classification. These tools should fit within a more comprehensive data governance policy to ensure data moves seamlessly from discovery to classification to protection. 

We have solutions that can automate and accelerate discovery, analysis, classification, and protection of sensitive data. For unstructured data, we have Voltage File Analysis Suite (FAS). Key features and differentiators FAS provides include:

  • Connects to a wide variety of data sources and file types including cloud repositories
  • Key protective actions for data discovery use cases (delete, encrypt, declare as record, place on legal hold, migrate)
  • Sensitive data analytics, research workspaces and dashboards
  • Detailed risk assessment, data subject analysis
  • Flexible consumption models (SaaS, or private cloud hosted by a partner or the customer) 

For structured data, we have Voltage Structured Data Manager which includes these features:

  • Connect to all types of environments and manage them all in one interface
  • Fast scanning performance
  • A customizable risk index calculated at the table level
  • All grammars managed through the UI 

Learn more:

Join our Voltage Data Privacy and Protection Community. Have technical questions about Data Security and Encryption? Visit the Data Security User Discussion Forum. Keep up with the latest product announcements and Tips & Info about Data Security and Encryption. We’d love to hear your thoughts on this blog. Log in or register to comment below.

Labels:

Data Privacy and Protection