Reliance on Legacy Security Measures in the AI Era Can Be a Ticking Time Bomb

We’ve all heard the assessment that something is “good enough.” While that might sound reassuring, ask yourself how you would feel if a loved one underwent surgery and the doctor emerged from the O.R. to say the result was “good enough.” You might not feel very confident.

On the other hand, if the service technician said the pressure in your tires was “good enough,” you probably would be satisfied.

So, the definition of what’s “good enough” depends on the circumstances. Consider data security and GRC (governance, risk, and compliance). Let’s explore what qualifies as “good enough” in these critical areas, and what’s at risk if the bar is set too low.

Defining “Good Enough”

What counts as good enough depends on the unique circumstances of each enterprise. Broadly speaking, data-security and governance measures are good enough if the organization:

  • Maintains high data visibility through an accurate inventory of data (what type, where it’s stored, who has access, and how it flows)
  • Has policies, procedures, and infrastructure appropriate to the complexity and nature of the business
  • Regularly tests procedures and systems for critical functions such as backup and incident response
  • Actively manages third-party and supply-chain risks
  • Can satisfy all legal and regulatory obligations wherever the company operates
  • Calculates an acceptable level of potential cost for each category of sensitive data should a breach or leak occur (often called cyber risk quantification, or CRQ), and has senior management and board agreement on those assessments
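The CRQ idea in the last bullet can be made concrete with a simple annualized loss expectancy (ALE) calculation, a common starting point for cyber risk quantification. The sketch below is illustrative only; the data categories, per-incident costs, and incident frequencies are hypothetical, not drawn from any particular organization.

```python
# Minimal cyber risk quantification (CRQ) sketch:
# ALE = SLE (single loss expectancy) x ARO (annualized rate of occurrence).
# All categories and figures below are hypothetical.

data_categories = {
    # category: (estimated cost per incident in USD, expected incidents per year)
    "patient_records": (4_500_000, 0.10),
    "payment_data":    (2_000_000, 0.25),
    "employee_pii":    (750_000, 0.15),
}

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected annual loss for one data category."""
    return sle * aro

for category, (sle, aro) in data_categories.items():
    ale = annualized_loss_expectancy(sle, aro)
    print(f"{category}: ALE = ${ale:,.0f}")

total_ale = sum(sle * aro for sle, aro in data_categories.values())
print(f"Total expected annual loss: ${total_ale:,.0f}")
```

Figures like these give senior management and the board a concrete number to accept, reduce, or transfer, rather than an unexamined “good enough.”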

Put more simply: a 250-employee regional medical clinic with 70,000 patients and a financial-services company with 80 million customers worldwide have very different risk exposures and resulting requirements for data security and governance.

No matter its size or purpose, every business must ask whether its security and governance posture is “good enough” in the best sense of the term, or whether “good enough” is a poorly supported assessment that masks critical shortcomings.

Accurate judgment is important, because the potential costs of failure are great. The direct and collateral costs of data leakage, security breaches or compliance violations can run into the hundreds of millions of dollars. Not to mention reputational damage, loss of customers, diversion of internal resources, and the loss of strategically important intellectual property.

And the level of risk is even greater in the age of agentic AI.

AI’s Compounding Effect

Autonomous systems are scaling rapidly, often outpacing traditional security measures. Many security and GRC professionals find themselves overwhelmed by the rising wave of requests for the data AI models require. Businesses are pushing hard to move AI from POCs to production, and security and governance must keep up.

Let’s put this into context. The deputy CISO at a major regional health provider, whom I interviewed recently, observed:

“There is strong pressure to ‘innovate, innovate, innovate’, but security is still catching up in terms of controls and expertise. We’ve formed an AI committee to evaluate tools and use cases, but governance is still evolving.”

Based on numerous discussions with CISOs in a range of industries, I’m hearing a common refrain: legacy tools and procedures for data security and governance lack the agility necessary to keep pace with the speed at which agentic AI is being adopted and operates. For one, traditional approaches assume humans pose the greatest threat to the safety of an organization’s data. But that’s no longer true once agents are unleashed.

Notes the CISO of a major financial-services provider:

“AI raises the stakes because weak permissions, poor labeling, or incomplete data inventories can allow a user or attacker to retrieve information much faster than before.”

The more important question is whether your team has the tools, resources, and capacity to scale—and to innovate fast enough to keep pace with vendors whose sole focus is understanding the market, the core challenges, and where competing offerings may fall short.

This paper will look at the mindset of “good enough” in three dimensions pertaining to data security and governance:

  • How the idea takes hold
  • The risks it poses
  • How it can be countered

How “Good Enough” Takes Hold

Data-security and GRC teams want to do the best job they can, but many factors can slow progress toward the ideal posture. These include:

Spending limitations

This is probably the most frequent cause. Everybody has a budget, and it is seldom enough to cover everything a department needs. In response, department managers frequently select a less-costly option – which often turns out to be a less-effective one. Postponement is another common response, kicking the decision into the next quarter or two.

A common complaint among under-resourced managers is that other functions of the business receive a higher budget priority. As the CISO at a healthcare organization observed:

“I’ve heard one leader say cybersecurity mattered, but that the more immediate concerns were patients sitting in dirty linen and missing milk in the cafeteria. His point was that many healthcare organizations are focused on existential operational challenges, which pushes data management and protection further down the priority list.”

Overwhelming demand

Again and again in these CISO interviews, I heard that the rapid adoption of agentic AI is straining security and GRC teams with more requests for data access on shorter timelines. Teams are forced into triage mode, which means improvements in data protection, governance, and compliance are delayed by short-term firefighting.

Security and governance systems are very complex

Data security and GRC are already complex, and agentic AI is making them even more challenging.

“My biggest concern is that adversaries are moving faster and using AI too,” the CISO of a diversified manufacturer observed. “Malware signatures change quickly, lateral movements are becoming more automated, and incidents progress in real time without human intervention.”

Some organizations are still trying to figure out where all their data is, which can lead to blind spots that weaken AI results. With pressure from the business eager to get AI projects out the door, IT organizations are pushing ahead without adequate controls over who sees data, how they use it, and how it’s shared. This creates risk that threatens trust and business value.

As agentic AI expands the attack surface, many enterprises still assume the tools already in their IT environments are enough. That includes Microsoft and Azure shops using Purview for GRC and Defender for data protection, along with native tools from cloud, SaaS, and security vendors.

Another reason teams hesitate to make a change is that adding another security tool or vendor can increase complexity.

The final factor: complacency. If a security or governance disaster hasn’t occurred yet, things must be – what else – “good enough.”

The Ripple Effect of “Good Enough”

When an organization relies on “good enough” approaches and tools, the impacts can cascade.

Increased vulnerability

Bad actors look for any opening to reach protected data, and they never stop probing and scanning for holes. “Good enough” implies acceptance that everything isn’t always air-tight – a weak posture to assume. If security isn’t continually improved, what works today can quickly fall behind.

AI is moving faster than many legacy tools were built to handle, and meeting the minimum standard may no longer be enough.

The CISO of a large logistics company told me:

“The limitation is not just tool availability, but integration, tuning, and the fact that hybrid environments make it hard to get cloud, on-prem, endpoint, and data layers to work together cleanly.”

False sense of security

A common metaphor for data security is a wall. But in reality, it’s more like a chain – of data stores, transport methods, control devices, policies, and users. If every link in the chain operates at even 90% or 95% of optimal (“good enough”?), the cumulative effect can be dangerous. One weak link can compromise the whole chain.
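The chain metaphor is simple compounding arithmetic: if each link holds independently with some probability, the odds that the whole chain holds shrink quickly as links multiply. The figures below are illustrative, not measurements from any real environment.

```python
# If each link in a security chain independently holds 95% of the time,
# the probability the whole chain holds is 0.95 raised to the number of links.

def chain_reliability(per_link: float, n_links: int) -> float:
    """Probability that every link in the chain holds."""
    return per_link ** n_links

for n in (5, 10, 20):
    print(f"{n} links at 95% each -> {chain_reliability(0.95, n):.0%} overall")
# At 10 links, overall assurance has already fallen to roughly 60%.
```

In other words, a posture that looks strong link-by-link can still leave the chain as a whole far weaker than anyone assumes.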

If security/governance procedures and tools are not improved continually, they will likely fall behind the threats that keep evolving.

Poor ROI

Like anything else in IT, security and governance measures cost money. But underinvesting in those areas can be a false economy.

In many instances, remediation can be far more costly than prevention.

The costs of a data breach or compliance failure can be huge. In some cases, they have run into the hundreds of millions of dollars. The costs come from many places, including investigation and remediation, customer restitution, regulatory penalties, and increased insurance premiums. Just as important, customers and partners may question doing business with a company after a major breach.

And, if you think security and governance procedures slow things down, consider the impact when the entire business stops because of a data breach or ransomware episode.

Even without a breach or violation, organizations with inadequate protection can pay a price.

“Good enough” solutions tend to require more manual intervention, which is costlier and less reliable than automated processes.

There’s a Better Way

Moving beyond “good enough” requires an approach to data security and governance tailored to meet the challenges of agentic AI. This approach:

  • Operates and adapts at the speed of business in the AI era
  • Stops being perceived as a constraint and is recognized as a vital enabling capability and competitive asset
  • Allows the organization to move faster and employ AI effectively, while maintaining the highest level of security and compliance
  • Frees human resources to take on higher-value roles

One certainty is that governance and security measures must be part of the operational architecture of an enterprise – not a bolt-on measure.

“The old static model is not enough. Tools now have to be ‘think proof,’ support context-based decisions, and fit into environments where institutions are trying to standardize, normalize, and modernize at scale,” noted the CISO at a global financial-services firm.

Key attributes of this architecture include:

  • Embedded controls and policies – Security and governance measures are placed in the data layer, so sensitive information has value only to authorized parties
  • Persistent and consistent – The same rules and protections apply no matter where data resides, when it’s moved, who handles it, or how it’s used
  • Dynamic and context-aware – Changes in the workflow and application environment do not cause disruptions or necessitate repetitive security measures
  • Traceable, provable, and auditable – All parties gain confidence that the outputs of AI models are reliable and trustworthy

The Protegrity architecture delivers on all of these requirements. Some of the world’s most sophisticated users of data – enterprises that know “good enough” isn’t – use Protegrity solutions for data protection and governance assurance.

These companies benefit from:

  • Superior ROI on security and compliance solutions
  • Greater assurance of data security and compliance
  • Lighter burdens on security and compliance teams
  • Enhanced competitive posture and ability to seize new business opportunities

So, the next time someone gives assurances that the existing architecture and tools for data security and governance are “good enough,” politely ask: “Are you sure?”