
The AI Governance Risk Hiding in Plain Sight

By Elizabeth Suehr

  

Please enjoy this blog post authored by Elizabeth Suehr, Director of Legal Risk Systems & Strategy, Jenner & Block LLP. 

ILTA's Security and Compliance Content Team spends time talking about the risks of managing AI programs: data security, vendor dependencies, and compliance. These are the right conversations. But one risk keeps emerging that rarely makes the agenda, and it has nothing to do with the technology itself.

Your AI Strategy May Have a Blind Spot — And It’s Not the Technology

Most legal operations professionals know to guard against vendor lock-in: data portability risks, switching costs, integration dependencies. The conventional wisdom is sound: don’t put all your eggs in one basket. But most firms apply that logic only to the technology layer. There’s another dimension of lock-in playing out in how people work, and it doesn’t show up in any vendor contract.
 
It’s called cognitive lock-in: the gradual, often invisible process by which users become so habituated to a single AI system that they lose the willingness, or even the ability, to critically evaluate alternatives. It may be one of the more underappreciated governance risks in AI adoption today. 
 
The good news: a well-designed AI governance program can address both traditional vendor lock-in and its cognitive counterpart, if it’s built to look for both. And it’s exactly the kind of cross-functional challenge that the ILTA community is uniquely positioned to tackle together.

How It Happens
 
Some AI systems learn from your behavior. Over time, a system calibrates to your preferences, your tone, your framing, your workflow. The experience starts to feel intuitive, even personal — all by design.
 
The problem is that comfort becomes a gravitational pull. Users naturally drift toward the system that feels most like them, or that, as we often hear, “speaks their language.” Not necessarily because it’s the best tool for every job, but because alternatives feel foreign by comparison.
 
This is a feedback loop baked into certain AI behaviors. The system mirrors you back to you, and over time, that reinforcement quietly limits your willingness to explore alternatives.

Why This Matters for Legal Organizations
 
This isn’t hypothetical. Firms that became deeply dependent on a single generative AI platform early in their adoption journey have already felt the consequences. When those platforms updated their underlying models, outputs shifted, sometimes subtly, sometimes not. Productivity dipped during the adjustment period, not because the tool was broken, but because the people using it had calibrated their expectations to a version that no longer existed. The system changed. The cognitive reliance hadn’t.
 
In a law firm context, multiply this effect across hundreds of attorneys and staff, all habituating to a single system’s approach to legal reasoning, document drafting, and research synthesis. Now imagine that vendor raises prices, gets acquired, or simply falls behind a competitor. You’re not just switching software. You’re asking your people to recalibrate how they think with AI, and that is a far heavier lift than simply retraining on a new interface.
 
This is the AI version of concentration risk: when a firm routes all of its AI-assisted work through a single platform, it creates a single point of failure that is simultaneously technical, operational, and cognitive.

Where Governance Comes In
 
Much of the conversation around AI governance, rightly so, centers on the technology itself: model accuracy, data privacy, bias in outputs, hallucination rates. But cognitive lock-in is a reminder that some of the most significant AI risks are fundamentally human ones. Human-centric AI design asks us to keep people at the center of how we build, deploy, and evaluate AI systems. That principle cuts both ways: AI should serve human judgment, not quietly displace it. When habitual reliance on a single system erodes our willingness to question, compare, or challenge AI outputs, the tool starts shaping the thinker.
 
Cognitive lock-in is not an isolated concern. It belongs to a broader category of human-side AI risks that many firms are already encountering. Shadow AI, where individuals adopt unsanctioned tools outside of governance oversight, is one visible example. When people lack confidence in approved tools or don’t understand what’s available to them, they find workarounds. Gaps in AI literacy compound the problem: users who haven’t been equipped to critically evaluate AI outputs are more likely to default to a single familiar system and less likely to recognize when that system is underperforming. Without foundational education on what AI can and cannot do, and where its limitations lie, even well-intentioned users develop habits that create risk. These are not technology failures; they are human ones that require human-centered governance responses.
 
The NIST AI Risk Management Framework (AI RMF) provides a useful anchor. Its core functions — Govern, Map, Measure, and Manage — are designed to address AI risks holistically, and cognitive lock-in fits squarely within that structure. This is reinforced across the ISO landscape as well: ISO/IEC 42001 establishes the management system for responsible AI governance, and ISO/IEC 23894 provides practical, lifecycle-based guidance for identifying and mitigating AI-specific risks. The common thread: governance must be flexible and agile by design. A framework too tightly coupled to a single vendor’s ecosystem becomes its own form of lock-in.

A Governance Response
 
Some degree of habituation is natural, even productive. The goal is to make diversification deliberate: AI portfolio diversification should be an explicit part of any mature AI governance framework. Just as investment professionals diversify portfolios, law firms should cultivate a diversified AI ecosystem, governed by policies and practices that prevent over-reliance on any single tool. A strong governance program is what makes this intentional rather than accidental. In practice, that looks like:

  • Evaluation practices that reward cognitive flexibility. Don't just measure output quality. Measure whether teams are questioning AI framing, testing alternatives, and applying independent judgment.
  • Lock-in monitoring as a governance metric. Treat over-reliance on a single system as a risk indicator, the same way you'd monitor vendor concentration. Document it, review it, and set thresholds that trigger re-evaluation (a simple illustration of such a metric follows below).
  • Governance-driven AI portfolio strategy. Embed multi-tool diversification as an explicit principle in your AI governance program. Maintain an approved AI tool inventory and review the portfolio periodically to ensure the firm isn't drifting toward single-vendor dependency.
  • A culture of boundaryless exploration. Diversification requires more than policy — it requires a mindset shift. Cultivate a culture where experimentation with new AI tools is encouraged, not penalized. That means balancing curiosity with guardrails — giving people psychological safety to explore while governance keeps it productive. When people feel enabled to push past the familiar, cognitive lock-in loses its grip.
  • Practical AI education and usage guidance. None of this works without education. Firms should invest in clear, accessible AI literacy programs that go beyond tool-specific training. That includes publishing guides helping users understand where AI adds value and where human judgment is non-negotiable, building fluency that equips people to move between tools with confidence rather than clinging to one out of uncertainty.

The governance program is the mechanism that keeps the portfolio balanced.
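
To make lock-in monitoring concrete, here is a minimal sketch of the kind of concentration metric a governance team might track: a Herfindahl-style index over usage shares. The tool names, usage figures, and 0.6 threshold are hypothetical assumptions for illustration, not values drawn from any framework or vendor.

    # Illustrative sketch (not from the article): a concentration metric for AI tool usage.
    # Tool names, usage counts, and the 0.6 threshold are hypothetical placeholders.

    def concentration_index(usage_by_tool):
        """Sum of squared usage shares: near 0 when diversified, 1.0 when one tool dominates."""
        total = sum(usage_by_tool.values())
        if total == 0:
            return 0.0
        return sum((count / total) ** 2 for count in usage_by_tool.values())

    # Hypothetical monthly AI-assisted sessions per approved tool.
    usage = {"Tool A": 1400, "Tool B": 180, "Tool C": 95}

    score = concentration_index(usage)
    THRESHOLD = 0.6  # illustrative trigger for a governance re-evaluation

    print(f"Concentration index: {score:.2f}")
    if score >= THRESHOLD:
        print("Over-reliance threshold exceeded: schedule a portfolio review.")

A calculation like this can sit alongside the periodic review of the approved tool inventory described above, giving the governance team a number to track over time rather than an impression.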

The Real Risk Is Standing Still
 
If this is a new concept for your firm, start with a simple diagnostic. Survey a cross-section of users: How many AI tools do they use regularly? Do they feel confident using more than one? Have they compared outputs across systems recently? The answers will tell you more about your firm’s cognitive lock-in exposure than any vendor audit. Then look at the governance layer: does your AI policy explicitly address tool diversification? Does your approved tool inventory include multiple options for key workflows? If the answer to either is no, that’s your starting point.
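
As a rough illustration of how those diagnostic answers might be tallied, the sketch below flags respondents who show two or more warning signs. The questions mirror the ones above; the response data and the two-flag cutoff are assumptions made purely for illustration.

    # Illustrative sketch: scoring the cognitive lock-in diagnostic described above.
    # Response data and the two-flag cutoff are hypothetical, for illustration only.

    # Each tuple: (AI tools used regularly, confident with more than one?, compared outputs recently?)
    responses = [
        (1, False, False),
        (2, True, False),
        (1, False, False),
        (3, True, True),
    ]

    def warning_signs(tools_used, confident_multi, compared_recently):
        """Count lock-in warning signs: single-tool habit, low confidence, no recent comparison."""
        return sum([tools_used < 2, not confident_multi, not compared_recently])

    flagged = [r for r in responses if warning_signs(*r) >= 2]
    share = len(flagged) / len(responses)
    print(f"{share:.0%} of surveyed users show two or more lock-in warning signs.")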
 
The firms that will thrive in the AI era won’t be the ones that adopted the fastest or spent the most. They’ll be the ones that built the organizational muscle to adapt: the governance infrastructure to manage risk, the cultural openness to explore, and the literacy to use AI as a thinking partner rather than a crutch. The antidote to cognitive lock-in isn’t more technology. It’s more intentionality — in your governance, in your culture, and in your willingness to keep learning. Bring this conversation to your ILTA colleagues. The best governance ideas in legal technology have always come from practitioners willing to share what’s working and what isn’t, and cognitive lock-in is exactly the kind of emerging risk that benefits from the collective intelligence of this community — because AI resilience isn't built alone.

 


#ArtificialIntelligence
#InformationGovernanceorCompliance
#Security
#SecurityProfessionals
#Just-in-Time