There's a pattern I keep seeing in enterprise organizations rushing to deploy AI: executive excitement about the technology, aggressive timelines, talented teams building impressive systems — and a data foundation held together with duct tape and good intentions.

The AI model works beautifully in the demo. In production, it surfaces wrong numbers, pulls from stale data, and produces outputs that nobody trusts. Three months later, users are back in Excel doing the same thing they were doing before.

This isn't a technology failure. It's a foundation failure. And it's happening everywhere.

The Foundation Problem

Here's the uncomfortable truth most AI strategies skip over: if you didn't care enough about your data to catalog it, govern it, and clarify who owns what — you're not ready for AI. Full stop.

AI doesn't fix bad data. It amplifies it. It takes every inconsistency, every undocumented transformation, every "we'll clean that up later" decision and scales it across the organization at speed. The mess doesn't go away. It gets a megaphone.

I've worked in data architecture long enough to know what a healthy foundation looks like and what a neglected one looks like. The difference isn't always visible from the executive suite. The dashboards look fine. The pipelines run on schedule. But underneath, the people who maintain those systems know exactly where the cracks are.

They know which tables haven't been updated in months. They know which definitions are contested between departments. They know which data sources require a phone call to a specific person to interpret correctly. They know — and they've been raising the flag — but the organization was too busy shipping features to listen.

Now that same organization wants to build AI agents on top of those systems. And the people who know where the bodies are buried are being asked to make it work anyway.

Three Questions Before You Deploy AI

Before any AI initiative gets green-lit, there are three foundational questions that deserve honest answers.

First: Is your data governed? Not "do you have a data governance policy somewhere in Confluence." Is your data actively governed? Are there clear owners for critical datasets? Are definitions consistent across departments? Do you have lineage — can you trace where data comes from, how it's transformed, and who's responsible for its accuracy?

If the answer is "mostly" or "we're working on it," that's a no. And deploying AI on top of ungoverned data means your AI system will inherit every ambiguity, conflict, and gap in your data layer. Users will get different answers depending on which dataset the model pulls from. Trust erodes. Adoption dies.
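To make "actively governed" concrete: here is a minimal sketch of what a governed dataset looks like as a catalog record rather than a policy document. Everything in it (the `DatasetRecord` class, the field names, the 90-day staleness threshold) is a hypothetical illustration, not the schema of any particular catalog tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    owner: str                    # a named person or team, not "TBD"
    definition: str               # the agreed business definition
    upstream_sources: list[str]   # lineage: where this data comes from
    last_validated: date          # when someone last confirmed accuracy

def ungoverned(records: list[DatasetRecord], max_age_days: int = 90) -> list[str]:
    """Flag datasets with no clear owner or stale validation."""
    today = date.today()
    return [
        r.name
        for r in records
        if not r.owner.strip()
        or (today - r.last_validated).days > max_age_days
    ]

revenue = DatasetRecord(
    name="finance.monthly_revenue",
    owner="jane.doe@corp.example",
    definition="Recognized revenue per GAAP, monthly grain",
    upstream_sources=["erp.invoices", "crm.contracts"],
    last_validated=date(2024, 1, 15),
)
print(ungoverned([revenue]))  # flagged once validation is over 90 days old
```

If your critical datasets can't be described this plainly, that's the gap.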

Second: Do your people have AI literacy? Not "did we send a company-wide email about responsible AI use." Do the people who will interact with AI systems — building them, using their outputs, making decisions based on them — actually understand how these systems work, where they fail, and what the outputs mean?

AI literacy isn't a training checkbox. It's a capability. It means your finance team understands that the AI-generated forecast is a probability distribution, not a fact. It means your marketing team knows that the content suggestion engine has biases baked into its training data. It means your data engineers can distinguish between a model that's performing well and one that's confidently wrong.
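The forecast point has a simple concrete form. The toy sketch below (invented numbers, hypothetical revenue scenario) shows the difference between reporting a point estimate and reporting the distribution behind it:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy example: pretend the model's uncertainty gives us 10,000
# plausible revenue outcomes instead of one number.
samples = rng.normal(loc=1_200_000, scale=150_000, size=10_000)

point = samples.mean()
low, high = np.percentile(samples, [5, 95])

# "The forecast is $1.2M" hides what the distribution shows:
print(f"Point estimate: ${point:,.0f}")
print(f"90% interval:   ${low:,.0f} to ${high:,.0f}")
```

A finance team that sees only the first line will plan as if the number were a fact. A literate one asks for the second.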

Without this literacy, you get one of two failure modes: blind trust (people accept AI outputs without question and make bad decisions) or total rejection (people don't trust anything the AI produces and ignore it entirely). Neither gets you adoption.

Third: Do you understand your AI systems? All of them. Including the ones people are using without telling you.

Shadow AI is already in your organization. People are pasting sensitive data into ChatGPT. Teams are building automations with AI tools they found online. Departments are evaluating vendors with embedded AI capabilities that nobody in IT has reviewed.

You can't govern what you don't know exists. And you can't assess risk on systems you haven't inventoried. Before you build your shiny new AI platform, take inventory of what's already running. You might be surprised — and alarmed — by what you find.
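Taking that inventory doesn't require a platform; a structured list beats nothing. A rough sketch of the kind of record worth keeping per system follows — all fields and example entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    used_by: str           # team or department
    sanctioned: bool       # has IT/security ever reviewed it?
    sensitive_data: bool   # does it receive customer, financial, or HR data?

inventory = [
    AISystemRecord("ChatGPT (personal accounts)", "Marketing", False, True),
    AISystemRecord("Vendor CRM with embedded AI", "Sales", True, True),
    AISystemRecord("Homegrown summarizer script", "Support", False, False),
]

# The alarming subset: unreviewed systems touching sensitive data.
shadow_risks = [s.name for s in inventory if not s.sanctioned and s.sensitive_data]
print(shadow_risks)
```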

The Vendor Evaluation Gap

This foundation problem extends to how organizations evaluate third-party AI vendors. Most vendor evaluation processes were designed for traditional software: Does it meet our technical requirements? Does it integrate with our stack? What's the pricing?

When AI is involved, whether the vendor uses AI for its core product, embeds it as a feature, or monitors AI usage the way security platforms do, the evaluation criteria need to expand significantly.

Questions that most evaluation processes miss (a checklist sketch follows the list):

  • What data does this system capture from our users?
  • Is that data anonymized, or kept disaggregated at the individual level?
  • Is our data being used to train their models?
  • What's the data retention policy?
  • Where does the data reside?
  • Who has access?
  • What happens to our data if we terminate the contract?
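These questions translate directly into a single artifact that procurement, governance, and the AI reviewers can all fill in. A minimal sketch, with hypothetical field names and class name:

```python
from dataclasses import dataclass, fields

@dataclass
class VendorAIChecklist:
    # One field per question above; None means "not yet answered".
    data_captured: str | None = None       # what user data does it collect?
    anonymization: str | None = None       # anonymized, aggregated, or raw?
    trains_on_our_data: bool | None = None
    retention_policy: str | None = None
    data_residency: str | None = None
    access_controls: str | None = None
    offboarding_terms: str | None = None   # what happens at termination?

def unanswered(checklist: VendorAIChecklist) -> list[str]:
    """Procurement gate: list the questions nobody has answered yet."""
    return [f.name for f in fields(checklist) if getattr(checklist, f.name) is None]

print(unanswered(VendorAIChecklist(trains_on_our_data=False)))
# -> every field except trains_on_our_data
```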

These aren't edge cases. They're fundamental to understanding whether a vendor's AI capabilities align with your organization's data governance policies. The problem is that most organizations haven't connected their data governance policies to their vendor evaluation process. The policies exist in one silo, procurement lives in another, and the AI evaluation happens in a third — with nobody bridging the gaps.

The Same Principle, Applied Differently

Here's what I've come to understand: the principle behind good data governance and the principle behind good people leadership are the same.

You can't expect output if you haven't invested in the foundation.

With people: if you haven't invested in understanding who they are, what they need, and whether they're aligned — don't expect high performance. Invest in the person first. The output follows.

With data: if you haven't invested in cataloging, governing, and clarifying your data — don't expect AI to deliver value. Invest in the foundation first. The intelligence follows.

With AI adoption: if you haven't invested in literacy, governance, and understanding the human impact of AI on workflows — don't expect adoption. Invest in readiness first. The transformation follows.

Skip any of these foundations and you'll build something impressive that nobody uses, trusts, or benefits from. Sound familiar? That's the pattern.

What Actually Works

Organizations that succeed at AI adoption do something counterintuitive: they slow down before they speed up.

They audit their data landscape honestly — not the sanitized version for the board deck, but the real picture with all its gaps and inconsistencies. They invest in governance not as a compliance exercise but as a capability that makes everything downstream possible. They build AI literacy across the organization, not as a one-time training but as an ongoing practice embedded in how people work.

They treat AI readiness as a prerequisite, not a parallel workstream.

And most importantly, they connect the human side to the technical side. They understand that the people who will use AI systems, evaluate AI vendors, and make decisions based on AI outputs need to be prepared — not just technically, but in terms of trust, understanding, and capability.

This isn't the sexy part of AI strategy. Nobody gets promoted for saying "we spent six months fixing our data governance before building anything." But it's the difference between organizations that deploy AI successfully and organizations that demo AI impressively.

Fix the foundation. Then build. That's how AI adoption actually works.
