I need to talk about something that's been sitting wrong with me for a while.

There's a growing crowd of companies — mostly well-funded, mostly led by people who've never been on the wrong side of an algorithm — who've figured out that "AI ethics" and "responsible data" make excellent marketing copy.

They'll post about algorithmic bias on LinkedIn. They'll sponsor diversity initiatives. They'll use words like "empowerment" and "equity" in their pitch decks.

And behind the scenes? They're building the exact systems they claim to oppose.

Let me be clear about what I'm seeing:

Data capitalists masquerading as data activists.

Data colonizers performing as data advocates.

And honestly? It's not even clever anymore. It's just insulting.

Let's Define Our Terms

Because if we're going to call this out, we need to be precise about what we're actually seeing.

Data Capitalism

This is the extraction economy. Data as the new oil, except oil companies never pretended pumping crude was a form of social justice.

Data capitalists:

  • Treat your behavior as their asset

  • Monetize your participation without compensation

  • Control the infrastructure, then act surprised by the inequality

  • Build business models on asymmetric information and power

They know exactly what they're doing. And they're very good at it.

Data Colonization

Same playbook, new century.

Extract value from marginalized communities. Build systems trained on their data. Deploy those systems back onto those same communities — but now the profits flow elsewhere.

Take biometric databases harvested in Africa. Training data labeled by underpaid workers in the Global South. Genomic data collected from Indigenous populations "for research."

The pattern is consistent: one-way extraction, zero accountability, minimal benefit to the people who generated the data.

And when it goes wrong? When the models fail, when the predictions harm, when the surveillance kicks in?

The communities that built the dataset bear the consequences. The companies that profited walk away.

Data Weaponization

This is what happens when you pretend data is neutral.

Using data to:

  • Surveil and suppress dissent

  • Manipulate elections and public opinion

  • Automate discrimination at scale

  • Coerce behavior through social credit systems

  • Create digital prisons disguised as "personalization"

Here's the thing: you don't accidentally weaponize data. You make choices. About what to build. How to deploy it. Who to protect. Who to sacrifice.

And when companies build predictive policing tools trained on biased arrest data, when they create hiring algorithms that screen out women, when they design credit systems that redline by proxy?

They made those choices.
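
If "redlining by proxy" sounds abstract, it isn't. Here's a toy sketch in Python, with synthetic data and hypothetical feature names, not any real lender's model: the scoring function never sees the protected attribute, but a ZIP-code feature that correlates with it does the discriminating anyway.

```python
# Toy sketch of proxy redlining (synthetic data, hypothetical features).
# The model is "race-blind" on paper: it never sees the protected group.
# But zip_group correlates with group, so the disparity comes out anyway.
import random

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])
    # 90% of the time, your ZIP cluster matches your group: the proxy.
    zip_group = group if random.random() < 0.9 else random.choice(["A", "B"])
    income = random.gauss(60 if group == "A" else 45, 10)
    return {"group": group, "zip_group": zip_group, "income": income}

applicants = [make_applicant() for _ in range(10_000)]

def score(a):
    # Income, minus a penalty the model "learned" from the ZIP feature.
    # Historical redlining is baked right into that penalty.
    return a["income"] - (15 if a["zip_group"] == "B" else 0)

approved = [a for a in applicants if score(a) > 50]

for g in ("A", "B"):
    total = sum(1 for a in applicants if a["group"] == g)
    ok = sum(1 for a in approved if a["group"] == g)
    print(f"group {g}: approval rate {ok / total:.0%}")
```

Drop the protected attribute, keep the proxy, and the disparity survives intact. Which is why "we don't even collect race" is not a defense. It's the oldest trick in the book.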

The Performance of Data Activism

Now here's where it gets really cynical.

Because some of these companies? They're smart.

They know the optics are bad. They know regulators are watching. They know "move fast and break things" doesn't play well when you're breaking people.

So they rebrand.

Suddenly everyone's an "AI ethics" company. Everyone has a "responsible AI" team. Everyone's published principles about fairness and transparency and accountability.

The language of data activism — borrowed, sanitized, stripped of any actual power redistribution — becomes the perfect shield.

You want to extract value from vulnerable populations? Call it "democratizing access."

You want to avoid regulation? Frame it as "innovation for good."

You want to build surveillance infrastructure? Market it as "safety and security."

This isn't advocacy. This is PR.

Real data activism makes power uncomfortable. It demands:

  • Community control over data collection

  • Transparent accountability for algorithmic harm

  • Equitable benefit-sharing when data creates value

  • The right to refuse, to be forgotten, to demand deletion

Real data advocacy asks uncomfortable questions:

  • Who owns this data?

  • Who profits from it?

  • Who bears the risk when it fails?

  • Who gets to decide what happens next?

And most importantly: Why should we trust you?

How to Spot the Performance

Here's my heuristic for separating the real ones from the extractors:

Red Flags (You're Looking at Data Capitalism)

1. "We're using AI to solve [social problem]" but:

  • The affected community wasn't involved in design

  • The business model depends on their data

  • They can't access or benefit from the system

  • There's no mechanism for recourse when it fails

2. "We take ethics seriously" but:

  • The ethics team reports to legal or PR, not product

  • Ethical concerns can be overridden by business needs

  • There's no independent oversight or accountability

  • They've never said no to a profitable but harmful use case

3. "We're committed to fairness" but:

  • They define fairness in ways that protect business interests

  • They don't measure or report disparate impact (there's a sketch of that measurement right after this list)

  • They blame "biased data" while continuing to profit from it

  • Fairness is a feature, not a requirement

4. "We believe in transparency" but:

  • Their models are black boxes

  • Their data practices are hidden in ToS no one reads

  • They fight subpoenas and FOIA requests

  • Transparency means press releases, not documentation
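
And about that disparate impact point: the measurement these companies skip isn't exotic. One standard version is the "four-fifths rule" ratio U.S. regulators have used for decades. A minimal sketch, with hypothetical hiring numbers:

```python
# Minimal sketch of a disparate impact check (the "four-fifths rule").
# All numbers below are hypothetical, purely for illustration.
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Below 0.8 is the conventional red flag."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical funnel: 50 of 400 women advanced vs. 90 of 400 men.
print(f"{disparate_impact_ratio(50, 400, 90, 400):.2f}")  # 0.56, well below 0.8
```

If a company "committed to fairness" can't produce a number like this for its own systems, that tells you everything.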

Green Flags (Actual Data Advocacy)

1. Power redistribution:

  • Communities have decision-making authority

  • Benefit-sharing is built into the business model

  • Data sovereignty is respected

  • Exit and deletion are easy

2. Accountability with teeth:

  • Independent oversight that can halt deployment

  • Public documentation of failures and harms

  • Compensation mechanisms for those harmed

  • Willingness to shut down profitable but harmful systems

3. Structural humility:

  • Recognition of limits and risks

  • Investment in safety over speed

  • Refusal to deploy in high-stakes domains without proof of safety

  • Acknowledgment that some problems shouldn't be solved with AI

Why This Matters Right Now

Because the stakes are getting higher.

We're in a moment where every company is racing to deploy AI. To integrate it into everything. To automate every decision.

And the companies leading this charge? Many of them are the same ones who:

  • Built surveillance capitalism into the internet's infrastructure

  • Enabled election manipulation at scale

  • Created algorithmic discrimination as a service

  • Extracted billions in value from communities without consent

Now they're telling us they've learned. That they're doing it differently this time. That they're the good guys.

I don't believe them.

Not when their business models still depend on extraction. Not when their incentives still reward growth over safety. Not when their governance still concentrates power at the top.

What Actual Data Advocacy Looks Like

Let me paint a different picture.

Real data advocacy builds systems WITH communities, not ON them.

It looks like:

  • Indigenous data sovereignty initiatives where tribes control data about their lands and people

  • Community-owned data cooperatives where members share in the value created

  • Privacy-preserving technologies that protect individuals by design, not policy (one such design is sketched below)

  • Algorithmic transparency that enables challenge and recourse

  • Impact assessments that center affected communities, not shareholders
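
On "by design, not policy": the difference is whether the math itself limits what can leak, versus a promise buried in a privacy policy. A minimal sketch of one such design, the Laplace mechanism from differential privacy; the query, the epsilon, and the records are all made up for illustration.

```python
# Minimal sketch of differential privacy's Laplace mechanism.
# Hypothetical query, epsilon, and records; real deployments also need
# careful privacy-budget accounting across repeated queries.
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon=0.5):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true answer by at most 1, so Laplace(1/epsilon) noise
    # makes the released number epsilon-differentially private.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

people = [{"age": int(a)} for a in rng.integers(18, 91, size=1000)]
print(round(private_count(people, lambda p: p["age"] > 65)))
```

No one has to trust the data holder's restraint; the mechanism itself bounds what any single person's record can reveal.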

It requires:

  • Slower deployment timelines (safety > speed)

  • Smaller profit margins (protection > extraction)

  • Distributed power (community control > corporate control)

  • Structural accountability (enforcement > aspiration)

It's harder. It's less profitable. It's less scalable.

And that's exactly why the extractors won't do it.

The Question We Should All Be Asking

When a company tells you they care about AI ethics, about responsible data, about fairness and equity and empowerment...

Ask them this:

"What have you sacrificed?"

What profitable market have you refused to enter? What harmful use case have you turned down? What power have you redistributed? What revenue have you left on the table?

Because if the answer is "nothing" — if their commitment to ethics has been cost-free, if every principle happens to align perfectly with profit maximization — then you're not looking at advocacy.

You're looking at marketing.

Here's Where I Stand

I work in this space. I help organizations build AI governance frameworks. I teach companies how to assess their execution capability, how to roadmap responsibly, how to operationalize ethics.

And I'm watching this performance with increasing frustration.

Because I actually want AI systems that work for people, not on them. I want governance that protects communities, not corporate liability. I want accountability that has teeth.

But we won't get there by letting data capitalists cosplay as activists.

We won't get there by accepting "ethics washing" as a substitute for structural change.

We won't get there by letting companies extract value, weaponize data, colonize communities — and then applauding them for using the right hashtags.

We need to start calling this what it is.

Not innovation. Not empowerment. Not advocacy.

Extraction with a rebrand.

And until we're willing to name it, to challenge it, to demand something different?

The performance continues. The harm scales. And the people who built the system keep telling us it's for our own good.

So What Do We Do?

I'll be honest: I don't have all the answers.

But I know this:

We stop giving credit for aspiration.

Principles without enforcement are just words. Ethics teams without authority are just theater. Commitments without sacrifice are just PR.

We start demanding structural change.

Not "ethics by design" as a checklist item.
Not "responsible AI" as a marketing campaign.
Real accountability. Real redistribution. Real power-sharing.

We support actual advocates.

The organizations doing the hard work of building alternatives. The communities fighting for data sovereignty. The researchers exposing harm even when it's inconvenient. The policymakers trying to regulate extraction even as lobbyists fight them.

And we stop pretending extraction is empowerment.

Because that's the lie that lets this continue.

The idea that maybe, just maybe, the next AI company will be the one that centers communities over profit. That this time the surveillance infrastructure will be used responsibly. That these extractors really are allies.

They're not.

They're capitalists.

And we should treat them as such.

If this resonates, share it. If it pisses you off, tell me why. And if you're building actual alternatives to extraction-based AI, I want to hear from you.

Because we're going to need each other for what comes next.
