
AI Is an Accelerator. Be Careful What You Accelerate!

March 20, 2026 · By Rob Vugts
Illustration: AI accelerating both productivity and risk — trusted outputs versus confident errors

AI is already delivering real value, but it doesn’t just scale productivity — it also scales the risk of being confidently wrong.

Many organisations aren’t ready for that. Not because they lack the technology, but because they’re missing the foundations that make it trustworthy.

AI does not solve poor data quality. It industrialises it.

Right now, many organisations are accelerating problems they don’t fully understand. Fast. At scale. Quietly at first — and then very visibly.

The first impressive demo is easy. The hundredth reliable output is the hard part.

AI is not magic. It’s an accelerator. It makes good systems better — and bad systems worse.

Here's what that means in practice.


The hype around AI is real. The potential is real. But there’s a pattern emerging that gets far less attention — and it’s the thing I keep coming back to in my research and conversations.

AI doesn’t remove complexity. It moves it. AI doesn’t eliminate work. It reshapes it. AI doesn’t replace expertise. It amplifies it — or exposes the lack of it.

And perhaps most importantly: you are not automating the work. You are automating the first draft of the work — and adding the work of validating it.

The data problem almost nobody is talking about

The single most underestimated challenge in AI adoption is not the model. It’s the data underneath it.

AI does not solve poor data quality. It industrialises it!

A system that generates confident-sounding outputs from stale, incomplete, or contested data doesn’t produce better results than a manual process. It produces worse ones — faster, at greater scale, and with an appearance of rigour that makes them harder to question.

Before AI can be trusted, you need to be able to answer: do we actually trust our data? Many organisations, if honest, cannot say yes.
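
To make that question concrete: trust can be tested. Here is a minimal sketch, in Python, of what a data-readiness check might look like. The field names and thresholds are illustrative, not prescriptive; the point is that the three failure modes above (stale, incomplete, contested) can each be measured rather than debated.

    from datetime import datetime, timedelta, timezone

    # Hypothetical thresholds and fields: every organisation sets its own.
    MAX_AGE = timedelta(days=30)  # records older than this count as stale
    REQUIRED_FIELDS = ("customer_id", "status", "updated_at")

    def data_readiness_report(records):
        """Score a dataset on three failure modes: stale, incomplete,
        and contested data. Expects 'updated_at' as a tz-aware datetime."""
        now = datetime.now(timezone.utc)
        stale = incomplete = 0
        seen = {}  # customer_id -> set of distinct status values
        for rec in records:
            if any(rec.get(f) is None for f in REQUIRED_FIELDS):
                incomplete += 1
                continue
            if now - rec["updated_at"] > MAX_AGE:
                stale += 1
            seen.setdefault(rec["customer_id"], set()).add(rec["status"])
        # "Contested": the same entity carries conflicting values.
        contested = sum(1 for values in seen.values() if len(values) > 1)
        total = len(records) or 1
        return {
            "stale_pct": 100 * stale / total,
            "incomplete_pct": 100 * incomplete / total,
            "contested_entities": contested,
        }

If you cannot produce numbers like these for your own data, that is itself the answer.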

The fire-and-forget illusion

AI systems are not fire-and-forget. They degrade. Quietly, at first.

The underlying model gets updated. The data sources it reads from change. The prompts that worked six months ago start producing slightly different outputs. Nobody notices until something goes visibly wrong — and by then, trust is already broken.

Rebuilding that trust is far harder than maintaining it. Which means explicit ownership, continuous monitoring, and feedback loops aren’t optional extras. They’re the cost of operating AI responsibly.
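
What "continuous monitoring" means in practice is deliberately unglamorous: sample outputs, record human verdicts, and watch for the acceptance rate to sag. A minimal sketch, assuming you log reviewer verdicts alongside model and prompt versions; every name and threshold here is illustrative:

    import statistics
    from collections import deque

    class OutputMonitor:
        """Track reviewer verdicts on sampled AI outputs and flag drift.

        A deliberately simple feedback loop: compare the recent
        acceptance rate against a baseline and alert when it sags."""

        def __init__(self, baseline_accept_rate, window=200, tolerance=0.05):
            self.baseline = baseline_accept_rate
            self.window = deque(maxlen=window)
            self.tolerance = tolerance

        def record(self, accepted, model_version, prompt_version):
            # Versions matter: silent model or prompt updates are
            # exactly the quiet degradation described above.
            self.window.append((accepted, model_version, prompt_version))

        def drifting(self):
            if len(self.window) < self.window.maxlen:
                return False  # not enough evidence yet
            rate = statistics.fmean(a for a, _, _ in self.window)
            return rate < self.baseline - self.tolerance

When the alarm fires, the stored versions tell you whether the model or the prompt moved underneath you. Crude as it is, a loop like this is the difference between noticing degradation and discovering it.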

AI systems don’t fail loudly at the start. They degrade slowly — until trust breaks all at once.

The validation tax

Here’s something that almost certainly doesn’t appear in most AI ROI calculations: the cost of reviewing AI outputs.

Validation is not editing. It is investigation.

You’re not checking for typos. You’re looking for what’s missing — gaps that aren’t flagged, confidence that isn’t earned, conclusions that look right but aren’t. All in outputs that are designed to look finished.

“A confident wrong answer is more dangerous than an uncertain right one.”

That’s what makes this hard. The better the AI gets at producing polished output, the harder it becomes to scrutinise it. Fluency is not accuracy. And the instinct to trust what looks authoritative is exactly the instinct you need to override.

“AI does not know what it does not know. Neither will you — unless you design for it.”

That design work — building systems that make their own limitations visible — is where most of the real effort lives. And most implementations skip it entirely.
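
One concrete shape that design work can take: have the system return its uncertainty and provenance alongside its answer, and let it abstain below a threshold. A minimal sketch; the confidence estimate and the threshold are stand-ins for whatever your own stack can actually provide:

    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        confidence: float  # 0.0-1.0, however your stack estimates it
        sources: list      # provenance: the data the answer leaned on

    ABSTAIN_THRESHOLD = 0.7  # illustrative; tune against real review data

    def answer_or_abstain(raw: Answer) -> str:
        """Surface limitations instead of hiding them: below the
        threshold, the honest output is a refusal, not a fluent guess."""
        if raw.confidence < ABSTAIN_THRESHOLD or not raw.sources:
            return "I don't know: insufficient confidence or provenance."
        return f"{raw.text} (confidence {raw.confidence:.0%}, {len(raw.sources)} sources)"

The design choice is the refusal path: an output channel for "I don't know" has to exist before the system can ever use it.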

Growing up

None of this is a reason to slow down.

But it is a reason to grow up.

Much of the AI conversation is still adolescent — obsessed (and impressed) with capability, impatient with constraint, dismissive of the boring prerequisites. Data quality. Governance. Ownership. Validation. These aren’t obstacles to AI. They are the conditions under which AI is worth anything at all.

So before going all-in, ask the questions that matter:

  • Do we trust our data?
  • Do we know where AI can — and cannot — be trusted?
  • Who owns the output?
  • How do we detect when it’s wrong?
  • What happens when it is?

“An AI that never says ‘I don’t know’ is the most dangerous kind.”

Five principles for accelerating the right things

  1. Start with the data, not the model. If you don’t trust your data, you’re not ready for AI — you’re ready for a data governance programme.
  2. Design for uncertainty, not confidence. Systems that acknowledge what they don’t know are more trustworthy than systems that project uniform certainty.
  3. The technology should follow the problem, not lead it. “We want to deploy AI” is not a use case. “We want to reduce this specific, measurable failure mode” is.
  4. Own the output. AI is a tool. Someone still signs their name under the result.
  5. The first impressive demo is easy. Invest in the hundredth reliable output.

Read or discuss the original piece on LinkedIn.