
Datasparq has just turned six. For the first part of our life we were an AI company that helped clients build solutions that didn’t get stuck in prototype but moved the needle - production-grade systems that actually delivered ROI. Our first solution is still running: a production ML system that’s core to our client’s daily operations and EBITDA, producing 5M+ predictions every day with 99.9% uptime for the last six years.
Building enterprise-scale solutions is still at the heart of what we do, but there’s also been a significant shift in what clients come to us for, one that can be traced directly back to OpenAI’s ChatGPT releases in late 2022 and early 2023.
As AI stopped being a niche capability for complex commercial and operational problems and became something available to everyone as an app, we saw the change that would come: conversations from Board level down would be permeated by an AI urgency - “what are we doing?”, “should we be using AI more?”, “what’s this going to do to our business - and our customers?”. Historically we had been sweating in the engine room with our clients, but soon we’d be up on deck, helping them identify what was coming.
The answers the PowerPoint pundits give to those questions will only get you so far. At the point where ambition and reality collide, you need to know your investments will deliver tangible success, and how to execute with conviction. For that you need a strategy built on battle-tested experience and a perspective on the future framed by what is happening on the ground right now.
Whilst the AI landscape has been rapidly evolving, we’ve been working side by side with our clients to help them keep a clear-eyed focus on what is happening and to define their response.
This is what those two years have taught us:
Like any technology shift (from the industrial revolution onwards), there are those who are energised by radical change and those who find it unsettling. On one side are the colleagues who have embraced AI as a tool for turbo-charged productivity, who understand where it works well and where its limitations lie, and who can apply complex reasoning or algorithmic optimisation to do more with less. On the other are those who are yet to find a place for AI in what they do, who are nervous about its capability, or who proactively value not letting it seep into their day-to-day.
Then there are those in the middle: the enthusiastic adopters who are experimenting, applying AI to all sorts of opportunities, laundering their emails, analysis, and authorship through chat interfaces and co-pilots. It is this uncritical adoption in the middle of the spectrum where the real risk for businesses lies.
Without the right cultural framework, training, and organisational values, critical thinking starts to lapse. Scrutiny disappears and AI outputs lose their worth - divorced from considered human judgement, from company values, from customer differentiation, their impact is diluted. And this isn’t just a problem of wasted effort; it has a tangible cost, as money and resources are squandered on low-value returns.
The organisations navigating this bear-trap well are intentionally pushing back on the AI slop. They’re calling it out when they spot it, building a culture that preserves the value of analytical thinking, and treating AI as a tool to complement judgement, not outsource it.
Tech governance has a tendency to manifest as a compliance checklist: a rulebook for what is allowed and how it can be used. This works for business systems tied to well-defined processes, role-based usage, and clear models of centralised versus federated ownership. But AI tools (especially generative ones) don’t fit that mould - they’re more like Excel than the ERP system.
They’re increasingly pervasive, personal, multi-purpose tools, used differently by everyone depending on their role. Nobody governed Excel with a change programme; people learned from each other’s successes and failures, applied their judgement to what was sensible and what wasn’t, and protected the process of collaboration and scrutiny to operate safely (not that there’s never been a huge Excel cock-up).
That doesn’t mean AI tools should be ubiquitous without any control or care - governance is critical - but the key is to find a model that empowers people to use them safely, make the right choices, and create the conditions for federated success. The centre’s role should be monitoring that, owning the guardrails, and building colleagues’ confidence in the right way. Getting this wrong won’t stop AI use; it will just push it underground, creating shadow use and even greater risks.
The right kind of governance looks more like the Highway Code: it doesn't tell you where to drive, what route to take, or how to operate every feature of your car. It sets the rules of the road - what's dangerous, what's prohibited, and what responsible use looks like - and then trusts you to make your own decisions within that framework. If the “dos” and “don’ts” are clear, debate recedes and decisions happen faster.
In AI deployments, especially as we move to agentic models, the watchword has been “human in the loop”: ensuring people retain control and authority by validating at key moments, approving further action, and acting as a check and balance - inserting themselves into the flow of action to create intentional friction and manage risk.
But that approach creates a fundamental scale bottleneck - whilst the effort of producing outputs diminishes, the workload just shifts. Checking and validating outputs becomes tedious audit activity rather than a genuine exercise of judgement and, if anything, creativity and ingenuity fade as disengagement brings the temptation to simply “tick the box”.
People will need to move from being “in the loop” to “on the loop”: setting direction, defining standards, and interpreting and improving outcomes by orchestrating how tasks are done and how agentic AI tools interact with each other, and by exercising the ability to dial different elements of the system up and down as needed.
This is a meaningful shift in both culture and operations - colleagues need to be able to exercise judgement over what AI is doing, with confidence and understanding. The goal isn’t to remove human judgement from the equation; it’s to elevate where that judgement gets applied.
If our teams are going to be sitting “on the loop”, what exactly is it that we’re giving them to orchestrate?
We all know that the AI landscape is still going through rapid change - models leapfrogging each other with every release, AI capabilities being built into every piece of SaaS, feature-rich, AI-native software appearing overnight. The dust is still to settle and the risk of backing the wrong horse feels very real, but we can’t afford to sit back and wait either.
All too frequently we see clients fall into the trap of delivering AI “experiments” as a way to feel like they’re moving forward without locking in commitment. But experimentation isn’t the same as optionality - dressing up deferred decisions as action isn’t the same as intentionally preserving flexibility.
Deliberate optionality is about deciding where and why to design in flexibility, understanding the compromises this introduces, and having a point of view about where you think things are heading. The risk you’re mitigating here isn’t whether to transform with AI; it’s how to avoid being stuck on Betamax in a world of VHS.
Most organisations get stuck here because they conflate two decisions that should be made independently: what outcome they want AI to deliver (which can and should be committed to) and how to deliver it technically (where flexibility is genuinely worth preserving).
Ask yourself: if the tech or model we’re building this on disappeared tomorrow, what would we lose? If the answer is “nothing we can’t switch out without eroding our ROI”, that’s OK. If the answer is “everything”, you’ve not preserved optionality. And if the answer is “nothing important”, you’re probably still experimenting.
Your direction is set. Now the harder question: what exactly are you backing?
A cost-reduction or productivity play feels like the responsible choice. Margins are tight, costs are high, operational efficiency is a clear win. It has a numerator and denominator, the calculation is simple, the ROI defensible - you can literally put a number on it.
The differentiation play, though - using AI to build capability your competitors can’t easily replicate, offering something genuinely differentiating to customers - feels more speculative. The upside depends on customer behaviour, competitor response, and market timing. Nobody can model that with confidence, but it might just be the very thing at the heart of your future. And if not yours, then your competitors’.
The uncomfortable truth is that the efficiency gains are increasingly table stakes. If you and your three closest competitors all automate the same back-office processes over the next 18 months, nobody has built an advantage; you’ve all just moved to a new cost baseline.
You can’t compete on safe bets alone; it’s going to require a combination of both. The question is the ratio: how much of your AI investment sits in the defensible efficiency column, and how much is on genuinely differentiated capability that could move the top line? 90:10? 80:20? 60:40? There’s no universal right answer. It depends on your sector, your competitive position, and your margins. It depends on your risk appetite. It depends on your ambition.
None of this comes with a standard answer. That's the point.
The right answer is different for every organisation. The leaders we see navigating this well are the ones who are thinking clearly, asking harder questions of themselves and their teams, and making deliberate choices rather than letting momentum make the choices for them.
The landscape will keep shifting, the capability gap between those moving with purpose and those moving without it will keep widening, and the decisions that feel deferrable today will feel consequential in twelve months.
It’s no longer worth asking “are we using AI?” - the answer will be yes. The question should be “are we totally confident about how we’re using AI?”
If the answer to that isn’t yes then there’s still work to do.
