AI Futures

The AI Arbitrage Window

There is a shrinking window of competitive advantage for businesses that move fast on AI. The parallels with high-frequency trading are striking.


There is a technological arbitrage playing out right now between companies that are genuinely reinventing their businesses around generative AI and those sitting back, waiting for the big tech vendors to hand them something off the shelf.

The vendors, of course, have every incentive to drip-feed capability over time. Subscription by subscription. Feature by feature. Meanwhile, a small but growing cohort of businesses – and individuals – are building at a pace the incumbents cannot match.

I'd estimate this arbitrage window lasts eighteen months. Maybe two years. During that time, it is entirely possible for new businesses, radically rethought businesses, or even individuals armed with tools like Claude Code to make serious money at the expense of incumbents that are too slow or too structurally rigid to compete.

After that, the window closes. The tools become commoditised. The advantage shifts from those who moved first to those with the deepest pockets. And the opportunity to build something genuinely differentiated – quickly, cheaply, from a standing start – evaporates.

The road to averageosity

Last week, I joined a webinar hosted by Moore Kingston Smith entitled Selling Your Agency: Turning AI Capability into Agency Value. Learned speakers. Good intentions. But something felt off. The lens was too narrow.

There was plenty of talk about agentic transformation and AI's impact on creative production. Most of it boiled down to the same conversation I keep hearing – use AI to cut costs and speed up business as usual.

Yes, you can compress a ten-day content cycle to three days. Yes, you can automate website migrations. But if you are simply making the same processes faster and cheaper, you are not building value. You are eroding it.

The MKS Annual Survey data tells the story starkly. Average sector revenue growth is just 3.4%, with average operating margins of 10.2%. The gap between top and bottom quartile agencies is widening fast. A consulting firm that uses AI merely to reduce the cost of its own marketing has not addressed the real threat – that the consulting services themselves are being commoditised.

Worse, if you cut your margins, you cannot retain the people who actually make the work good. A recent Productive survey of 180 agencies globally found that 65% are seeing higher profits despite mounting pressure for "AI discounts" – but only because they are defending value rather than surrendering it.

Increase the value, not the velocity

The real opportunity is the opposite of cost reduction. Use AI to increase the value of what you deliver. Improve the quality of insight. Deepen the understanding of audiences. Create capability that did not exist before.

The valuation maths supports this. AI-native SaaS businesses currently command revenue multiples of 25–30x, while traditional services firms trade at low single digits. The premium goes to productised, defensible IP – not faster versions of the same old thing.

This is where the arbitrage becomes most visible. The companies that understand this distinction – value creation versus cost compression – are pulling away from the pack at extraordinary speed. They are not optimising existing workflows. They are building entirely new categories of offering.

The silence on synthetic research

Which brings me to what nobody at the MKS seminar mentioned. Synthetic research.

Not a word.

To my mind, this is the single most transformative application of generative AI for any consulting, research, or strategy business – and the room was silent on it. Yet 81% of market researchers surveyed by Columbia Business School say they already use or plan to use generative AI to create synthetic data. The International Journal of Research in Marketing, in collaboration with the Marketing Science Institute, has dedicated an entire special issue to the topic. This is not fringe thinking.

Here is what synthetic research does. It creates AI-generated respondents – synthetic personas – built from demographic, psychographic, and behavioural data. Each one is situated in a specific location, speaks the appropriate language, holds culturally coherent opinions, and reflects the media consumption patterns you would expect of someone in that situation. They have political leanings, family circumstances, regional speech patterns. They are not real people. But they are remarkably useful approximations of how real people think and respond.
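To make the idea concrete, here is a minimal sketch of what one such persona might look like in code. This is purely illustrative: the `SyntheticPersona` class, its fields, and the example respondent "Margaret" are all hypothetical, and in practice the persona would be rendered as a system prompt and sent to a language model of your choice.

```python
from dataclasses import dataclass

@dataclass
class SyntheticPersona:
    """One AI-generated respondent, parameterised by the traits
    described above (hypothetical structure for illustration)."""
    name: str
    age: int
    region: str
    income_band: str
    political_leaning: str
    media_diet: list[str]
    family_circumstances: str

    def system_prompt(self) -> str:
        """Render the persona as a system prompt for a language model."""
        return (
            f"You are {self.name}, a {self.age}-year-old living in {self.region}. "
            f"Your household income is {self.income_band}. "
            f"Politically you lean {self.political_leaning}. "
            f"You mainly consume {', '.join(self.media_diet)}. "
            f"Family situation: {self.family_circumstances}. "
            "Answer survey questions in your own voice, "
            "consistent with this background."
        )

# One of thousands of generated respondents (entirely fictional)
p = SyntheticPersona(
    name="Margaret", age=58, region="North East England",
    income_band="£25k-£35k", political_leaning="centre-left",
    media_diet=["local radio", "Facebook", "ITV news"],
    family_circumstances="two adult children, cares for an elderly parent",
)
prompt = p.system_prompt()
```

The point is that each respondent is just structured data plus a consistent framing instruction – which is what makes generating hundreds of thousands of them cheap.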

Now scale that to 300,000 synthetic respondents across a country. Different regions, ages, incomes, attitudes. You can test product concepts, policy proposals, campaign messaging – and get substantive, nuanced feedback in hours rather than weeks. ServiceNow replaced twelve months of traditional research with a 30-day synthetic sprint, generating campaign-ready personas, segmentation, and creative direction at a fraction of the usual timeline.

The limitations are real. The opportunity is bigger.

Will purists object that synthetic respondents are not real people? Of course. It is a fair criticism – and the academic literature is clear that synthetic respondents are better for some applications than others. They excel at exploratory research, hypothesis generation, and directional testing. They are weaker on pricing research and the kind of emotional nuance that only comes from lived experience.

But anyone who has done genuine fieldwork knows the limitations of traditional research too. The people you most want to reach are the hardest to recruit. Response rates are declining. Online panels skew towards a self-selecting subset. Timelines are glacial.

Synthetic research does not replace traditional methods. It opens a door to something fundamentally different. Instant-on polling. Round-the-clock availability. The ability to search thousands of verbatim responses for a specific insight, find an individual respondent in a particular geography or demographic, and interrogate them in real time. In their own voice.

What the arbitrage looks like in practice

Per the MKS webinar's own framework, the characteristics that command premium valuations in professional services are scarcity, defensibility, and scalability. Synthetic research delivers all three.

It is scarce because very few firms have built the infrastructure to do it well. It is defensible because the quality depends on the sophistication of the persona modelling – not something you can replicate by buying a subscription. And it is scalable in a way that traditional research, with its dependence on human respondents, field teams, and recruitment pipelines, simply is not.

This is just one example. Across professional services, the pattern repeats. The firms that are building genuinely new capabilities – rather than applying AI as a cost-reduction tool – are creating the kind of asymmetric advantage that the high-frequency traders enjoyed in the early days of algorithmic markets.

The window will not stay open

The parallel with high-frequency trading is instructive. In the early 2000s, firms like Citadel and Renaissance Technologies built technology stacks that gave them a structural edge over traditional market participants. The advantage was enormous – and temporary. Within a decade, the tools became widely available, regulation caught up, and the edge narrowed dramatically.

The same dynamic is playing out with generative AI. Right now, the gap between those who understand how to build with these tools and those who are merely experimenting is vast. But it will not last. The models will improve. The interfaces will simplify. The consultancies will package it all up and sell it back to you at a premium.

If you are running an agency or consultancy and thinking about where AI creates genuine competitive advantage, stop optimising your processes and start reimagining your product.

The arbitrage window will not stay open forever.