
The AI noise in financial services is not coming from the technology. It is coming from the mismatch between how quickly ideas can be produced and how slowly institutions are allowed to change. In the same week, you can have a board asking why the firm isn’t moving faster, a CIO being told to standardise and reduce complexity, a COO trying to protect service levels, and Risk and Legal asking the only question that matters when something breaks: who owned the decision.
That contradiction is structural. Regulated institutions are built to be stable, legible and accountable. AI, as it is being sold, is none of those things by default. It is fast, probabilistic, and increasingly embedded in third-party ecosystems. So the noise grows. Every team can generate a plausible use case. Every vendor can produce a demo. And the organisation starts to confuse activity with progress.
The clearest organisations I’ve seen are not trying to “do AI.” They are trying to reduce decision friction in specific places, without creating a new operational risk class. That starts with a framing shift. AI is not a programme. It is a portfolio of decisions. Each decision has a measurable outcome, a human owner, and a level of consequence if it fails. When you force that language, the conversation becomes calmer, because it stops being about capability and starts being about accountability.
It also exposes an uncomfortable truth. The commercial upside is often strongest in the most constrained parts of the institution. Credit, fraud, financial crime, complaints, servicing, claims, collections. High volume workflows where small accuracy gains and faster cycle times compound into real operating leverage. Those are also the areas where failure is visible, customer facing, and hard to reverse. That is why so many institutions default to safe experimentation. They pick low consequence use cases, measure success in pilots, and then struggle to scale anything that actually matters.
AI needs the same governance spine as any other consequential change programme, but it has to be tighter. The guardrails are not optional, and they are not just about regulation. They are about execution capacity. Without a consistent way to classify use cases and apply controls, every initiative becomes a bespoke debate. Risk arrives late. Legal is asked to bless something already built. Technology teams become the connective tissue for decisions they do not own. You end up with a scattered portfolio of tools and a growing dependency footprint, without a clear inventory of what is running where, on which data, under whose accountability.
The noise will not disappear. The supply of ideas will keep rising. The institutions that cut through it will be the ones that treat AI as a controlled decision portfolio, with a governance spine strong enough to scale what matters and quiet enough to say not yet when it doesn’t. That posture doesn’t slow progress. It makes progress repeatable. And that is what starts to feel like momentum in an environment that cannot afford surprises.
I’ll be on the road over the coming months, mostly for conversations where investment and value creation intersect. If our paths happen to cross, even briefly, it would be good to connect and talk about your transformation goals.
February 24–25: Malta
March 2–5: Lisbon
March 9–11: Speaking at MoneyLIVE, London
March 16–19: Speaking at Merchant Payment Ecosystem, Berlin
March 23–24: London / Dublin Placeholder
April 20–21: London
April 29–30: Zurich
June: Money20/20 Amsterdam
Tom C. Schapira
Founder and CEO
Imagine Capital Group
E: tom@imaginecapitalgroup.com
W: http://www.imaginecapitalgroup.com