Roman Eloshvili is the founder and chief executive officer of XData Group, a B2B software development company, where he directs the development of AI in banking while managing investor relations and driving business scalability. He is also the founder of ComplyControl, a UK-based RegTech startup specializing in cutting-edge technology solutions for banks.
Banks and fintechs around the world are looking for ways to put artificial intelligence to work: to speed up operations, cut costs, improve customer interactions, and more. And yet, when it comes to compliance, arguably one of the most demanding and time-consuming parts of finance, most companies are still holding back.
A survey conducted earlier in 2025 found that only a tiny fraction of firms, less than 2%, have fully integrated AI into their workflows. The rest remain in the early stages of exploration and adoption, if they adopt it at all.
The pressure on companies to keep up with regulatory changes is still very much there, and rising. So why is compliance so slow to embrace AI when it could be of great help?
Let’s try to figure it out.
The Human Eye Still Matters
Probably the first and most important thing we need to keep in mind here is that compliance isn’t just about following a checklist. It’s about making judgment calls in situations that often fall into grey areas. The world of financial decisions is rarely all black-and-white. Regulations differ across jurisdictions, and the interpretation of those rules is rarely straightforward.
AI is brilliant at crunching data at lightning speed and spotting anomalies. But while it can flag a transaction that looks suspicious based on pre-established patterns, that doesn’t mean it can clearly explain the “why” behind its conclusions. More importantly, it struggles with nuance. A human compliance officer can detect when a client’s behaviour, while unusual, is harmless. AI, on the other hand, is far more likely to simply raise an alarm without context.
This is why compliance leaders hesitate to hand over the reins here. Machines can certainly be of help, but most people are still far more likely to trust in a human’s ability to see the broader picture and judge accordingly.
Efficiency vs. Regulatory and Reputational Risks
An AI’s ability to analyse thousands of transactions in real time is something no compliance team could ever match while stuck in manual mode. So efficiency-wise, nobody can argue that it’s a great support tool, capable of reducing the workload so that human staff can focus on more strategic and nuanced tasks.
But compliance is not an area where speed alone wins. If an AI system makes an error in judgment, it could mean fines, reputational damage, or regulatory scrutiny. All of these things can be very harmful to a business, possibly even destructive. So is it any wonder that many wish to avoid inviting such complications?
Most regulators also agree that, when it comes to AI-based decision-making, someone must remain accountable. If an AI model mistakenly blocks a legitimate transaction or overlooks a fraudulent one, responsibility ultimately still lies with the company. And it’s the human compliance officers who need to take that responsibility.
This creates a natural sense of caution: compliance leaders have to weigh the benefits of faster monitoring against the risks of possible regulatory penalties. And until AI systems become more explainable and transparent, it is likely that many firms will be reluctant to let them make autonomous decisions.
How to Move Forward With AI Adoption Responsibly
An important lesson to take from all of the above is that compliance leaders' hesitation doesn't mean they are anti-AI. In fact, many are optimistic about AI's role in the future. The important thing is to find the right way forward.
As I see it, the most natural and promising course available to us is to adopt a hybrid model. A collaboration between humans and AI, where artificial intelligence does the heavy lifting — scanning transactions, flagging unusual activity, or generating reports. And when the end results are ready, humans can then review them, interpret the context of AI’s decisions, and make the final call.
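To make the division of labour concrete, here is a minimal, purely illustrative sketch of such a hybrid triage loop in Python. The scoring rule is a toy stand-in for a real model, and every name and threshold is invented for this example; the point is only the structure: the machine flags, a human queue catches everything flagged, and nothing is blocked automatically.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tx_id: str
    amount: float
    country: str

@dataclass
class ReviewQueue:
    # Flagged items wait here for a human compliance officer to make the final call.
    pending: list = field(default_factory=list)

def ai_risk_score(tx: Transaction) -> float:
    """Toy stand-in for a model: large amounts and unfamiliar
    jurisdictions raise the score."""
    score = 0.0
    if tx.amount > 10_000:
        score += 0.6
    if tx.country not in {"GB", "DE", "FR"}:
        score += 0.3
    return score

def triage(tx: Transaction, queue: ReviewQueue, threshold: float = 0.5) -> str:
    """AI does the heavy lifting; anything above the threshold is
    escalated to a human rather than acted on autonomously."""
    if ai_risk_score(tx) >= threshold:
        queue.pending.append(tx)  # a person reviews this, with the score as context
        return "flagged_for_review"
    return "cleared"

queue = ReviewQueue()
print(triage(Transaction("t1", 50_000, "XX"), queue))  # flagged_for_review
print(triage(Transaction("t2", 120, "GB"), queue))     # cleared
```

The design choice worth noting is that the model's output is advisory: the only side effect of a high score is an entry in a review queue, which keeps accountability with the human officer.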
But to subscribe to such a model, companies will need to make sure their AI systems are explainable. Compliance is not just about detecting risk; it's about proving that decisions are fair. Which is why the market needs more AI tools that can explain their outputs in plain terms.
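As a sketch of what "explaining outputs in plain terms" could look like in practice, here is a hypothetical scorer that returns human-readable reasons alongside its score. The rules, thresholds, and country list are invented for illustration, not taken from any real system.

```python
def score_with_reasons(amount: float, country: str, daily_count: int):
    """Return a risk score plus plain-language reasons, so a reviewer
    (or a regulator) can see exactly why a transaction was flagged."""
    reasons = []
    score = 0.0
    if amount > 10_000:
        score += 0.5
        reasons.append(f"amount {amount:,.0f} exceeds the 10,000 threshold")
    if daily_count > 20:
        score += 0.3
        reasons.append(f"{daily_count} transactions today is above usual volume")
    if country in {"XX", "YY"}:  # placeholder internal watch list
        score += 0.4
        reasons.append(f"counterparty country {country} is on the watch list")
    return score, reasons

score, why = score_with_reasons(25_000, "XX", 3)
print(score)
print(why)  # two plain-English reasons a reviewer can verify
```

Pairing every score with its contributing reasons is what lets a human officer, and ultimately a regulator, audit the decision instead of taking it on faith.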
It’s Not About “Man vs. Machine”
Realistically speaking, I don’t see AI making compliance officers obsolete. Much more likely is that their roles will change — from doers to managers. Officers will spend less time performing checks themselves and more time reviewing AI’s decisions, dealing with the grey zones where machines still fall short.
At its heart, compliance is a human business. And while AI can make compliance teams faster and more effective, it cannot handle the moral and regulatory responsibility that comes with it.
Which is why it’s my firm belief that the future of compliance will be less about “man versus machine” and more about “man with machine” — working together to keep financial systems safe and fair.