Roman Eloshvili is a founder of ComplyControl, an AI-powered compliance and fraud detection startup for financial institutions.
What AI in Compliance Is Actually Testing: Technology, or Us?
In financial services, compliance is no longer just a function. It’s an active pressure point—where regulation, risk, and operations collide. As AI technologies are introduced into this space, one question keeps resurfacing: how much can we really automate, and who remains accountable when things go wrong?
The appeal of AI in fraud detection and compliance is easy to understand. Financial institutions face mounting expectations to process vast amounts of data, respond to evolving threats, and comply with shifting regulations—all without compromising speed or accuracy. Automation, particularly when driven by machine learning, offers a way to reduce operational strain. But it also raises deeper concerns about governance, explainability, and control.
These tensions are not theoretical. They are playing out in real-time, as financial firms deploy AI models into roles traditionally filled by human analysts. Behind the scenes, new risks are emerging: false positives, audit blind spots, and algorithmic decisions that remain opaque to both users and regulators.
At the same time, compliance professionals are being asked to shift roles. Rather than manually inspecting every transaction, they are now overseeing the tools that do. This reframing—from executor to evaluator—requires not just new technical skills, but a stronger sense of ethical and procedural responsibility. AI can scale data analysis. It can flag inconsistencies. But it cannot fully explain intent, interpret context, or absorb blame.
Understanding these limits is critical. And few people are better positioned to explore them than Roman Eloshvili, founder of the UK-based compliance technology company ComplyControl. His work sits squarely at the intersection of risk, automation, and oversight—where algorithmic efficiency meets regulatory scrutiny.
With more than a decade in the field, Roman has seen firsthand how compliance teams are evolving and how AI is reshaping both their workflows and their responsibilities. He argues that the promise of AI lies not in eliminating human roles, but in reshaping them—bringing new clarity to what machines should handle, and what humans must still own.
This shift demands more than technical upgrades. It calls for a cultural realignment around accountability. Transparent systems, auditable processes, and clearly assigned human responsibility are no longer just features—they are the minimum standard. When AI is introduced into critical infrastructure, it doesn’t just solve problems. It introduces a new category of decisions that require active, strategic stewardship.
In this conversation for FinTech Weekly, Roman offers a grounded view of what it takes to integrate AI responsibly into compliance and fraud prevention. His perspective doesn’t frame automation as an inevitability, but as a choice—one that requires ongoing human judgment, operational clarity, and a willingness to ask hard questions about where trust really resides.
We’re pleased to share his insights at a time when many in fintech are asking not whether to adopt AI—but how to do it without losing sight of the standards that made financial systems work in the first place.
1. You’ve built a career at the crossroads of compliance and technology. Can you recall the moment when you realized that AI could fundamentally change the way risk management is done?
I wouldn’t say it was just one specific moment that changed everything. Rather, it was a gradual process. I had spent a good portion of my career working with established European banks, and one thing I kept noticing was that many of them were far behind when it came to digital banking solutions. The contrast was especially clear compared to more advanced fintech hubs.
Several years ago, when the topic of AI development started heating up again, I naturally grew curious and looked into it. And as I studied the tech and its workings, I realised that artificial intelligence had the potential to drastically change the way banks handle their compliance, putting them more on par with modern, more agile fintech players.
That’s what led me to launch my company in 2023. The complexity of compliance and risk management only keeps growing year by year. Faced with this reality, our mission is simple: to bring AI-powered solutions to financial companies and help them deal with such mounting challenges in a more effective manner.
2. From your professional perspective, how has the role of human specialists evolved as AI tools have become more advanced in compliance and fraud detection?
Before saying anything else, let me address one thing right out of the gate. There is a common worry across many fields about whether AI is going to replace human workers. And as far as compliance and risk professionals go, my answer is no, at least not anytime soon.
While artificial intelligence is already transforming our industry, it is far from being foolproof. As such, human involvement remains an essential factor. Compliance regulations change constantly, and someone has to be able to take responsibility when systems fail to keep up or make mistakes. At its current level of development, AI still struggles to explain its decisions clearly, so it’s not ready to be left on its own. Especially not in a field where trust and transparency are paramount.
That said, AI is actively making compliance processes easier. For example, depending on the configuration, AI systems can now flag suspicious transactions or even block them temporarily while requesting further verification. There is no need for real humans to comb through every single detail by hand, unless something genuinely stands out as odd. And as these systems evolve, they’ll continue to reduce the need for manual work, allowing teams to focus more on nuanced tasks that really need a human touch.
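To give a rough sense of that kind of flow, here is a minimal sketch. The thresholds, field names, and the idea that a risk score arrives from an upstream model are all illustrative assumptions for the example, not a description of any production system:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    risk_score: float  # assumed to come from an upstream ML model, 0.0 to 1.0

# Hypothetical thresholds; a real institution would tune these to its own risk appetite.
BLOCK_THRESHOLD = 0.9
FLAG_THRESHOLD = 0.6

def screen(tx: Transaction) -> str:
    """Return a disposition for a transaction based on its model risk score."""
    if tx.risk_score >= BLOCK_THRESHOLD:
        # Temporarily hold the transaction and request further verification.
        return "BLOCK_PENDING_VERIFICATION"
    if tx.risk_score >= FLAG_THRESHOLD:
        # Queue for a human analyst; everything below this line clears automatically.
        return "FLAG_FOR_REVIEW"
    return "CLEAR"

print(screen(Transaction("tx-001", 12500.0, 0.93)))  # BLOCK_PENDING_VERIFICATION
```

The point of the sketch is the triage itself: only the genuinely odd cases reach a human, which is exactly where the manual work gets reduced.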
I believe that we’re going to see the rise of a hybrid model, where compliance experts will also become increasingly proficient at using AI tools. They’ll be the ones implementing and maintaining AI systems while AI itself will simplify their work by making sense of complex data and providing recommendations. The final judgment, however, will stay with the humans.
3. When working with AI in sensitive areas like financial compliance, how have you personally approached the challenge of maintaining trust and accountability in decision-making?
Of course. As I mentioned earlier, when you’re using AI in compliance, trust is crucial.
That’s why we’ve built our AI systems to be fully transparent. They don’t operate like a “black box”: every recommendation the system makes is based on traceable rules and data. We keep a full audit trail of how each decision is made, so it can be fully explained. This practice has already proven incredibly valuable when dealing with regulators.
The final call always rests with the compliance officer. AI simply offers a well-justified suggestion, which the human can then review and decide whether to approve or reject.
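To make the idea concrete, here is a minimal, hypothetical sketch of such an audit trail, where every entry records what the system suggested, the rules behind it, and the officer's final decision. The field names and rule labels are invented for the example, not a production schema:

```python
import json
from datetime import datetime, timezone

audit_log = []  # in practice this would be append-only, tamper-evident storage

def record_recommendation(case_id, recommendation, rules_fired, officer, decision):
    """Append one traceable entry: what the system suggested, why, and who decided."""
    entry = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_recommendation": recommendation,
        "rules_fired": rules_fired,  # the traceable basis for the suggestion
        "human_officer": officer,    # the final call always belongs to a person
        "human_decision": decision,
    }
    audit_log.append(entry)
    return entry

# The AI suggests; the officer reviews and approves or rejects.
record_recommendation("case-42", "escalate", ["velocity_rule", "geo_mismatch"],
                      officer="j.doe", decision="approved")
print(json.dumps(audit_log, indent=2))
```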
4. Your experience spans over 10 years. How has your mindset about automation and human oversight shifted throughout your career, especially now with AI becoming more autonomous?
Definitely. Speaking more broadly about the state of AI adoption, the further this technology progresses, the more autonomy we gradually allow it — so long as it’s thoroughly tested and continues to prove reliable.
But what’s changing even more is the part the human specialist plays in this equation. Instead of micromanaging every case, compliance officers are now increasingly playing the role of strategic overseers. They can review entire batches of similar cases in short order, validate system performance, and fine-tune models based on results.
In other words, the de facto role of compliance officers is shifting from doing the work by hand to managing the AI systems that do it for them.
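As a simple illustration of what that oversight can look like, a reviewer might check how many of the system's alerts in a batch were confirmed by human verdicts. The figures below are invented for the example:

```python
# Each reviewed case pairs the system's alert with the officer's final verdict.
reviewed = [
    {"alerted": True, "confirmed": True},
    {"alerted": True, "confirmed": False},  # a false positive
    {"alerted": True, "confirmed": True},
    {"alerted": True, "confirmed": False},
]

alerts = [c for c in reviewed if c["alerted"]]
confirmed = sum(1 for c in alerts if c["confirmed"])
precision = confirmed / len(alerts)
print(f"Alert precision over this batch: {precision:.0%}")  # 50%
```

A falling precision figure is the kind of signal that would prompt the overseer to fine-tune the model or its thresholds.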
5. Working in AI-driven risk management means navigating complex ethical questions. How have you personally developed a framework for making responsible choices when designing or implementing AI-based solutions?
We’ve built our approach around two key ideas: clear oversight and Responsible AI principles. Every model we use has someone assigned who is responsible for it. Risk assessments, performance reviews, and compliance checks are all done regularly.
We also make sure our systems are auditable. If the algorithm makes a decision, that process can be reviewed and verified. This transparency is a core part of our commitment to responsible AI development.
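As a minimal sketch of what such a governance record might contain, assuming a quarterly review cadence and hypothetical names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """Minimal governance metadata: every model has a named owner and a review cadence."""
    model_name: str
    owner: str                      # the person accountable for this model
    last_risk_assessment: date
    review_interval_days: int = 90  # illustrative quarterly cadence

    def review_overdue(self, today: date) -> bool:
        return (today - self.last_risk_assessment).days > self.review_interval_days

record = ModelRecord("sanctions-screening-v2", "a.analyst", date(2025, 1, 15))
print(record.review_overdue(date(2025, 6, 1)))  # True: time to re-assess
```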
6. In your journey, what has been the most difficult professional lesson you’ve learned about the limits—or the risks—of relying too heavily on automation in critical fields like fraud prevention?
One lesson that we definitely need to keep in mind is that even well-trained models can still “hallucinate”, getting things wrong in subtle but serious ways.
AI can miss complex fraud schemes, or it might trigger too many false alerts. That’s exactly why pairing AI with human expertise is so important: humans bring fluid judgment and can weigh ethics and the broader context in ways that AI cannot.
The balance between the two promises better, more reliable results. AI can cover the sheer volume of work and ease its complexity, while people, in turn, maintain the appropriate level of accuracy and trust.
7. For young professionals entering compliance, risk management, or AI development today, what personal principles or habits would you advise them to cultivate to succeed and adapt in such a rapidly changing environment?
First and foremost: never stop learning. Technological progress has no “pause” button, and you need to keep up or be left behind. There’s no in-between here.
Second, think broadly. With AI advancement, the lines between roles are blurring: tech, finance, and regulation increasingly overlap. I am convinced that having a wide skillset and an open mind will be the definitive traits for future professionals in the field.
Third — and a natural continuation of the previous two — be adaptable. Change is constant, and the ability to adjust quickly will be a major advantage for you.
And finally, develop strong communication skills and learn to be a team player. As we already covered, compliance sits at the crossroads of business, tech, and law. As such, being able to switch gears and talk to people from all these worlds will be a valuable skill to pick up.