Katharine Wooller, Chief Strategist in Banking and Finance at Softcat, has co-authored several published books on technology, financial services and leadership, and regularly appears as a subject-matter commentator on BBC World News, BBC Breakfast, Deutsche Welle, Reuters and TechCrunch.
AI is heralded as the panacea for a wide range of woes, including productivity issues, risk management, fraud detection, customer experience, operational efficiency and automation. As is typical of bleeding-edge technology, and particularly in financial services, adoption and regulation are, at best, patchy.
In my day job supporting over 2,500 firms, across a plethora of flavours of financial services, with their infrastructure, hardware and software needs, I see a wide variety of uptake and appetite for artificial intelligence. Certainly, there are parts of our industry which are well ahead in adopting AI, in particular insurance and algorithmic trading.
However, a fractured vendor landscape and a lack of regulatory clarity are stunting uptake, which ultimately risks thwarting UK competitiveness, both in AI and in fintech more broadly, if the ecosystem feels hamstrung in embracing innovation.
Current UK Regulation
AI regulation in the UK is an evolving area. Unsurprisingly, politicians' rhetoric embracing new technology and promising job creation is a clear vote winner, but the reality at policy level, and for participants seeking to embrace AI, is somewhat murkier.
In 2021, the government launched its National AI Strategy, which looked to position the UK as a global leader in AI innovation, backed up by the 2023 AI regulation white paper, with the UK's Centre for Data Ethics and Innovation providing ongoing guidance on ethical and responsible AI use.
There are already numerous regulatory requirements to comply with, covering consumer protection (for example, for AI-powered retail financial services such as robo-advisors) and general data protection and privacy.
There are numerous stakeholders, with the FCA and Bank of England running specific AI initiatives, and a current call for evidence from the Parliamentary Committee on the use of AI in banking, pensions and other financial services.
Clearly AI is a political hot potato, particularly with the recent events around DeepSeek showing the volatility and rapidly evolving nature of the AI market and its potential for harm: systemic risk, increased cyber risks, and the need to safeguard financial consumers, particularly vulnerable ones who may be at risk from bias.
As is often the case with new technology, the EU is ahead of the UK, and many firms are looking to comply with EU standards, either because they work with EU entities, or because they see the standards as "best practice" and/or future-proofing against incoming regulation.
Recommended readings:
- EU's InvestAI Initiative: Can €200 Billion Close the AI Gap with the U.S. and China?
- The Geopolitical AI War: How We Got Here - FTW Sunday Editorial
- European Union Pledges €200 Billion to Strengthen AI Industry and Compete Globally
The EU AI Act, in my view, does indeed deserve praise as the world's first comprehensive AI law. It takes a risk-based approach, categorising AI systems as unacceptable risk (for example, biometric surveillance and social scoring), high risk (including financial services) and limited risk (chatbots).
The fines for non-compliance are serious: companies violating high-risk AI rules can face fines of up to €35 million or 7% of global turnover. For firms using AI, the Act provides strong consumer protections and clear guidelines and standards for ethical AI development and usage.
Global inspiration for AI regulation
Further afield there is huge variety in attitudes to AI regulation. China has adopted a policy of strict AI regulation focused on national control, at what some would see as the expense of freedom of expression – try asking DeepSeek some questions which do not align with the government's socialist values.
Canada has adopted a finely tuned balance between regulation and innovation, focusing on high-impact AI, including financial services. Japan, intriguingly, has a flexible, industry-led, self-regulating system with no specific or new AI laws.
Of course, President Trump’s swathe of pro-AI executive orders in his first few days in office cannot be ignored, including an AI Action Plan to be created within 180 days to enhance US dominance in AI, and a $500bn investment in AI infrastructure. Significantly, he revoked Biden’s 2023 executive order, which had aimed to provide “safe, secure and trustworthy development and use of AI” and to mitigate the risks AI poses to consumers, workers and national security.
In the absence of any detail on replacement checks and balances, the emphasis seems to be on innovation over regulation.
The journey to mass adoption is never linear, nor smooth, and there are a number of competing issues. Privacy is a huge barrier to adoption, and I see significant demand for private cloud AI. DORA, whilst having a glaring omission of AI-specific requirements, does, quite rightly in my view, set high standards for operational resilience which will overlap significantly with the deployment of AI.
The ESG agenda is rightly important, and indeed a regulatory disclosure requirement for larger firms. AI and its associated infrastructure can be environmentally damaging, given the huge data centres needed for AI servers, the power and water they consume, and the electronic waste they produce at end of life – plotting a course which balances the commercial potential of AI against ESG concerns can be delicate.
Recommended reading:
AI’s Energy Crisis: Tech Giants Turn to Hydrogen and Nuclear Power for Sustainable Solutions
So, what of the Future?
I must confess that a personal guilty pleasure (and very niche interest!) of mine is using AI and big-data crypto coins as a metric for the health of the AI industry.
At the time of writing, according to CoinMarketCap, the total value of those coins is $32bn, interestingly a huge decline from an all-time high of $69bn in July last year. The race for adoption will be won by the specialist vendors that allow financial services businesses to deploy AI in a safe and compliant way.
Great work is already being done in the industry by companies such as Nvidia and HPE, which can show real use cases that add value. Ultimately, the “best” AI regulation for our industry, and society at large, depends on the balance between fostering innovation and ensuring ethical, safe and responsible AI development.