Why Mainframes Still Matter in Banking’s Digital Era – Interview with Jennifer Nelson


Despite the rush to cloud, mainframes continue to power global finance. Jennifer Nelson explains why modernization requires balance, not abandonment.

 

Jennifer Nelson is CEO of Izzi Software.

 


 


 


 

In an industry obsessed with the newest wave of technology, it’s easy to forget that some of the strongest pillars in financial infrastructure have stood for decades. While fintech innovation is often framed as a race toward the future, the backbone of global banking quietly remains anchored in systems many wrongly dismiss as relics: the mainframe.

 

This isn’t just a matter of nostalgia or corporate inertia. Mainframes still process the bulk of the world’s financial transactions, with a reliability and scale unmatched by many newer platforms. Their ability to handle vast volumes of data in real time, without compromising security, has made them indispensable in a financial system that depends on both speed and trust.

Yet, for all their critical role, mainframes are often misunderstood. In today’s climate, where “cloud-first” is the default mantra, it can feel counterintuitive to defend older technologies. But calling the mainframe a legacy system oversimplifies a much more complex truth. To understand why, we need to examine the balance between heritage systems and the modern push toward hybrid infrastructures.

 

The Case for Modernization with Caution

Financial institutions are under relentless pressure to modernize. Investors, customers, and regulators expect seamless digital services, hardened security, and ever-faster performance. For many leaders, the temptation is to pursue change aggressively — to shed old systems and move wholesale to the cloud.

But modernization isn’t simply a technical project. It’s a strategic undertaking that carries risks when done hastily. Data that has lived securely inside a mainframe environment for decades becomes exposed the moment it is transferred elsewhere. Applications optimized for the mainframe may stumble when migrated, resulting in costly latency issues. These risks are more than hypothetical — they threaten daily operations, regulatory compliance, and even consumer trust.

The lesson is clear: true modernization isn’t about ripping out the old in favor of the new. It’s about integrating strengths, phasing updates carefully, and ensuring that the next step forward doesn’t destabilize what already works.

 

A Skills Gap with Real Consequences

Technology evolves faster than the expertise required to maintain it. Nowhere is this more apparent than in the mainframe space. For years, banks and financial institutions have relied on a pool of engineers with deep institutional knowledge of IBM Z systems and related platforms. As many of those experts retire, the next generation has yet to fully replace their skill set.

This creates a serious challenge. A shallow bench of expertise increases the risk of costly mistakes, even when protections are in place. The resilience of mainframes can’t fully compensate for the human factor. Until new engineers are trained and mentored, banks will face vulnerabilities not because of the technology itself, but because of the narrowing pool of professionals who know how to use it safely.

 

Security Is Still About People

When conversations about cybersecurity arise, much of the focus is on tools and defenses. Yet, time and again, the real weaknesses stem from human behavior. In the mainframe world, this often comes down to how permissions are granted, managed, and revoked.

Developers who don’t fully understand the implications of elevated permissions may leave doors open, not out of malice, but out of incomplete training or convenience. Companies that fail to update access when employees shift roles can expose sensitive data unnecessarily. Even with sophisticated technology, the basics of security hygiene remain essential — and too often overlooked.

 

Introducing Jennifer Nelson

To put these challenges and opportunities in context, we turned to Jennifer Nelson, CEO of Izzi Software. Nelson has built her career around mainframe systems, spending 15 years at Rocket Software and five years at BMC before broadening her perspective through senior engineering roles outside the IBM Z ecosystem. In 2024, she founded Izzi Software, a company dedicated to acquiring and growing businesses built on IBM Z and IBM Power platforms.

Her vantage point — spanning traditional mainframe engineering and modern software leadership — makes her a rare voice in today’s conversation about technology strategy in financial services.

Enjoy the interview!

 


 

1. As fintech races toward cloud-native everything, you’ve argued that the mainframe remains critical to global banking stability. What do you think most innovators get wrong about the role of older systems today?

The first thing they get wrong is calling the mainframe a legacy system, assuming that because it was launched more than 60 years ago it must be obsolete. That’s like calling the Windows operating system a legacy platform. It’s just not reality. Mainframes are more relevant today than when they were first invented.

Everybody wants data at the speed of light. They want data returned to them as soon as they press the button, no matter where that data sits. And rightly so, because the end consumer doesn’t know, and shouldn’t have to know, the complexities behind their request, such as where the data sits. But only the mainframe can deliver that performance and security in a hybrid environment.

Mainframes can ingest data wherever it sits, analyze it, and report it back, complete with recommendations, better and faster than any other platform. Show me another system that can ingest data from all across a global network, analyze it, detect anomalies in real time, and send the results right back to the caller.
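
To make that concrete, here is a minimal, platform-agnostic Python sketch of real-time anomaly detection on a transaction stream. The rolling window, z-score threshold, and sample values are illustrative assumptions, not anything taken from the interview.

```python
# Minimal sketch of real-time anomaly detection on a stream of transaction
# amounts, using a rolling z-score. Window size and threshold are assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(transactions, window=50, threshold=3.0):
    """Yield (amount, is_anomaly) for each incoming transaction amount."""
    recent = deque(maxlen=window)
    for amount in transactions:
        if len(recent) >= 2:
            mu, sigma = mean(recent), stdev(recent)
            is_anomaly = sigma > 0 and abs(amount - mu) / sigma > threshold
        else:
            is_anomaly = False  # not enough history to judge yet
        yield amount, is_anomaly
        recent.append(amount)

# Example: a run of typical payments with one outlier.
stream = [102, 98, 105, 97, 101, 99, 103, 9_500, 100]
for amt, flagged in detect_anomalies(stream, window=5):
    if flagged:
        print(f"Anomaly detected: {amt}")
```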

He who knows his data best wins because data is as precious as cash capital. When innovators dismiss mainframes as legacy systems, they’re dismissing their speed and power, and the ability to process massive quantities of data at the speed required for real-time risk detection. 

People think the cloud is game-changing and modern, and that mainframes are outdated by comparison. The concept of cloud computing across a network is indeed modern and game-changing for many. But anyone familiar with mainframe technology will recognize that it has many of the same characteristics as the cloud. For example, when you log into the mainframe you’re logging in to TSO, short for Time Sharing Option. You have your own TSO session, your own ‘instance,’ much as you would in Microsoft Teams.

You’re all using the same processors on the mainframe, but when you’re not running a program or batch job, capacity is given to those who need it. You’re also logging into an LPAR, or logical partition, complete with dedicated storage, security, and privacy. Users on one LPAR can’t access data on another LPAR unless specifically configured to do so. That’s what the cloud is at its core: sharing resources when you aren’t using them, and securing the data dedicated to your instance. The mainframe has been using these concepts for decades.
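
As an illustration of that shared-capacity idea, here is a toy Python sketch: partitions draw on one fixed pool, idle entitlement flows to busy partitions, and each partition’s data stays private. The names, weights, and allocation rule are simplified assumptions, not a model of how a real hypervisor behaves.

```python
# Toy sketch of the LPAR idea described above: partitions share a fixed pool
# of processor capacity, idle entitlement flows to busy partitions, and each
# partition's data stays isolated. All names and numbers are illustrative.
class LPAR:
    def __init__(self, name, weight, demand):
        self.name = name        # logical partition name
        self.weight = weight    # entitled share of the shared pool
        self.demand = demand    # capacity this partition currently needs
        self.data = {}          # private storage; no cross-LPAR access

def allocate(lpars, total_capacity=100):
    """Give each LPAR its entitled share, then redistribute unused capacity."""
    total_weight = sum(p.weight for p in lpars)
    grants = {p.name: min(p.demand, total_capacity * p.weight / total_weight)
              for p in lpars}
    spare = total_capacity - sum(grants.values())
    for p in sorted(lpars, key=lambda p: p.demand - grants[p.name], reverse=True):
        extra = min(spare, p.demand - grants[p.name])
        grants[p.name] += extra
        spare -= extra
    return grants

prod = LPAR("PROD", weight=70, demand=90)  # busy production partition
test = LPAR("TEST", weight=30, demand=5)   # mostly idle test partition
print(allocate([prod, test]))              # TEST's idle share flows to PROD
```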

 

2. Hybrid infrastructure—mixing mainframes with newer cloud layers—is becoming the norm. From your experience, what are the real risk factors introduced when organizations try to modernize too quickly or superficially?

Of the multiple risk factors, I can boil it down to two. 

The first risk is data consumption. The data on a mainframe is some of the most secure data anywhere. When you take it off the mainframe or make it visible to someone ingesting that data, there’s a risk to data privacy and regulatory compliance. Who’s looking at it? Where is it going once it leaves the mainframe?

The second risk is in optimizing applications to run in a hybrid environment. Applications optimized for the mainframe may end up running sub-optimally on another server. Latency and performance issues could harm productivity. 

 

3. You’ve raised the alarm about a skills gap in mainframe expertise. How serious is the institutional risk when fewer engineers know how to operate and secure the systems financial institutions still depend on?

The risk is severe. Newer developers (not just younger ones, but those new to the industry) will learn and grow their expertise. But until the next generation catches up, financial institutions will carry exposure for some time, because institutional knowledge is not as deep as it needs to be.

Folks with a shallow depth of experience or knowledge may do things inadvertently to cause risk to data or to an operating system. These systems are resilient and have several layers of protection against human error, but there's still a fair amount of risk until skills are where they need to be. Banks are already battling this skills gap today.

 

4. Security conversations often focus on tools, but you've pointed out that people are still the frontline. What operational blind spots have you seen emerge most often in the management of mainframe environments?

Managing these environments usually centers on elevated permissions. When a software engineer is writing code, they sometimes need elevated permission to do something specific on the operating system, such as enabling a program to perform a more sensitive operation. An engineer who hasn’t absorbed the best practices for that system won’t know when to enter and exit that elevated authorized state. And because that state brings more risk, engineers don’t stay in it long enough to fully learn those best practices in the first place.
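
That discipline of entering and leaving the elevated state deliberately can be pictured with a small Python sketch. The `Session` object and audit prints are hypothetical stand-ins, not a real mainframe API.

```python
# Hedged sketch of the discipline described above: take elevated authority
# only for the specific sensitive operation, and always drop back out.
from contextlib import contextmanager

class Session:
    def __init__(self, user):
        self.user = user
        self.elevated = False

@contextmanager
def elevated(session, reason):
    """Grant elevated authority for one audited block, then revoke it."""
    session.elevated = True
    print(f"AUDIT: {session.user} elevated ({reason})")
    try:
        yield session
    finally:
        session.elevated = False  # never linger in the elevated state
        print(f"AUDIT: {session.user} elevation revoked")

s = Session("dev01")
with elevated(s, reason="update system dataset"):
    pass  # perform only the sensitive operation here
assert not s.elevated  # back to normal authority afterwards
```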

There are also some fundamental security best practices that apply to any IT network. When you give special authorization to someone in a certain role, you need a clear process in place to remove that authorization when they switch roles. Much of the time it isn’t an issue, as long as the person is still an employee and isn’t a bad actor. But there’s always a risk in leaving sensitive data available to people who no longer need it.
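
The role-change hygiene described here amounts to making a role assignment replace old grants rather than accumulate them. A minimal Python sketch, with made-up roles and permissions:

```python
# Minimal sketch of role-change hygiene: switching roles removes the grants
# tied to the old role instead of stacking them. Names are illustrative only.
ROLE_PERMISSIONS = {
    "payments_engineer": {"read_payments", "write_payments"},
    "reporting_analyst": {"read_reports"},
}

class Directory:
    def __init__(self):
        self.assignments = {}  # user -> current role

    def assign_role(self, user, role):
        self.assignments[user] = role  # replaces, never merges, old grants

    def permissions(self, user):
        return ROLE_PERMISSIONS.get(self.assignments.get(user), set())

d = Directory()
d.assign_role("alice", "payments_engineer")
d.assign_role("alice", "reporting_analyst")            # role switch
assert "write_payments" not in d.permissions("alice")  # old access is gone
```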

Furthermore, mainframe system-level data sets allow users to do fundamental things to a system. You only want certain users to have access to those functions. For example, certain security controls can only be toggled at the deeper levels of the operating system. You would be surprised at how often companies leave basic security principles unchecked. There are ways for engineers to do their jobs without having access to those root-level resources, but it's easier to work with that level of access, so companies leave the backdoor open more than they should. 

Most employees can be trusted, but these are fundamental principles some financial institutions leave open and forget about.

 

5. Ransomware attacks are targeting not just endpoints, but core infrastructure. What makes legacy systems both uniquely vulnerable—and, in some cases, more resilient—than newer platforms?

Mainframes have built-in layers of security that most servers simply lack. Just because you can log into the mainframe doesn’t mean you have access to business-critical data, which is what ransomware usually locks down. You then have to know where the data is and how to access it. Beyond that, the data might be compartmented, so an attacker only gets a segment of the data and not everything they need for a successful ransomware attack. And if you don’t have access to the storage device, you can’t see the data on that device.
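
Those layers can be sketched as independent checks, where passing one grants nothing about the next. The dataset, compartment, and device names below are invented for illustration, not taken from any real security product.

```python
# Sketch of layered defenses: a login alone does not reach the data.
from dataclasses import dataclass, field

@dataclass
class User:
    authenticated: bool
    visible_datasets: set = field(default_factory=set)
    compartments: set = field(default_factory=set)
    devices: set = field(default_factory=set)

def read_dataset(user, dataset, compartment, device, contents):
    """Each layer is an independent check; failing any one stops the read."""
    if not user.authenticated:
        raise PermissionError("not logged in")      # layer 1: login
    if dataset not in user.visible_datasets:
        raise PermissionError("dataset unknown")    # layer 2: knowing where data is
    if compartment not in user.compartments:
        raise PermissionError("wrong compartment")  # layer 3: compartmented data
    if device not in user.devices:
        raise PermissionError("no device access")   # layer 4: storage device
    return contents[dataset]

intruder = User(authenticated=True)  # logged in, but nothing else
try:
    read_dataset(intruder, "PAY.MASTER", "payments", "VOL001",
                 {"PAY.MASTER": "..."})
except PermissionError as e:
    print("blocked:", e)  # the login alone gets the intruder nowhere
```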

 

6. From your experience, what does effective modernization actually look like for financial institutions that can't afford to “rip and replace” but need to be future-proofed?

Modernization means different things at different companies because of where they are with the applications they run. Whether B2B or B2C, companies are modernizing continually, upgrading servers and laptops. 

The same thing happens with business-critical applications. A business might periodically update those applications, but because traditional mainframe applications were developed generations ago, the best thing companies can do is fully assess what each application does end-to-end. That way they can phase their modernization in manageable pieces.

Companies can compartmentalize an application, breaking it into pieces so the different features and functions get upgraded and rewritten slowly over time as is affordable. If you look at modernization as an ongoing process, the urge to improve and iterate becomes continual. 

Leaders should always have a proactive mindset. The questions should be: “What can we do now? What can we contain this year? What can we contain in the next two years?” That’s a better approach than “how do we rewrite this whole thing?”

You have to iterate on systems and build them out over time. Start by rewriting one feature of a business-critical application, then build on that by adding the rest of the features as you can. Phase changes in a little at a time. 
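
One common way to implement that feature-by-feature phasing is a routing facade, sometimes called the strangler-fig pattern (a name the interview itself doesn’t use): migrated features go to the new implementation, while everything else still reaches the legacy system. A hedged Python sketch with hypothetical feature names:

```python
# Sketch of phasing a rewrite one feature at a time: a routing facade sends
# migrated features to the new implementation and everything else to the
# legacy system. Feature names and handlers are hypothetical.
def legacy_handler(feature, payload):
    return f"legacy system handled {feature}"

def new_statements(payload):
    return "rewritten statements service handled statements"

MIGRATED = {
    "statements": new_statements,  # rewritten this quarter
    # "payments": ...              # next phase, when budget allows
}

def route(feature, payload):
    """Callers never change; the facade decides old vs. new per feature."""
    handler = MIGRATED.get(feature)
    if handler is not None:
        return handler(payload)
    return legacy_handler(feature, payload)

print(route("statements", {}))  # goes to the rewrite
print(route("payments", {}))    # still served by the legacy application
```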

Rip-and-replace is one option. It sounds raw and brutal, but all it really means is to stop using one system and start using another. Leadership needs to have the stomach for a big change all at once, and has to approve the budget. In truth, it’s really just “replace,” because the process can take years to complete.

 

7. For tech leaders coming from a cloud-first mindset, what would you say is the most important shift in thinking when engaging with mission-critical mainframe systems?

Learn what the mainframe is actually doing. The Hippocratic Oath says first, do no harm; learn what the mainframe is responsible for so you can avoid making harmful errors. Once those with a cloud-first mindset understand the totality of the transactions coming into the mainframe, the nature of those transactions, and how much of their company’s revenue depends on them, they’ll know how to avoid damaging their company’s performance and profitability.

 


 

About Jennifer Nelson

Jennifer Nelson has spent most of her career in the mainframe space, including 15 years at Rocket Software and five years at BMC. In 2019, she transitioned into senior engineering roles at global technology firms outside the Z Systems ecosystem, broadening her perspective and skill set. In early 2024, Nelson began laying the foundation for what would become Izzi Software, a company focused on acquiring and growing software businesses built on IBM Z and IBM Power platforms.
 
