Onur Alp Soner is the co-founder and CEO of Countly.
When a data breach makes the news, it’s usually framed as an exception – a misconfiguration, an overlooked permission, a human error that could have happened to anyone. The discussion often stops there, as if the incident itself were the cause. In reality, breaches are more often signals than failures. They expose dependencies that became too central and too opaque long before anything went wrong. By the time data is leaked, the risk has usually been building quietly for years.
For a long time, analytics sat in a safe mental category. It was seen as observational, something that watched the system rather than shaped it. Unlike payments, identity, or core infrastructure, analytics was rarely treated as a layer that could materially affect outcomes.
In fintech, especially, analytics now influences how systems evolve and how decisions are made, shaping product behaviour, risk controls, and even automation. Yet the infrastructure behind it is still often external, running on third-party platforms outside the organisation’s direct control.
This is the invisible dependency we stopped questioning.
Why "no PII" stopped being a sufficient definition of safety
When teams justify outsourcing analytics, the argument usually centres on personal data. Events are anonymised. No names or emails are collected. Without PII, the risk is assumed to be low.
While that logic held when analytics was mainly about counting users and sessions, it breaks down once analytics starts capturing how systems behave.
Modern event data does far more than describe individual users. It exposes the internal structure. Feature names, internal URLs, experiment variants, error states, timing patterns, and backend responses reveal how a product is designed and how decisions flow through it. None of this directly identifies a person, yet together it can reconstruct large parts of an organisation's internal logic.
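To make this concrete, here is a sketch of what a single "anonymised" event can look like. Every field name and value below is hypothetical, invented for illustration; the point is that no field identifies a person, yet the payload still documents internal architecture.

```python
# A hypothetical analytics event. No PII, yet it records feature naming,
# internal routing, experiment assignment, backend error semantics,
# and timing, i.e. how the product is built and how decisions flow.
event = {
    "event": "feature_used",
    "feature": "instant_credit_limit_increase",  # internal feature name
    "url": "/internal/risk/review-queue",        # internal route
    "experiment": "risk_scoring_v3_holdout",     # experiment variant
    "error_code": "KYC_ESCALATION_REQUIRED",     # backend state
    "latency_ms": 4200,                          # timing pattern
}

# The payload contains no direct identifiers...
assert "name" not in event and "email" not in event
# ...but it still describes the organisation's internal logic.
print(sorted(event.keys()))
```

Any one of these fields looks innocuous; the combination is a partial blueprint of the system that emitted it.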
This is where the mosaic effect becomes relevant in practice. Individual events appear harmless in isolation. Aggregated over time, across features and flows, they reveal how a product really works. In fintech, this has real consequences. Even anonymised events can hint at approval thresholds, risk scoring rules, or escalation paths. The sensitivity of analytics data today lies less in who it tracks and more in what it reveals.
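The mosaic effect can be sketched in a few lines. The event stream below is entirely fabricated for illustration, but it shows the mechanism: each anonymised decision event is harmless alone, while the aggregate brackets an internal approval threshold.

```python
# Hypothetical anonymised decision events: no names, no account IDs,
# just an amount and an outcome. Each event is harmless in isolation.
events = [
    {"event": "loan_decision", "amount": 3000, "outcome": "approved"},
    {"event": "loan_decision", "amount": 4500, "outcome": "approved"},
    {"event": "loan_decision", "amount": 4900, "outcome": "approved"},
    {"event": "loan_decision", "amount": 5100, "outcome": "declined"},
    {"event": "loan_decision", "amount": 6000, "outcome": "declined"},
]

# Aggregated, the stream reveals where the decision boundary sits.
max_approved = max(e["amount"] for e in events if e["outcome"] == "approved")
min_declined = min(e["amount"] for e in events if e["outcome"] == "declined")
print(f"Threshold lies between {max_approved} and {min_declined}")
# prints: Threshold lies between 4900 and 5100
```

With enough events, the bracket tightens. Nothing in the stream names a person, yet the risk rule itself leaks.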
The limits of "we handle security for you"
Analytics vendors excel at scale, performance, and speed of integration. Those strengths matter. What they don’t optimise for is long-term safety, regulatory defensibility, or an organisation’s ability to explain its own architecture under scrutiny.
When vendors say they "handle security," they usually mean the complexity is hidden. You can’t see how data is combined, retained, or what secondary signals are derived. Invisibility is sold as simplicity, but control is replaced with trust. Standards like SOC2 validate controls, not architectural choices. A system can be fully certified and still concentrate sensitive analytics data in ways that would be difficult to justify under scrutiny.
That trade-off may be acceptable elsewhere. For analytics that shape decisions, it creates structural risk by replacing verifiable safety with hidden systems and assumed trust.
Financial ledgers already operate under this logic: traceability, auditability, and ownership are non-negotiable. Analytics now shapes decisions that are just as consequential, but it has not yet been treated with the same discipline.
How structural risk accumulates in analytics systems
Most analytics incidents don’t stem from a single bad choice. They emerge gradually, as systems take on responsibilities they were never designed to hold.
Teams add more events, then more context, then more metadata. Feature flags, experiment IDs, internal error codes, backend states, and user classifications slowly find their way into event streams. Over time, analytics becomes a detailed mirror of how the product actually works. At that point, it stops being a passive reporting layer and becomes a form of institutional memory.
When data is exposed, what leaks is rarely just raw numbers. It is structure: how features are rolled out, how decisions are staged, how services interact, and how edge cases are handled. Recent incidents have shown this clearly, with logs once considered harmless revealing internal routing logic, experiment configurations, admin paths, and behavioural patterns that should never have left organisational control.
AI does not introduce this risk, but it amplifies it. Behavioural analytics increasingly feeds automated decision systems, meaning structural exposure can influence model behaviour, bias, and decision logic. A single incident can affect not just transparency, but how systems act going forward.
In fintech, the impact is amplified further. Analytics data often sits close to systems that assess trust, detect fraud, or automate approvals. Even when analytics doesn’t make decisions itself, it increasingly shapes the systems that do.
Convenience as a substitute for scrutiny
For teams under pressure to move fast, polished dashboards, quick integrations, and instant insights are hard to resist. Over time, though, convenience tends to replace scrutiny. Few organisations map their analytics data flows in detail, assess how difficult it would be to exit a platform, or account for how much institutional knowledge has effectively been outsourced. This is rarely a deliberate choice. It’s the result of treating analytics as tooling rather than infrastructure.
This isn’t an argument against third-party services in general. In fact, some layers are well-suited to being rented, especially when failure is contained and exit is straightforward. The distinction that matters is whether a system shapes outcomes.
To put it plainly, any system that influences access, trust, eligibility, or core user experience should be visible, auditable, and fully understood by the organisation that relies on it. Systems that are easy to replace and do not encode institutional logic can safely live outside the institution.
A simple test clarifies the boundary: if this system disappeared tomorrow, would you still be able to explain how your product behaves and why decisions are made the way they are?
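The boundary described above can be phrased as a checklist. The function and example systems below are hypothetical, a self-assessment sketch rather than a formal framework.

```python
# Hypothetical sketch of the ownership boundary: a system belongs inside
# the organisation if it shapes outcomes (access, trust, eligibility,
# core user experience) or encodes institutional logic.
OUTCOME_AREAS = {"access", "trust", "eligibility", "core_ux"}

def must_own(system: dict) -> bool:
    """Return True if the system should be visible, auditable,
    and fully understood in-house rather than rented."""
    shapes_outcomes = bool(system["influences"] & OUTCOME_AREAS)
    return shapes_outcomes or system["encodes_institutional_logic"]

# Illustrative systems (invented for this example):
analytics = {"influences": {"trust"}, "encodes_institutional_logic": True}
crash_reporter = {"influences": set(), "encodes_institutional_logic": False}

print(must_own(analytics))       # True: shapes trust decisions
print(must_own(crash_reporter))  # False: replaceable, no encoded logic
```

The test in the paragraph above is the same question in prose: if the system vanished tomorrow, could you still explain how your product behaves and why?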
The broader accountability question
Fintech systems increasingly function as public-facing infrastructure. They shape who can open accounts, access credit, or participate in the economy. That reality shifts the responsibility model. Architectural decisions are no longer purely internal technical choices; they carry societal consequences.
When critical layers such as cloud platforms, analytics systems, or AI models are concentrated in a small number of opaque systems, failures and unexplained decisions can ripple far beyond a single company. Invisible dependencies do more than increase security risk. They weaken accountability.
Ultimately, if a system cannot be seen, it cannot be governed. And systems that cannot be governed should not be trusted with decisions that materially affect people's lives. Analytics stopped being purely observational some time ago. Our architecture, standards, and assumptions have yet to catch up.
About the author
Onur Alp Soner is the co-founder and CEO of Countly, a digital analytics and in-app engagement platform. A technologist and self-starter, he bootstrapped Countly from the ground up to give companies more control over how they understand and interact with their users. Under his leadership, Countly has grown into a trusted platform for enterprises worldwide that want to innovate quickly while keeping user privacy at the centre of their growth strategies.