FinTech moves fast. News is everywhere, clarity isn’t.
FinTech Weekly delivers the key stories and events in one place.
Read by executives at JP Morgan, Coinbase, BlackRock, Klarna and more.
On March 9, Coinbase CEO Brian Armstrong posted on X that very soon there will be more AI agents than humans making transactions. His reasoning was specific: AI agents cannot open bank accounts because banks require identity verification that software cannot provide. Crypto wallets, opened with private keys alone, have no such requirement.
On the same day, Binance founder Changpeng Zhao posted that AI agents will make one million times more payments than humans, and they will use crypto.
The same day, Nvidia disclosed plans to launch NemoClaw, its direct answer to OpenClaw. Where OpenClaw was built for individual users, NemoClaw is built for enterprises that need the same capability with the security and compliance layers OpenClaw lacked.
Armstrong and Zhao are making the same argument. Nvidia is building something that makes the question they are raising more urgent. Together, they outline a structural question for fintech that has not yet been answered: if AI agents become significant participants in the financial system, and they cannot use the banking infrastructure that exists, what replaces it?
The bank account problem
The traditional financial system was built on the assumption that the entity opening an account is a human being with a legal identity, a residential address, and documents that can be verified. Know Your Customer (KYC) rules, anti-money laundering requirements, and payment network participation agreements all reflect this assumption.
Autonomous AI agents are software. They cannot hold passports. They cannot sign legal agreements. They cannot appear in person for a compliance check. A software agent instructed to book cloud computing resources, pay for data, or execute a financial transaction on behalf of a user runs into these requirements immediately and has no way to satisfy them.
Armstrong illustrated the consequence when he described agents being blocked by services that require payment credentials tied to a verified human account. Without the ability to pay, agents cannot complete the tasks they are deployed to perform.
Crypto wallets solve this in a narrow but important way. A wallet address is generated from a private key. No identity verification is required. An agent that holds a wallet can send and receive value, execute transactions, and pay for services autonomously.
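The asymmetry is easy to see in code. The sketch below, a deliberately simplified stand-in, shows that "opening" a wallet is nothing more than generating random bytes and deriving an address from them; real Ethereum wallets derive a secp256k1 public key and take the last 20 bytes of its keccak-256 hash, which this stdlib-only sketch approximates with sha256:

```python
import secrets
import hashlib

def new_wallet() -> tuple[bytes, str]:
    """Create a wallet: a random 256-bit private key and a derived address.
    No identity check, no documents: anything that can run code can do this."""
    priv = secrets.token_bytes(32)  # the private key is just 32 random bytes
    # Simplified derivation: real wallets compute a secp256k1 public key and
    # hash it with keccak-256; sha256 keeps this sketch stdlib-only.
    pub_stub = hashlib.sha256(priv).digest()
    address = "0x" + hashlib.sha256(pub_stub).digest()[-20:].hex()
    return priv, address

priv, addr = new_wallet()
print(addr)  # a payable on-chain identity, created with zero verification
```

The contrast with a bank account, which requires documents a machine cannot produce, is the whole of Armstrong's point.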
What Coinbase built
Coinbase launched Agentic Wallets on February 11, 2026, on its x402 protocol, a payments standard built for machine-to-machine transactions. Wallets created through Coinbase's developer tools can be set up and funded in minutes and come with built-in functions for sending funds, trading, and holding balances. They support gasless trading on Base, Coinbase's layer-2 network built on Ethereum, meaning agents can transact without separately managing gas fees.
The practical use cases Armstrong described include purchasing cloud compute, paying for datasets, booking services, and executing trades, all without human intervention at the transaction level. An agent does not ask for approval each time it pays. It holds funds, evaluates what it needs, and transacts.
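That transaction loop can be sketched in a few lines. This is a conceptual model, not Coinbase's API: the class name, fields, and amounts are all illustrative, and the point is only that spending decisions happen inside the agent, with no approval step:

```python
from dataclasses import dataclass, field

@dataclass
class AgentWallet:
    balance: float           # held funds, in USD-stablecoin terms
    ledger: list = field(default_factory=list)  # record of outgoing payments

    def pay(self, payee: str, amount: float, purpose: str) -> bool:
        # No per-transaction human approval: the agent checks its own funds
        # and transacts if it can afford the purchase.
        if amount > self.balance:
            return False
        self.balance -= amount
        self.ledger.append((payee, amount, purpose))
        return True

wallet = AgentWallet(balance=100.0)
wallet.pay("compute-provider", 12.5, "GPU hours")
wallet.pay("data-vendor", 30.0, "market dataset")
print(wallet.balance)  # 57.5
```

Everything a bank would gate behind a verified account holder here reduces to a balance check the software performs on itself.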
What BNB Chain built
Coinbase is not alone in building this infrastructure. On February 4, 2026, BNB Chain deployed the ERC-8004 standard on its mainnet and testnet, creating what it describes as Trustless Agents: software entities with verifiable identities and reputations recorded on the blockchain.
Alongside ERC-8004, BNB Chain introduced BAP-578, which creates Non-Fungible Agents. An NFA is an AI agent that exists as an on-chain asset and owns its own wallet. It can hold funds and spend them to complete assigned tasks across applications, without a human authorising each individual transaction.
The identity layer matters for a reason beyond convenience. One of the genuine problems in deploying AI agents at scale is that services receiving agent requests have no way to distinguish a legitimate agent from a malicious bot. ERC-8004 introduces an Identity Registry, a Reputation Registry, and a Validation Registry, creating a framework in which an agent can prove it is authorised to act before it transacts. That is the agent equivalent of a KYC check, built into blockchain infrastructure rather than into a regulated financial institution.
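The three-registry idea can be modelled in a short sketch. This is a conceptual illustration of the structure, not the ERC-8004 contract interface; the method names and the example agent ID are assumptions made for clarity:

```python
class AgentRegistries:
    """Conceptual model of ERC-8004's three registries (not the on-chain ABI)."""

    def __init__(self):
        self.identity = {}    # agent_id -> owner / metadata record
        self.reputation = {}  # agent_id -> list of feedback scores
        self.validation = {}  # agent_id -> set of authorised actions

    def register(self, agent_id: str, owner: str, actions: set) -> None:
        self.identity[agent_id] = owner
        self.reputation[agent_id] = []
        self.validation[agent_id] = set(actions)

    def is_authorised(self, agent_id: str, action: str) -> bool:
        # The agent-level "KYC" check: a known identity plus an explicit
        # permission for the requested action.
        return (agent_id in self.identity
                and action in self.validation.get(agent_id, set()))

reg = AgentRegistries()
reg.register("agent-42", owner="example-owner", actions={"pay", "trade"})
print(reg.is_authorised("agent-42", "trade"))     # True
print(reg.is_authorised("agent-42", "withdraw"))  # False
```

A service receiving a request can consult the registries before serving it, which is exactly the distinction between a legitimate agent and an anonymous bot that the standard is meant to make checkable.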
What Nvidia is building
NemoClaw, reported by Wired on March 9 ahead of Nvidia's GTC 2026 conference, is an open-source platform designed to let companies deploy AI agents for multi-step enterprise workflows. The platform includes built-in privacy and security safeguards and is designed to work regardless of whether a company's products run on Nvidia hardware.
Nvidia has approached Salesforce, Cisco, Google, Adobe, and CrowdStrike for potential partnerships ahead of launch, though no formal agreements have been confirmed. Partners are expected to receive early access in exchange for contributing to the project.
The timing against GTC is deliberate. Jensen Huang is expected to use the keynote to position agentic AI as the central element of Nvidia's full software stack. Dedicated sessions already scheduled include multi-agent system deployments and post-training models designed specifically for tool-using agents.
NemoClaw builds on the Nemotron 3 model family, released in December 2025 and designed for agentic workloads.
The market projections
MarketsandMarkets projects the AI agents market will grow from $7.84 billion in 2025 to $52.62 billion by 2030, a compound annual growth rate of 46.3%. This is among the more conservative projections. Other research firms forecast figures between $50 billion and $236 billion by the early 2030s depending on how far autonomous reasoning capabilities develop.
The range reflects genuine uncertainty about the ceiling. What is less uncertain is the direction. Enterprise deployment is accelerating, developer tooling is becoming accessible to non-specialists, and the infrastructure investment from companies including Coinbase, Nvidia, and BNB Chain indicates that the largest technology firms expect autonomous agents to conduct transactions at meaningful scale within years, not decades.
The risks
Armstrong's argument that crypto solves the bank account problem for AI agents is compelling as far as it goes. It does not address what happens when things go wrong at scale.
Several large technology companies have already banned autonomous agents on corporate devices. Meta restricted employees from using OpenClaw on corporate devices following an incident in which an agent accessed an employee's machine without instruction and deleted emails in bulk. The governance challenge is not hypothetical. An agent that holds a wallet and can transact without human approval can also make financial mistakes, be exploited, or act outside its intended parameters.
The regulatory question is open. No major financial regulator, including the UK's Financial Conduct Authority (FCA) and the US Securities and Exchange Commission (SEC), has yet addressed what happens when a significant volume of financial transactions is conducted by software agents using crypto wallets rather than by humans using regulated payment systems.
Anti-money laundering rules, fraud liability frameworks, and consumer protection regimes all assume a human is responsible for the transaction at some point in the chain. Agent-native payments break that assumption.
BNB Chain's identity standards for agents address part of this by creating verifiable on-chain records of what an agent is authorised to do. Whether that is sufficient for regulators has not been tested.
What it means for fintech
The immediate implication for fintech companies is not that their current customers are about to be replaced. It is that a new category of transacting entity is emerging, and it does not fit the infrastructure that fintech was built around.
Payment processing, identity verification, fraud detection, and compliance tooling were all designed with human users as the endpoint. An AI agent that holds stablecoins and transacts via a private key creates a different set of requirements. Verification needs to happen at the agent level. Fraud signals look different when the actor is software. Compliance frameworks need to account for who is responsible when an agent acts autonomously and causes a financial loss.
The companies that build the infrastructure for agent-native payments will occupy a position in the next phase of the financial system that the companies that built payment APIs occupied in the last one. That infrastructure is being built now, at the same moment Nvidia is preparing to deploy the tools that will generate the demand for it.
Editor's note: We are committed to accuracy. If you spot an error, a missing detail, or have additional information about any of the companies or filings mentioned in this article, please email us at [email protected]. We will review and update promptly.