By Stephanie O’Connor, Wind River Payments.
Fraud tools are designed around how people typically shop: how they move through a site, how long they take to browse and what they change before clicking buy. Those signals largely determine whether a transaction is judged legitimate.
Modern fraud systems are already capable of identifying traditional bot behavior. The challenge with agentic commerce is different. AI agents can be trained to mimic human patterns closely enough that they become hard to distinguish from human shoppers on those signals alone.
Even when fraud systems work as intended, separate issues emerge when AI starts making purchasing decisions.
AI agents are typically built to optimize for price and speed. They don’t stop to question things a human might: a price that looks slightly too low, a seller that isn’t an authorized retailer, or a listing that doesn’t quite match the brand’s official product page. They execute instructions. That efficiency may improve conversion rates, but it also removes the layers of informal risk filtering that humans naturally apply.
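Those informal filters can be made explicit. As a minimal sketch, the checks a cautious human applies could be encoded as pre-purchase guardrails an agent runs before checkout. Every field name and threshold below is an illustrative assumption, not a standard:

```python
# Hypothetical pre-purchase guardrails for a buying agent.
# The listing schema, the 0.7 price floor, and the authorized-seller
# list are all illustrative assumptions.

def passes_risk_filters(listing: dict, market_price: float,
                        authorized_sellers: set) -> bool:
    """Return True only if the listing clears basic human-style sanity checks."""
    # Flag prices far below the going rate -- a common counterfeit signal.
    if listing["price"] < 0.7 * market_price:
        return False
    # Require the seller to appear on the brand's authorized-retailer list.
    if listing["seller"] not in authorized_sellers:
        return False
    # Reject listings whose title diverges from the brand's official naming.
    if listing["brand"].lower() not in listing["title"].lower():
        return False
    return True
```

A naive price-optimizing agent skips all three checks; adding them trades a little conversion for the judgment a human buyer applies for free.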
Price optimization puts immediate pressure on small and medium-sized businesses. If an agent is instructed to “buy X under $Y,” the lowest-cost seller wins. Larger manufacturers and high-volume marketplace operators are structured to compete on price. Many SMBs compete on service, specialization, and customer trust. Automated buying weakens those advantages.
Counterfeit listings also become machine-optimized opportunities. While a human buyer would recognize that a deeply discounted product looks suspicious, an AI agent won’t, unless it’s been explicitly programmed to assess brand legitimacy and pricing patterns. Counterfeit sellers do not need to price far below market to win. Even slight undercuts are enough to capture automated purchases.
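The "slight undercut" dynamic is easy to see in a toy example. A naive "buy X under $Y" agent simply takes the cheapest qualifying listing, so an unverified seller wins with even a one-percent undercut. The listings and field names here are invented for illustration:

```python
# Illustrative only: a naive price-optimizing agent always picks the
# cheapest listing under its budget, ignoring seller verification.

listings = [
    {"seller": "authorized-retailer", "price": 49.99, "verified": True},
    {"seller": "unknown-storefront",  "price": 49.49, "verified": False},  # ~1% undercut
]

def naive_agent(listings, max_price):
    """Buy the cheapest listing at or under max_price -- nothing else considered."""
    candidates = [l for l in listings if l["price"] <= max_price]
    return min(candidates, key=lambda l: l["price"])

winner = naive_agent(listings, max_price=50.00)
# winner is the unverified seller: a 50-cent undercut captures the sale
```

The counterfeit seller never needs to look like a bargain; it only needs to sort first.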
Spoofed domains and websites add further risk. If agents transact autonomously, they must assess whether a site is legitimate. A cloned website can intercept automated orders before the consumer realizes anything is wrong. The reputational damage falls on the real merchant. Smaller businesses tend to lack the monitoring tools and security resources larger enterprises use to detect and shut down impersonation quickly.
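One piece of that assessment can be automated cheaply: before transacting, an agent can compare the site's domain against a list of known merchants and treat near-misses as likely clones. This is a minimal sketch, assuming a hypothetical allowlist and an arbitrary 0.85 similarity threshold; a production system would use certificate checks and registry data, not string similarity alone:

```python
# Minimal domain-legitimacy check for a buying agent.
# KNOWN_MERCHANTS and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_MERCHANTS = {"windriverpayments.com", "example-shop.com"}

def domain_verdict(domain: str) -> str:
    if domain in KNOWN_MERCHANTS:
        return "trusted"
    # A near-match to a known merchant is the riskiest case: a likely clone
    # (e.g. a digit swapped in for a letter) rather than a new merchant.
    for known in KNOWN_MERCHANTS:
        if SequenceMatcher(None, domain, known).ratio() > 0.85:
            return "suspected-clone"
    return "unknown"
```

The important design point is the middle branch: for an autonomous buyer, a lookalike domain should be treated as more dangerous than an unfamiliar one, because it is specifically crafted to intercept orders meant for the real merchant.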
From the payments layer, we see how fast exposure moves when transaction behavior changes. Chargeback models, fraud scoring, and dispute processes were designed around human purchasing behavior. If AI-driven transactions increase counterfeit disputes or unauthorized purchase claims, SMBs will absorb the financial impact first.
Even if consumer adoption is gradual, infrastructure decisions are happening now. Payments and software providers need to adjust risk models before automated buying scales.
That means:
- Updating fraud models to factor in machine-led behavior
- Implementing machine-readable merchant verification standards
- Monitoring for cloned or lookalike websites
- Clarifying liability and dispute handling for AI-initiated purchases
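No machine-readable merchant verification standard exists yet, so as a hedged sketch of the second item above, one shape it could take is a registry-signed merchant record that an agent verifies before transacting. Everything here, the record fields, the shared-key signing, the registry itself, is a hypothetical stand-in; a real standard would likely use public-key infrastructure:

```python
# Hypothetical machine-readable merchant verification: a registry signs
# merchant records, and a buying agent verifies the signature before
# transacting. The key, fields, and scheme are illustrative assumptions.
import hashlib
import hmac
import json

REGISTRY_KEY = b"demo-registry-key"  # stand-in; a real registry would use PKI

def sign_record(record: dict) -> str:
    """Registry side: produce a signature over a canonical JSON encoding."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()

def is_verified_merchant(record: dict, signature: str) -> bool:
    """Agent side: accept the record only if the registry signature checks out."""
    return hmac.compare_digest(sign_record(record), signature)
```

The point of the sketch is the contract, not the crypto: an agent gets a yes/no answer it can act on automatically, instead of eyeballing a storefront the way a human would.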
AI-driven commerce can be more efficient. But without changes at the infrastructure level, it will also shift fraud exposure and pricing pressure onto the smallest players in the market.
If the buyer changes, risk models and liability frameworks must change with it.
About the author
Stephanie O’Connor is Director of Operations and Merchant Experience at Wind River Payments, where she leads a team of relationship managers who work directly with clients to help them navigate the complexities of modern payments—from transaction processing to fraud prevention and customer experience. She brings more than a decade of financial services industry experience working closely with merchants and payment partners.