What’s the Missing Link in Consumer AI Agents?
Imagine you’re a retailer that hands a shopper off to an artificial intelligence agent to return a pair of shoes.
From a system perspective, measuring the agent’s performance seems simple: either it issues a refund or it doesn’t. But from a customer perspective, that only shows a sliver of the experience.
If the refund requires multiple steps, questions or delays, the customer may tolerate it because they want their money back. But that frustration lingers, and they’ll remember it when making their next purchase. When the same refund happens quickly and effortlessly, the experience feels respectful and competent. It may lead to the customer immediately reordering the same product in a different size — or exchanging rather than returning — because the agent understood that the issue was the fit of the shoes, not dissatisfaction with the product as a whole. While the action is identical, the outcome couldn't be more different.
Why Context Isn't the Same as Understanding
Consumer AI agents are quickly moving from experimentation to real deployment. They're booking flights, handling refunds, and representing brands in moments that directly affect customer trust and revenue. In parallel, there has been a surge of interest in context graphs as a foundational layer for agentic AI, with many positioning them as the key to making agents finally “understand” what's happening around them.
That interest is well placed. Context graphs can help agents retain memory, preserve relationships between actions, and maintain continuity across interactions. They solve real problems that hold agents back from reaching their potential. But an important distinction gets lost in the enthusiasm: context graphs are infrastructure, not intelligence.
Much of the current discussion assumes that if an agent can accurately capture what happened and explain why it took a particular action, it's on the path to success. That framing makes sense in controlled environments, but it breaks down quickly in consumer-facing systems. A context graph can tell you that a customer requested a refund, that the agent followed policy, and that the refund was issued. In some cases, it can even explain why the agent made that decision. What it doesn't inherently tell you is whether the interaction strengthened the customer relationship or damaged it. That outcome shows up in conversion rates, repeat purchases, and long-term brand trust. And it's where many otherwise impressive agent deployments begin to fail in the real world.
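To make that gap concrete, here is a minimal sketch of what a context graph typically records. All names are hypothetical: the point is that the structure captures events and the causal links between them, but nothing in it measures relationship impact.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A node in a toy context graph: one thing that happened."""
    name: str
    caused_by: list = field(default_factory=list)

# What the graph captures: actions, and why they were taken.
request = Event("customer_requested_refund")
policy = Event("agent_checked_return_policy", caused_by=[request])
refund = Event("refund_issued", caused_by=[policy])

# Full provenance is recoverable by walking the causal chain...
chain = []
node = refund
while node:
    chain.append(node.name)
    node = node.caused_by[0] if node.caused_by else None
print(" <- ".join(chain))
# ...but nothing in the structure says whether the customer left
# more or less likely to buy again. That signal lives in downstream
# behavior, outside the graph.
```

The walk reconstructs why the refund happened; it cannot reconstruct what the refund did to the relationship.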
Consumer-Facing Agents Change the Stakes Entirely
There's a fundamental difference between internal agents and consumer-facing agents. Internal systems operate in relatively forgiving environments. Employees are trained, processes are known, and if something goes wrong it gets escalated. Consumer-facing agents, on the other hand, operate in open, emotional, high-stakes environments where patience is limited and consequences are immediate. Every interaction is a moment of truth, and anything short of the customer’s desired outcome is a failure.
One of the most consistent mistakes we see in consumer agent design is an overreliance on individual events. Consumer behavior doesn’t work that way. People don’t churn because of a single moment; they churn because of a pattern.
Behavior unfolds over time based on a series of experiences, expectations, and context. A pause, a retry, a return, or a question can mean very different things depending on who the consumer is and what happened before. Flattening those signals into isolated events strips away the meaning that agents actually need to make good decisions. That’s why outcomes cannot be reliably inferred from single decisions. They emerge from trajectories, and those trajectories vary by segment, timing, and circumstance. In real consumer systems, outcomes are probabilistic, not deterministic. The same behavioral pattern may lead to conversion for one segment and abandonment for another, and those probabilities shift as conditions change.
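The claim that identical behavior maps to different outcome probabilities per segment can be sketched in a few lines. The patterns, segments, and probabilities below are invented for illustration, not drawn from real data:

```python
# Hypothetical estimates of P(conversion | pattern, segment).
# The same behavioral pattern carries different meaning per segment.
OUTCOME_MODEL = {
    ("retry_after_error", "loyal"): 0.70,  # loyal shoppers push through friction
    ("retry_after_error", "new"):   0.25,  # new shoppers usually abandon
    ("quick_exchange",    "loyal"): 0.90,
    ("quick_exchange",    "new"):   0.60,
}

def p_conversion(pattern: str, segment: str) -> float:
    """Look up the estimated conversion probability for a trajectory."""
    return OUTCOME_MODEL.get((pattern, segment), 0.5)  # 0.5 = uninformed prior

# Identical event, opposite implications:
loyal = p_conversion("retry_after_error", "loyal")  # 0.7
new = p_conversion("retry_after_error", "new")      # 0.25
```

In production these probabilities would come from a learned model and shift over time; the lookup table only illustrates why an agent that ignores segment context misreads the same signal.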
An agent that cannot reason about this state over time will always be reactive, even if it appears intelligent in a demo.
The Missing Capability: Stateful Reasoning Tied to Outcomes
This is where the conversation needs to go next.
If context graphs provide structure, agents still need a way to reason about intent and outcomes. They need to understand where a consumer is in their journey, which behavioral pattern they're exhibiting, what segment context applies, and what outcomes are most likely given those conditions. That requires systems that can model sequences, preserve temporal state, and continuously update outcome likelihoods as new behavior unfolds. It also requires operating at a level of dimensionality that most analytics systems struggle to handle without collapsing nuance in the name of performance.
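One way to realize "continuously update outcome likelihoods as new behavior unfolds" is a simple Bayesian filter over outcomes. This sketch, with invented likelihood values, carries a posterior over {convert, abandon} forward as each event is observed:

```python
# Minimal Bayesian update over two outcomes. Likelihood values are
# illustrative assumptions, not calibrated estimates.
LIKELIHOOD = {
    # P(event | outcome)
    "viewed_size_chart": {"convert": 0.6, "abandon": 0.3},
    "long_pause":        {"convert": 0.2, "abandon": 0.5},
    "started_exchange":  {"convert": 0.8, "abandon": 0.1},
}

def update(posterior: dict, event: str) -> dict:
    """Fold one observed event into the outcome posterior (Bayes' rule)."""
    unnorm = {o: p * LIKELIHOOD[event][o] for o, p in posterior.items()}
    total = sum(unnorm.values())
    return {o: v / total for o, v in unnorm.items()}

state = {"convert": 0.5, "abandon": 0.5}  # uninformed prior
for event in ["long_pause", "viewed_size_chart", "started_exchange"]:
    state = update(state, event)  # state persists across the sequence
print(round(state["convert"], 3))  # prints 0.865
```

The key property is statefulness: after the pause the agent leans toward abandonment, but the later events revise that estimate upward. An agent scoring each event in isolation would never make that correction.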
This isn't easy work. It's computationally demanding, statistically complex, and unforgiving of shortcuts. But without it, agents optimize for activity instead of outcomes. They complete tasks — like issuing a refund for the shoes — even if it ultimately degrades the customer experience.
If you’re building consumer AI agents, don’t stop at capturing context. Build systems that learn what works, when it works, and how to do more of it.
Keith Zubchevich is president and CEO of Conviva, the only platform that uses full-census, comprehensive client-side telemetry to give you real-time insights into every digital experience.
At Conviva, Zubchevich helps digital businesses understand what their customers actually experience, not just what dashboards report. He leads the company's work at the intersection of agentic AI, real-time analytics, and customer experience, with a focus on measuring outcomes, friction, and risk once AI is deployed in production.
Keith has spent nearly 20 years scaling Conviva, previously serving as chief strategy officer, and has built a reputation for cutting through AI hype with operator-level realism. He frequently speaks with the media about why customer-facing AI fails, how to measure accuracy and efficiency in automated systems, and why “80% success” in consumer transactions is functionally a failure.
Before Conviva, Keith held senior leadership roles at Riverbed Technology and Cisco. He holds a bachelor’s degree in economics from San Diego State University.