In my conversations with customers, and recently with Keith Kirkpatrick, Vice President and Research Director at Futurum, the same question keeps coming up: Why do so many AI pilots stall before they reach production?
The answer usually isn’t model quality or tooling. It’s whether organizations establish the right constraints, set the right ambition, and design for scale from day one. The counterintuitive lesson I keep seeing: the organizations moving fastest are the ones that keep their guardrails tightest.
As I shared with Keith, organizations that succeed don’t treat AI as a side project. One customer we discussed, a leading American multinational financial services company, is a good example of getting this right. Their first AI deployment wasn’t a vague AI-adoption goal or a generalist chatbot. It focused on a concrete, human problem: employees across branches were spending time searching through hundreds of forms and procedures during customer interactions. The agent allowed branch staff—across consumer, small business, and other roles—to describe what they needed in natural language and be routed instantly to the right process.
The impact was immediate: shorter wait times, smoother interactions, and more time focused on the customer, without introducing new risk. Importantly, the agent didn’t approve transactions or make financial decisions. It helped people move faster, and the guardrails were exactly what made that possible at scale.
By setting a clear ambition while focusing on a high-impact, low-risk use case, they built confidence in the technology and created momentum without forcing the organization to compromise on governance.
Humans stay in control
Scaling AI successfully means keeping people in the loop. I’ve seen that the most successful implementations are explicitly human-led. Transparency is key. Audit trails, activity logs, and clear explanations of agent behavior build trust and make systems easier to improve over time.
Success also requires the right kind of guardrails. In regulated industry workflows, it’s not enough for an agent to be right; it also needs to be explainable. In power of attorney verification, for example, agents can handle extraction, matching, and analysis in seconds, while ambiguous or high-risk cases are escalated to humans. With agents as colleagues, teams can spend less time on routine checks and more time applying judgment where it actually matters.
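The escalation pattern described here can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation: the names (`CheckResult`, `route`, `CONFIDENCE_THRESHOLD`) and the threshold value are assumptions made for the example.

```python
# Hypothetical sketch of confidence-based escalation for a document check.
# All names and the threshold are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # below this, a human reviews the case

@dataclass
class CheckResult:
    extracted_name: str      # what the agent extracted from the document
    match_confidence: float  # how well it matches the record on file
    high_risk: bool          # flagged by policy, regardless of confidence

def route(result: CheckResult) -> str:
    """Auto-complete only clear, low-risk matches; escalate everything else."""
    if result.high_risk or result.match_confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "auto_complete"
```

The point of the sketch is the asymmetry: the agent never decides the hard cases, it only clears the easy ones, which is what keeps the system explainable and auditable.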
Scaling without sprawl
As adoption grows, the challenge isn’t just proliferation—it’s clarity. Successful organizations distinguish between:
- Personal productivity agents.
- Team-level agents.
- Enterprise agents.
Each category carries different expectations for governance and oversight. A personal agent helping someone summarize documents doesn’t carry the same risk as an agent touching customer data across systems. Builders can solve problems for themselves or their teams, but expanding a solution’s reach generally triggers additional review and accountability.
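One way to make the tiering above concrete is a simple policy table. This is an illustrative sketch under assumed tier names and requirements, not a standard or a product feature:

```python
# Illustrative mapping from agent scope to governance expectations.
# Tier names, fields, and values are assumptions for the sketch.
GOVERNANCE = {
    "personal":   {"review_required": False, "data_scope": "own documents"},
    "team":       {"review_required": True,  "data_scope": "team workspace"},
    "enterprise": {"review_required": True,  "data_scope": "governed enterprise data"},
}

TIER_ORDER = ["personal", "team", "enterprise"]

def promotion_requires_review(from_tier: str, to_tier: str) -> bool:
    """Expanding an agent's reach triggers review when the target tier demands it."""
    widened = TIER_ORDER.index(to_tier) > TIER_ORDER.index(from_tier)
    return widened and GOVERNANCE[to_tier]["review_required"]
```

Encoding the tiers as data rather than prose is what lets a platform enforce the distinction automatically instead of relying on every builder to remember it.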
Governance as an enabler
The fastest-moving organizations have created clear, governed pathways for experimentation and deployment of agentic solutions. New solutions are started in constrained environments with limited data access. As they prove value and maturity, they can be promoted into environments with broader reach and tighter oversight. This approach enables innovation while maintaining visibility, accountability, and control over data access and sharing.
Here’s the reality leaders can’t ignore: people will use AI regardless. The choice is whether they do it inside your platform, with your data protected and your policies enforced, or outside of it.
The urgency has changed
Over the past year, what’s changed most isn’t the technology; it’s the level of urgency. Organizations no longer ask why they need AI. They ask how to govern it so they can move now.
In the organizations I’ve worked with, the platforms and controls are often already there. The differentiator is execution: choosing the right first problems, designing with governance in mind, and scaling in a way that keeps humans firmly in control.
That’s how AI moves from a pilot to production—and stays there.
Watch my full conversation with Keith Kirkpatrick to hear more real-world examples and lessons from regulated enterprises putting AI to work today.