Why Most AI Transformations Fail Before They Start
- Evan J Schwartz


The race to adopt artificial intelligence has triggered a paradox: unprecedented technological promise alongside unprecedented organizational failure.
According to MIT's 2025 State of AI in Business report, which analyzed more than 300 initiatives, 95% of generative AI pilots fail to deliver measurable impact on the P&L. Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration.
I approach this challenge from multiple vantage points. I serve as the Chief Innovation Officer at AMCS Group, a global SaaS company delivering enterprise solutions across 80 countries to logistics, field services, and resource-intensive industries such as waste, recycling, and manufacturing. I also teach technical project management, data analytics, and AI-era architecture at Jacksonville University's Davis College of Business and Technology.
From boardrooms to classrooms, the conversation remains the same: AI is coming fast, but the path to adopting it successfully remains unclear.
The real problem? Organizations are asking the wrong questions.
The Fatal Assumption: AI Understands Intent
Organizations tend to believe that AI is self-adapting and good at understanding intent.
The technology is getting better. But we're nowhere near where we need to be.
Here's what actually happens: input parameters change, but prompts don't change without human interaction, so the intent encoded in the agent doesn't shift with those parameters.
Let me give you a specific example from resource-intensive industries.
A human would easily understand that you don't nickel-and-dime your number one customer for making a small mistake at one of their many locations. You don't charge a multi-million-dollar-a-year customer $25 to send a service vehicle to a location that forgot about the appointment. You just resend it.
But for a customer who pays you $90 per year? You absolutely charge a fee to come back out.
A human who understands the larger context of the business instinctively understands this distinction. Business leaders assume you'll get this intuition from an agent. You won't.
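To make the gap concrete, here's a minimal Python sketch of the kind of hard-coded fee rule an agent ends up executing versus the contextual judgment a human applies. All names, thresholds, and dollar figures are illustrative assumptions, not rules from any real system.

```python
# Hypothetical sketch: a static fee rule an agent might follow, versus the
# context a human account manager would actually weigh. Names and thresholds
# are illustrative only.

RETURN_TRIP_FEE = 25.00  # flat fee for re-sending a service vehicle

def agent_fee(missed_appointment: bool) -> float:
    """What a naive agent does: the prompt says 'charge for missed appointments'."""
    return RETURN_TRIP_FEE if missed_appointment else 0.0

def human_fee(missed_appointment: bool, annual_revenue: float) -> float:
    """What a human does: weigh the fee against the relationship.
    You don't bill a multi-million-dollar account $25; you just resend the truck."""
    if not missed_appointment:
        return 0.0
    if annual_revenue >= 1_000_000:   # strategic account: waive the fee
        return 0.0
    return RETURN_TRIP_FEE            # small account: charge the return trip

print(agent_fee(True))                             # 25.0, regardless of who the customer is
print(human_fee(True, annual_revenue=5_000_000))   # 0.0
print(human_fee(True, annual_revenue=90))          # 25.0
```

Until a human rewrites the prompt, the agent keeps applying the first rule no matter how the customer mix changes.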
This becomes the first thread unraveling your strategy.
Poor use case design is a real struggle because leaders don't really understand the limits of a technology that seems to know everything and nothing at the same time.
The Silent Unraveling: What Happens After Deployment
Let's dig in and see how this unraveling multiplies, quietly, over the months.
You've deployed a digital service agent to handle customer service. At first, every issue the agent is well-suited to handle gets handled, and all seems good.
Then frustration sets in.
Something outside the agent's purview occurs, and the agent continues playing the same old script, unaware that anything has happened. Customers are becoming dissatisfied.
One of two outcomes occurs:
Option One: The outliers all go to humans. Your people end up doing the worst work while the easy, best parts are reserved for the agent. You don't capture the value.
Option Two: Customers become frustrated and simply stop trying. They start looking for your competitor.
You're now blind to the problem.
Your CSAT score looks great, but your churn is through the roof.
According to research on silent churn, while AI reduces the cost per ticket to around $0.50, it can increase the total cost by driving high-value customers to leave. The revenue lost to churn often exceeds the operational savings from the bot.
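A rough back-of-the-envelope sketch makes the trade-off visible. Every number below is an illustrative assumption, not a figure from the research, except the roughly $0.50 per-ticket cost cited above.

```python
# Illustrative arithmetic only: how per-ticket savings can be swamped by churn.

tickets_per_year = 100_000
human_cost_per_ticket = 8.00     # assumed fully loaded cost of a human-handled ticket
bot_cost_per_ticket = 0.50       # the roughly $0.50 figure cited above

operational_savings = tickets_per_year * (human_cost_per_ticket - bot_cost_per_ticket)

# Suppose a handful of frustrated high-value accounts quietly leave.
churned_accounts = 40                       # assumed
avg_annual_revenue_per_account = 25_000.00  # assumed
churn_revenue_loss = churned_accounts * avg_annual_revenue_per_account
net = operational_savings - churn_revenue_loss

print(f"Operational savings:   ${operational_savings:,.0f}")   # $750,000
print(f"Revenue lost to churn: ${churn_revenue_loss:,.0f}")    # $1,000,000
print(f"Net impact:            ${net:,.0f}")                   # $-250,000
```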
By the time these secondary metrics tell you there's a problem, you're dealing with cleanup, not prevention. You've lost a ton of customers, and the problem can hit critical mass before you can pull it back and get your arms around it.
This leaves you making desperate phone calls to employees you've laid off, begging them to come back and save you. They'll likely demand a 30% to 40% premium.
⚠️ Warning: Transparency, auditing, and the ability to measure granularly are critical. You need stewards who can identify challenges with agents early, respond to them, and adapt. Without that, you're at the mercy of the AI telling you things are bad. The AI is blind to context outside its purview. It doesn't know what it doesn't know.
The Governance-First Approach: Stop Building Systems You'll Regret
Before building agents or deploying models, you must first define how you intend to operate.
An AI Governance and Risk Framework (AIGRF) establishes the moral, ethical, and operational boundaries that guide decision-making. Frameworks such as the EU AI Act, NIST AI Risk Management Framework, and emerging global standards provide useful starting points.
According to a 2024 global survey of 1,100 technology executives, 40% believed their organization's AI governance program was insufficient. The lack of enterprise-level AI governance programs is fast becoming a key blocker to realizing value from AI investments.
But the most important outcome is internal alignment.
An AIGRF becomes the lens through which you answer difficult questions:
Should we automate this decision?
What risks are acceptable?
Where must humans remain accountable?
Companies that skip this step often build systems they later regret.
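One lightweight way to make those answers operational is to record them per use case, so the decision and its owner are explicit before anything ships. A minimal sketch follows; the field names are hypothetical and are not drawn from the EU AI Act, NIST AI RMF, or any other standard.

```python
# Hypothetical sketch of an AIGRF "decision record" kept per AI use case.
# Field names and values are illustrative.

from dataclasses import dataclass, field

@dataclass
class AIGovernanceRecord:
    use_case: str
    should_we_automate: bool          # answered before "can we?"
    acceptable_risks: list[str] = field(default_factory=list)
    human_accountability: str = ""    # who remains answerable for outcomes
    steward: str = ""                 # who monitors the agent after deployment
    review_cadence_days: int = 90     # governance is not a one-time decision

record = AIGovernanceRecord(
    use_case="Customer-facing support chatbot",
    should_we_automate=True,
    acceptable_risks=["wrong FAQ answer", "handoff delay under two minutes"],
    human_accountability="VP of Customer Success",
    steward="Support operations lead",
)
print(record)
```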
Ask "Should We?" Before "Can We?"
The most common mistake in AI adoption begins with the wrong question: "Can AI do this?"
In many cases, the answer is yes.
The better question is: "Should AI do this?"
That subtle change forces you to evaluate ethical considerations, business impact, and potential unintended consequences. It also prevents you from implementing technology that reduces cost while degrading performance.
Consider customer support. AI chatbots can reduce staffing requirements. Research shows that 53% of users feel customer service chatbots are "not effective" or only "somewhat effective," with 59% of respondents feeling frustrated at repeatedly providing the same information when chatbots cannot effectively serve their needs.
If the result is lower customer satisfaction or higher churn, your organization ultimately loses.
Technology must improve the metrics that matter, not simply lower expenses.
The Five Fundamentals of Successful AI Transformation
Fundamental One: Augment People Before Replacing Them
Successful transformation begins with a simple principle: AI should augment people before replacing them.
You should first measure how humans currently perform a task. Then test whether AI can match or exceed that performance. If it can, the next step is not immediate workforce reduction.
Instead, redeploy those employees to areas of growth.
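A hedged sketch of what "measure humans first, then test the AI against that baseline" can look like in practice. The metric names, values, and thresholds are assumptions for illustration.

```python
# Hypothetical sketch: compare an AI pilot against the measured human baseline
# before making any staffing decision. Metrics and numbers are illustrative.

human_baseline = {"resolution_rate": 0.86, "avg_handle_minutes": 11.0, "csat": 4.3}
ai_pilot       = {"resolution_rate": 0.81, "avg_handle_minutes": 2.5,  "csat": 4.0}

def ai_matches_baseline(human: dict, ai: dict) -> bool:
    """The AI must match or exceed quality, not merely be cheaper or faster."""
    return (ai["resolution_rate"] >= human["resolution_rate"]
            and ai["csat"] >= human["csat"])

if ai_matches_baseline(human_baseline, ai_pilot):
    print("Quality holds: expand the pilot and redeploy people to growth areas.")
else:
    print("Quality gap: keep humans on the task and narrow the agent's scope.")
```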
According to Harvard research, after ChatGPT's November 2022 launch, job postings for occupations involving structured and repetitive tasks decreased by 13%, while employer demand for jobs requiring more analytical, technical, or creative work grew 20%. Rather than solely eliminating jobs, generative AI creates new demand in augmentation-prone roles, suggesting that human-AI collaboration is a key driver of labor market transformation.
Managing fleets of AI agents is itself a new skill set. Employees trained to supervise and orchestrate agents become more valuable, both internally and in the broader market.
Companies that prematurely eliminate talent often find themselves rehiring the same skills later at a 30% to 40% premium.
Headcount reduction should be treated as a last-resort decision, not a primary strategy.
Fundamental Two: Invest in Organizational Change Management
AI transformation is fundamentally a digital transformation challenge.
According to S&P Global data, 42% of companies scrapped most of their AI initiatives in 2025, up sharply from just 17% the year before. The average organization abandoned 46% of AI proof-of-concepts before they reached production. Root causes include technical debt, poor data infrastructure, unclear ownership, and weak cross-functional coordination.
Organizations that underestimate the cultural impact of change are far more likely to fail. Investments in organizational change management, whether through dedicated internal roles or external expertise, are often the difference between success and failure.
Every organization has a rate-of-change threshold: the speed at which its culture can absorb transformation. When technology adoption outpaces cultural adaptation, friction increases, morale drops, and initiatives stall.
You must balance ambition with cultural readiness.
Fundamental Three: Know What Success Looks Like
AI initiatives must be tied to measurable outcomes.
Before deploying agents, you must define both the metric you want to improve and the current baseline.
For example, implementing AI customer support agents may allow your company to scale from 10,000 customers to 30,000 without adding staff while maintaining or improving customer satisfaction scores.
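In code form, a minimal sketch of that definition of success: a baseline, a target, and a guardrail, all written down before deployment. The numbers mirror the example above; the CSAT floor is an assumption.

```python
# Illustrative success criteria for the scaling example above. Thresholds are assumptions.

baseline = {"customers_served": 10_000, "support_headcount": 40, "csat": 4.2}
target   = {"customers_served": 30_000, "support_headcount": 40, "csat": 4.2}

def initiative_succeeded(actual: dict) -> bool:
    """Asymmetric growth: more customers on the same headcount, with CSAT not sacrificed."""
    return (actual["customers_served"] >= target["customers_served"]
            and actual["support_headcount"] <= target["support_headcount"]
            and actual["csat"] >= target["csat"])

print(initiative_succeeded({"customers_served": 31_500, "support_headcount": 40, "csat": 4.3}))  # True
print(initiative_succeeded({"customers_served": 31_500, "support_headcount": 40, "csat": 3.6}))  # False: growth at CSAT's expense
```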
The goal is not simply efficiency. The goal is asymmetric growth.
When I tell executives they'll need stewards to monitor their AI agents, they're immediately shocked: "What's the point then?"
They've skipped the AIGRF and the person-plus-AI fundamentals.
The magic happens when you use agents to grow asymmetrically, not simply to reduce overall costs. The win is the ability to double or triple your size, or more, without having to add more employees. You're not cutting the employees you have now in half.
Those seeking to reduce headcount are playing a finite game. Those who embrace the growth narrative are playing an infinite game.
You can only cut so far before you've hit bone.
Fundamental Four: Automate the Busywork
Nearly every business process contains two types of work: high-value activities and necessary but low-value tasks.
Customer success representatives, for example, create the most value through direct customer interactions. Yet much of their day involves administrative work: documentation, meeting summaries, follow-ups, and internal coordination.
AI agents excel at these repetitive tasks.
By delegating administrative work to agents, you allow employees to spend more time where they deliver the greatest impact. The result is not reduced human value, but amplified human productivity.
Fundamental Five: Agents Require Continuous Stewardship
One of the most persistent misconceptions about AI is that agents can be deployed and forgotten.
In reality, they require continuous oversight.
Businesses evolve. Markets change. Customer expectations transform. Humans naturally adapt to these signals, often without conscious effort. Agents do not.
If a CEO is only seeking one or two quarters of good numbers, then by all means, cut costs aggressively. But two quarters later, the silent killers creep in. You start seeing cracks in your business in areas you didn't expect.
Your market is adapting, but your business hasn't pivoted to account for it. Your competitors are gaining on you. You're losing market share. Unfortunately, it's hard to put the blame on an AI agent that is doing what you asked it to do.
To remain adaptable and take advantage of market conditions, your agent needs a human to keep managing context for the parts of your business it doesn't have visibility into. Agents currently can't adapt their own prompt.
This is the limiter.
Agents lack the context and the organic adaptability that humans have. We instinctively share information across departments, which allows people in other departments to pivot and refactor a strategy.
The Stewardship Model: How to Structure Human Oversight
The steward-to-AI ratio depends on the task and skill set.
A single steward may be responsible for a single orchestrator agent, which then sends commands to multiple worker and data agents to achieve complex tasks. The number of specific agents is less important than the operational process being automated by AI.
If you took the AI out, humans would own this process, and regular top-level management meetings would discuss the need for changes. Stewards should attend those same meetings and adapt their agents accordingly.
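A structural sketch of that shape, with one steward over one orchestrator over several worker agents. The classes are hypothetical and no specific agent framework is implied; the point is that only the human updates the prompt and context.

```python
# Hypothetical sketch of the stewardship structure: one steward, one orchestrator,
# several worker/data agents. No particular agent framework is assumed.

class WorkerAgent:
    def __init__(self, name: str, task: str):
        self.name, self.task = name, task

    def run(self, context: dict) -> str:
        # Placeholder for a real model call or tool invocation.
        return f"{self.name} completed '{self.task}' with context {context}"

class OrchestratorAgent:
    def __init__(self, workers: list, prompt: str):
        self.workers, self.prompt = workers, prompt

    def execute(self, context: dict) -> list:
        return [w.run(context) for w in self.workers]

class Steward:
    """The human in the loop: the only one who can change the prompt and context."""
    def __init__(self, orchestrator: OrchestratorAgent):
        self.orchestrator = orchestrator

    def update_after_management_meeting(self, new_prompt: str, new_context: dict) -> list:
        self.orchestrator.prompt = new_prompt   # agents don't adapt their own prompt
        return self.orchestrator.execute(new_context)

workers = [WorkerAgent("billing-bot", "summarize invoices"),
           WorkerAgent("data-bot", "pull the churn report")]
steward = Steward(OrchestratorAgent(workers, prompt="Q1 priorities"))
print(steward.update_after_management_meeting("Q2 priorities", {"focus": "retention"}))
```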
Without stewards, you end up with agentic sprawl: armies of agents that do "something" that was set in motion by a group of humans who are no longer with the company.
Very quickly, automation can get away from you. With no stewards to look after it, adapt it, and manage it, entropy sets in and inefficiency builds up, like plaque in the system.
The ratio will depend on the department's capacity. Today, you may need a higher ratio of stewards; in five years, as humans grow the muscles for managing armies of agents, the number of stewards needed will naturally shrink.
Real-World Examples: When AI Goes Off the Rails
Most of these cracks in agentic strategy have surfaced in proof-of-concepts: thankfully, small experiments that went horribly wrong.
The famous one was a vending machine experiment that suddenly started giving away merchandise it had no business providing to employees, even though some of the items weren't even in the vending machine's inventory.
In this instance, there was no steward checking in on the agent. It was just developed, airdropped in, and left to manage a small administrative task: refilling inventory in its vending machine. But it had direct communication with employees.
Once employees realized they could get this thing to give them just about anything at any price, it went off the rails.
This wasn't just a couple of days. This was weeks of divergent, unplanned behavior of an agent left to its own devices, being driven by a weak prompt in an activity where conditions had dramatically changed.
Without a human steward to help it with context, it did the best it could with what data it had available.
So, who's to blame?
This is a relatively passive example. There are others where the cost was losing many customers to a competitor because a chatbot on a website made it nearly impossible for customers to reach a human operator.
The result was lost customers, but the reported stats showed high CSAT scores. The contradiction between high churn and high CSAT was the only indicator there was a problem. Even then, it took time to work out where the problem was.
By then, the damage was done.
What to Measure: Leading Indicators, Not Lagging Ones
At Jacksonville University, we're teaching that the human is still the critical asset.
If you're an artistic photographer, you're now using AI rather than an artistic stage, lighting, filters, and special lensing. The only thing that has changed is the camera and the cost of iteration.
We're teaching students how to be good stewards of agents by inspecting transparent logs, understanding decisions, and looking at dropped calls (for a customer service agent) or dropped conversations (for a chatbot on a website).
Inspecting activity logs and identifying anomalous trends is key.
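A minimal sketch of what that inspection can look like: scanning an agent's daily activity counts for a spike in dropped conversations against a trailing baseline. The log shape, the numbers, and the 2x threshold are all assumptions.

```python
# Hypothetical sketch: flag an anomalous rise in dropped conversations from an
# agent's activity log. Data, window, and threshold are illustrative.

from statistics import mean

# Dropped-conversation counts per day, oldest to newest (illustrative data).
dropped_per_day = [12, 9, 14, 11, 10, 13, 12, 31, 35, 40]

def anomalous_days(counts, baseline_window: int = 7, factor: float = 2.0):
    """Return indices of days whose drops exceed `factor` times the trailing average."""
    flagged = []
    for i in range(baseline_window, len(counts)):
        baseline = mean(counts[i - baseline_window:i])
        if counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

print(anomalous_days(dropped_per_day))  # [7, 8, 9]: time for the steward to dig into the logs
```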
By the time you rely on high-level metrics, it's too late. The board-level metrics will ultimately show the price, but they can't be the leading indicator.
You must be a steward to your agent, inspecting the activity it is trained to perform, reviewing its logs, and taking specific measures to highlight challenges with its use case.
By the time it has escalated to impacting a board-level statistic, it's already a runaway problem for which you only have the option of containment and cleanup. There's nothing to prevent at this level. The damage is done.
Finding Your Stewards: Retraining vs. New Roles
At AMCS Group, we're finding that these "champions" exist within the departments.
They deeply understand their business and their job. If they engage an agent, they know exactly why.
For instance, the customer success representative is seeking to reduce churn. In this case, the quality of the agent's post-interaction communications (the busywork) becomes the measure, as the grading sketch after this checklist shows:
Did the agent accurately summarize the interaction?
Did it capture all of the tasks we discussed?
Did it set up the next scheduled meeting or check-in date?
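A hedged sketch of how a champion might turn that checklist into a simple quality score for each follow-up the agent produces. Field names, example data, and the scoring scheme are illustrative, not a real rubric.

```python
# Illustrative sketch: grade an agent's post-meeting busywork against the checklist.

def grade_followup(followup: dict, expected_tasks: set) -> float:
    checks = [
        bool(followup.get("summary_text")),                         # did it summarize the interaction?
        expected_tasks.issubset(set(followup.get("tasks", []))),    # did it capture all discussed tasks?
        followup.get("next_meeting") is not None,                   # did it schedule the next check-in?
    ]
    return sum(checks) / len(checks)

followup = {
    "summary_text": "Reviewed rollout blockers at the Tampa site.",
    "tasks": ["send revised SLA", "escalate container shortage"],
    "next_meeting": "2025-07-14",
}
score = grade_followup(followup, expected_tasks={"send revised SLA", "escalate container shortage"})
print(f"Follow-up quality: {score:.0%}")  # 100% -- anything lower is worth a closer look
```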
Each champion knows how to look after their agent.
You treat it, in many ways, like an intern or a junior version of yourself who's picking up the low-hanging work while you focus on the high-value work. You then check in on your junior worker to see how it's going and make sure everything is being done right.
Only the future will tell whether an agent will ever replace the value of human-to-human interactions, which are so key to a customer success representative.
But for now, to get the most out of AI, let the human do that part, and relegate the AI to the low-value, but necessary work. Then inspect it regularly to see what it's doing.
If you see anomalies in the metric you use to grade your agent, dig deeper, find out why, and then make corrections to the agent.
This is how you fight entropy setting in on your agent coworker and avoid agent sprawl. Realizing that every agent requires this attention forces you to focus your investment on the highest-value areas, rather than on anything and everything while hoping for the best.
The Careers That Will Matter Most
Understanding how AI transforms organizations also reveals where future career opportunities lie.
According to the EPOCH Methodology, the most valuable human capabilities are quantified across five dimensions less susceptible to AI replacement: Empathy, Presence, Opinion (ethical judgment and intuitive decision-making), Creativity, and Hope (creating meaning and inspiring action).
The most valuable human skills will increasingly revolve around what machines struggle to replicate:
Adaptability in changing environments
Creative problem-solving beyond statistical patterns
Emotional intelligence and empathy
Cross-disciplinary thinking
Human-to-human communication and trust
The future workforce will likely consist of broadly skilled generalists who use AI to dive deeply into specific domains when necessary.
In other words, humans will increasingly define the vision, while AI handles the execution details.
The Next Phase of Software
The software itself is also evolving.
Three stages of AI integration are already emerging:
Stage One: AI enablement through protocols such as the Model Context Protocol (MCP)
Stage Two: Bolt-on agents capable of operating existing user interfaces
Stage Three: Fully agentic systems, where humans interact primarily with orchestrating agents that coordinate specialized sub-agents
This transition will fundamentally reshape how software is designed and how people interact with it.
The Opportunity Ahead
The transformation is already underway.
Organizations that hesitate risk falling behind. Those who rush without a strategy risk expensive failures.
The most resilient path lies between those extremes: building governance, investing in people, and experimenting deliberately.
For leaders, the responsibility is clear. Build the cultural and technical foundations that enable AI to amplify human capabilities rather than replace them.
For students and early-career professionals, the message is equally clear.
Lean into the skills that make you uniquely human.
The age of specialization may increasingly belong to AI systems. The age of humanity—imagination, empathy, adaptability, and bold experimentation—is just beginning.
💡 Key Takeaway: Through the effective use of experience-based processes, technology, and AI, we can enable complex businesses to adopt technology that amplifies human potential, rather than adding complexity, cost, and inefficiency. The question is whether you'll build your AI strategy on solid governance and human stewardship, or whether you'll learn these lessons the expensive way.



