The Trust Gap Behind Every Failed AI Rollout
Most stalled AI adoption I’ve watched up close has had almost nothing to do with the technology.
The shape repeats: the leadership team is genuinely excited, training sessions get scheduled, tools get licensed, and then… compliance theater. Teams nod along, submit the required AI-generated outputs, and quietly keep doing things the old way. Or they use AI in the shadows without telling anyone. (The irony of employees secretly using AI while leaders mandate transparency about AI use is not lost on me.)
The standard diagnosis is “change fatigue” or “digital literacy gaps.” I don’t buy it. The problem is a trust gap that leaders built long before any AI tool was introduced.
Research published in the Journal of Applied Psychology found something that should give every manager pause. Mingyu Li and colleagues ran four studies across 2,750 participants and found that employees consistently perceive AI management as less benevolent than human management. Not less capable. Less caring. And when employees don’t believe their management cares about them, they don’t trust it, regardless of how competent it looks on paper.
Employees don’t really believe that AI, or the leaders deploying it, actually cares whether they’re okay. The job in front of you is closing that gap.
This is a hard one because the same dynamic plays out with human leaders who adopt AI tools poorly. When you roll out a new AI mandate without first addressing what your team is afraid of, you’re not giving them a productivity tool. You’re signaling that efficiency matters more than their anxiety.
A 2021 study by Frick and colleagues in the Journal of Decision Systems found that employees navigating AI change don’t primarily need technical training. They need leaders who provide stability and genuine development support. What employees strive for, the study found, is an environment where they can trust that leadership is actually on their side, not just using them to hit an adoption metric.
Thirty-one percent of U.S. knowledge workers admit to actively working against their company’s AI initiatives. A January 2026 Checkr survey found that 74% of C-suite leaders feel “excited” about AI while 68% of individual contributors feel “anxious or overwhelmed.” Those two groups are sitting in the same “alignment” meeting and living in completely different realities.
So what do you actually do?
Name the fear before you name the tool. Before your next AI rollout, ask your team directly what they’re worried about. Not a survey. A real conversation. You’ll hear things like “I’m scared my judgment won’t matter anymore,” and these aren’t irrational fears. They deserve a real answer, not just reassurance that everything will be fine.
Let teams own the use case discovery. Don’t hand down a list of approved AI applications. Ask people where they’re spending time on work that feels repetitive and let them experiment. When people choose their own tools for their own problems, adoption isn’t a mandate. It’s a solution.
Create explicit ‘high empathy’ zones. The Li et al. research found that employees specifically want human management in situations that demand empathy. Be honest about where AI doesn’t belong: performance conversations, feedback on creative work, anything touching someone’s career path. Protecting those spaces sends a signal that you get it.
And be visibly honest about your own AI experience. If you’re mandating AI adoption, your team is watching how you actually use it. Share the failures. Tell them about the output you had to rewrite three times and the prompt that completely missed the nuance of a real situation. Authentic fallibility from leaders does more for psychological safety than any training module.
The thing about AI adoption is that it’s never really about AI. It’s about whether people feel safe enough to be honest with you when something isn’t working and valued enough to believe the change is for them and not just for the quarterly productivity numbers.
Your team already knows whether you’re that kind of leader. The question is whether you do.
What would it take for someone on your team to tell you honestly that an AI tool isn’t working for them? And if they did, what would you actually do with that?
If this resonates, I’d love to continue the conversation. I write about leadership, AI, and human development on LinkedIn. Come say hi.
Part of the Lead Humanly series on leadhuman.ai.
Sources
- Li, M., et al. (2024). How perceived lack of benevolence harms trust of artificial intelligence management. Journal of Applied Psychology.
- Frick, N. R. J., et al. (2021). Maneuvering through the stormy seas of digital transformation: the impact of empowering leadership on the AI readiness of enterprises. Journal of Decision Systems.
- Checkr (2026). Under Tension: The Manager-Employee AI Divide Report.
Jay Vergara is an L&D strategist and cross-cultural communication specialist based in Tokyo. He is a partner at Peak Potential Consulting and writes about leadership, learning, and building with AI at leadhuman.ai and on LinkedIn.