Your Team Is Using AI Behind Your Back and That's on You

Jay Vergara · April 3, 2026 · 4 min read
leadership · ai-tools · communications

I’ve been hearing the same sentence from L&D professionals and HR leaders for months: “We don’t really know who on our team is using AI.” That sentence should worry you if you’re in a leadership role. Not because people are using AI, but because they feel like they need to hide it.

A recent survey found that 57% of employees are concealing their AI usage from their employers. And when you dig into why, the answer isn’t complicated. Only 34% of individual contributors trust their company to actually train and support them on AI. Managers? They’re sitting at 59%. That’s a 25-point gap between the people making AI decisions and the people expected to live with them.

There’s a pattern behind that gap.


The Trust Gap Nobody Wants to Name

Most organizations rolled out AI policies the same way they roll out everything else: a top-down announcement, maybe a town hall, a few bullet points on what you can and can’t do. Then silence. No follow-up. No space to ask dumb questions. No acknowledgment that this stuff is genuinely scary for a lot of people.

When you don’t create space for honest conversation about a new technology, people don’t stop using it. They just stop telling you about it.

Bencsik et al. (2022) studied what drives employees’ trust in technology during digital transformation and found something that should be obvious but apparently isn’t. The single biggest factor wasn’t training quality or digital readiness. It was the supportive role of management. Leadership behavior affected trust both directly and indirectly through every pathway in their model. When managers showed up as genuinely supportive (not performatively supportive), trust followed. When they didn’t, no amount of training filled the gap.

Yue et al. (2019) found something similar in a study of 439 employees. Transformational leadership combined with transparent communication led to organizational trust, which in turn predicted openness to change. The mechanism wasn’t complicated. People who felt their leaders were honest with them were more willing to try new things. People who didn’t feel that kept their heads down.

Sound familiar?


The AI trust gap is a communication problem wearing a technology costume. The fix starts with leaders being honest about what they don’t know.


What to Actually Do About It

Say “I don’t know” out loud. If you’re a manager and you haven’t told your team that you’re also figuring out AI as you go, you’ve already created distance. Vulnerability from leadership is the fastest trust accelerator there is. You don’t need to be the AI expert. You need to be the person who makes it safe to learn.

Run an AI amnesty. Seriously. Tell your team that for the next two weeks you want everyone to share how they’re actually using AI. No judgment. No policy review. Just direct conversation. You’ll learn more about your team’s capabilities in two weeks than any audit would reveal in six months.

Replace your AI policy with an AI conversation. Most AI policies read like legal documents written by someone who’s never used ChatGPT. Instead of a static PDF, create a living document that your team contributes to. What’s working. What feels weird. What they wish they could use AI for but aren’t sure if they’re allowed to. Make it collaborative and keep updating it.

Close the information gap between managers and individual contributors. If your leadership team is having conversations about AI strategy that never reach the people doing the actual work, you’re building the trust gap one meeting at a time. Share the thinking, not just the decisions.


The 57% hiding their AI usage aren’t being sneaky. They’re being rational. They looked at the signals their organization was sending and concluded that transparency wasn’t safe. That’s a leadership problem, not an employee problem.

The fix isn’t expensive or complicated. It just requires going first and being honest about the fact that nobody really has this figured out yet.

What’s the AI conversation like at your organization? Open or underground?

I write about leadership, AI, and human development on LinkedIn. Come say hi.


Sources

Bencsik, A., et al. (2022). Trust in and Risk of Technology in Organizational Digitalization. Risks.

Yue, C., et al. (2019). Bridging Transformational Leadership, Transparent Communication, and Employee Openness to Change. Public Relations Review.

Jay Vergara

Jay Vergara is an L&D strategist and cross-cultural communication specialist based in Tokyo. He is a partner at Peak Potential Consulting and writes about leadership, learning, and building with AI at leadhuman.ai and on LinkedIn.
