Summary: AI cultural debt builds when organisations deploy the technology without addressing the human systems around it. The patterns driving suspicion at work existed long before AI – surveillance, conditional autonomy and the erosion of trust through control. Without clarity on what good AI use looks like, employees choose between burnout and concealment. Leaders must measure trust alongside adoption, involve people in creating norms, and develop managers who lead with curiosity rather than control.
For every AI deployment without clear direction, there is an equal and opposite cultural reaction. Deloitte’s 2026 Human Capital Trends report offers one of the clearest warnings we have seen on this. The research suggests that AI is creating a ‘steady accumulation of negative cultural behaviours’, referred to as AI cultural debt.
Essentially, this means that when organisations move quickly on AI implementation – whilst simultaneously leaving trust, clarity and behaviour to chance – the hidden costs on culture compound. The study found that while over half of respondents felt the impact of AI on culture was important or very important, only 5% are making progress in addressing AI cultural debt.
The physics of it are inevitable. Newton’s Third Law dictates that ‘for every action, there is an equal and opposite reaction’. Looked at through a work lens, whenever leaders apply ‘force’ in the form of urgency, expectation, monitoring and pressure, people will respond. The question is never whether there will be a reaction. The question is what kind of reaction the circumstances make inevitable.
The suspicion economy
To fully understand what is unfolding, we need to look at human behaviour and very familiar leadership trends. When leaders feel uncertain, exposed or under pressure, their response to this perceived threat tends toward control. And control has a habit of quietly eroding trust and engagement at work.
The suspicion economy didn’t arrive with generative AI. We have been inching toward it for years by normalising the idea that if people aren’t visible, they aren’t working. It’s how we ended up with return-to-office mandates justified as ‘culture’ while activity trackers spread quietly in the background, teaching our people that autonomy is conditional and trust is temporary.
AI magnifies this pattern by disrupting the comforting illusion that effort is always observable. When work becomes less visible, control-oriented cultures reach for measurement that feels concrete, even when it captures very little of what truly matters.
Newton would recognise the dynamic immediately: leadership applies force such as surveillance, monitoring or productivity metrics, and the workforce pushes back with equal and opposite energy, through disengagement, concealment and quiet non-compliance.
Deloitte’s data makes the resultant suspicion uncomfortably visible: 80% of leaders, managers and workers worry that colleagues are using AI ‘to appear more productive than they actually are’.
That statistic isn’t really about AI. It’s about trust, or, more precisely, the absence of it.
The growing trust gap
AI cultural debt builds quietly, in the gap between what leaders intend and what employees experience. And it has a multiplier effect: the faster we move without cultural clarity, the faster it compounds.
In practice, this might look like a leadership team investing in AI, communicating efficiency gains and encouraging adoption, while also starting to treat AI use as suspicious. The tool employees were nudged to embrace becomes evidence of corner-cutting. The message employees hear is not ‘we trust you to work differently’, but instead, ‘we’re watching to see if you’re cheating’.
This is not new. We saw the same pattern in the mixed messaging around flexible working and RTO mandates, which gave rise to ‘coffee badging’ and the ‘hushed hybrid’ trend.
When people feel judged for using AI, they will do one of two things. They will either stop using it and quietly absorb the extra work, putting themselves on the fast track to burnout. Or they will keep using AI and stop being honest about it, because in a culture of suspicion, transparency feels like a career risk.
That is how shadow AI grows. Not because employees are reckless, but because they are rational. They have read the room, calculated the risk, and chosen self-protection over openness. The force of top-down control generates an equal and opposite force of bottom-up concealment. Newton’s Third Law is running the culture of many organisations right now, and it’s producing suspicion and silos.
The trust gap that this creates is actually measurable. Checkr’s 2026 Manager–Employee AI Divide Report found that 70% of managers trust AI-driven tools, compared to just 27% of employees.
Managers sit close to the narrative of competitive advantage. Employees, on the other hand, sit closer to the lived reality of opaque decisions, uneven support and the creeping fear that AI is being deployed to justify asking more of people who are already stretched.
Curiosity over control
AI and digital transformation aren’t going anywhere. Business and HR leaders must not rely on AI to fix their problems while ignoring broken processes and leaving culture to chance. Without careful design and a genuine commitment to balancing performance with people’s needs, productivity will simply mean expecting people to do more with less, faster and cheaper. That model is not sustainable, so here are three things we should all be doing right now:
1. Clarify
Ambiguity breeds anxiety, which erodes trust. Be clear about what ‘good use of AI’ looks like in your organisation. Share stories of successes and failures to promote transparency. Involve employees in creating workplace norms and customs relating to AI use, so that everyone is part of shaping the way work gets done.
2. Measure
Adoption metrics tell you how many people are using a tool. They tell you nothing about whether trust is growing or AI cultural debt is compounding. Ask whether people feel safe being honest about AI use. Ask whether it’s reducing low-value work or simply accelerating the hamster wheel. The answers will tell you more than any dashboard ever could.
3. Develop
Managers need functional competence in AI use just as much as they need confidence to empower their teams. Develop managers to lead with curiosity over control and encourage transparency. The manager who asks ‘What are you using AI for, and what’s still hard?’ gets better information, builds trust and surfaces problems before they become crises.
Paying the price for AI cultural debt
AI cultural debt, in Deloitte’s framing, does not stay invisible forever. It accumulates until the bill arrives – usually as absenteeism, attrition or disengagement.
So it’s important to remember that every force you apply in your organisation will generate a response, and the quality of that response is shaped by the culture you have built around your technology, not just the technology itself.
The leaders who are illuminating the way forward understand that speed without direction generates reaction without purpose. So they choose, deliberately, to create the conditions where AI and people can do their best work together.
Key takeaways
If you’re deploying AI whilst watching cultural debt accumulate, consider whether you’re creating the conditions for trust or suspicion:
- Recognise that control generates concealment, not compliance. When you treat AI use as suspicious after encouraging adoption, employees face a choice: stop using it and burn out, or keep using it and stop being honest. Shadow AI grows not because people are reckless, but because they’ve calculated that transparency feels like a career risk in your culture.
- Clarify what good AI use actually looks like in your organisation. Ambiguity breeds anxiety, which erodes trust. Can your employees articulate what responsible AI use means in their role? Involve them in creating workplace norms rather than imposing rules from above, and share stories of both successes and failures to promote transparency.
- Measure trust, not just adoption rates. Adoption metrics tell you how many people use a tool. They reveal nothing about whether cultural debt is compounding. Ask whether people feel safe being honest about AI use, and whether it’s reducing low-value work or simply accelerating an unsustainable pace.