Summary: There is mounting pressure to trust AI. The narrative is that only by trusting it can we fully benefit from it. But to what extent can this inherently human notion be applied to a machine? Is a more apt goal instead to rely on AI to fulfil specific needs?
Silicon Valley and tech firms more broadly continue to pressure organisations, teams and executives to ‘trust’ artificial intelligence (AI). In the first quarter of 2026, the subtext of many headlines is: ‘If your people cannot trust AI, you will be left behind’.
Still, can we really trust an algorithm? And, if so, what are the risks of placing our trust in a machine? As champions of people within the workplace, HR leaders must question, if not outright reject, the thinking that humans should trust AI.
A seductive narrative
Consultants and tech firms are beating the drum about how ‘trust’ can help organisations realise the promise of AI.
For example, in reporting the findings of its 2026 AI Trust Maturity Survey, McKinsey states that trust ‘underpins two critical outcomes’: first, enabling organisations to realise value on their investments through sustained adoption; and second, managing an expanding and evolving risk landscape.
This discourse is seductive for several reasons. Many organisations have yet to see a fair return on their heavy investments in the technology. On a social level, teams fear missing out on the benefits supposedly enjoyed by others. Finally, the idea of trust elicits highly positive emotions in people.
HR leaders must question, if not outright reject, the thinking that humans should trust AI
Trust in AI
Clearly, the tech industry has economic and political interests in advising the world to plough more resources into automating the labour of the mind. But can and, if so, should your organisation follow Silicon Valley’s siren call to ‘trust’ AI?
Philosophers continue to debate this question, to which there is no single answer, not least because trust itself is notoriously difficult to define. Nevertheless, several considerations merit attention.
Trust may be considered an interpersonal experience, which raises doubts about whether AI can be one half of the relationship. Two illustrations make the point.
Anthropomorphising algorithms
First, an algorithm has no feelings. You trust a doctor, not only to make an accurate diagnosis given her education, but also to have your best interests at heart. Likewise, you trust a friend to keep a secret, not simply to avoid trouble, but because she does not wish to wrong you.
A machine does not care about you and has no meaningful sense of your ‘best interests’, for it lacks motivations and attitudes. Whilst the human brain may respond to a machine as if it were a person (a fact that deeply troubled Joseph Weizenbaum, creator of ELIZA, arguably the first chatbot), as an inanimate entity AI cannot be emotionally moved by any trust you place in it.
Second, an algorithm can bear no moral responsibility. Trust implies an obligation to honour the commitments an individual entrusts to you: your friend should keep your secret; your doctor should take care of your health. Both these people know your expectation and their duty of care.
Clearly, it is senseless to blame an algorithm when a breach of trust occurs. AI is not aware of the trust you have placed in it. These human-orientated imperatives of trust are simply beyond the capacity of a machine.
You cannot trust machines without ascribing to them human traits they do not possess, an act of ‘anthropomorphisation’. (Arguably, in describing algorithms as ‘intelligent’ we have already committed this mistake; still, two wrongs do not make a right.)
A machine does not care about you and has no meaningful sense of your ‘best interests’
Friend or foe…or somewhere in between?
As machines can neither feel nor be accountable for the trust invested in them, AI cannot properly be considered ‘trustworthy’. But even if you do not accept this argument, is it really a good idea to trust an algorithm?
Recent headlines about AI failures show that blaming the technology allows the developers and users of AI to evade moral responsibility. (OpenAI has gone even further, arguing that a teen’s suicide was the fault of the child’s own ‘misuse’ of ChatGPT.)
While opinion on this is far from unanimous, HR professionals may reasonably argue that accountability, agency and autonomy should rest with human beings. People are conscious of their choices and able, if not always willing, to see the impact of their actions.
A third risk of trusting AI is missing the wider value that people create through their roles. Consider, for example, the moral judgement of a doctor or lawyer, or the relationship value a hotel doorman brings beyond the functional task of physically opening the door. To trust AI is to diminish human abilities.
Is reliance more realistic?
Clearly, AI is here to stay and organisations are called upon to do all they can to ensure its effective and safe use. If you cannot and should not trust AI, then what can you work toward as an ambition?
To seek ‘reliance’ on AI may be realistic.
Thinking in terms of reliance, rather than trust, keeps AI within clear bounds and allows organisations to verify that systems consistently and predictably meet known functional demands.
This strategy of ‘relying’ on AI makes the algorithm a ‘tool’ for given tasks (and avoids the broad-brush applications that today fail to deliver returns).
Similarly, the pursuit of reliance leaves intact the human foundations of trustworthiness, such as awareness and moral responsibility. This enables organisations to secure their value chains, which would be at risk if unthinking bots were left to their own devices.
Finally, rejecting the narrative on trust in favour of a strategy of reliance allows teams to see, without distraction, what they do that cannot be automated, which, when the excitement dies down, is where organisations may discover lasting advantage.
To seek ‘reliance’ on AI may be realistic
Key takeaways
- The tech industry is pressing organisations across all sectors to ‘trust’ AI as a way to secure return on investments in its technology
- AI lacks the essential human characteristics of awareness and responsibility that make trust desirable or even possible
- Organisations should question the narrative on trust and seek to develop AI systems they can rely on for specific needs