Dr Jo Burrell

Ultimate Resilience

Clinical Psychologist

AI mental health tools: Breakthrough or band-aid for employee wellbeing?

ChatGPT's new mental health tools promise 24/7 support, but are they a breakthrough or band-aid? With workplace stress at crisis levels, AI offers tempting solutions – yet the risks of surveillance, poor-quality care, and organisational complacency are real. Dr Jo Burrell explores how to harness AI's potential without losing the human touch.
When news broke that OpenAI is developing integrated mental health tools for ChatGPT, it sparked a familiar mix of excitement and unease. The idea that artificial intelligence might one day offer scalable, personalised support for employee wellbeing sounds promising. After all, HR teams and occupational health services are under intense pressure, and AI can work 24/7 without ever getting tired.

But when it comes to mental health at work, the reality is more complex. The risks of over-reliance on AI are just as real as the opportunities. And the way organisations choose to deploy these tools will matter more than the technology itself.

Why AI is attractive in the wellbeing space

Employers are facing an unprecedented wellbeing challenge. Rates of workplace stress, depression and anxiety have soared in recent years. Waiting lists for therapy are getting longer, employee assistance programmes are often underused, and line managers frequently feel ill-equipped to handle sensitive conversations.

Against this backdrop, the attraction of AI is clear. Digital tools can:

  • Increase accessibility: Employees can get support at any time, from any location. For dispersed or shift-working teams, this is a huge advantage.
  • Offer consistency: A digital tool responds in a standardised way, without the biases or variability that can come with human interactions. Some people may even find it easier to open up when they’re not worried about being judged.
  • Identify trends: By analysing patterns in aggregate data – for example, the topics employees most often seek help with – AI can highlight emerging wellbeing concerns at a workforce level.
  • Generate insights: When data is anonymised and protected, it can provide valuable information to guide organisational strategy and resourcing.
  • Lower costs: In a world of shrinking budgets, AI appears to offer more support for less money.

In other words, AI promises scale, speed, and efficiency – three things organisations urgently want.

The AI risks we can’t ignore

But just because a tool is accessible doesn’t mean it’s always safe, ethical, or effective. There are at least five significant risks to consider:

  • Data sensitivity and privacy: Mental health information is among the most sensitive personal data an organisation can hold. Employees will only engage with tools they can trust. Any whiff of surveillance, data-sharing with employers, or potential breaches will erode confidence. Even anonymised data can be mishandled.
  • Quality of support: Mental health challenges are nuanced and deeply individual. AI may provide useful coping tips for everyday stress, but it cannot replicate the skill of a trained clinician in situations of complexity or crisis. There’s a danger of generic, ‘good enough’ advice being mistaken for professional care.
  • The sticking plaster effect: Perhaps the biggest risk is that organisations use AI as a quick fix, outsourcing care without tackling the underlying drivers of poor mental health. No chatbot can solve excessive workloads, toxic leadership, or lack of psychological safety. 
  • Equity and inclusion: Not everyone is comfortable using AI. Digital literacy, cultural attitudes to technology, and accessibility barriers may exclude some employees. If AI becomes the default, those most in need could be left behind.
  • Trust and uptake: If employees suspect AI is monitoring them, uptake will plummet. Wellbeing support only works if people feel safe enough to use it. The reputational risk of a poorly implemented tool is considerable.

The danger of false reassurance

Perhaps the most insidious risk is the sense of reassurance these tools can provide to leaders. “We’ve bought an AI wellbeing solution, therefore we’ve done our bit.” But providing access to an app or chatbot is not the same as providing genuine support.

With workplace stress the most common cause of sickness absence in the UK, the need for meaningful support is urgent. But no algorithm can reduce workload, make your manager more supportive, or create a culture of care. Those responsibilities rest squarely with organisations and leaders.

AI’s role in mental health support: Three principles

So, does this mean AI has no place in workplace wellbeing? Not at all. Used wisely, AI can play a valuable role – but only as part of a broader, human-centred strategy.

Here are three principles to guide responsible use:

1. Augment, don’t replace

AI should complement, not substitute, human support. Use it to increase access and convenience for low-intensity help, while ensuring there are clear routes to professional care for those who need it.

2. Be transparent and ethical

Employers must be upfront about how these tools work, what data is collected, and how it will (and will not) be used. Transparency builds trust.

3. Address the root causes

AI can ease symptoms, but it cannot cure the disease of organisational dysfunction. True commitment to mental health requires leaders to tackle workload, culture, and systemic pressures. AI should never be a smokescreen for failing to act.

Beyond the hype

The truth lies somewhere between the extremes of ‘AI will solve workplace mental health’ and ‘AI has no role to play’. Used well, these tools can lower barriers, provide early insights, and supplement overstretched services. But used poorly, AI tools risk becoming yet another sticking plaster on a gaping wound.

Ultimately, mental health at work is not a problem to be delegated to an algorithm. It is a shared responsibility, one that requires human empathy, organisational accountability, and systemic change. AI can help, but only if we remember that behind every data point is a person, and behind every person is a complex story that no machine can fully understand.
