
Natasha Wiebusch


Marketing Content Manager


The AI workplace revolution: Three focus areas


How can organisations fully embrace the AI workplace revolution? In this article, Natasha Wiebusch, Content Marketing Manager at Brightmine, outlines three considerations: skills, project management and ethics.

After OpenAI boasted record-breaking site visits and user numbers in early 2023, employers and the public quickly acquainted themselves with the new, mesmerising and sometimes quirky world of generative artificial intelligence (AI).

Business leaders announced AI initiatives to identify use cases, employees explored ChatGPT’s free writing services, and the public had fun with weird, inaccurate pictures of … hands?

But then things took a turn. 

A dangerous game?

The New York Times published an article reporting that Bing’s AI chatbot said some things that were, well, disturbing.

Then Geoffrey Hinton, Google’s top AI expert and the so-called “Godfather of AI”, abruptly resigned, saying he regretted his work and warning of the technology’s very real dangers.

Around the same time, and somewhat paradoxically, IBM announced a hiring freeze on all positions that could be replaced by artificial intelligence.

Yes, AI’s honeymoon phase is over, and it was short-lived, really. Despite concerns, evidence shows that organisations will continue to move forward with its adoption. 

To stay competitive, employers must achieve their own AI revolution.

AI’s honeymoon phase is over, and it was short-lived, really

The state of AI at work

Early adoption data reveals that, so far, some organisations are struggling to develop practices to integrate the technology fully and safely into their business.

As employers work towards their own AI revolution, it’s important to understand where the technology is and is not being adopted, and what the potential issues may be.

Business functions

First, the data shows that AI adoption is not ubiquitous; rather, it is concentrated in specific functions. Few organisations (fewer than one-third) have adopted AI beyond a single business function.

Adoption also seems to be more inwardly focused, supporting internal operations by enhancing employee performance rather than production. According to McKinsey & Company’s 2023 state of AI report, the overall AI adoption rate is 55%.

However, the average rate of AI adoption to support the production of goods or services was significantly lower, at 3.9%, according to a report from the US Census Bureau (rates vary by business sector and are higher for large companies).

Meanwhile, McKinsey found that HR, marketing and sales were among the top three business functions on which AI is having the largest impact.

AI’s limited scope of use suggests that organisations may not yet have a clear strategy or the necessary skills within their workforce to guide their AI projects.

To stay competitive, employers must achieve their own AI revolution

Adoption of generative AI

Even though AI may not be integrated into wider business processes at every organisation, generative AI has proven to be something of an outlier: recent surveys consistently find that its use is notably higher.

But generative AI has problems of its own. Much of its use is unguided and unmonitored. And generative AI’s shining stars, chatbots, are known to be inaccurate. 

Publicly available chatbots also lack citations, could compromise privacy and are the subject of litigation for copyright violations. These and other issues have raised novel ethical and responsible use issues for employers.

Focus areas moving forward

AI’s promises and problems are proving to be varied in these early stages of adoption. 

However, the data reveals three key focus areas that employers with AI ambitions should prioritise.

1. Skills

According to a survey commissioned by Amazon Web Services, hiring AI-skilled workers is a top priority for 73% of employers.

Additionally, in 2022, IBM found that the top barrier to AI adoption is limited skills, expertise and knowledge of the technology.

Ensuring employees have the skills necessary to work with AI is not just a pain point for employers; it is also certain to be a major differentiator in the coming years.

Employers must implement a successful skills strategy to facilitate adoption, enhance the organisation’s performance and reduce talent loss.

Developing such a strategy will first require employers to have a clear vision of the role this technology will play within the organisation. So before hitting the ground running on a new skills programme, employers may consider answering some basic questions about the type of AI they’d like to adopt, why and for which functions.

Of course, this technology will enhance work for employees… but it will also lead to job elimination, and employers and employees need to face this reality head-on. 

In fact, in 2018 (before ChatGPT came on the scene), the Organisation for Economic Co-operation and Development (OECD) estimated that new automation technologies put 14% of jobs across member countries at high risk of elimination and would significantly transform about one-third of jobs.

Even in these early stages, AI is already leading to layoffs (even if indirectly) at notable companies like UPS and BlackRock. More layoffs are inevitable, but employers have an opportunity to minimise talent loss through robust upskilling and reskilling programmes.

Accordingly, in addition to determining which employees will require reskilling versus upskilling, employers will need to secure buy-in from leadership and managers. They must also evaluate employees’ capabilities and the potential job matches for reskilled workers, and align skills programmes with the organisation’s succession planning strategy.

Organisations are struggling to … integrate AI fully and safely into their business

2. Project management

AI adoption is increasing quickly in certain areas – particularly when it comes to generative AI. But companies are still struggling to get these projects off the ground. 

In fact, some estimate that the failure rate of these projects in business is upwards of 80%. This is almost twice as high as the failure rate of other corporate IT projects.

The problem may be in how organisations are managing these projects, which are significantly more complex than other tech-related projects.

First, teams charged with leading AI adoption may be too fixated on executing quickly, preventing them from taking the time needed to understand the complexities of such projects. 

This is known as “solution fixation”: the tendency to focus on possible solutions before fully understanding the problem.

To avoid solution fixation, teams will need to spend more time learning, asking questions and understanding the potential issues of their projects.

Second, teams may be using the wrong project management approach. According to Ron Schmelzer, managing partner and principal analyst at Cognilytica, teams shouldn’t rely solely on the agile method of project management for AI projects. 

Its short iterative cycles don’t account for the complexity of data. Instead, Schmelzer recommends that teams use a hybrid approach that blends agile and data-centric methodologies to help deal with the complexity and importance of data in such projects.

Few organisations are actively working to mitigate known ethical risks of generative AI

3. Ethical and responsible use

Unfortunately, organisations have so far been slow to respond to the important ethical and responsible use issues related to AI. McKinsey & Company’s report found that most organisations using the technology consider inaccuracy a relevant risk of generative AI.

However, only 32% are mitigating those risks (and only 21% said they have generative AI policies in place).

Generally, few organisations are actively working to mitigate known ethical risks of this technology.

It’s an AI revolution

Ignoring or deprioritising ethics and responsible use can, at a minimum, damage the employer brand. At worst, an organisation can expose itself to liability for several types of legal violations (e.g. privacy, discrimination, intellectual property and securities laws).

Employers must address these issues to mitigate risks. Action items may include:

  • Implementing an AI policy
  • Adopting and communicating safeguards for use 
  • Establishing guidelines for ethical use
  • Creating a cross-functional AI working group with C-suite representation to address ethical and responsible use issues
  • Analysing the technology’s impact on the organisation’s diversity, equity and inclusion (DEI) or environmental, social and governance (ESG) strategies

In the coming years, employers will have the opportunity to revolutionise their organisations through AI. However, early data shows that, to succeed in this revolution, they must address the skills gap, adopt an AI-specific approach to project management and take precautions to ensure ethical and responsible use.

Learn more from Brightmine – the experts in brighter business outcomes.
