Your employees likely use AI tools for research purposes, to summarise transcripts, or develop competitive analyses. While their intentions might be to introduce efficiency and stay current with the latest tech, they’re putting your organisation at risk. Because every popular gen AI tool has likely already been hacked, putting your organisation’s data and reputation in jeopardy.
And these risks are exacerbated by the increasingly global relationships of many firms, because “Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated in existing products without clear descriptions or announcement,” said Joerg Fritsch, VP analyst at Gartner.
“Organisations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
Avoiding compliance landmines
The core challenge is employees inputting proprietary company or customer data into consumer-grade AI tools. They do this outside of corporate oversight. Many AI tools retain user inputs indefinitely, using the information to train their learning models further. While this dynamic makes the tools ‘better’ in terms of outputs, it creates opportunities for bad actors to access the data and also presents enormous regulatory compliance implications.
Consider a hospital, where different staff members might:
- Enter protected health information (PHI) into public AI tools, exposing a patient’s name, medical history or other data.
- Utilise AI to generate patient communications or care instructions tailored to individual needs using personal information.
- Upload lab results or images to an AI tool for analysis or to provide a second opinion, which could expose the patient’s private data.
Under stringent compliance rules such as HIPAA or GDPR, employees will break protocol simply by using ChatGPT or other AI tools. A breach of the shared information does not need to occur to move them out of compliance.
Understanding common AI security failures
Workplace AI tools have several recurring security failures. One of the most common issues is when API credentials are embedded directly into the website’s front-end code. This is like writing “123password” on a sticky note and taping it to your monitor for anyone to see. If someone finds that code they could easily break into the system.
Another failure emerges with providers like Microsoft 365 Copilot, where glitches caused what’s known as cross-content context leaks. This involved confidential information appearing by mistake in another user’s session due to a software glitch.
Firms risk exposing intellectual property with public AI tools. For example, Samsung moved to ban ChatGPT after some engineers uploaded source code to fix some bugs. While this points to the power of these AI tools to perform complex tasks, employees often see them as secure vaults and need to better understand the risks. If firms do not introduce AI integration policies and use secure tools, this can lead to breaches, compliance issues and PR nightmares.
Overcoming misunderstandings
Most executives assume a reputable AI that garners a lot of media attention, such as OpenAI, is automatically safe and has data storage and security standards that have the users’ interests in mind. The reality is more alarming and nuanced. Leaders, including HR executives, often underestimate the numerous pathways that can lead to data breaches and the misuse of company data within these public AI tools.
For example, AI tools often follow data compliance standards like SOC 2 or ISO. While leadership teams might believe those certifications mean bulletproof security, there are newer threats to be wary of. These risks include people tricking AI tools by stealing access tokens, using prompt injections, or AI platforms accidentally exposing other users’ data. Because these actions often go unnoticed, they are left unaddressed by security compliance checklists.
Enterprise leaders, including CHROs, need to start treating shadow AI usage (or openly acknowledged usage) as dangerous as phishing schemes or non-compliance with password management policies. They need to not only select advanced workplace-centered AI tools, but also start tracking breaches and educate themselves on the risks of public AI tools. Otherwise, they remain vulnerable.
Taking steps forward
HR teams can take immediate proactive steps to get in front of public AI tool usage and protect their companies and employees. These include:
- Drafting a clear “AI Acceptable Use” policy that outlines expectations and guidelines without ambiguity. HR can reinforce these policies during onboarding and quarterly training sessions.
- Formally approve secure AI tools that support audit logging, single sign-on and other data governance controls while at the same time block unapproved services from the network.
- Bring in external auditors to check for AI tool problems, such as prompt injections, which occur when someone tries to trick an AI into doing something it shouldn’t, or token misuse, which allows someone access to private information.
- Use data labeling in all AI tool interactions, so every prompt has a note detailing if the output is confidential, and for internal usage only, or if it’s okay for public viewing.
- Track and report shadow AI use to the executive team and create and follow fair yet strict procedures for employees who fall outside the guidelines.
A zero-trust approach
All of these efforts should also come with a zero-trust approach. In IT terms, this means assuming any tool has already been compromised or a compromise is imminent.
Do not give AI systems broad privileges to access a system; instead, provide the tools with minimal and temporary access for each task. You also need to direct IT staff to develop monitoring procedures for checking employee log-ins and ensuring sensitive data processing with AI tools stays on-device or in a controlled environment.
HR leaders can play a vital role in steering AI adoption towards secure and responsible platforms. Keeping AI data firmly within the organisation’s control, whether it stays on premises or in a private cloud is crucial for protecting the brand’s reputation and customers’ trust.
Of course, HR and the C-suite cannot reasonably ban all AI platform usage. Instead, they need to choose secure AI tools that are both easier and more powerful than consumer alternatives. This approach encourages adoption through usefulness, rather than stifling innovation through policy.