Using ChatGPT in the Workplace: Risks, Opportunities, and Policy Considerations

In recent years, artificial intelligence has rapidly moved from niche innovation to mainstream workplace tool, with ChatGPT leading the charge. As one of several generative AI chatbots powered by large language models (LLMs), ChatGPT can produce human-like responses to a wide range of prompts – from drafting emails and writing code to generating creative content and answering complex questions. Its accessibility and versatility have made it a popular choice among employees, who often use it quietly and independently, raising important questions about transparency, data protection and responsible usage in professional settings.


It is no surprise that these tools are already being used. A 2023 study of employees who use AI for work found that 68% do not tell their managers. A more recent 2024 Deloitte study reported that 48% of these employees use free online tools (like ChatGPT) and 31% use tools they pay for themselves.

The potential benefits are many – these AI tools can answer complicated questions, write code, prepare presentations and create audiovisual content, often delivering results within seconds. As more businesses and employees become familiar with their capabilities, issues around permitted and responsible usage will only become more prevalent.

There are several key areas to think about when using ChatGPT in the workplace:

  • Confidential information: ChatGPT can be used for free. However, its terms of use state that information entered by users can be used to help develop its service. This means that your company’s confidential or proprietary information could be stored and accessed by OpenAI or its subcontractors. This may result in the disclosure of confidential information and may breach your obligations to third parties – for example under specific contracts or NDAs.

  • Bias: It is important to remember that these chatbots are not search engines. They generate content based on the data they have been trained on. That data can be inaccurate and can carry biases, which the chatbot may then reproduce in the new content it generates. This creates a risk of discrimination issues if employees use the generated content.

  • Hallucination: The primary purpose of these chatbots is to generate a plausible response using the data they have been trained on. They do not verify whether their training data, or the content they generate, is actually true. Even if the training data is completely accurate, the chatbot could still generate inaccurate content by combining accurate information in an unexpected pattern. This is known as “hallucination” and can have embarrassing results – one US lawyer was fined for citing cases which did not exist.

  • Data protection: If employees input personal data into ChatGPT, such as personal data relating to customers, colleagues or service users, this is a form of “processing” for UK GDPR purposes. It should be captured in your privacy notices and, as with any form of processing, you will need to establish the purpose and lawful basis for the processing. Higher protections apply to sensitive “special category” personal data. Employees inputting personal data into ChatGPT could compromise its confidentiality, resulting in a data breach.


There are, of course, wider implications posed by the development and accessibility of AI tools in the workplace, around employee training and development, reskilling and changing job roles. Ultimately, these tools may deliver efficiencies which in turn lead to redundancies.


However, the most immediate consideration is whether to permit the use of ChatGPT and other LLMs at all. If the answer is no, this should be clearly communicated, with the prohibition included as a standalone policy or within your existing policies on IT and systems use.

If you will permit employees to use these tools, you should consider implementing a generative AI usage policy that sets standards for responsible use. For example, the policy could prohibit employees from inputting confidential or proprietary information, or any personal data. It could also remind employees that they are responsible for verifying the accuracy and suitability of any AI-generated content they use. This is especially important given that LLMs are “black box” systems – users only see what they put in and what comes out. Your policy can reinforce that employees are expected to check and understand the content they generate.

Many policies also require employees to record the prompts they have used, the content generated and the changes they have made. This creates an audit trail, which can be important given the difficulties in monitoring how individual AI tools have been used. Depending on the expected volume, you could require employees to log their usage in an internal database.


Having this policy in place should help set expectations within your workplace and allow employees to access the benefits of AI tools while mitigating risk. The right balance will depend on your individual circumstances. Many businesses are now implementing paid-for, business-specific AI tools – and even paying bonuses to employees for using them.

If you do roll out AI use across your workforce, this will naturally involve training and upskilling alongside setting these policy requirements. You should consider whether any employees are placed at a particular disadvantage during this process because of protected characteristics such as disability or age. On the latter, studies suggest that AI uptake is lowest among older workers, with negative assumptions sometimes made about their suitability for AI-related roles.

Breaching any specific prohibitions on using AI tools can be dealt with under your usual disciplinary procedure. Using these tools for permitted purposes but failing to carry out proper checks will likely be either a conduct or a performance issue. If you find that employees are using AI tools to create content which they cannot properly explain to internal or external stakeholders, this can be managed as a performance issue.


As generative AI tools like ChatGPT continue to reshape the modern workplace, organisations must strike a careful balance between innovation and responsibility. The benefits – efficiency, creativity, and enhanced productivity – are undeniable, but so too are the risks around data protection, accuracy, and ethical use. By implementing clear policies, providing appropriate training, and fostering a culture of transparency, businesses can empower employees to use these tools effectively while safeguarding against potential pitfalls. Ultimately, success lies not just in adopting AI, but in doing so thoughtfully, with human oversight and sound governance at the core.


For legal guidance and advice regarding AI policies or Employment Law queries, please contact Jack Balmer or another member of our Employment Team.

While great care has been taken in the preparation of the content of this article, it does not purport to be a comprehensive statement of the relevant law and full professional advice should be taken before any action is taken in reliance on any item covered.