Article written by Andrada Popescu, Senior Associate, Noerr
The business world is evolving at lightning speed, and technology is at the forefront of this change. One of the most significant recent advancements is the rise of chatbots such as OpenAI's ChatGPT. Even though using ChatGPT might sound appealing and could simplify workflows, its widespread use in the workplace can be a double-edged sword, raising concerns about potential risks and legal issues that employers must take into account to safeguard their business.
Using ChatGPT without a proper framework can be a legal minefield, endangering both employers and employees. As a result, it's essential to be aware of the potential hazards associated with utilizing ChatGPT in the workplace.
In this article, we explore some of the most significant risks that employers must navigate to reap the benefits of workplace innovation while avoiding legal pitfalls.
What is ChatGPT?
ChatGPT is a cutting-edge natural language model developed by OpenAI and introduced to the market in November 2022. Unlike conventional chatbots, which are usually designed for specific functions, ChatGPT is adaptable enough to be used for a wide variety of tasks. Trained on vast amounts of unlabelled data, it is both highly advanced and versatile, making it a valuable resource.
As companies continue to explore the option of using ChatGPT to optimize their business processes, it is increasingly becoming a preferred tool for employees across various industries when carrying out their day-to-day tasks, from drafting emails to generating job offers and even programming.
Potential risks for employers
While ChatGPT may seem an attractive option for companies and individuals seeking to streamline their workflows, it's important to consider whether it is the best tool for the job. It is true that ChatGPT's adaptability enables it to be used for a wide range of tasks; however, it may not always be the optimal choice for every situation, as it may expose companies to certain risks.
Security, data privacy and confidentiality breaches
ChatGPT can expose employers to risks if employees feed it proprietary or confidential information or personal data. Such disclosure could lead to penalties or legal actions against the company, such as:
claims for damages and/or penalties for breaching personal data regulations if employees disclose information that includes personal data of other employees, clients or customers; and/or
claims for damages in the case of disclosure of commercial and/or sensitive data; this may also harm the company’s reputation.
Providing inaccurate or outdated information
While ChatGPT is an AI-powered language model trained on a vast amount of data, it may not always provide accurate or up-to-date information, since the tool is not updated in real time. This can lead to errors or inaccurate results when performing tasks. Moreover, ChatGPT may not always be able to distinguish between reliable and unreliable information, which could lead employees to make decisions based on misleading or incorrect input. For instance, if ChatGPT provides inaccurate information in response to a customer query, it could cause customer dissatisfaction and harm the company's reputation.
Inaccurate/outdated information from ChatGPT could have far-reaching consequences for businesses, making it crucial to ensure that employees understand the limitations of this tool and use it judiciously. While ChatGPT can be an effective tool that streamlines workflows, it's important to validate its results and cross-check them with human expertise to ensure accuracy and reliability.
Copyright infringement
Using ChatGPT to generate responses to user queries can also be risky from a copyright perspective. The language model was trained largely on internet data, some of which may be protected by copyright. Imagine a scenario where ChatGPT reproduces copyrighted content in its responses to user queries: this could amount to copyright infringement and result in a costly lawsuit or infringement claim.
It’s important to understand that it's not just the employee who's at risk here; the employer can also be held liable for any copyright infringement committed by its employees. For instance, an employee’s use of copyrighted images, text, or any other protected content could breach the owner's exclusive rights and lead to a legal nightmare. If the breach is discovered, the copyright owner could file a lawsuit against both the employee and the employer responsible for the infringement.
Thus, employers should ensure that they comply with copyright laws, or they might find themselves and their employees navigating a minefield that nobody wants to face.
Inherent bias
There is also a growing concern about potential biases that may be ingrained in ChatGPT. After all, the tool learns from the data it is trained on, leaving room for potential unintentional biases to creep into its algorithms. These biases could affect how ChatGPT responds to user queries and trigger legal consequences. Moreover, if employees rely on ChatGPT for decision-making processes such as hiring or promotion tasks, the system's biases could result in discrimination claims.
For example, ChatGPT’s bias was put to the test by asking it to generate basic performance feedback for various professions. The results showed that gender stereotypes influenced ChatGPT’s choice of pronouns for jobs such as receptionist or construction worker. Additionally, the feedback provided for female employees was longer than that for male employees, highlighting the potential for gender bias in the system. This example once again underscores the need for employers to be aware of, and mitigate, any inherent biases in ChatGPT’s output.
What can employers do?
Employers must take appropriate steps to address the risks and potential legal implications associated with using ChatGPT in the workplace. The decision to allow employees to use ChatGPT in their daily tasks is crucial and can have a significant impact on the company's reputation and compliance obligations.
To mitigate these risks, employers may choose to restrict the use of ChatGPT in the workplace altogether. In such a case, employers should:
clearly include this restriction in the internal regulations/policies; and
restrict their employees’ access to the ChatGPT website/application.
Alternatively, if employers decide to allow the use of ChatGPT, the following actions could be considered:
drafting and implementing clear rules and procedures regarding the use of ChatGPT. These rules should set out how and to what extent employees may use the tool in their daily tasks, for example by:
prohibiting employees from referring to or entering confidential, personal data or information into ChatGPT;
expressly stipulating which tasks can or cannot be performed using ChatGPT;
imposing an obligation to ensure human oversight and review of ChatGPT’s responses.
conducting training sessions with employees to raise awareness of the potential risks associated with using ChatGPT, thus empowering them to identify situations where the tool may not be appropriate or could pose a risk to the company.
Having clear rules and procedures in place for the use of ChatGPT in the workplace is crucial for employers, regardless of whether they allow its use or not. These rules provide a framework for employees to follow and also allow employers to hold employees disciplinarily liable if they fail to observe them.
Conclusions
Is ChatGPT a game-changer in the workplace, or a legal headache waiting to happen? The answer is both. While ChatGPT can boost efficiency and productivity, employers must also navigate potential risks and compliance issues. It is also important for companies to realise that, as ChatGPT becomes more prevalent in the workplace, the competitive edge of offering innovative solutions or responses may diminish.
Thus, employers should carefully consider whether, and if so how, their employees should use ChatGPT. In either case, they should establish clear policies and procedures to reduce potential risks. By proactively addressing these risks, employers can maximize the advantages of ChatGPT while minimizing potential legal and compliance issues.