The Use of AI in the Workplace: Updating the IT Policy for Responsible Use

Key Takeaways

The rapid adoption of AI in the workplace presents both opportunities and challenges. To prevent unauthorized use and minimize associated risks, companies must proactively update their IT policies and ensure that employees receive adequate training.

Generative AI burst onto the professional scene just two years ago and has already significantly transformed how we work. (1) Employees were quick to embrace these tools, which can substantially boost productivity. Many organizations, however, deploy AI systems without fully understanding the scope of their use.

Many business leaders have yet to fully grasp the extent of this phenomenon. If employees use AI systems and generative AI tools without their employer's knowledge, without training, and without awareness of the dangers of uncontrolled use, companies could face legal, financial, and security risks.

While the use of AI in the workplace presents risks, it also offers significant opportunities. To harness the potential of this technological revolution, it is essential to promptly establish clear guidelines through an updated IT policy and train employees on best practices for using artificial intelligence.


1. Risks Associated with Unregulated Employee Use of AI

A large number of companies, especially small and medium-sized enterprises (SMEs), remain unaware that their employees may already be using AI systems.

The unregulated use of AI by employees - without proper oversight, awareness, or training - exposes companies to legal, financial, and security risks.

For example, an untrained employee may inadvertently use or disclose data in violation of the GDPR (personal data), trade secret regulations, or commercial contracts with suppliers or clients. This could happen in several ways:
    - Disclosure of confidential data: Personal or confidential information, including trade secrets, could be disclosed or reused by the AI system, particularly with generative AI tools such as email generators (a simple technical safeguard against this is sketched after this list);
    - Reuse of confidential data: Confidential commercial data used in prompts may be incorporated by the AI system as part of its training data;
    - Data storage outside the EU: Personal data may be stored on the AI provider’s servers located outside the European Union, potentially breaching data transfer regulations.
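
Beyond policy rules, some companies pair awareness efforts with simple technical safeguards. The sketch below is a minimal illustration in Python of a pre-submission filter that flags obvious personal data (email addresses, phone numbers, IBANs) before a prompt is sent to an external generative AI service; the patterns and the check_prompt helper are hypothetical examples, not a reference implementation.

    import re

    # Hypothetical, illustrative patterns for spotting obvious personal data.
    # A real deployment would rely on dedicated DLP tooling and legal review.
    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone number": re.compile(r"\b\+?\d[\d .-]{7,}\d\b"),
        "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return warnings for data that should not leave the company."""
        return [f"possible {label} detected"
                for label, pattern in PATTERNS.items()
                if pattern.search(prompt)]

    # Example: the email address triggers a warning before the prompt is sent.
    warnings = check_prompt("Draft a reply to jane.doe@example.com about the claim.")
    if warnings:
        print("Prompt held for review:", "; ".join(warnings))

Such a filter only catches obvious patterns; it complements, and does not replace, employee training and contractual safeguards.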

Additionally, generative AI systems may create content (e.g., images, text) that infringes third-party intellectual property rights. These systems often rely on massive datasets scraped from the internet without securing the necessary permissions. Using such content without prior verification may constitute copyright infringement.

AI is also known to generate inaccurate, poorly constructed, or entirely fabricated content. Using such content without prior validation could result in the user being held liable if the AI-generated content causes harm to a third party. (2)


2. Updating the IT Policy or Creating a Dedicated AI Policy

Implementing an IT policy within a company is highly recommended. An IT policy outlines the rules for employees' use of technological resources, including company-provided equipment, authorized software, internet access and usage, email protocols, and the role of the IT department. (3)

The IT policy should be periodically updated to keep pace with technological advancements and changing usage patterns. As part of this update, a section dedicated to the use of artificial intelligence in the workplace should be incorporated. Alternatively, companies may opt to develop a standalone AI policy. (4)

Drafting an AI policy is a complex task. Before drafting begins, it is crucial to designate the individual or team responsible for its development. Ideally, this team should include both IT professionals and legal experts. Their role will be to identify the areas where AI is used within the organization, map out the AI systems in operation, and determine the staff's training needs.

The AI policy should cover the following key areas:
    - Best practices for AI usage;
    - The company’s ethical commitments regarding AI;
    - Authorized AI systems and their intended purposes (an illustrative, machine-readable excerpt is sketched after this list);
    - Prohibited AI systems;
    - Risks associated with unauthorized or non-compliant AI use under the European Artificial Intelligence Act;
    - Guidelines for AI usage (e.g., verifying AI-generated outputs, complying with transparency obligations, avoiding the disclosure of confidential information, complying with GDPR and privacy regulations, etc.);
    - Concrete examples of permitted AI applications;
    - Mandatory training for employees prior to using AI systems;
    - Management oversight and disciplinary measures for the use of unauthorized AI systems or other violations of the AI policy.
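
To make the items on authorized and prohibited systems enforceable in practice, some IT departments keep the policy's tool list in machine-readable form. The following sketch is purely illustrative; the tool names, purposes, and is_authorized helper are hypothetical assumptions, not requirements drawn from this article.

    # Illustrative, machine-readable excerpt of an AI policy's tool list.
    # The tool names and purposes below are hypothetical examples.
    AUTHORIZED_AI_SYSTEMS = {
        "internal-chat-assistant": {"drafting", "summarization"},
        "code-completion-tool": {"software development"},
    }
    PROHIBITED_AI_SYSTEMS = {"unvetted-public-chatbot"}

    def is_authorized(system: str, purpose: str) -> bool:
        """Check a system/purpose pair against the policy lists."""
        if system in PROHIBITED_AI_SYSTEMS:
            return False
        return purpose in AUTHORIZED_AI_SYSTEMS.get(system, set())

    print(is_authorized("internal-chat-assistant", "drafting"))        # True
    print(is_authorized("internal-chat-assistant", "legal analysis"))  # False

Keeping the list in a single, versioned file makes it easier to audit which tools were permitted at any given time.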

The list above serves as a general framework and should be customized to align with the unique needs and operational context of each organization. Like the IT policy, the AI policy should be periodically reviewed and updated to reflect the evolution of AI systems and their usage.


3. Supporting Employees for Responsible AI Use

As with personal data protection, implementing an AI policy requires a multidisciplinary approach that brings together IT, legal, and compliance teams. It is essential to appoint an AI Compliance Officer - akin to the Data Protection Officer (DPO) - who will be responsible for raising awareness among the staff about the rules governing AI use within the organization. Given the complexity of the field, this person may be supported by legal or IT experts, depending on the circumstances.

Proper training is crucial for the responsible use of AI systems. This training should be tailored to the specific AI systems being used and should emphasize practical, hands-on learning. Key training topics include best practices for using AI systems, the risks associated with AI, the company's ethical commitments regarding AI usage, and the importance of responsible and compliant AI practices.


Implementing an AI policy is part of a broader initiative to ensure compliance with the European Artificial Intelligence Regulation (AI Act). (5) As mentioned in a previous article, the AI Act will apply in phases over three years, culminating on August 2, 2027. The first obligations, concerning prohibited AI practices and AI literacy, apply from February 2, 2025, and the obligations applicable to general-purpose AI models from August 2, 2025. A forthcoming article will provide a detailed roadmap for achieving compliance with the regulation, focusing on the proper use and governance of AI systems.

* * * * * * * * * *


(1) ChatGPT was launched by OpenAI in November 2022.

(2) See our article: "A company is responsible for information provided by its chatbot."

(3) See our previous articles on IT policies (in French): "The IT Policy in the Face of Technological Advancements: A Vital Tool for Defining the Rules of the Game," "BYOD: Legal Perspectives on a Controversial Practice," and "Protecting a Company’s Information Assets: A Comprehensive Overview."

(4) If a separate AI policy is adopted, the procedure applicable to IT policies must be followed to make it enforceable against employees and grant it disciplinary value.

(5) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence ("Artificial Intelligence Act" or "AI Act").


Bénédicte DELEPORTE
Avocat

Deleporte Wentz Avocat
www.dwavocat.com

November 2024