Companies need to be acutely aware of the legal risks and potential liabilities associated with the use of artificial intelligence (AI) technologies, as regulators are scrutinizing the field.

At the Compliance Week Europe conference in London, Justin Garten, an AI consultant specializing in large language models, compared using AI to having an intern with a drinking problem, saying, "At first, they look as if they know what they're talking about, and then you slowly realize they don't." He urged companies to think carefully about the types of AI tools they use and the purposes they are intended to serve.
Garten also noted that even companies with the in-house expertise to spot inaccurate, biased, or harmful AI outputs cannot catch every error, because many of the people with the necessary skills work in siloed functions within their organizations.
Sophie Goossens, a partner at law firm Reed Smith, emphasized the importance of rigorously assessing the reliability of AI outputs, warning that companies should not rely on them blindly simply because the systems present them with confidence. With the European Union finalizing its long-anticipated AI Act, Goossens urged companies and compliance officers to consider how their use of AI could violate data and copyright laws.
One area of concern is the risk of copyright infringement, which can vary from one jurisdiction to another. Goossens pointed out that companies need to determine whether they require permission from copyright holders to use data for training AI systems, and whether such use qualifies as fair use, a doctrine interpreted differently in the EU, the US, and the UK.
Additionally, Goossens cautioned that inputting company data into AI systems, such as chatbots, may transfer copyright in that data to the tech firm. This could inadvertently surrender intellectual property (IP) and may risk breaching data protection regulations, especially when personal data is involved.
Moreover, the same AI technology can fall into different risk categories under the AI Act depending on how it is applied. AI used to recommend books or movies may be considered low risk, while the same technology used to curate content on a social media platform could be deemed high risk and subject to stricter regulation. This classification can create compliance challenges for companies depending on how they deploy AI.
By Flexi Team