
The European AI Act will compel businesses to address how they use technology

The European Union has made regulating artificial intelligence (AI) a top priority of its digital agenda for more than ten years. Several pieces of legislation are set to take effect starting next year.

The AI Act primarily targets the largest tech platforms, which can cause the greatest harm, while also attempting to level the playing field so that new businesses can enter the market without being swallowed by established ones. Its goal is to protect consumers from unintended consequences that AI and machine learning (ML) technologies might create. Under the guidelines, businesses will be able to call out unfair behavior, such as when Big Tech companies exploit targeted advertising to generate sales from consumer data without the customers' consent.

The draft proposal for the AI Act was released by the European Commission, the bloc's executive body, in April 2021. The law, which is extraterritorial in scope and industry-neutral, aims to address concerns about the threats posed by unregulated applications of AI-based technology, as well as fragmentation in the European Union's AI policy.

The AI Act takes a risk-based approach, regulating AI systems according to the level of risk they pose. It defines four risk tiers:

1. "Minimal" risks, where the threat of harm is so remote that the act does not apply to such systems.

2. "Limited" risks, which must meet certain disclosure requirements but are not expected to cause significant harm.

3. "High" risks, such as systems that assess a customer's creditworthiness, assist with hiring or managing staff, or use biometric identification.

4. "Unacceptable" risks, which are deemed too dangerous to be permitted at all.
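As a rough illustration of this tiered structure, the four bands can be modeled in code. The mapping below is a sketch only; the tier names follow the article, but the example use cases and the lookup approach are my own assumptions, and an actual risk classification requires legal analysis rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # act does not apply
    LIMITED = 2       # disclosure obligations only
    HIGH = 3          # strict compliance requirements
    UNACCEPTABLE = 4  # prohibited outright

# Illustrative mapping; the use cases are assumptions, not drawn
# from the act's official annexes.
USE_CASE_TIERS = {
    "spam filtering": RiskTier.MINIMAL,
    "customer chatbot": RiskTier.LIMITED,
    "credit scoring": RiskTier.HIGH,
    "hiring": RiskTier.HIGH,
    "biometric identification": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to HIGH
    so that unknown systems get scrutiny rather than a free pass."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the "high" tier reflects a conservative compliance posture, not anything the act itself mandates.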

Companies that violate the restrictions could be fined up to 6% of their global annual revenue or 30 million euros, whichever is higher.
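That penalty ceiling works out to a simple maximum of two quantities. A minimal sketch (the function name is my own):

```python
def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine as described above:
    6% of global annual revenue or EUR 30 million, whichever is higher."""
    return max(0.06 * global_annual_revenue_eur, 30_000_000.0)

# A company with EUR 1 billion in revenue faces up to EUR 60 million;
# a firm with EUR 100 million in revenue still faces up to EUR 30 million.
```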

In 2023, the AI Act is anticipated to become law (with a transition period).

Two additional pieces of legislation, the Digital Markets Act (DMA) and the Digital Services Act (DSA), will accompany the proposed AI Act.

The DMA aims to regulate large, dominant platforms, such as social media or cloud computing companies, with 45 million or more monthly active users in the European Union, EU revenues of at least €7.5 billion (U.S. $7.5 billion), or a market capitalization of at least €75 billion (U.S. $75 billion). The law designates these platforms as "gatekeepers" and is intended to set restrictions on how gatekeepers process data, determine how interoperability interfaces will be implemented, and strengthen the rights of consumers and corporate users.
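The size thresholds above can be expressed as a simple predicate. This sketch assumes the user threshold combines with either financial threshold, per the description here; the DMA's actual designation process involves additional criteria and a formal Commission review, so treat this as illustrative only.

```python
def meets_gatekeeper_thresholds(monthly_active_eu_users: int,
                                eu_revenue_eur: float,
                                market_cap_eur: float) -> bool:
    """Rough check of the DMA size thresholds described in this article;
    not a substitute for the formal designation process."""
    financial_test = (eu_revenue_eur >= 7_500_000_000
                      or market_cap_eur >= 75_000_000_000)
    return monthly_active_eu_users >= 45_000_000 and financial_test
```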

The DSA places additional requirements on internet intermediaries, including hosting service providers and social media platforms, in relation to user-generated content made available through their services. Intermediaries and platforms are shielded from liability for the online distribution of user-generated content if they fulfill their content moderation duties and remove any unlawful content found on their services "without undue delay."

The DMA's final draft was adopted in July and is anticipated to take effect in April 2023. Gatekeepers must comply with its obligations and requirements by February 2024.

The DSA is expected to enter into force at the end of 2022 and become fully applicable by mid-2024.

According to legal experts, the three planned laws are largely intended to hold multinational digital companies responsible for practices that have thus far eluded competition and data regulators.

Companies will need to "determine whether they fall within scope of one or more of these digital laws" and "start assessing what effect they may have on the business and what the impact may be from a compliance, product, and resources perspective," according to William Long, global co-leader of law firm Sidley's privacy and cybersecurity practice and head of the EU data protection group.

Companies employing AI and ML solutions in their operations or decision-making may fall outside the AI Act, DSA, or DMA, but if they process the data of EU citizens, they may still be governed by the EU's General Data Protection Regulation (GDPR), which carries similarly eye-watering sanctions for noncompliance.

Experts warn that businesses may wrongly assume technology vendors will be held accountable for flaws in a system's design, when in fact the companies deploying the technology are accountable for how they use it. That assumption creates risk when businesses ignore "danger signs" and skip their routine checks, monitoring, and risk management reviews.

As a result, every business adopting AI should establish a thorough risk management program incorporated into its daily operations. The program should include an inventory of all the organization's AI systems, along with a risk categorization system, risk-reduction strategies, independent audits, data risk management procedures, and an AI governance framework.
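One concrete way to picture such an inventory is to record each system with its risk classification and audit status. The field names and helper below are purely illustrative assumptions, not drawn from any regulation or standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One hypothetical entry in an organization's AI system inventory."""
    name: str
    business_purpose: str
    risk_tier: str                          # e.g. "minimal", "limited", "high"
    mitigations: list[str] = field(default_factory=list)
    processes_personal_data: bool = False   # flags potential GDPR exposure
    last_independent_audit: str = ""        # ISO date of most recent audit

def systems_needing_audit(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """High-risk systems that have never had an independent audit."""
    return [s for s in inventory
            if s.risk_tier == "high" and not s.last_independent_audit]
```

A governance team could run such a query periodically to surface high-risk systems that have slipped past review, which is one small piece of the monitoring the experts describe.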

According to Robert Grosvenor, managing director of Alvarez & Marsal's disputes and investigations practice, "AI-generated content or use of automated decision-making in the calculation of online financial products or product pricing, for example, will require its own controls and monitoring to ensure outcomes are fair and appropriate."

According to an insights article by legal experts at consulting firm McKinsey, companies should also be transparent about the purposes for which they intend to use AI technologies and should have clear reporting structures that allow an AI system to be checked multiple times before it goes live. Given that many AI systems process sensitive personal data, companies should have strong, GDPR-compliant data privacy and cybersecurity risk management policies in place.

The first thing businesses need to know, according to Caroline Carruthers, CEO and co-founder of global data consultancy Carruthers and Jackson, is whether anything within their organizations will immediately change as a result of the legislation. She counseled abiding by the spirit of the law rather than its letter.

Organizations can future-proof their compliance by learning to handle data ethically rather than waiting for legislation to spell out how to act responsibly, according to Carruthers. "When it comes to data, and AI specifically, legislation will only continue to evolve as the AI and ML sectors continue to innovate at pace. Instead of planning to just about cross the compliance line, businesses should be changing their behavior to far exceed it."


