
Four years of GDPR: Is new technology putting the data privacy law to the test?

Four years after the European Union's flagship data privacy legislation, the General Data Protection Regulation (GDPR), came into effect, concerns are being raised about whether it is being outpaced by technological developments and the ways they use data.



Data protection authorities (DPAs) believe the regulation's underlying principles of lawfulness, fairness, and transparency make it "future-proof," able to cover developments in artificial intelligence (AI), machine learning, cloud computing, and big data in a way that its predecessor, the 1995 EU Data Protection Directive, failed to do.


Many legal experts believe that the GDPR is adaptable enough to deal with new technologies. "Although it is often presented as a conflict, the reality is there is very little which technology might make possible that the (U.K. or EU) GDPR would outright prohibit," said Will Richmond-Coggan, director and a specialist in data protection and new technology at law firm Freeths.


"While the GDPR may not have been drafted with these new technologies specifically in mind, the broad principles of lawfulness, fairness, and transparency still apply, along with a number of additional requirements for higher risk processing," said James Castro-Edwards, privacy and cyber counsel at law firm Arnold & Porter.

Others are not so certain, arguing the GDPR is stifling data innovation and technology adoption. According to a report released earlier this month by the National Bureau of Economic Research, a nonprofit research organization based in Cambridge, Massachusetts, the legislation has killed off just under a third (32.4 percent) of the apps available on the Google Play Store, while new Android app development has halved because of compliance concerns and the risk of large fines.


Part of the issue may be that both developers and users of new technologies are unsure which practices will be tolerated, as different DPAs have taken divergent enforcement approaches or prioritized certain violations as more harmful than others. A lack of consistency in cross-border investigations and fines likewise leaves businesses with more questions than answers about how data-driven technologies may be used.


As a result, businesses are still trying to work out how to put AI into practice, according to Camilla Winlo, head of data privacy at consultancy Gemserv.


"It can be operationally difficult to collect informed consent, and it can also be difficult to fully understand the risks associated with a processing activity and the ways individuals will react to AI-driven outcomes," she said. "When data protection rules are difficult to apply in practice, organizations can fall into the trap of believing that avoiding them is a pragmatic approach."


The European Union, national data protection authorities, and European governments are attempting to resolve conflicts between the GDPR and the development and use of technology. Regulators are providing proactive support through sandboxes or by publishing guidance on specific topics like automated decision-making, while the European Commission is working on new regulations like the EU Data Act, Data Governance Act, and planned AI legislation to clarify what is expected and what is illegal.


Several businesses have already been fined for violating citizens' data rights when using new technology. In 2019, the Dutch Data Protection Authority fined a company €725,000 ($798,000) for using biometric (fingerprint) data when less intrusive alternatives were available.


In February, Budapest Bank was fined €650,000 (then $742,000), the highest penalty yet imposed by Hungary's DPA, for using voice-analysis AI systems to assess the emotional state of customers who called its call center and to monitor complaint handling. While recording and storing call data was not illegal, the regulator found the bank had failed to conduct a data protection impact assessment to mitigate potentially high risks to customers, who were not informed of the use of AI or given the opportunity to object.


Clearview AI was fined more than £7.5 million (U.S. $9.4 million) by the U.K. Information Commissioner's Office earlier this week for collecting people's images from the internet and social media sites without their knowledge or consent in order to build a global online database that could be used for facial recognition in criminal investigations.


To avoid violating the GDPR when using technologies like biometrics, AI, and machine learning, companies should be "continuously horizon scanning, understanding what new regulations, guidance, and fines have been published by the regulators in the geographies they operate and process data in, and understanding the impact on their own organizations," according to Sharad Patel, partner at PA Consulting.


"Sometimes new AI technologies are deployed and implemented without their knowledge," he said, compliance departments should be aware of the technologies in use across their organizations. Patel also believes that privacy frameworks should be reviewed on a regular basis (every six months) and expanded to include items such as data ethics and fair use of AI policies and guidance.


Furthermore, before any new systems are deployed, all business units should be made aware of the privacy risks associated with new technologies and given specific guidance on how to avoid them, he added.


"Creating an audit trail and being able to justify business decisions—rather than implementing AI and new technologies without thought—is integral to compliance and to defending claims," said Lauren Wills-Dixon, solicitor and privacy legislation expert at law firm Gordons. She advised businesses to take advantage of data regulators' tools and other resources to "undertake appropriate assessments and fully document their analysis of the effects of these technologies on individual privacy rights as required under the GDPR."


For their part, developers and companies that use new technologies must ensure that systems and processes are GDPR compliant before they begin using them, according to Richmond-Coggan.


"The nature of the legislation is that it requires you to design in safeguards and protections from the ground up. It is often very difficult to achieve meaningful compliance when you are trying to bolt it on as an afterthought ," he said.


"As a quick rule of thumb," he continued, "I always ask clients to think about whether they would be comfortable explaining to someone what use they are making of their data. If they are not, it is usually a sign there is something that needs to be thought about again or that is an area of risk."

By fLEXI tEAM

