What Impact Could the EU AI Act Have On eDiscovery?

By Franki Russell posted 01-05-2024 10:13


Please enjoy this blog authored by Tom Whittaker, Senior Associate, Burges Salmon LLP.

 
AI regulation will affect the eDiscovery industry both directly and indirectly. It may be directly relevant to the provider of an AI system incorporated into an eDiscovery platform, the provider of the platform itself, and the users of the platform (end clients and their advisors). This article summarises the EU AI Act from an eDiscovery perspective, including how the Act could affect the providers and users of AI in eDiscovery.

What does the future hold?
 
AI technologies and use cases - and, simultaneously, the eDiscovery industry - are developing at pace, so long-term predictions are difficult. Nonetheless, AI's current high profile suggests the old adage applies: short-term change tends to be overestimated and long-term change underestimated. Three things are likely in the author's opinion.

- First, eDiscovery will be used by more organisations, in more sectors, for more purposes. On the demand side, the underlying commercial and legal drivers for increased and improved use of eDiscovery to analyse larger, more complex volumes of data will continue. Other parties, regulators and courts will expect greater use of eDiscovery, including AI, and will increasingly use the technology themselves. On the supply side, the eDiscovery market will continue to grow more competitive, increasing the options available to users; eDiscovery providers will continue to use their tech stacks, with the associated technical and cost options, as important points of difference for clients.
 
- Second, AI technologies and use cases will continue to evolve, improving availability and affordability, and their opportunities and risks will become better understood. Stakeholders will expect to see (and be more comfortable seeing) the use of AI in more domains, including eDiscovery. There are already various uses of AI in eDiscovery, as identified in a 2021 EDRM AI project.[1] eDiscovery platform providers and eDiscovery service providers will continue to develop their own AI systems and to incorporate third-party AI systems and foundation models. With the rise of generative AI, various eDiscovery providers are using large language models, including to help analyse and summarise large volumes of data and to allow for natural language queries (a simplified sketch of that pattern follows this list). This seems set to continue and develop.

- Third, AI regulation is coming. In the US, the Biden-Harris and Trump administrations have issued executive orders regarding AI, and there are proposals for AI regulation at federal and state level, with some states having passed such legislation already. In the UK, AI regulation of some form is said to be inevitable. And then, of course, there is the EU AI Act, which is the focus of this article.
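
To make the natural language query pattern concrete, here is a minimal, illustrative sketch in Python. It is a simplification under stated assumptions, not any vendor's implementation: the Email class, the query_llm stub and the naive keyword shortlist are hypothetical stand-ins for a real platform's indexing and model calls.

```python
# Illustrative only: a minimal "retrieve, then ask the model" pattern of the
# kind described above. Email, query_llm and the keyword shortlist are
# hypothetical placeholders, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def query_llm(prompt: str) -> str:
    # Stand-in for a call to a large language model; a real platform would
    # send the prompt to its chosen model here.
    return f"[model answer based on a {len(prompt)}-character prompt]"

def natural_language_search(question: str, corpus: list[Email]) -> str:
    # Naive keyword shortlist; production systems would use an index or
    # embeddings rather than exact word overlap.
    terms = set(question.lower().split())
    hits = [e for e in corpus if terms & set(e.body.lower().split())]
    context = "\n---\n".join(
        f"From: {e.sender}\nSubject: {e.subject}\n{e.body}" for e in hits[:5]
    )
    prompt = f"Using only the emails below, answer: {question}\n\n{context}"
    return query_llm(prompt)

emails = [
    Email("a@example.com", "Q3 forecast", "The forecast assumptions changed in July."),
]
print(natural_language_search("What changed in the forecast?", emails))
```

In practice, platforms replace the keyword shortlist with an index or embedding search and send the assembled prompt to a hosted or self-hosted model.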


The EU AI Act
 
The EU AI Act is the EU's proposed cross-sector legislation to regulate AI technologies and their uses, with limited but important exceptions. At the time of writing, political agreement had been reached on the Act but the final text had not been published. Even once it is published, legal and technical guidance is expected to clarify the meaning and scope of the Act. The key points to know now are:

- The Act is intended to have extraterritorial effect, in that it will apply to anyone who places an AI system on the EU market. Given the complexity of the AI value chain, including in eDiscovery, one or more parties in the chain may need, or choose, to adapt to the Act.
- The Act is expected to be enacted in early 2024, with various transition periods of between 6 and 24 months. Practically, that means industry needs to prepare for compliance now.
- There are significant consequences for non-compliance. These include fines of potentially €40m or 7% of global turnover for the previous financial year (whichever is higher), removal of an AI system from the market, a requirement to make significant changes to the AI system in question, and regulatory investigations.
- The Act takes a risk-based approach to AI regulation. Some AI systems will be prohibited; some will be considered high risk and subject to additional obligations; others will be considered low risk and subject to lighter obligations and voluntary codes of conduct.

How will the AI Act apply to the eDiscovery industry?

Consider an organisation, preparing for civil litigation, which instructs a law firm and engages an eDiscovery provider that uses a third-party disclosure platform, which itself incorporates a large language model (LLM) to allow natural language searches of large volumes of email data.

First, is there an AI system? The definition in the AI Act is understood to have been updated as follows:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

The LLM will be caught by this definition. Other types of AI systems used in eDiscovery are also likely to be caught.
 
The EU has recently adopted the definition used by the OECD, an apparent move to increase international regulatory interoperability. The aim is a definition specific enough to provide legal certainty yet flexible enough to respond to changing AI technologies and use cases. The definition has been subject to much debate and criticism, in particular that there is a large grey zone around what does and does not fall within it. Further guidance is expected.
 
Second, are those involved with an eDiscovery project potentially subject to the Act? The Act places different obligations upon different stakeholders in the AI lifecycle (an illustrative mapping of the hypothetical parties follows this list), including:

- Provider of an AI system or foundation model – a natural or legal person, including a public authority, agency or other body, that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark. That could include the provider of the AI system incorporated into the disclosure platform.
- Deployer – a natural or legal person, including public authorities and agencies, who uses the AI system. That could cover the organisation, the instructed law firm and the eDiscovery provider.
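
As a rough illustration only, the hypothetical parties might map onto those roles as sketched below; the mapping paraphrases this article's discussion, and actual classification will turn on the final text, guidance and the facts of each engagement.

```python
# Illustrative only: how the parties in the hypothetical above might map
# onto the Act's stakeholder roles, paraphrasing this article's discussion.
ROLES = {
    "provider of the AI system incorporated into the platform": "provider",
    "organisation (end client)": "deployer",
    "instructed law firm": "deployer",
    "eDiscovery service provider": "deployer",
}

for party, role in ROLES.items():
    print(f"{party}: {role}")
```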


Third, what risk profile does the AI system have? This will affect what obligations are placed upon the various stakeholders (a simplified summary follows this list).[2]

- Prohibited risk – some AI systems are prohibited outright. It is not clear that any of the prohibitions apply to eDiscovery in the traditional sense, or to the hypothetical example above. However, providers should be aware of which AI systems are prohibited so that they do not inadvertently sail close to the wind in future. For example, if the hypothetical organisation used an eDiscovery platform not to prepare for civil litigation but as a non-real-time way to analyse employee sentiment, the stakeholders would want to familiarise themselves with the intended prohibition of 'emotion recognition systems' which identify or infer the emotions or thoughts of individuals or groups, albeit specifically where biometric data is used.
 
- High-risk – high-risk systems include specific types or use cases of AI systems. One example is the use of AI systems by law enforcement to analyse large datasets, which could arguably cover a form of eDiscovery; beyond that, it is not clear that eDiscovery would be considered high-risk. If the system is not prohibited, it is for the provider to consider whether the AI system is high-risk and consequently subject to the high-risk obligations. Those obligations on the provider include: risk management; quality management; data and data governance; keeping technical logs and documentation; transparency; human oversight; security; corrective actions; duties to provide information; and co-operation with authorities. Some of these may give the eDiscovery industry pause for thought, such as the quality of the training and validation data sets used for the AI model(s).
 
If the provider determines that the system is not high-risk, it must notify a supervising authority, which may question or overrule that determination. It appears likely that many providers of AI systems will choose to make proactive notifications to an authority. Consequently, others in the hypothetical example may be affected by how the AI system provider chooses to adapt to the high-risk obligations in the Act.
 
- Low/minimal risk – these are AI systems which are neither prohibited nor high-risk, but for which the EU encourages voluntary engagement with high-level principles, including technical robustness and safety, privacy and data governance, and transparency. In the hypothetical example, the provider and deployers may need to consider what degree of transparency is required, including how, when and for what the AI system is used, and by whom.
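
The risk-based structure described in this list can be summarised in a short sketch. The tier names and obligation lists below paraphrase this article rather than the statutory text, and the exceptions and nuances noted elsewhere are omitted.

```python
# Simplified sketch of the Act's risk-based structure as summarised above.
# Tier names and obligation lists paraphrase this article, not the statutory
# text; exceptions and nuances are omitted.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LOW_MINIMAL = "low/minimal"

OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management", "quality management", "data and data governance",
        "technical logs and documentation", "transparency", "human oversight",
        "security", "corrective actions", "duties to provide information",
        "co-operation with authorities",
    ],
    RiskTier.LOW_MINIMAL: [
        "voluntary engagement with high-level principles, e.g. technical "
        "robustness and safety, privacy and data governance, transparency",
    ],
}

# Example: list the obligations a provider of a high-risk system would face.
for obligation in OBLIGATIONS[RiskTier.HIGH]:
    print(obligation)
```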


Will the AI Act set new standards for AI?


The EU intends for the AI Act to set global standards for how AI is developed, deployed and procured. Given the EU's role in the global market, its rules may well have practical effects more widely. The AI value chain is complex, and changes in one part of the chain may flow up and down it; for example, providers of large training data sets may adapt to the Act, affecting what data is available for the large language models used as part of an eDiscovery platform. Many in the AI value chain will operate in the EU; if they do not comply with the Act, they may be denied access to the market, alongside other consequences. If the EU AI Act appears to set the highest bar, organisations may choose to comply with it as the baseline requirement across jurisdictions. There is precedent for EU regulation setting standards beyond the EU's borders; some would argue that the EU's data protection regulation (GDPR) did exactly that. Further, the EU has taken steps to encourage voluntary compliance with the AI Act before it is enacted and in force.



[2] Note that there are various exceptions and nuances to the Act which will also need to be worked through and which are not covered here. There are also specific, and potentially additional, obligations upon providers of foundation models which may affect others in the AI lifecycle, such as where an eDiscovery provider integrates a foundation model into its platform(s).


#GlobalPerspective
#PracticeManagementandPracticeSupport
#ArtificialIntelligence
#FutureandEmergingTechnologies
#eDiscovery
