
ILTA Just-In-Time: Who is Liable if Generative AI Breaks the Law or Things Go Wrong?

By Andrea Scholfield posted 02-25-2025 10:01

  

Please enjoy this blog post authored by Dr. Megan Ma, Research Fellow and the Associate Director of the Stanford Program in Law, Science, and Technology and the Stanford Center for Legal Informatics (CodeX), Stanford Law School.

In the face of new technologies, the question of liability frequently surfaces as the technology moves beyond an initial state of fascination. We have seen this historically, and most recently, with self-driving and autonomous vehicles. Yet what often underpins the question of liability is the notion of agency and, importantly, how human intervention (i.e., the human-in-the-loop) can and should change. Put differently, when humans can no longer predict where in the process they will play a role, they have perceivably lost the visibility needed to define responsible and accountable oversight. Accordingly, the first instinct becomes discussion of creating and establishing new legal and governance regimes.

In an earlier wave of artificial intelligence (AI), the notion of algorithmic accountability entered the conversation with the rise of consumer-facing medical diagnostic apps. Among other applications, algorithmic accountability was a reaction to confident claims from medical diagnostic companies that leveraged machine learning to diagnose users’ ailments. Babylon Health had a meteoric rise beginning in late 2016 and, in 2021, was valued at $4.2 billion. What was particularly intriguing about Babylon Health was its bold assertion that its AI platform could diagnose more accurately than experienced doctors.

Setting aside problems with the underlying technology (it was eventually revealed that its “AI platform” was grounded in a set of complex Excel spreadsheets rather than state-of-the-art machine learning), the healthcare tech company eventually imploded. Babylon Health simply did not have institutional infrastructure comparable to that of a medical insurer. That is, Babylon Health was legally acting as a health care provider but did not have the architecture in place to bear the burden of accountability in the event of a medical error resulting from its platform.

Consider, alternatively, the example of the autonomous vehicle. Efforts to define the extent of automation became a tool to support shifting liability from the individual driver to the car manufacturer. In theory, the more automated driving features included, the higher the likelihood that liability fell into the hands of the manufacturer. Interestingly, car manufacturers began to forthrightly accept product liability should a vehicle be classified as SAE Level 5 (otherwise known as full driving automation). For anything below SAE Level 5, it remained unclear where liability would fall. Insurers began to intervene by drawing the line at whether a human is capable of “taking back the wheel”. As a result, car manufacturers were disincentivized from classifying their vehicles as SAE Level 5, even if, de facto, the differences between SAE Level 4 and Level 5 were nominal. Some legal scholars have since argued for a no-fault model, stepping away from a traditional tort remedy.

I often cite to these two examples as important lessons learned for when a new legal regime may be necessary to manage the risks of a given emerging technology. In the former, the lack of understanding of the health care industry contributed to the downstream issues with accountability. Babylon Health represents the problem of forcefully retrofitting an existing legal regime onto an unfamiliar and disjunct circumstance. In the latter, a conflict of stakeholder incentives contributed to the persistent murkiness of establishing causation in liability. 

Interestingly, in the case of generative AI, questions of liability are not singular. Unlike the aforementioned cases of self-driving cars and medical diagnostic platforms, where there is a specific purpose to the technology and a particular legal regime in question (i.e., torts versus strict liability), generative AI is perceivably a raw material that, in effect, enables illimitable use cases across numerous stakeholders; the circumstances that could invoke liability are therefore multidimensional. Even within a single domain, such as law and the provision of legal services, liability extends beyond one regulatory regime.
 
More tangibly, there is a difference in contractual liability between the use of large language models (LLMs) obtained directly from AI companies, such as OpenAI, Anthropic, and Google, and their use via specialized enterprise vendors, such as Thomson Reuters’s CoCounsel, LexisNexis’s Lexis+ AI, and Harvey.

Importantly, the specific technological offering could also yield incredibly diverse analogies. That is, new advancements in the technology have increasingly moved the imagery from “tool” towards “employee”. 
 
In March 2024, Cognition Labs introduced the first-ever AI software engineer, Devin. Early testing experimented with the extent of Devin’s ability to work autonomously. Anecdotally, stories exploded across X (formerly Twitter) and Reddit of individuals testing it on their own GitHub repositories and codebases. Shortly after, Spellbook launched the first-ever AI Associate for law firms.

The performance of state-of-the-art reasoning models (e.g., OpenAI’s o3), coupled with the rise of agentic capabilities (e.g., OpenAI’s Operator and Deep Research), has led to a reframing of generative AI as a human counterpart. In a recent keynote at the Consumer Electronics Show (CES), Jensen Huang, CEO of NVIDIA, likened IT departments to the “HR of AI Agents.”

This raises the question: should AI agents that leverage generative AI be afforded legal personhood? While discussions of AI personhood predate current technological developments, I consider that reflecting on the history of corporate personhood provides better context as to whether today’s circumstances may be comparable.

Though corporate personhood was arguably established in 1886 in Santa Clara County v. Southern Pacific Railroad Co., the extension of the rights afforded to corporations today is frequently attributed to Citizens United v. Federal Election Commission in 2010, in which the Supreme Court held that corporations have First Amendment rights to freedom of speech. The broader affordance of equal treatment (under the Fourteenth Amendment) had garnered varying public debate. However, the extension of political speech rights to corporations provoked widespread critique around the fundamental definition of a “person”. That is, critics of corporate personhood argued that legal persons should not include artificial entities, as such entities cannot necessarily exercise their own independent judgment.
 
The onset of the next generation of AI agents increasingly places the notion of judgment into question. Agents are intended to work collaboratively, or in tandem, with humans, but also to take decisions on certain actions and execute them toward a particular goal entirely autonomously.

In Corporate Personhood: A Limit to Corporate Empowerment, Christina Park argues that contrary to popular criticism, the Supreme Court has been rather restrictive about when “corporations were viewed as separate entities and not as extensions of natural persons.” Park notes that the “key difference was the involvement of the right of a natural person, or more accurately, of the rights of the owner of the corporation.” Therefore, while corporations may, in theory, be persons, their rights are subject to their relationship vis-à-vis the individuals within the organization. As entirely separate entities, corporations do not possess the same rights as natural persons. 
 
In a similar vein, we may analogize notions of corporate personhood to AI personhood: while AI agents may not perceivably be afforded the same rights as natural persons, they may be given rights comparable to those of corporations. It is possible to imagine, in the not-so-distant future, an extension of respondeat superior applying in circumstances where AI agents are treated as de facto employees (otherwise, legal persons) of a company, such that employers could be held vicariously liable for mistakes resulting from the actions those agents execute.

Nevertheless, I do not necessarily argue that we are ripe for discussions of legal personhood for AI agents. Rather, I suggest it is not inconceivable that we are on the horizon of demanding a new legal regime to manage the broader integration of generative AI in our work and practice. In the interim, it is more important than ever to ensure that relevant guardrails, both technical (e.g., Guardrails AI, Supervisory AI) and policy (e.g., RAILS), are in place to account for the increasingly ubiquitous adoption of these technologies.
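For readers curious what a “technical guardrail” looks like in practice, the sketch below illustrates the general pattern in Python: an agent’s proposed action is checked against an explicit policy before anything is executed, and out-of-policy actions are routed back to a human reviewer. This is a minimal, hypothetical illustration; the names (ProposedAction, guarded_execute, escalate_to_human, the ALLOWED_ACTIONS policy) are assumptions for the sake of the example, not the API of Guardrails AI, Supervisory AI, or any other product.

```python
# A minimal, hypothetical sketch of a technical guardrail for an AI agent.
# All names here are illustrative assumptions, not any vendor's actual API.

from dataclasses import dataclass, field

# Policy: actions the agent may execute without human sign-off.
ALLOWED_ACTIONS = {"summarize_document", "search_internal_knowledge_base"}


@dataclass
class ProposedAction:
    name: str                                     # e.g., "summarize_document"
    payload: dict = field(default_factory=dict)   # arguments for the action


def execute(action: ProposedAction) -> str:
    # Stand-in for the system that actually carries out the action.
    return f"Executed: {action.name}"


def escalate_to_human(action: ProposedAction) -> str:
    # Stand-in for a human-in-the-loop review queue.
    return f"Escalated for human review: {action.name}"


def guarded_execute(action: ProposedAction) -> str:
    """Execute the action only if it falls within the explicit policy."""
    if action.name in ALLOWED_ACTIONS:
        return execute(action)
    return escalate_to_human(action)


if __name__ == "__main__":
    print(guarded_execute(ProposedAction("summarize_document", {"matter_id": "1234"})))
    print(guarded_execute(ProposedAction("send_client_email", {"to": "client@example.com"})))
```

The value of the pattern is that the human-in-the-loop boundary is made explicit and auditable in code, which is precisely the visibility into responsible oversight that the liability discussion above worries is being lost.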








#ArtificialIntelligence
#GenerativeAI
#InformationGovernanceorCompliance
#200Level
#Just-in-Time
#BlogPost
