
AI’s regulatory and litigation frontiers continue to expand

David Curle  Principal / David Curle Communications

· 6 minute read

Even as more people express their wonderment and concerns about generative AI, there is much more going on, especially around regulation and ethics

It’s quite the paradox: While many lawyers wring their hands over being replaced by artificial intelligence-powered software, the legal and regulatory environment for AI continues to evolve rapidly, ensuring a steady flow of work for lawyers whose clients create or are affected by AI technologies.

Most of the focus is on current legislative and regulatory initiatives targeted at technology providers. Recently, tech company GC Irene Liu reviewed some of the more significant regulatory initiatives from the European Union, the United States, the United Kingdom, and other countries. These large-scale regulatory frameworks “will bring increased compliance obligations for any company that leverages AI in almost any region in the world,” says Liu.

However, every week brings new legal challenges to AI and the companies that create or use it. These skirmishes revolve around the same issues that seem to follow many new technologies: the desire to promote innovation and tech-based competitive advantage soon runs into challenges around privacy, personal integrity, bias, fraud, collusion, price manipulation, and copyright infringement.

Here are some of the more recent regulatory and litigation actions involving AI innovation that are worth following:

FTC investigation of ChatGPT

The U.S. Federal Trade Commission (FTC) — an agency whose mission is to promote competition and protect consumer interests — has announced a broad investigation of OpenAI, the company behind ChatGPT, that will focus on potential harm to consumers. OpenAI CEO Sam Altman has invited regulation in testimony before Congress and is cooperating with the agency’s investigation. The investigation comes less than a year after the launch of ChatGPT, and regulators’ urgency reflects the belief that an earlier generation of tech giants was not regulated until those companies had grown substantially and the harms they might cause had become widespread.

The risks that the FTC seeks to address include the possibility of sensitive personal information becoming part of AI systems’ training data; the potential inaccuracy of the responses that systems like ChatGPT provide; the risks to users who might rely on these inaccurate outputs (often termed hallucinations); and the ease with which generative AI systems can facilitate the creation and distribution of harmful misinformation.

In a recent opinion essay in The New York Times, FTC Chair Lina Khan detailed the risks inherent in today’s tech market, in which a handful of companies control the raw material (data) that goes into today’s large-scale information technology systems. She described the tension between promoting innovation and competition among those firms on the one hand and protecting privacy, personal integrity, and free markets on the other. She also explained how such potential dangers as fraud, collusion, price manipulation, and copyright infringement, as well as the potential for discrimination and bias, are actually built into existing data.

The FTC investigation is an attempt by regulators to get ahead of this new technology in protecting the public interest and to avoid the regulatory neglect that many believe led to abuses in privacy, pricing, and other areas by an earlier generation of fast-growing tech giants such as Facebook, Google, and Amazon.

AI guardrails for the major players

Shortly after the FTC investigation of OpenAI was announced, the Biden administration indicated it wanted to cast an even wider proactive net around the more prominent players in the AI space. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI jointly announced a set of voluntary safeguards in response to a meeting with President Biden. The precautions include security testing, research on bias and privacy concerns, information-sharing with governments about risks, and transparency measures to identify AI-generated content.

However, the non-binding standards to which the group agreed are vague and open to interpretation, and critics have pointed out that they are largely unenforceable. At best, the seven companies likely see an opportunity to forestall quick regulatory action and to help shape the various regulatory proposals brewing in Congress and the executive branch.

Copyright: Who owns the training data for AI?

Nora Roberts, Michael Chabon, and Margaret Atwood are among the more than 10,000 professional writers who have signed a letter demanding that AI companies such as OpenAI and Meta stop utilizing their works without consent or compensation. The letter was initiated by the Authors Guild, a professional organization of writers with more than 12,000 members.

The letter objects to how GPT-4 and other large AI language models are trained on existing documents, many of which are scraped from the internet. The authors claim that AI products like ChatGPT work by ingesting copyrighted material and then generating derivative works, which should require compensation under copyright law.

The argument is reminiscent of the Authors Guild’s earlier struggle with Google over its Google Book Search service, which concluded in 2015. In that case, the Guild asserted that Google’s process of scraping and analyzing the content of copyrighted works while building a database of books constituted copyright infringement, even though Google did not make the full text of the books available to users. That dispute did not go well for the Guild, as the courts declined to approve a proposed settlement in the case and eventually ruled, in effect, that Google’s use of the copyrighted books constituted fair use.

Other lawsuits by individual authors and artists against AI providers for copyright infringement are also reaching the courts, notably those filed by comedian Sarah Silverman and authors Mona Awad and Paul Tremblay.

Labor: AI and synthetic performers on the silver screen

Meanwhile, AI is a central issue in the ongoing strikes by actors and writers represented by the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America in the film and television industries. The AI-related issues concern the use of actors’ images and performances as training data for AI systems, the digital alteration of performances, and the creation of metahumans — entirely AI-generated actors that could replace human performers. Studios have said they would like to reserve the right to create synthetic performers based on real-life performances.

The union wants consultation and approval before synthetic performers are cast, and it wants studios to obtain an actor’s consent for post-production changes. So far, studios have proposed giving notice and negotiating terms before replacing a human actor with a synthetic performer, and they have agreed to obtain an actor’s permission before using the actor’s digital replica outside the production for which the artist was hired.

No end in sight

These various challenges to the scope and power of quickly popularized AI-based technologies and the companies behind them are informed by the earlier, largely unchecked growth of the tech giants a decade ago. Not eager to repeat that mistake, regulators and litigants seem focused on getting regulatory schemes in place before AI becomes too well-established to be reined in.
