Training data, however, is only part of the process. Frequently, artists who use generative AI tools go through many rounds of revision to refine their prompts, which suggests a degree of originality.
Answering the question of who should own the outputs requires looking into the contributions of all those involved in the generative AI supply chain.
The legal analysis is easier when an output is different from works in the training data. In this case, whoever prompted the AI to produce the output appears to be the default owner.
However, copyright law requires meaningful creative input from a human, a standard satisfied by something as simple as clicking the shutter button on a camera. It remains unclear how courts will apply this standard to generative AI. Is composing and refining a prompt enough?
Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.
The illustrator Hollie Mengert encountered this issue firsthand when her distinctive style was mimicked by generative AI engines in a way that did not capture what, in her eyes, made her work unique. Meanwhile, the singer Grimes embraced the tech, "open-sourcing" her voice and encouraging fans to create songs in her style using generative AI.
If an output contains major elements from a work in the training data, it might infringe on that work's copyright. Recently, the Supreme Court ruled that Andy Warhol's silkscreen based on a photograph was not protected by fair use. That suggests that using AI merely to change the style of a work, say from a photo to an illustration, is unlikely to qualify as fair use.
While copyright law tends to favor an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.
In many ways, generative AI is yet another creative tool that allows a new group of people access to image-making, just like cameras, paintbrushes or Adobe Photoshop. But a key difference is that this new set of tools explicitly relies on training data, so creative contributions cannot easily be traced back to a single artist.
The ways in which existing laws are interpreted or reformed — and whether generative AI is appropriately treated as the tool it is — will have real consequences for the future of creative expression.
This article is republished from The Conversation under a Creative Commons license.
Robert Mahari is a JD-Ph.D. student at the MIT Media Lab and at Harvard Law School. He studies how technology can and should affect the practice of law with a focus on increasing access to justice and judicial efficacy.
Jessica Fjeld is a lecturer on law and the assistant director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. She also is a member of the board of the Global Network Initiative.
Ziv Epstein is a Ph.D. student in the Human Dynamics group at Massachusetts Institute of Technology (MIT). Epstein received compensation from OpenAI for adversarially testing DALL-E 2 in spring 2022.