Indeed, the tools will block a request if it names an artist. However, the labels argue that the safeguards have significant gaps. Following news of the lawsuits, for example, social media users shared examples suggesting that if users separate an artist’s name with spaces, the request can be completed. My own request for “a song like Kendrick” was blocked by Suno, citing an artist name, but “a song like Kendrick” resulted in a “hip-hop rhythmic beat-driven” track and “a song like korn” resulted in “nu-metal heavy attack.” (To be fair, they didn’t look like the respective artists’ unique styles, but to even fit the properly defined genre seems to suggest that the model is actually familiar with each artist’s work.) Similar solutions were ruled out in Udio.
Possible results
There are three ways the case could go, Grimmelmann says. One is completely in favor of the AI startups: the lawsuits fail and the court rules that the companies did not violate fair use or imitate copyrighted works too narrowly in their results. If the models are found to be fair use, it would mean that songwriters and rights holders would have to find a different legal mechanism to seek compensation.
Another possibility is a mixed bag: the court finds that the AI companies didn’t violate fair use in their training, but they need to better control the production of their models to make sure they don’t inappropriately imitate copyrighted works. Grimmelmann says this would be similar to one of the original rulings against Napster, in which the company was forced to ban searches for copyrighted works in its libraries (although users quickly found workarounds).
The third and essentially nuclear option is that the court finds fault with both the training side and the output side of the AI models. This would mean that companies would not be able to train on copyrighted works without licenses and also would not be able to allow results that closely mimic copyrighted works. Companies could be ordered to pay damages for breach, which could run into the hundreds of millions for each company. If they are not bankrupted by such a decision, it will force them to completely restructure their education through licensing agreements, which could also be cost-prohibitive.
COURTESY OF SUNO.AI
For permission or not for permission
Although the plaintiffs’ immediate goals are to get the AI companies to stop training and pay damages, Recording Industry Association of America president Mitch Glazier is already looking ahead to a licensing future. “As in the past, music creators will assert their rights to protect the creative engine of human art and enable the development of a healthy and sustainable licensing market that recognizes the value of both creativity and technology,” He wrote in a recent article Advertising sign.
Such a licensing market could mirror what has already unfolded for text producers. OpenAI has licensing agreements with several news publishers, including Policy, The Atlantic, and The Wall Street Journal. The agreements promise to make content from publishers discoverable in OpenAI products, although the ability for models to transparently report where they get information from is confined at best.
If music AI companies follow this pattern, the only ones with the means to build powerful music models may be the ones with the most cash. Maybe that’s exactly what YouTube thinks. The company did not immediately respond to questions from MIT Technology Review about the details of its negotiations, but given the vast amount of data required to train AI models and the concentration of music rights holders, it’s fair to assume that the price tag for record deals would be eye-watering.
In theory, an AI company could bypass the licensing process entirely by building its model entirely on music in the public domain, but that would be a herculean task. There have been similar efforts in the field of text and image creation, including a Chicago law firm that created a model trained in dense regulatory documents and a model from Hugging Face trained on images of Mickey Mouse from the 1920s. But the models are small and unusual. If Suno or Udio were forced to train only in the public domain—think military marching music and royalty-free songs found in corporate videos—the resulting model would be a far cry from what they have today.