Taylor Swift’s AI voice

Back in 2023, I wrote a post on my personal blog about how AI was a tremendous opportunity for artists willing to cash in on their very voice. The way it would work is simple: artists would generate AI audio content replicating their voice — with their authorization. It could also work for deceased artists, although someone else would have to sign off: AI still won’t bring back the dead.

The point I was trying to make was that AI had emerged as a new way for artists to generate content at next-to-zero cost (no studio time or session musicians required) and perhaps also create songs that could otherwise not exist. Think Freddie Mercury singing “Golden”, which AI made possible last year. Or new Elvis music. That kind of thing.

Obviously, this can only work with a proper legal framework, through which artists would get paid for this new AI-generated content the same way they would for a regular recording. It does imply a discussion on percentages, especially with multiple stakeholders involved, but that’s nothing we haven’t seen before. There is, however, one tiny issue: no such legal framework currently exists, as one’s voice is not legally protected the way one’s likeness is. You can protect a recording of your voice, but not the voice itself: while that distinction was moot before the advent of ChatGPT, it now becomes fairly critical…

Which is precisely why Taylor Swift recently moved to trademark her voice: to ensure it cannot legally be used by AI models without her consent — and, incidentally, compensation. Which is a perfectly natural stance: Swift is the most important musician alive today, so anything with her voice on it instantly becomes a hot commodity, even if it’s just her talking about the weather. The thing is, while this may be an option established artists can consider, it does create a hurdle for up-and-coming creators trying to make a name (and a voice) for themselves: the trademark procedure takes time — and money…

The way forward here is clear: adding explicit legal protections for one’s voice. Quite frankly, I don’t understand why this hasn’t happened yet: the risk is obvious, and it affects everyone, not just musicians. I get that AI companies have been lobbying to have free rein on data usage, and that the law progresses slower than tech, especially AI tech. Still, Suno doesn’t let you create a song à la Stevie Wonder: you have to describe the actual style instead. And there is a precedent: in 2024, the state of Tennessee passed the aptly named ELVIS Act to protect artists’ voices in the wake of AI. From Tennessee to the world?

This is not only about a logical extension of the law: it is about fostering a new industry that benefits everyone — artists, who gain a new way to create content and generate revenue in the process, and listeners, who get to enjoy that content, especially songs that could not otherwise have existed…
