AI Is Impersonating Real Musicians on Spotify — and Fans Can’t Tell the Difference


AI-generated imposters are flooding Spotify with fake tracks under real artists’ names, stealing streams, wrecking algorithms, and confusing fans. Now, after years of inaction, Spotify is finally doing something about it. Here’s the full story of a crisis that has the music industry genuinely rattled.

Imagine releasing a carefully crafted EP, music you made deliberately by hand with no AI anywhere near it, only to have a fake AI-generated track appear under your name on Spotify days later. That’s exactly what happened to British singer-songwriter Ormella in January 2026, and it’s becoming an all-too-common story across the music industry.

Ormella, who has around 83,000 monthly Spotify listeners, released a live EP from her living room as a statement of authenticity. “I wanted to do something ungenerated,” she told TIME. The irony of what followed was brutal: within days, an AI-generated song appeared on her Spotify profile, completely without her knowledge or consent, and Spotify’s own system notified her superfans about the new release. They listened. “I had a lot of fans message me, asking, ‘Is it you? It doesn’t sound like you,’” she said.

The AI track racked up a thousand plays on its first day. It appeared on other singer-songwriters’ profiles too, with different titles and cover art. The fraudsters behind it probably made a few dollars in royalties, a trivial sum, but the damage to Ormella’s artist identity was something money can’t easily fix.

The Scale of the Problem Is Staggering

This isn’t an edge case. It isn’t a niche problem affecting a handful of obscure artists. The volume of AI-generated music flooding streaming platforms in 2026 is genuinely alarming, and the numbers back it up.


One number in particular should stop you cold: 39% of all new music hitting Deezer daily is AI-generated. Nearly four in ten new tracks uploaded to that platform every day are machine-made. The open distribution model that enabled independent artists to release music without a label deal has been systematically exploited by bad actors with AI tools and very few scruples.

Dennis Kooker, President of Global Digital Business & US Sales at Sony Music Entertainment, put it plainly when speaking at the IFPI’s Global Music Report 2026 launch in London: these AI deepfakes cause “direct commercial harm to legitimate recording artists.” Not theoretical harm. Not potential harm. Direct, commercial harm to royalties, to algorithmic recommendations, to fan relationships, and to careers.

Who Gets Targeted, and Why Dead Artists Aren’t Safe Either

The targeting pattern is revealing. Fraudsters aren’t randomly dropping AI tracks under any name they can find. They’re targeting musicians strategically, and the strategy varies depending on what they’re after.

High-profile acts make for lucrative impersonation opportunities. Last year, AI-generated tracks mimicking Tyler, the Creator’s Don’t Tap the Glass album flooded Spotify and TikTok ahead of its official release. A fake version of the album briefly climbed to the number two spot on Spotify under the actual album name. Tyler has millions of fans; even a fraction of their streams represents real money.

The experience of another targeted musician, Cork, is a good illustration of how quickly these operations move. He posted a clip of himself playing a new track on social media. Within a week, an AI-generated fake was circulating, built to capitalize on the viral moment before he could release the real thing. The AI music wasn’t trying to be great art. It was trying to be timely, to exist in the window where interest was highest.

But here’s the part that underscores how deeply broken the system has become: the problem doesn’t stop when an artist dies. Fake AI songs have been uploaded to the Spotify profiles of SOPHIE, the pioneering electronic music producer who died in 2021, and to the page of Uncle Tupelo, the long-defunct 90s band associated with Jeff Tweedy of Wilco. Folk singer-songwriter Blaze Foley, who was murdered in 1989, has also had his profile targeted. A dead artist can’t report identity theft. But it’s still theft from fans, from estates, and from the cultural legacy of artists who can no longer defend themselves.

How the Scam Actually Works, and Why Spotify Couldn’t Stop It

Understanding the mechanics here matters, because the fix needs to address the actual vulnerability, not just the symptoms.

Most music reaches streaming platforms through third-party distributors: services like DistroKid and TuneCore that let independent artists upload tracks without going through a label. These services are largely designed around trust: they take metadata at face value. Want to upload a song and say it’s by a specific artist? The system has, historically, been remarkably easy to manipulate.

As Ormella noted, these platforms “lack robust authentication processes to prevent impersonation of existing artists.” A bad actor can upload a track, attach an existing artist’s name to the metadata, and that music will route to the artist’s profile. Spotify’s algorithms then do what they’re built to do: they see a new release associated with an artist and push it through Release Radar and fan notifications. The platform is doing exactly what it’s designed to do. The design itself was the problem.
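To make the vulnerability concrete, here is a minimal, hypothetical sketch of trust-based metadata routing. It is not Spotify’s or any distributor’s actual code; the `Delivery` structure, the toy catalog, and the `route_delivery` function are invented for illustration. The point is simply that when routing keys off a free-text artist name, anyone who types the right name lands on the real artist’s page.

```python
# Hypothetical sketch of trust-based metadata routing, NOT Spotify's or any
# distributor's real pipeline. It illustrates the vulnerability described
# above: delivery metadata names an artist, and the track is attached to
# whichever existing profile matches that name, with no proof the uploader
# actually is (or represents) that artist.

from dataclasses import dataclass


@dataclass
class Delivery:
    track_title: str
    artist_name: str   # free-text metadata supplied by the uploader
    audio_file: str


# Toy catalog: artist name -> profile ID
ARTIST_PROFILES = {"Ormella": "profile_001", "Cork": "profile_002"}


def route_delivery(delivery: Delivery) -> str:
    """Trust-based routing: match on the claimed artist name alone."""
    profile_id = ARTIST_PROFILES.get(delivery.artist_name)
    if profile_id is None:
        # Unknown name: a brand-new artist page gets created instead.
        profile_id = f"profile_new_{delivery.artist_name.lower()}"
    # No ownership check happens here; the release lands on the profile,
    # and downstream systems (Release Radar, fan notifications) treat it
    # as a legitimate new release by that artist.
    return profile_id


# An impersonator only needs the right string in the metadata:
fake = Delivery("Untitled (AI)", "Ormella", "generated.mp3")
print(route_delivery(fake))  # -> "profile_001": routed to the real artist's page
```

The sketch is deliberately naive, but the failure mode it shows is the one artists describe: the matching step trusts a string, and everything downstream trusts the matching step.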

This connects to a broader pattern we’re seeing across the AI landscape. The tools to create convincing synthetic content (music, images, video, voice) have outpaced the tools to verify authenticity. As we’ve covered in our analysis of companies that replaced human workers with AI and then regretted it, the “move fast” mentality in AI adoption has consistently created downstream problems that are harder to fix than they were to prevent.

Spotify’s own culpability here is real, even if unintentional. When King Gizzard & the Lizard Wizard pulled their catalog from the platform last year, an AI impersonator called “King Lizard Wizard” appeared almost immediately with songs using identical titles and lyrics. The fake versions were initially recommended by Spotify’s own Release Radar playlist before being taken down. That’s not a minor embarrassment. That’s the platform’s core discovery algorithm actively amplifying fraud.

Spotify’s Response: Artist Profile Protection

On March 24, 2026, Spotify announced it was entering beta testing for a new feature called Artist Profile Protection. The company described it as “a first-of-its-kind solution,” though critics would reasonably ask why it took this long to build something that should have existed years ago.

Here’s how it works: artists opt in through Spotify for Artists, the platform’s dashboard for musicians. Once enabled, they receive a notification when music is delivered to their profile and can choose to approve or decline the release before it goes live. Only approved releases appear on their Spotify profile, contribute to their catalog stats, and factor into algorithmic recommendations like Release Radar.

To prevent legitimate releases from getting caught in the approval queue, each artist receives a unique “artist key,” essentially a code they can share with trusted distributors. When music is delivered with that key attached, it’s automatically approved without requiring manual review. If an artist fails to act on an incoming release, it’s blocked by default.
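The flow described above boils down to three outcomes: auto-approve when a trusted artist key accompanies the delivery, hold for the artist’s manual decision otherwise, and block by default if the artist never responds. Here is a minimal sketch of that decision logic under those assumptions; the function name, fields, and key format are illustrative, not Spotify’s actual implementation.

```python
# Hypothetical sketch of the approval flow described above. Names and fields
# are invented for illustration; this is not Spotify's actual API or code.

from enum import Enum


class Decision(Enum):
    AUTO_APPROVED = "auto_approved"   # delivered with the artist's key
    APPROVED = "approved"             # artist manually accepted it
    BLOCKED = "blocked"               # declined, or no response: nothing goes live


def review_release(delivery_key: str | None,
                   artist_key: str,
                   artist_response: bool | None) -> Decision:
    """Decide whether an incoming release reaches the artist's profile.

    delivery_key    -- key attached by the distributor, if any
    artist_key      -- the artist's own secret key
    artist_response -- True (approve), False (decline), None (no action yet)
    """
    if delivery_key is not None and delivery_key == artist_key:
        return Decision.AUTO_APPROVED          # trusted distributor path
    if artist_response is True:
        return Decision.APPROVED               # artist reviewed and accepted it
    # Declined, or the artist never acted: blocked by default.
    return Decision.BLOCKED


# A fraudulent upload arrives with no key and the artist ignores it:
print(review_release(None, "orm-7f3a", artist_response=None))        # BLOCKED
# A legitimate release delivered with the shared key sails through:
print(review_release("orm-7f3a", "orm-7f3a", artist_response=None))  # AUTO_APPROVED
```

The design choice worth noticing is the default: anything unrecognized and unreviewed stays off the profile, which shifts the burden of proof from the artist onto the uploader.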

The announcement came one week after it emerged that Sony Music had asked streaming platforms to remove more than 135,000 AI-generated songs impersonating its artists, a staggering request that signals just how organized and systematic the impersonation problem has become. According to Music Business Worldwide, the timing wasn’t coincidental: industry pressure had been building for months.

What the Feature Does Well, and Where It Falls Short

Credit where it’s due: Artist Profile Protection is a meaningful structural change. Moving from a purely reactive “report and remove” model to one where artists can intercept fake releases before they go live is the right direction. The artist key system is also sensible: it solves the obvious problem of legitimate releases getting blocked by an overly paranoid approval system.

But the limitations are real and worth examining honestly.

First, there’s the active management burden. Spotify is upfront that the feature “isn’t necessary for every artist” and is “best for those who are comfortable very actively managing their catalog.” For artists who are already managing their music career alone (booking, promotion, social media, fan engagement), adding another approval workflow isn’t trivial. The artists who are most vulnerable to impersonation (mid-level independent acts with devoted but not massive audiences) are often the ones with the least bandwidth to monitor an additional inbox.

And then there’s the fundamental problem that declining on Spotify doesn’t stop the fraudulent upload from existing. The fake track doesn’t get deleted from the distributor’s account. It doesn’t get reported to the distributor. It simply doesn’t appear on Spotify. The same bad actor can try again tomorrow.

The Broader Implication: AI Is Reshaping Who Benefits From Music

The AI impersonation crisis in music isn’t just a fraud story. It’s a story about what happens when powerful generative technology meets an industry that still runs on relatively weak identity infrastructure.

What’s playing out on Spotify right now is a version of the same dynamic we see across creative industries facing AI disruption: the tools to create convincing imitations at scale have arrived far faster than the systems needed to verify authenticity. The music industry built its streaming infrastructure around accessibility and openness, values that are genuinely good for independent artists, and is now being forced to retrofit gatekeeping mechanisms it was philosophically opposed to five years ago.

There’s a legitimate debate about AI music itself. Some musicians use AI as a creative tool, and not every AI-generated track is fraud. The issue isn’t generative music; it’s generative music attached to someone else’s identity without consent. That distinction matters enormously for how the industry builds its response.

Deezer has been moving faster on detection, building an AI music identification tool that it’s now licensing to other companies and industry bodies, including Hungary’s EJI rights organization. That approach, building detection technology that can be deployed across platforms, seems like a more scalable solution than asking every individual artist to manage an approval queue. The question is whether the major platforms have the will to implement detection systems that might slow down upload velocity.

Spotify’s Artist Profile Protection is an important first step. But for artists like Ormella, who still face impersonation on every platform that isn’t Spotify, the problem is far from solved. The music industry’s fight against AI fraud is, as Digital Music News aptly put it, “an unfortunate game of whack-a-mole,” and right now, there are a lot more moles than mallets.

What Comes Next

The pressure on streaming platforms isn’t going anywhere. With Sony’s 135,000-track removal request establishing a new benchmark for how aggressively major labels are willing to pursue AI fraud, and with IFPI’s Global Music Report 2026 putting impersonation front and center in the industry conversation, expect every major DSP to announce equivalent features in the months ahead.

Apple Music, Amazon Music, and Tidal have all been conspicuously quiet so far, but that silence is unlikely to hold. The reputational cost of being the platform where Tyler, the Creator’s AI clone is living rent-free is becoming too high to ignore.

The deeper challenge is building a music ecosystem where authentication is baked into the upload process, not tacked on after the fact. That means distributors like DistroKid and TuneCore need to implement verification at the point of upload, not rely on platforms to clean up afterward. And it means the industry needs to decide, once and for all, whether it wants a genuinely open distribution model or a verified one. Right now, it’s trying to have both, and the results are becoming increasingly difficult to defend.

For independent artists building their careers one honest listen at a time, the stakes couldn’t be higher. The fight for authentic music isn’t just about streaming royalties. It’s about whether a listener who discovers your music through an algorithm can trust that what they’re hearing is actually you.