Right up there with a bullshit detector—why do we not yet have one of these?
That’s the question Deezer’s latest AI-music revelations make you ask. Or maybe we’ve just been watching too much Poker Face and wish we had Charlie Cale’s uncanny knack for spotting the fakes instantly.
The Paris-based streaming service has just pulled back the curtain on the scale of AI content flooding its platform, and it’s staggering: 60,000 AI-generated tracks are uploaded every single day, now making up 39% of daily releases.
That’s roughly a sixfold jump in daily volume in just a year; back in 2025, AI accounted for only around 10% of daily uploads. Deezer has flagged more than 13 million AI tracks since launching its detection tool last year.
The problem isn’t just volume. Around 85% of streams on AI tracks are fraudulent, played by bots or click-farms rather than real listeners.
For creators, it’s a sharp reminder: releasing track after track doesn’t automatically equal success.
Quantity isn’t necessarily quality, and flooding the system won’t protect royalties from AI fraud. Deezer is fighting back by shadowbanning AI tracks and excluding fake streams from payouts, so real artists still get paid.
Even more fascinating: Deezer is now licensing its AI-detection tech to the wider music industry. Claimed to be 99.8% accurate and already tested with France’s Sacem, the tool can identify tracks generated by models like Suno and Udio, potentially heading off billions in projected revenue losses by 2028. Deezer CEO Alexis Lanternier sums it up simply: transparency for fans, protection for artists.
By turning internal safeguards into a shared tool, Deezer is positioning itself as the industry’s first real “security layer,” separating human artistry from synthetic noise.
So if we can now detect AI bullshit, maybe it’s time for a human bullshit detector too.