James Murphy memorably encapsulated the cyclical nature of electronic music trends in LCD Soundsystem’s 2002 breakout single, Losing My Edge: “I hear that you and your band have sold your guitars / And bought turntables / I hear that you and your band have sold your turntables / And bought guitars.”
When a subgenre or production technique blows up, be it tech house or hard techno, there’s always an undercurrent of producers and artists inspired to make something decidedly different. As AI tools proliferate across every aspect of our lives, many artists are reacting by making things that are unmistakably human, like Rosalía convincingly singing in 13 languages on LUX without AI assistance.
At the same time, the AI cat is out of the bag, and music-makers are deploying the technology in a variety of capacities. AI is something that we need to regulate, discuss and – for some – innovate with, instead of simply casting it aside. I often think of musicians in the early ’80s who were anti-synthesizer, fearing the instrument would take their jobs and make music sound too artificial.
The UK Musicians Union even attempted to ban synths and drum machines in 1982 as their popularity spread. (Of course, an innovative electronic instrument is very different to a far-reaching technological advancement, but AI music tools are just that: another tool.) New technology can feel threatening (especially when controlled by tech billionaires) but isn’t electronic music all about using tech in innovative ways to create new sounds and new possibilities?
As Association For Electronic Music’s Chief Strategy Officer Jay Ahern points out, going even further back to the ’60s, modular synths were the original generative music-makers, which, of course, required a skilled technician to operate. (And while DAWs have made a world of digital sounds accessible at the click of a button, many electronic producers still covet outboard gear and modular experimentation.)
Despite the recent rapid advancements in AI, Ahern reminds us that this isn’t a new conversation. “For musicians, iZotope and tools like that have been around. For me, it’s fairly normalized. I mean, would I have seen Suno and Udio coming? Yeah? But the thing that kind of irks me is the wholesale ripping of people’s creative works.”
Crucially, generative AI models are trained on datasets comprising creative works pulled from the internet, much of it copyrighted material. You can’t make “Fake Drake” without ripping off Drake’s music. This needs to be addressed, and could be a viable income stream for artists who choose to opt in.
“The thing that kind of irks me is the wholesale ripping of people’s creative works”
“There’s a political and cultural element to what we’re talking about. It doesn’t matter what the tools are, this is a deeper conversation. ChatGPT and these tools that output music at scale are trained on stuff. The laws in different countries are different. The gray areas can be exploited so that – exposing myself as a complete lefty – the rich come in and suck up all the money, and that’s actually what’s going on,” Ahern underscores. He adds that countries with strong musical traditions, like K-pop in South Korea, understandably aren’t happy about their chart-topping sounds seasoning the uncredited AI musical soup.
Tech companies with billion-dollar valuations are extracting value from copyrighted music on the internet and selling it as a service: making music-making easier and, they claim, more democratic. But creatives have always found ways to democratize and innovate music and art, long before tech companies tried to bite their flow.
Young Black men in New York and Chicago sampled soul, funk and disco records (along with Roland TR-808-crafted beats) to build new foundational genres – hip-hop, house and electro. CDJs reshaped DJing as we know it, lowering the barrier to entry and diversifying the scene. DAWs allowed more DJs to become producers, whether or not they could afford expensive gear.
Reducing the process of music-making to a series of prompts is, unsurprisingly, the depressing tech “solution,” but it’s not necessarily one we must accept. And maybe this time, artists can be properly paid and credited for their work.
“Democratizing music is something that has always happened,” Ahern adds. “The tools come along, like sampling – you don’t need musical semantics to make music great, it can be a free, pure form of expression. Using other people’s copyrights to engage with the process of making music has been a thing in music for many years – the whole canon of classic rock is built on blues.”
Elvis, the Rolling Stones, Eric Clapton and many others became rich and famous copying Black music that wouldn’t get airplay on segregated radio. While the legacy of the music industry exploiting artists, particularly artists of color, is far-reaching, the more active presence of copyright law and performing rights organizations in music since the ’80s has helped credited artists get better payouts for covers and samples of their work.
Copyright does, in essence, allow people to get paid, and if you can’t get paid, you can’t sustain an ecosystem
“Copyright does, in essence, allow people to get paid, and if you can’t get paid, you can’t sustain an ecosystem, ultimately. So, the idea of jumping in and saying, ‘Hey, we’re going to democratize music,’ yes, it’s an old idea, but in the past that was done to emancipate subcultures,” Ahern continues. “The idea of prompting music to exist, to remove the pain point I find a bit suspect.”
At the recent Winter Music Conference in tech hub Miami, an eye-opening panel entitled Artificial Music, Real Consequences: Ethics and Adaptation in the New Streaming Era dug into the complicated legal implications of AI in electronic music and debated how best to move forward in an ecosystem where AI-generated tracks are already in the mix.
“I am very worried about not being able to get a handle on it, because tech moves very fast. Legal reform and uniformity in legal systems moves very, very slow,” music business lawyer and AFEM co-founder Kurosh Nasseri said at WMC. The conversation was moderated by Parag Bhandari, CEO of PR company UG Strategies, and featured another American entertainment lawyer, Joshua Love, along with Sherlo Esajas of BumaStemra, the Netherlands’ music copyright collection society.
“The core issue we’re facing is we’re taking legal concepts that are 100 years old or more and have not been updated significantly, and we’re trying to shove a new square peg into this round hole,” Nasseri continued.
“Pretty much all these companies decided that they think they can do this without the permission of copyright owners because in the United States, we have a legal construct called fair use, which is a defense to copyright infringement. In certain instances, you can essentially bypass the copyright owner’s exclusive rights if you have a good fair use argument,” Love underscored. “There’s massive litigation going on in this very topic.”
While imperfect, copyright laws and music publishing credits exist to protect the creators, so they can create without someone coming along and ripping off their work for their own gain. And just as music copyright cases have at times been messy, deciding where to draw the line with AI will likely be as well.
“These are massive questions and there’s a paradox built into every one of them. Like, should there be copyright protection of [AI-generated] output?” Love asks. “On one hand, the purpose of copyright is to protect and incentivize human creativity. But I also don’t necessarily believe that prompting itself is a creative output that is worth protecting.”
Where do we draw the line on music made with generative AI versus AI-assisted tools? AI exacerbates the idea that all art is derivative of something, but do we really want to live in a world even more flooded with AI slop than the one we currently inhabit? If a gen-AI track sounds good and isn’t directly infringing on any copyrighted material, is it better or worse than music made by humans that’s an obvious and uninspired pastiche?
Tech companies aren’t going to regulate themselves, so it’s up to music orgs to set the standards for protecting artists’ rights in the age of AI. That’s where AFEM, a global electronic music trade association home to 300 member organizations, including indie labels, agencies and beyond, has stepped in with its AI Principles. Launched at Sónar+D last summer, the document outlines four core principles to protect music creators in the age of AI: Consent matters, old contracts don’t apply, moral rights apply, and credit and pay.
Essentially, it asserts that AI training data “must be licensed with explicit authorization,” that contracts created before AI don’t cover its use, that creators have a say in how their music is used – even if it was licensed – and that both training data and outputs must properly credit, pay and attribute all their sources.
Nasseri closed the panel with an important reminder and call to action, emphasizing the importance of attribution and the urgent need to compel tech companies to adopt the AI Principles, as they’re unlikely to do so of their own accord.
“When there is output, there must be a credit. There are a lot of people that say, ‘Oh, it can’t be done’. I’ve got news for you, if you can be innovative enough to get a machine to spit out music, then you can attribute it with tech. Unfortunately, usually the answer is, ‘We’ll do it when we have to,'” Nasseri emphasized. “So, the adoption of the principles as a consensus is, to me, the critical element.”
While tech companies can’t be trusted to regulate themselves and AI is causing many headaches and complications, the silver lining is that it may also be a powerful tool for creating these controls. “Maybe I’m a tech optimist, so I see the irony that we probably are going to need AI to regulate AI, then distribute AI in our business models,” Esajas points out.
Ahern agrees. He explains that AI can also help get people paid, find and share credits, and enforce copyright. AI will make it harder to “hide” bits of uncredited samples in a song and may lead to a push to preemptively credit all sources, whether or not you used AI to make music. The number one question Ahern gets from artists – “How can I use AI and not get sued?” – underscores their concern with the ethics and legality of using these tools. “Will AI normalize naming your sources? Yeah!” Ahern posits.
And if you release music, now’s the time to protect your work. “Don’t be too underground to understand how rights work,” Ahern adds. BumaStemra, GEMA in Germany, and ASCAP and BMI in the U.S. exist to protect music copyright and pay its creators, and they’re not just for songwriters.
Last year, GEMA won a case against OpenAI for violating German copyright law by using song lyrics without a license to train ChatGPT’s large language model. GEMA also filed suit against Suno in 2025, while Universal, Sony and Warner Records came together to sue the tech behemoth in the U.S. Warner dropped their suit after reaching a licensing agreement which their artists can opt in to, while the other two are reportedly struggling to come to an agreement, as they push for Suno-made songs to stay in the app instead of being spread across the internet.
Despite the widespread skepticism towards AI-assisted creative outputs, artists of all stripes are using these tools, and, as Ahern points out, some don’t even realize they’re AI, like bands using iZotope for mixing. “I can tell you, everyone is using these tools. Artists at all levels. They don’t want to talk about it, but products like Suno Studio have been really eye-opening for artists,” affirms music and tech executive Drew Thurlow, who recently published a book titled Machine Music: How AI is Transforming Music’s Next Act.
“No one I’ve talked to really thinks this tech is threatening to creatives. Quite the opposite: it’s reducing the friction between ideation and realization. If there is one downside, it’s that we’re going to need fewer session musicians, mixing and mastering engineers. That’s less of an issue in electronic music, where most of the artists have been a one-stop shop anyway,” Thurlow adds.
Iconic French Touch producers Alan Braxe and Fred Falke illustrate this point in a recent conversation with MusicRadar, where they single out AI, particularly Suno, as the new music tech they’re most excited about. In their eyes, it’s the next evolution of sampling, a powerful tool to give you exactly the sounds that you want.
The negative comments on the YouTube clip highlight the aforementioned skepticism; when technological advances (like CDJs or Photoshop, for example) make a creative output easier, there will always be detractors insisting the new way doesn’t require any skill. Even though the technical process of sampling has become much easier over time (razor blade and tape, anyone?), the art of it has endured, with modern artists like Jamie xx still finding innovative ways to reimagine deep cuts into fresh new sounds.
Some artists have managed to experiment with AI while sidestepping the ethical issues raised by big tech’s dubiously trained models. Dutch hard techno DJ/producer Reinier Zonneveld trained an AI model on his own catalogue and has been going B2B with his virtual alter ego. The model “listens” and responds to his set, playing drum machines and synths via robotics “in a way that a single person would never be able to,” as Zonneveld puts it.
Experimental electronic artist Holly Herndon has been leading the charge in pushing the boundaries of visual AI art and music. Much of her output is wound up in the exploration of identity; she’s created multiple AI models trained on her own voice, including Spawn and the “digital twin” Holly+.
On the more insidious side, groundbreaking producer Timbaland unveiled Tata Taktumi, a rather lifeless virtual artist, as part of his new AI-led entertainment company, Stage Zero. Taktumi and the other AI artists’ music is made in Suno, and with the initiative, Timbaland hopes to coin a new genre: A-pop, or artificial pop.
The Virginia Beach producer sampled from South Asian music to bolster his then-futuristic sound in the ’00s, but he’s now profiting from an Asian woman’s likeness without having to share the wealth or actually invent any new sounds. (It’s giving Diplo pretending to be Jamaican with Major Lazer.)
Whether or not A-pop takes off, AI is already dramatically reshaping the way music is made, for better or worse. When asked if AI will change the sound of electronic music in the next five or 10 years, Ahern is enthusiastic.
“God, I fucking hope so. I’m absolutely for AI because, finally, we can figure out how to bust genres open. We’re so siloed in terms of ‘This is tech house. This is house,'” Ahern says. “I hope genre can be so much more fluid. That’s why I’d like to see this all get over the line legally. As a creative thinking partner in music, and not only assistive in the studio, I think it’s a fabulous thing. I just don’t think we should do it at the cost of others, to enrich the few.”
![“Artists of All Levels Are Embracing These Tools — Yet They’re Reluctant to Discuss It”: The Impact of AI on Electronic Music](https://backingtracksfullcollection.com/wp-content/uploads/2026/04/Artists-of-All-Levels-Are-Embracing-These-Tools-—-Yet-758x426.jpg)