
World Economic Forum: will.i.am Sees Live Stages Growing While AI Floods Music

  • Mars
  • 5 days ago


At this year’s World Economic Forum in Davos, will.i.am used a CNBC interview to lay out how he thinks artificial intelligence, streaming and live shows are reshaping the business that made him a star. The Black Eyed Peas founder, who now teaches a course on personal AI at Arizona State University in collaboration with Eduify, said the current wave of AI-generated tracks is only an early stage. “AI slop, this is the worst it is ever going to be right now,” he told CNBC, adding that systems will move from text prompts to what he called promptless tools as the math behind them improves.


As an artist who came up in hip hop’s sampling era, he placed AI inside a longer story about technology in music rather than treating it as a separate category. He compared today’s complaints about AI to the objections of jazz players who once questioned whether samplers and chopped breaks counted as real music, the same tools that later powered hip hop production.


Even with that context, will.i.am drew a line around who should benefit when AI systems learn from older records. He pointed out that every AI music system starts with a developer and code, which he considers an art form in its own right, but said there is still a debt to the catalogs that feed those models. In the interview, he argued that “people should be paid” when systems train on “the entire library that humans have made,” and warned that the AI that worries him most is not here yet: the version that creates without leaning on yesterday’s music. That future, in his view, will arrive on top of an industry already reshaped by streaming, social media and constant demands on attention. He said today’s artists are operating in a different environment from the vinyl era, or even the compact disc era that shaped his own career.



When the conversation turned to hits, will.i.am said the idea of one song owning an entire summer is harder to achieve in the current landscape. He described a past in which listeners shared the same music video channels and radio stations, and contrasted it with an audience now split across many feeds and platforms. According to him, trends still appear, but they burn out quickly instead of running through a whole season or year, replaced by what he called an onslaught of new content and distractions. He also walked through how revenue models have changed, from Lionel Richie’s era of selling vinyl to his own years of selling compact discs to today’s moment, when TikTok activity can drive relevance more than pure sales. “Now, we are now in a world where DSPs have devalued the worth of music,” he said, referring to digital service providers such as streaming platforms, and adding that songs no longer generate income the way they did for earlier generations of performers.


For will.i.am, that shift sets up a bigger role for concerts and in-person performance as AI makes it harder to trust what comes through speakers and screens. “We are going to get to a point where live is the place to be. You cannot trust the screen in a couple years,” he said, predicting a time when human-made and AI-generated tracks are “indistinguishable” without clear labels. He suggested listeners will need simple tags that tell them, “This is human music. This is AI music,” much as people now look for language around organic products. The conversation also touched on long-running questions about authenticity in pop performance, from lip sync controversies to backing tracks at award shows. He expects the bar for live shows to rise, with more demand for real improvisation and theater-style moments that can only happen in one room on one night.


Alongside his industry commentary, will.i.am used the Davos stage to explain why he is investing in personal AI through both his company FYI.AI and his work in the classroom. At Arizona State, his “agent itself” course teaches students to build their own AI agents on Nvidia hardware, an effort he compared to setting up a personal data center at home. He told CNBC that just as workers now need bank accounts, email addresses and phone numbers to be employed, they will soon need an AI agent of their own. “In the next couple of moments, you are going to need an agent to have a job,” he said, noting that companies are already replacing some roles with agents that have no lived experience, diploma or certificate. His class, he added, awards formal certificates to both students and their agents as a way to connect the new technology to existing systems of recognition.


That push for individual control links back to his concerns about AI clones and ownership for artists, journalists and other public figures. He noted that networks or studios already have enough archived material to build digital doubles that look and sound like real people, and that those doubles could appear in content or even teach classes long after the original person is gone.


The problem, he said, is that most people do not yet “own” their agents or have a trusted place to bank their data and likeness the way they bank their money. He described the current landscape as the wild wild west of the web, with companies scraping information and little accountability for how it is used. He told CNBC he expects new rules around name, image, likeness and data to arrive soon, and said they will need to protect people without blocking the innovation driving the next era of music and work.
