AI. Artificial intelligence. Those unfamiliar have at least seen it demonstrated in Hollywood – I, Robot, Terminator 3: Rise of the Machines, Bicentennial Man, Spielberg's aptly named A.I. Artificial Intelligence – the list goes on.
One area seeing an increasingly large presence of AI is the music industry, with its effects felt across the creation, promotion, application, and listening processes.
AI and machine learning are changing the very face of the music industry, from the way music is made to its consumption, but is it all for the better?
AI In The Music Industry: Major or Minor Improvement?
The music industry at large is not immune to the growing presence of AI in the general workforce.
True to the “augment vs. replace” distinction, it is expected that music producer and songwriter jobs will be augmented, rather than outright replaced, as AI continues to integrate into the creative process.
AI technologies and applications have become significantly more popular and capable, driven by growing data volumes, advancing algorithms, and improvements in computing power and storage – all commodities that music creators need.
While innovations in how we create music and how it is consumed are indeed exciting, as a creator, one must be wary of completely removing the human element from art. Because, naturally, all art forms are, by definition, human.
Music is littered with notions of subjectivity and taste; it’s a transaction with emotional currency, so it is difficult to wrap my head around a computer being able to tap into the creative process, as well as the intellectual components that make great music resonate.
In Bicentennial Man, they had to make Robin Williams fully human before he could really be human. I think this says a lot about AI’s role in music and the arts in general.
Below, I outline several innovations in the world of AI in music, and speak to a few pros and cons that I’ve observed.
AI = The New Musician?
Over the last several decades, there have been numerous examples of musicians using AI-driven methods like neural networks to fuel the creative process and augment music composition. These have run the gamut from boutique, "techy" offerings to Billboard-charting singles.
In 2016, super producer Alex da Kid released the song “Not Easy,” a collaboration with X-Ambassadors, Elle King, Wiz Khalifa and…IBM’s computer system, “Watson.” Said Rolling Stone, “Watson…presented five years of analyzed cultural data in a way that meant something to Alex: through colors, patterns, textures, and words, Alex was able to read the data in a customized way – a data visualization made up of colors, words, patterns, and textures – that moved him.”
In 2017, American Idol alum Taryn Southern released I AM AI, an album whose music was composed entirely by an AI composition program from Amper Music that used internal algorithms to produce melodies that blended appropriately with a particular mood and genre.
In 2019, Björk and Microsoft partnered with New York City's Sister City Hotel on AI-generated music that adapts in real time to sunrises, sunsets, and changes in barometric pressure, triggered via a live camera feed from the hotel's roof.
Called Kórsafn, Microsoft's AI uses sounds from Björk's personal archive to create endless variations of music to set a weather-based mood for hotel guests. Moreover, the system is trained to recognize different types of clouds, snow, rain, clear sky, and birds in different lighting and seasons.
So, as you can see, there are some pretty groundbreaking innovations in the world of music-driven AI. But in these examples, the AI is being assisted by humans virtually every step of the way. So don’t expect pop songs written entirely by AI any time soon.
The biggest point of contention in all of this is whether or not AI rejects what makes music “music”: humanity.
Many musicians and creatives empathize with those who argue that “computer music” could never emulate the human touch of an actual musician, that the “expressionism of music” is too sacred to impart onto technology. For some, the very thought of computers generating music from an algorithm, rather than being woodshedded by an artist, seems like heresy.
AI Case Study #1: Shazam
Perhaps one of the most stunning innovations in the world of artificial intelligence is one that sits neatly on our phones: Shazam.
One of the first consumer-adopted artificial intelligence services, Shazam uses intelligent technology to listen to and then identify songs, all in a matter of seconds. How many times have you been to a bar or restaurant and seen someone randomly holding up their arm with their phone in their hand, or even standing on a chair to get it closer to a speaker?
Partnering with Apple Music, Shazam does the equivalent of a fingerprint or retina scan on a piece of music. It matches that scan to the Apple Music library and swiftly comes up with the song. The song is then automatically added to the user’s Apple Music library under the pre-made “My Shazam Tracks” playlist.
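To make the "fingerprint" analogy concrete, here is a minimal sketch of the spectral-peak idea behind audio fingerprinting. The function names and parameters below are my own inventions, and a real system like Shazam builds far more robust "constellation maps" of spectrogram peak pairs, but the principle is the same: hash distinctive frequency peaks, then look for the library track with the most matching hashes.

```python
import numpy as np

def fingerprint(signal, frame_size=256):
    """Toy spectral-peak fingerprint: for each frame, keep the strongest
    frequency bin, then hash (peak, next_peak, frame_index) triples so a
    noisy re-recording still produces largely the same set of hashes."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    peaks = [int(np.argmax(np.abs(np.fft.rfft(f)))) for f in frames]
    return {hash((p1, p2, t)) for t, (p1, p2) in enumerate(zip(peaks, peaks[1:]))}

def identify(query_print, library):
    """Return the library track whose fingerprint overlaps the query most."""
    return max(library, key=lambda name: len(library[name] & query_print))
```

With a clean 440 Hz tone and an 880 Hz tone in the library, a noisy re-recording of the first still matches it, because most of its peak hashes survive the noise – which is why the trick works in a loud bar.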
I can’t tell you the number of times I have been watching television shows like The O.C., Shameless, Bojack Horseman or Insecure and thought, “Wow! That’s a great song! I wonder what that is?!” then held my phone to the TV and voila! there it is, loaded onto my phone ready for listening.
Shazam’s service provides an incredible amount of consumer satisfaction, and it very well may be my favorite piece of artificial intelligence technology that exists.
AI Case Study #2: Music Streaming Services & Playlisting
We can't discuss AI in music without discussing the music streaming giants. It's this generation's "Coke or Pepsi." Streaming services are now the dominant force in all aspects of music, having overtaken digital downloads as the industry's format of choice.
Apple has been slowly killing off its iTunes service for the last several years, Google folded its Google Play service into the YouTube Music streaming service, and Amazon has been putting more stock into its Amazon Music streaming service over digital downloads.
If you are an artist trying to gain some clout, independent or otherwise, getting your song onto an official Spotify or Apple Music playlist has become the holy grail of promotional gambits. Until recently, independent artists looked to blogs and other publications to increase their buzz, but those days are gone and it's all about playlisting.
And if you thought having high numbers on social networks like Instagram and Facebook was important, Spotify even puts your music streaming data front-and-center for industry professionals to examine.
While Spotify and Apple Music playlists can hold the key to generating buzz in a crowded 2021 musical landscape, they also are a testament to AI’s integration into how we consume music.
Spotify’s bread-and-butter is their “Discover Weekly” playlists. These are generated for users based on an algorithm that reacts to a user’s listening history and creates a playlist highlighting their tastes, as well as new music that the algorithm thinks they will like.
Spotify's artificial intelligence improves continuously by collecting as much listening data as possible, performing a comparative analysis against other users, and then using those results to suggest new music.
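As a rough illustration, and only an illustration, that kind of comparative analysis can be sketched as user-based collaborative filtering: score each unheard track by how heavily it was played by listeners whose histories resemble yours. Spotify's actual Discover Weekly pipeline is far more elaborate (it also mines playlists, text, and raw audio), and every name below is hypothetical.

```python
import numpy as np

def recommend(plays, user):
    """Toy collaborative filtering: recommend the unheard track most played
    by users whose listening vectors are similar (cosine similarity)."""
    names = list(plays)
    mat = np.array([plays[u] for u in names], dtype=float)
    target = mat[names.index(user)]
    sims = mat @ target / (np.linalg.norm(mat, axis=1) * np.linalg.norm(target) + 1e-9)
    sims[names.index(user)] = 0.0        # ignore the user's own history
    scores = sims @ mat                  # similarity-weighted play counts
    scores[target > 0] = -1.0            # only suggest unheard tracks
    return int(np.argmax(scores))
```

Given play counts like `{"alice": [5, 4, 0, 0], "bob": [4, 5, 3, 0], "carol": [0, 0, 4, 5]}`, Alice gets track 2 recommended, because Bob, whose taste overlaps hers, plays it, while Carol's favorites score zero.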
Apple Music is different because, it claims, its playlists are strictly curated by an expert curation team. Additionally, one of Apple's flagship services is Apple Music 1 Radio – formerly Beats 1 Radio – a 24-hour radio service curated and hosted by actual DJs and celebrity hosts.
Despite the convenience of Spotify’s machine learning/algorithm-centric playlists, Apple Music’s human curation provides a personal touch to the music discovery process. Additionally, Apple Music will inform playlist subscribers when a playlist has been updated with new music.
So where do I fall on this? Personally, I subscribe to both Apple Music and Spotify and feel that both have their strengths and weaknesses.
When it comes to the AI components, sure, an algorithm based on a title, keyword, genre, etc. can come up with a good playlist, but when a human acts as a curator, as opposed to a computer, they can better understand the feelings behind the music.
Behind the running order. Behind the sonic zigs and zags. The way a lyric can change the way someone feels about a song, or the way an outro of one song can make you want to hear the complementary intro of another.
This somewhat resembles the old “mixtape” days of literal tapes, burned CDs, or just making a playlist for someone you know.
” ‘Artificial Intelligence’ and ‘Machine Learning’ are buzzwords for a type of software that is ultimately experiential: what goes into AI represents collective information but not necessarily innovation. It is one reason why AI is very good at identifying what you like based upon your interests, but why at the same time what it produces – in art, in music, in writing – tends to be bland and uninspiring. Musicians should see it as a tool, but like any tool, overreliance on AI can become a crutch, because it can drown out the spark of what makes art original … you.” – Kurt Cagle, Community Editor, Data Science Central.
AI Case Study #3: LANDR
The final area I would like to particularly focus on is artificial intelligence in music production. AI audio mastering services are beginning to seize major market share from traditional mastering engineers, with the major player being LANDR. But to better understand this competition, we need to answer a simple question: what exactly is mastering?
Mastering makes your completed music sound professional, balanced, cohesive, and competitive for commercial release, while ensuring that it maintains the same level of quality across a wide variety of speaker systems and media formats. Among other things, a mastering engineer will:
Create an even distribution of frequencies to help dynamic recordings translate on different playback systems, protecting against low-end rumble and harsh high-end
Remove pops, clicks, and other unwanted noises from a sound source
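For the curious, those chores can be caricatured in a few lines of code. This is a toy sketch of my own, not how LANDR or any mastering engineer actually works: a real chain would use proper filters and lookahead limiters rather than a moving average and a tanh curve.

```python
import numpy as np

def master(audio, ceiling=0.95):
    """Toy mastering pass: tame low-end rumble with a crude moving-average
    high-pass, then soft-limit so no sample exceeds the loudness ceiling."""
    kernel = np.ones(101) / 101                  # moving average = the "rumble"
    rumble = np.convolve(audio, kernel, mode="same")
    cleaned = audio - rumble                     # subtracting it acts as a high-pass
    return ceiling * np.tanh(cleaned / ceiling)  # tanh works as a soft limiter
```

Feed it a clipping-hot tone riding on a DC offset and the output comes back centered and safely under the ceiling, which is the basic promise of any limiter.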
So how does LANDR work? Straight from the horse’s mouth, LANDR uses machine learning to replicate the “human intelligence” mastering process through “smart” mastering software that is…
“Built around an adaptive engine that ‘listens’ and reacts to music, using micro-genre detection to make subtle frame-by-frame adjustments using tools like multi-band compression, EQ, stereo enhancement, limiting and harmonic saturation, based on the unique properties of your song.”
Having the ability to finish a song, then have it mastered in a matter of minutes, all without having to leave your chair or deal with communication channels, various schedules, and invoicing, is pretty amazing.
But here’s the thing. No matter how smart LANDR’s software may be, it makes decisions based on algorithms, not emotions or feelings.
Inevitably, this can result in just-OK-sounding masters. However, if you are only paying $20/song, can you really be that upset?
A mastering engineer, however, can respond on instinct and feeling, go through several rounds of feedback, and master a full album, complete with consistent loudness levels from track to track, fades, and more.
AI just can’t match that level of personal intimacy… for now.
AI Is Here To Stay, But Its Contributions Are A Grey Area
So, as has been outlined, artificial intelligence is an ever-proliferating tool that can be utilized in most industries. Its contributions to music composition, production, and more grow further-reaching by the year.
But how long before legalities come into play?
As copyright law currently stands, there is nothing to legally prevent AI from copying an artist's style, inflections, and instincts. Some legal experts say that unless the AI is directly sampling, being directly marketed as sounding like that particular artist, or creating derivative works, then there is nothing to be done.
As Dani Deahl from The Verge wonders, “depending on how legal decisions shake out, AI systems could become a valuable tool to assist creativity, a nuisance ripping off hard-working human musicians, or both.”
How do you think AI should be treated from a copyright standpoint? And what do you think about its growing influence in the culture? Do you use it to create music? Sound off in the comments!