When I log on to Spotify, it recommends I listen to T. Rex because I listened to Marc Bolan. Bolan was the frontman of T. Rex. I roll my eyes and click away.
“On the record, I’m going to get rid of that one.”
That’s Brian Whitman, co-founder and chief technology officer of the Echo Nest. If you’ve listened to music online, you’ve probably unknowingly used his company. The Boston group has powered the back-end of music recommendations for companies like Rdio, iHeartRadio and Vevo, and was recently acquired by Spotify for some $100 million.
Whitman recently presented at Gigaom’s Structure Data conference, a slightly techno-anxiety-inducing event where attendees’ eyes flitted from the “data scientists” on stage to Google Chat on laptop screens to a huge projection of conference tweets. At least two people were wearing Google Glass. Presenters discussed machine learning, artificial intelligence and how to use data, from stock ticks to heartbeats, to build more productive, accurate and connected businesses, products and people.
Recommendation algorithms increasingly suggest everything we may want to watch, read or buy: who to follow on Twitter, which New York Times article to read next, what home goods to order from Amazon.
What happens when that logic is applied to something as personal, unexplainable and previously unquantifiable as music?
Data is trendy right now, and the music industry is catching on. Samsung just launched a mobile personalized radio app called Milk, Lyor Cohen is tapping Twitter metrics, Gracenote is analyzing BitTorrent data and Warner Music inked a label deal with Shazam. A forthcoming Cone speaker promises to really get to know you by using contextual information like what room you are in and the time of day to tell you exactly what it thinks you want to hear.
It’s not a totally new concept. Pandora launched its Music Genome Project in 2000 to analyze and catalog a web of musical attributes. Apple launched its Genius feature in iTunes in 2008, using purchase history to recommend what you might like. Most digital music services today come with suggested artists and some sort of autoplay function — and most of those services are, or were, powered by Echo Nest data. But as the T. Rex example shows, music recommendation is not yet an exact science.
In a conversation after his presentation, Whitman likened what the Echo Nest does to Google search: “It’s a way to browse and discover things. It’s like a Google search — we’re not forcing anything on you, it’s a way to explore.” The Echo Nest and Nest, the home electronics company recently acquired by Google, have a little more in common than their names. While Nest’s smart thermostats learn your habits and preferences to set your home’s temperature just right, the Echo Nest learns how, when and where you listen to music to help you figure out what you may want to hear next.
As streaming services like Beats, Spotify and Rdio offer essentially the same catalogue of music, with a few notable exceptions, the real differentiation factor between services is going to be the user experience and the quality of those recommendations.
That’s because most casual music listeners want an easy button, like a radio dial, that provides a finite number of options rather than a seemingly infinite flood of choice. As the Echo Nest’s director of developer platforms, Paul Lamere, said at SXSW: “You have to find a way to engage the people that are going to be intimidated by a search box [sitting] in front of 30 million songs.” Humans tend to lean back and trust what a computer suggests as correct; that’s why three-fourths of all viewer choices on Netflix come from the recommendations on the home screen.
The Echo Nest has been quietly observing how users listen on a variety of different platforms for years. They’ve developed a sort of Scrobble 2.0, tracking not only what you listen to, but also how. In his Gigaom presentation, Whitman said:
“Taste profile is a huge portion of the Echo Nest business these days. We are tracking tens of millions of people’s music listening history on our systems. This powers all of our personalization, so when you log on to one of the services that use the Echo Nest Taste Profile, they’ll know about you right away. They’ll say ‘Well I know this person, what kind of music they like and what kind of stuff they want to listen to in the morning.’ If sometimes they listen to kids’ music, that’s a different thing than listening to metal music another time of the day.”
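The time-of-day sensitivity Whitman describes (kids’ music in the morning, metal at night) could in principle be captured by something as simple as play counts bucketed by hour. The sketch below is purely illustrative; the class, bucket boundaries and method names are my assumptions, not the Echo Nest’s actual Taste Profile system:

```python
from collections import Counter, defaultdict
from datetime import datetime

class TasteProfile:
    """Hypothetical sketch: buckets a listener's plays by time of day
    so recommendations can be biased toward what they play at that hour."""

    def __init__(self):
        # bucket name ("morning"/"afternoon"/"evening") -> genre play counts
        self.buckets = defaultdict(Counter)

    @staticmethod
    def _bucket(hour):
        # Arbitrary boundaries chosen for illustration only
        if 5 <= hour < 12:
            return "morning"
        if 12 <= hour < 18:
            return "afternoon"
        return "evening"

    def log_play(self, genre, played_at):
        self.buckets[self._bucket(played_at.hour)][genre] += 1

    def top_genre(self, at):
        counts = self.buckets[self._bucket(at.hour)]
        return counts.most_common(1)[0][0] if counts else None

profile = TasteProfile()
profile.log_play("kids", datetime(2014, 3, 24, 8, 30))   # school run
profile.log_play("kids", datetime(2014, 3, 25, 8, 15))
profile.log_play("metal", datetime(2014, 3, 24, 21, 0))  # after hours

print(profile.top_genre(datetime(2014, 3, 26, 8, 0)))    # kids
print(profile.top_genre(datetime(2014, 3, 26, 22, 0)))   # metal
```

A real system would weigh far more signals (skips, device, location, day of week), but even this toy version shows how the same listener can look like two different people at different hours.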
With all this data to crunch and tastes to triangulate, what about human recommendation?
Though there is no data point for musical serendipity, there may be a sweet spot between rockism (humans) and technological determinism (computers) when it comes to algorithmic discovery. It can be easy to frame the choice as human versus machine, but those algorithms were created by humans, for humans. A computer-generated recommendation is not necessarily better than a human’s, but it may be better than a human mind working alone. People ultimately care about the music, not the technology that delivers it.
When asked by an audience member whether the Echo Nest can use its data to predict hits, Whitman responded, “I don’t like the concept, personally, of predicting hits. It sort of felt creepy to me. If all we did was predict hits, it sounds like a bad sci-fi book — all of a sudden all of the music in the world was stuff that a computer thought people wanted to listen to… of course that’s never going to happen.”