Nine music-technology companies presented their creations to a room filled with potential investors Thursday evening.
Of all the ingenious technologies presented at the Techstars Music Accelerator Demo Day at NeueHouse Hollywood on Thursday, none could compete with the pure razzle-dazzle of EmbodyMe, an AI-enabled technology that allows users to map their face onto a virtual avatar from a single photograph.
“Every once in a while, a new format comes along that changes everything,” said EmbodyMe founder and CEO Issay Yoshida from the stage. “And today, I am going to talk about a new technology that is going to change how visual content is created.”
The words seemed straightforward, but the presentation wasn't. That’s because Yoshida’s words were coming from the mouth of former Secretary of State and 2016 presidential candidate Hillary Clinton -- or at least an image of her that had been combined with Yoshida’s voice (altered to sound very roughly like Clinton’s) and facial expressions to create a bizarre hybrid. As the assembled crowd giggled in disbelief, the presentation was quickly taken over by an interpreter for the Japanese inventor, who cheerily explained both the technology that powers EmbodyMe and the digital watermark system that keeps bad actors from abusing it.
“Current technologies can only track up to 70 two-dimensional points on the face,” the interpreter explained. “On the other hand, EmbodyMe's 3D-dense face tracking technology can track up to 50,000 three-dimensional points. This allows for the highest accuracy and flexibility to match your facial expressions to any image or video.”
EmbodyMe was just one of nine technologies presented at Techstars Music’s third annual Demo Day, the culmination of a three-month, mentorship-driven startup accelerator that gave CEOs access to a variety of music industry professionals and investors. The Demo Day itself served as something of a graduation ceremony-meets-corporate presentation, offering founders the chance to show off their creations to a room of music industry professionals and potential investors.
Following a brief introduction by Techstars Music managing director Bob Moczydlowsky -- who noted that startups involved in the previous two accelerator programs have raised a total of $51 million since their participation -- each of the CEOs from the 2019 class took the stage to publicly present their creations for the first time. First out of the gate was John Funge, co-founder (with Thomas Jerde) and CEO of The Music Fund, a company that provides up-front payments to independent artists in exchange for a percentage of royalties from their back catalog.
“We had a bunch of conversations with people in the music industry, and they just told us about the challenges that artists have, especially in the sort of mid- and long-tail, in having access to capital to invest in their careers,” Funge told Billboard following the presentation. The technology behind The Music Fund utilizes a complex algorithm that customizes cash offers for each artist based on the popularity of their catalog. Using the Music Fund website, artists can search for their name, discover how much money they're able to secure, then further customize the offer based on how much they’re willing to give up in terms of royalties. The customization is hyper-targeted, meaning artists can choose which specific tracks The Music Fund collects royalties on, as well as limit the window of time in which they can do so.
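The Music Fund hasn't disclosed its pricing algorithm, but the shape of such an offer can be sketched: an advance is, roughly, the discounted value of the royalties the buyer expects to collect on the chosen tracks over the chosen window. A minimal toy model follows -- every name, number and rate in it is invented for illustration and is not The Music Fund's actual method:

```python
def advance_offer(tracks, royalty_share, years, discount_rate=0.15):
    """Price an up-front advance as the discounted value of the royalties
    the fund expects to collect on the selected tracks over the window.

    tracks: dict mapping track name -> projected annual royalties (USD)
    royalty_share: fraction of royalties the artist gives up (0.0-1.0)
    years: length of the collection window the artist agrees to
    """
    offer = 0.0
    for annual_royalties in tracks.values():
        for year in range(1, years + 1):
            # Money collected in later years is worth less today.
            offer += royalty_share * annual_royalties / (1 + discount_rate) ** year
    return round(offer, 2)

# Hypothetical two-track catalog: the artist sells half the royalties
# on both tracks for a three-year window.
catalog = {"Track A": 1200.0, "Track B": 400.0}
print(advance_offer(catalog, royalty_share=0.5, years=3))  # → 1826.58
```

Dropping a track or shortening the window shrinks the offer, which is the kind of hyper-targeted customization described above.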
As Funge noted, the technology is even more valuable in the streaming age, when artists generally see less up-front cash than they once did from a physical album release. “With streaming, it might be more money overall, but it comes slower, sort of year by year,” he told Billboard. “That’s exactly why musicians need capital upfront so that they can invest in their careers and keep going. And then eventually their streaming revenue will build up over time.”
Another Techstars 2019 alumnus, Signal (founded by Travis Rosenblatt and Elizabeth Moody), is also focused on royalties, though that company is looking to fix the royalty-collection ecosystem as we know it. In Rosenblatt’s estimation, the old model is broken; at one point, he noted that the companies that make up that complex ecosystem (including such major players as ASCAP and BMI) gave “between a quarter and half” of last year’s $10.8 billion in collected royalties to the wrong people.
“Signal is creating a single, global digital supply chain... where there's no redundant overhead to collect on every kind of music right, and it can be fully automated from end to end,” said Rosenblatt from the stage. “The product itself is simple. It looks like an artist and label distribution platform, but by asking who wrote the songs, we'll be able to serve songwriters and publishers as well. This seems small, but it means that Signal can directly collect on all royalties globally. We'll offer distribution and publishing administration for free, while handling collection for less than half the current cost and deliver payments more than twice as fast.”
The remaining members of Techstars Music’s 2019 class were less specifically focused on music, though all had varying levels of applicability to the industry. Brandon Sowers and Tim Fillmore’s company Inklocker uses “sophisticated routing software” and a global network of small apparel print shops to allow any direct-to-consumer (D2C) seller to offer local printing and customization of merchandise with same-day delivery. For the music industry, this means that an artist touring in another country could use a local print shop for T-shirts, caps and other merchandise rather than depending on a larger, more centralized operation and the customs issues that come with it.
“Last year, we networked 150 print shops and brought in $1.2 million in gross sales,” said Sowers, noting the company also recently integrated with Postmates. “We expanded internationally, and we're now 40 percent cheaper and 78 percent faster than our competitors. We've achieved Amazon Prime service for on-demand merch.”
Hailing from both Cambridge, Massachusetts, and Santiago, Chile, was Rhinobird, an “interactive video layer” that allows publishers and brands to integrate swipe technology (a la Snapchat and Instagram) into the video streams on their actual websites, thereby increasing engagement while allowing them to bypass revenue-sharing with third-party companies like YouTube. One key component of the company is its “synchronous video swiping” technology, which recently caught the attention of ticketing giant Ticketmaster.
“You can actually use this technology to synchronize video and for example follow a sport[ing] event or a concert from multiple perspectives, switching through multiple angles in a seamless way,” CEO Felipe Heusser (who co-founded the company with Benjamin Fatini and Sebastian Echeverria) told Billboard of the technology, which is also currently being tested by Vevo. “So Rhinobird is about creating a more compelling video story, where you can bring together different perspectives, different angles, and make sense of an experience where a lot of people actually participated.”
Also focused on increasing engagement, albeit in an entirely different way, is Marble, founded by Tom Brockner, Ethan Stickley and Julius Freund. That company is looking to orient the consumer augmented reality (AR) market toward more practical, day-to-day usage, as opposed to the game-centric experiences like Pokémon Go that most are familiar with. Using the technology, users would be able to easily create virtual “marbles” anywhere around the world via their smartphones, then fill them with whatever content they wanted.
“It's a world based on locations, on people and their interest in these locations,” said Brockner from the stage. “We can leverage these hot spots to create premium content, digital sales or even sponsored layers [by brands].” Individuals would also have the opportunity to create marbles for only select viewers, thereby creating a more private layer within the app.
One of the most attention-getting presentations came from Replica, an AI company that can create “hyper-realistic digital replicas” of human voices. Described by CEO Shreyas Nivas (who co-founded the company with Riccardo Grinover and Keni Mardira) as a “Photoshop for your voice,” the technology would allow entertainers to license their voices to companies, which could then manipulate those voices to say anything they wanted, allowing for, say, highly personalized audio and video ads.
If this all sounds a little frightening in the “post-truth” age, Nivas noted that -- like EmbodyMe -- the technology will come with what amounts to a digital watermark that would allow unauthorized uses of someone’s voice to be traced back to the platform, thereby identifying them as fakes. Reassured Nivas: “Your voice is your identity, and it can be secured.”
The most philanthropically minded of the presenting companies was Mila, which creates musical games for children with learning differences based on “neurologic music therapy methods,” assisted by technology that can track motor control and language skills. A machine-learning model interprets that performance into a cognitive score, which then allows for a personalized treatment plan.
“Today, technology enables us to deliver musical therapy remotely, improving the process for those already engaged in therapy and reaching those who don't get any help at all,” said CEO Kenneth Burns, who co-founded the company with Francois Vonthron and Antoine Yuen. “Our mission is to bring digital therapy to children with the power of music.”
Mila ran its first pilot study last year at a hospital in Paris, with encouraging results. The company is currently conducting two more with partners including Children’s Hospital Los Angeles and Kidz Bop. After that, it plans to take Mila to market, with a vision of extending the technology to such areas as language, motor rehabilitation and behavior therapy.
Finally, one of the more practical technologies presented came from Vaux, a company out of Australia that, in considering a more voice-centric future internet, has found a way to compress voice files using AI without sacrificing clarity.
“Our AI never stores or records your speech; it just listens to you speaking and creates a recipe for what was said,” said CEO Meaghan White, who co-founded the company with fellow AI experts Lindsay Watt and Christopher Gage. “Then, when it wants to play your soundbite back, it simply regenerates it based on the recipe.”
The use of that “recipe," as opposed to the speech itself, allows for a compression factor of 64x, making voice files created with the technology “faster to send and cheaper to store.” By using only the recipe for the speech, the technology also removes all background noise, thereby creating crystal-clear voice communications that are a distinct improvement over what's currently available.
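Vaux hasn't published how its codec works, but the “recipe” idea can be illustrated with a toy parametric sketch: instead of storing thousands of raw audio samples, store a few parameters describing the sound and regenerate the waveform on playback. Everything below -- the signal model, sample rate and sizes -- is invented for illustration and bears no relation to Vaux's actual technology:

```python
import math
import struct

SAMPLE_RATE = 8000  # samples per second (toy value)
DURATION = 1.0      # seconds of audio

def synthesize(recipe, n_samples, rate=SAMPLE_RATE):
    """Regenerate a waveform from its 'recipe': a list of (freq_hz, amplitude) pairs."""
    return [
        sum(amp * math.sin(2 * math.pi * freq * t / rate) for freq, amp in recipe)
        for t in range(n_samples)
    ]

# A 'recipe' for a two-tone signal: four numbers instead of 8,000 samples.
recipe = [(440.0, 0.6), (880.0, 0.3)]

n = int(SAMPLE_RATE * DURATION)
signal = synthesize(recipe, n)

# Raw storage: one 16-bit sample per tick vs. two 32-bit floats per recipe entry.
raw_bytes = n * 2
recipe_bytes = len(struct.pack(f"{2 * len(recipe)}f",
                               *(x for pair in recipe for x in pair)))
print(f"raw: {raw_bytes} bytes, recipe: {recipe_bytes} bytes, "
      f"ratio: {raw_bytes // recipe_bytes}x")
```

A real system would learn a far richer parameterization of speech; the point is only that a compact description of how to reproduce a sound can be orders of magnitude smaller than the sound itself.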
Given the tightness of their presentation, it was amazing to learn that Vaux had only five weeks to develop the technology, the result of a sharp change in direction more than halfway through the accelerator program. As White put it, constant talk of the rapidly expanding voice market during the accelerator was simply too enticing to pass up, and the team decided to risk a last-minute pivot to take full advantage.
“[The experience with Techstars Music has been] transformative,” White told Billboard following the demo. “We came in as a... company that did audio analysis and created metadata and all of these things, and we're leaving as a voice company that's done a compression protocol. So I mean, it's changed our business.”