NSYNTH SUPER

Short List
Title: NSYNTH SUPER
Brand: GOOGLE
Product/Service: MUSICAL INSTRUMENT
Category: A09. Use of Advanced Learning Technologies
Entrant: GOOGLE CREATIVE LAB London, UNITED KINGDOM
Idea Creation: GOOGLE CREATIVE LAB London, UNITED KINGDOM
Production: GOOGLE CREATIVE LAB London, UNITED KINGDOM
Credits
Name | Company | Position
Google Creative Lab | Google | Creative Lead

Background

Magenta is a research group within Google exploring how machine learning can help musicians in new ways. One of its first projects was NSynth, a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds and then generate thousands of completely new sounds based on those characteristics. To make the algorithm more accessible, we took the sounds that NSynth generates and put them into a musical instrument that lets musicians play them in a more intuitive way. We call it NSynth Super.
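The core idea behind NSynth-style generation is that each source sound is encoded into a latent vector, and blending those vectors before decoding yields genuinely new sounds. Here is a minimal conceptual sketch; the vectors, dimensions, and `interpolate` helper are hypothetical stand-ins, not Magenta's actual code (the real model is a WaveNet-style autoencoder whose decoder turns blended embeddings back into audio):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes standing in for what an NSynth-style
# autoencoder would produce by encoding two source sounds.
z_clavichord = rng.normal(size=16)
z_car_engine = rng.normal(size=16)

def interpolate(z_a, z_b, alpha):
    """Blend two latent codes; decoding the blend (not shown here)
    would yield a sound sharing acoustic qualities of both sources."""
    return (1.0 - alpha) * z_a + alpha * z_b

# Eleven blends from pure clavichord to pure car engine.
blends = [interpolate(z_clavichord, z_car_engine, a)
          for a in np.linspace(0.0, 1.0, 11)]
print(len(blends))  # -> 11
```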

Describe the creative idea

We had to find a way to organize the more than 100,000 sounds the NSynth algorithm outputs and put them into an instrument that musicians could play intuitively. Using dials in each corner of NSynth Super, artists select up to four source sounds (out of sixteen options, from a clavichord to a car engine) to define the space of sounds they’d like to explore. By dragging a finger across the space between these points, they can play thousands of unique sounds that combine the acoustic qualities of all four. It fits seamlessly with a musician’s existing equipment: all they need to do is plug in a MIDI controller, such as a keyboard, and use the touchscreen to explore new sounds. We wanted as many musicians as possible to have access to NSynth Super, so we made all of the source code, schematics, and design templates available for download on GitHub.
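As a rough illustration of how a touch position could blend four corner sounds, here is a bilinear-weighting sketch in Python; the `corner_weights` helper, the 16-dimensional codes, and the corner assignments are assumptions for illustration, not NSynth Super's actual implementation:

```python
import numpy as np

def corner_weights(x, y):
    """Bilinear weights for a touch position (x, y) in [0, 1] x [0, 1],
    ordered (top-left, top-right, bottom-left, bottom-right).
    The four weights are non-negative and always sum to 1."""
    return np.array([
        (1 - x) * (1 - y),  # top-left
        x * (1 - y),        # top-right
        (1 - x) * y,        # bottom-left
        x * y,              # bottom-right
    ])

# Hypothetical latent codes for four source sounds assigned to the
# corner dials (e.g. clavichord, flute, snare, car engine).
corners = np.random.default_rng(1).normal(size=(4, 16))

touch = (0.3, 0.7)                        # finger position on the screen
z_mix = corner_weights(*touch) @ corners  # blended 16-dim code
print(z_mix.shape)  # -> (16,)
```

A touch in the exact center weights all four sources equally; a touch in a corner plays that source sound alone.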

Describe the strategy

We started with a technical exploration of the algorithm and of the creative opportunities it could offer. By creating plots and projections of the machine learning models, and testing the new sounds the algorithm created, we gained a good understanding of how to approach the project. From these insights, we created our initial designs and began building prototypes. Throughout the project we maintained a feedback loop between design, musicians, and the technology, with each informing the development of the others. The result is a uniquely simple interface to the complexity of the underlying technology, one that is instantly understandable to musicians.
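One simple way to produce such plots and projections is to project the model's high-dimensional sound embeddings onto their first two principal components. The sketch below shows the idea; the embeddings and their dimensions are placeholders, not the actual NSynth data:

```python
import numpy as np

def pca_2d(embeddings):
    """Project high-dimensional sound embeddings onto their first two
    principal components, giving 2D points that can be scatter-plotted
    to see how the model organizes the space of sounds."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD yields the principal directions without extra dependencies.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Hypothetical 64-dim embeddings for 16 source sounds.
embeddings = np.random.default_rng(2).normal(size=(16, 64))
coords = pca_2d(embeddings)
print(coords.shape)  # -> (16, 2)
```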

Describe the execution

We took one of the most exciting machine learning research papers (NSynth) and made its potential accessible to musicians and other non-researchers. The technology was inaccessible without very specialist skills, and even for those able to run the algorithm, its output was not easily usable: it lacked an interface people could interact with intuitively. We explored how machine learning techniques such as dimensionality reduction could be used to create a new interface for the multidimensional space of sounds the algorithm generates. The easiest way for musicians to create is through dedicated hardware interfaces, so that is what we built for the output of the NSynth algorithm. We generated over 100,000 new sounds with the algorithm and put them into a format that musicians intuitively know how to use and can quickly be expressive with. The interface also unlocks new possibilities for performance, allowing people to move between worlds of sound by swiping the screen rather than manually adjusting multiple parameters. Without the interface we created, this would be a big folder of sound files. Instead, it’s something that can be used to make music.
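Because a model like this is too slow to synthesize audio live, an interface of this kind would plausibly render a grid of sounds offline and look them up on each touch and incoming MIDI note. The sketch below illustrates that lookup pattern; the grid size, note range, and `on_touch_and_note` helper are assumptions for illustration, not the shipped firmware:

```python
import numpy as np

GRID = 11              # assumed interpolation steps per axis
NOTES = range(60, 72)  # one octave of MIDI notes, for illustration

# Hypothetical lookup table: a short audio clip per grid cell and
# pitch, rendered offline. (Zeros stand in for the rendered audio.)
samples = {(i, j, n): np.zeros(1600, dtype=np.float32)
           for i in range(GRID) for j in range(GRID) for n in NOTES}

def on_touch_and_note(x, y, note):
    """Map a touch position in [0, 1] x [0, 1] and an incoming MIDI
    note to the nearest precomputed clip, keeping playback instant."""
    i = round(x * (GRID - 1))
    j = round(y * (GRID - 1))
    return samples[(i, j, note)]

clip = on_touch_and_note(0.3, 0.7, 60)  # middle C at the touch point
print(clip.shape)  # -> (1600,)
```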

List the results

As an open-source project, results for NSynth Super are difficult to track. So far, we can see 1,168 GitHub stars, 86 forks, and approximately 200 downloads of the software. Musicians and makers from around the world have also been sharing the NSynth Supers they’ve made and the music they’re making with them. Some people have even begun selling their own versions of NSynth Super online and creating 3D-printable versions. The project has received coverage from The Verge, CNBC, and Wired.