Google built a hardware interface for its AI music maker
Music and technology go hand in hand; drum machines and modular synths are just some of the more recent music technologies to emerge. Last year, a Google Brain project called Magenta created NSynth (Neural Synthesizer), a set of AI and machine learning tools that learn the characteristics of sounds and generate entirely new sounds from those attributes. Now, in collaboration with Google Creative Lab, the team has built NSynth Super, a hardware interface for NSynth that algorithmically creates new sounds from up to four source sounds at once.
The team recorded 16 source sounds across a 15-pitch range as input to the NSynth algorithm, which produced more than 100,000 newly created sounds, not just blends. These new sounds were then loaded onto the NSynth Super, which has a touchscreen musicians can drag their fingers across to play them. It's still early days for this music tech, but the project is open source; the code and design files are on GitHub if you want to build your own.
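The four-corner blending can be pictured as interpolating between the sounds assigned to the corners of the touchpad based on finger position. A minimal sketch of that idea using bilinear weights over latent embedding vectors; the function names and the specific weighting scheme are illustrative assumptions, not the project's actual code:

```python
import numpy as np

def bilinear_weights(x, y):
    """Mixing weights for the four corner sounds at touch position (x, y).

    x and y are normalized touchpad coordinates in [0, 1]; each corner
    of the pad is assigned one of the four source sounds. The weights
    always sum to 1.
    """
    return np.array([
        (1 - x) * (1 - y),  # bottom-left corner
        x * (1 - y),        # bottom-right corner
        (1 - x) * y,        # top-left corner
        x * y,              # top-right corner
    ])

def blend_embeddings(corner_embeddings, x, y):
    """Blend four per-sound embedding vectors by bilinear weights.

    corner_embeddings: array of shape (4, D), one latent vector
    per corner sound (hypothetical layout for illustration).
    """
    w = bilinear_weights(x, y)
    return w @ corner_embeddings
```

Dragging a finger to a corner gives that corner's sound full weight, while positions in between produce a smooth mix of all four.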
Source: Google Brain, Google