(2020) Nick Collins, Vit Ruzicka and Mick Grierson. [PDF] [sound examples and code] "Remixing AIs: mind swaps, hybrainity, and splicing musical models". Proceedings of the 1st Joint Conference on AI Music Creativity. Stockholm, Sweden
Associated GitHub project with code implementing autoencoder splicing, including live implementations for SuperCollider and the Web Audio API [Keras-to-Realtime-Audio]
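To make the model-splicing idea concrete, here is a minimal sketch in plain Python (no Keras dependency) of grafting one layer's weights from one trained network into another. All names and the layer representation are hypothetical stand-ins; the repository's actual Keras implementation differs in detail.

```python
import random

def make_model(num_layers=6, width=4, seed=0):
    """Stand-in for a trained DNN: one weight matrix per layer (hypothetical)."""
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(width)] for _ in range(width)]
            for _ in range(num_layers)]

def splice(model_a, model_b, layer_index):
    """'Mind swap': keep model_a but graft in one layer from model_b."""
    spliced = [layer for layer in model_a]
    spliced[layer_index] = model_b[layer_index]
    return spliced

a = make_model(seed=1)
b = make_model(seed=2)
hybrid = splice(a, b, layer_index=3)
# hybrid now shares five layers with a and one (index 3) with b
```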
[SuperCollider code] to generate the audio examples below
Note that the audio contains occasional distortions and clicks, revealing some side effects of the treatments.
Public Enemy Don't Believe the Hype original acapella (brief excerpt)
The same excerpt run through a 6-layer DNN model, trained on that audio, via the PV_Kerasify UGen (this model was trained for only a few iterations, so distortions are very audible)
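PV_Kerasify applies the trained network to each spectral frame inside the phase-vocoder chain. A hedged pure-Python sketch of that per-frame forward pass follows; the layer sizes, activations, and function names are assumptions for illustration, not the UGen's actual internals.

```python
def dense(frame, weights, biases, activation):
    """One fully connected layer applied to a spectral frame."""
    out = []
    for j in range(len(biases)):
        s = biases[j] + sum(frame[i] * weights[i][j] for i in range(len(frame)))
        out.append(activation(s))
    return out

def relu(x):
    return max(0.0, x)

def forward(frame, layers):
    """Run one magnitude frame through all layers, frame by frame,
    analogous to processing inside the FFT chain."""
    for (w, b, act) in layers:
        frame = dense(frame, w, b, act)
    return frame
```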
Two 6-layer DNN models interpolated with the PV_DNNMorph UGen; the larger changes in texture correspond to changes in which layer is interpolated
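Conceptually, the morph linearly blends the weights of a chosen layer between the two models while the remaining layers stay fixed. A minimal sketch, assuming a plain weight-matrix representation; the UGen's actual parameterisation is not shown here.

```python
def morph_layer(layer_a, layer_b, t):
    """Linear interpolation of two layers' weight matrices, t in [0, 1]."""
    return [[(1 - t) * wa + t * wb for wa, wb in zip(row_a, row_b)]
            for row_a, row_b in zip(layer_a, layer_b)]

def morph_models(model_a, model_b, layer_index, t):
    """Interpolate only the chosen layer; all others come from model_a.
    Changing layer_index gives the larger shifts in texture."""
    out = [layer for layer in model_a]
    out[layer_index] = morph_layer(model_a[layer_index], model_b[layer_index], t)
    return out
```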
Demo of the PV_KerasifyActivationFromBuffer UGen, with its 2048 buffer values continually replenished from a bank of sine oscillators at different frequencies
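The buffer feeding that demo could be refilled along these lines: 2048 values sampled from a bank of sine oscillators at harmonically related frequencies, rescaled into [0, 1] as activation values. This is a hypothetical sketch of the idea, not the actual SuperCollider patch; the oscillator count, frequencies, and rescaling are assumptions.

```python
import math

def fill_buffer(time, num_values=2048, num_oscs=8, base_freq=0.5):
    """Sum a bank of sine oscillators (one sample per buffer slot, each slot
    at a slightly different phase), then rescale into [0, 1] (an assumption)."""
    values = []
    for n in range(num_values):
        phase_offset = n / num_values
        s = sum(math.sin(2 * math.pi * base_freq * (k + 1) * (time + phase_offset))
                for k in range(num_oscs)) / num_oscs
        values.append(0.5 * (s + 1.0))
    return values
```

Calling this repeatedly with an advancing `time` argument gives the continual replenishment described above.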