Composition Through Machine Learning
The Danger of Home.
This piece was created via machine learning with the goal of hybridizing the musical qualities of human speech with different styles of piano music, using a recurrent neural network trained on scores (in MIDI format) created by the author. The piece was realized on a Disklavier piano.
Infastain (acoustic piano augmentation demo)
This is a demonstration of an acoustic piano augmentation that allows for infinite sustain of one or many notes. The result is a natural-sounding piano sustain that lasts for an unnatural period of time. Using a tactile shaker, a contact microphone, and an amplitude-activated FFT-freeze Max patch, the system is easily assembled and produces an infinitely sustaining piano.
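The Max patch itself is not reproduced here, but the core FFT-freeze idea can be sketched in a few lines of Python: capture one FFT frame of the decaying note, hold its magnitude spectrum, and resynthesize it repeatedly with randomized phases via overlap-add. All names and parameter values below are illustrative, not taken from the actual patch, and the amplitude-activated trigger is omitted.

```python
import numpy as np

def fft_freeze(signal, frame_size=2048, hop=512, n_frames=50):
    """Hold one FFT frame of `signal` and resynthesize it repeatedly
    (overlap-add with randomized phases) to simulate infinite sustain.
    A sketch of the FFT-freeze technique, not the actual Max patch."""
    window = np.hanning(frame_size)
    frame = signal[:frame_size] * window
    magnitude = np.abs(np.fft.rfft(frame))  # the "frozen" spectrum

    out = np.zeros(hop * n_frames + frame_size)
    for i in range(n_frames):
        # Fresh random phases each frame keep the held spectrum from
        # degenerating into a buzzy, perfectly periodic loop.
        phases = np.random.uniform(-np.pi, np.pi, magnitude.shape)
        resynth = np.fft.irfft(magnitude * np.exp(1j * phases), frame_size)
        out[i * hop : i * hop + frame_size] += resynth * window
    return out
```

Fed a decaying piano note, this returns audio whose spectrum matches the captured frame but whose duration is bounded only by `n_frames`.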
#Speak Baghdad Music Journal
#Speak BMJ is a composition for live electronics and projection art that both sonifies and visualizes text relevant to the composer’s experience as a soldier and counterintelligence agent in the Iraq War during 2004 and 2005. The text is collected in real time during performance from Twitter search queries and processed digitally into sound. In addition to sound art, #Speak BMJ showcases sound-reactive visual projection art that abstracts video filmed by the composer in Baghdad in 2005.
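The piece's actual text-to-sound processing is not documented here, but one common sonification step is mapping incoming characters onto a musical scale. The sketch below is a hypothetical mapping of my own, purely to illustrate the idea of turning live text into pitch material:

```python
def sonify_text(text, base_pitch=48, scale=(0, 2, 4, 7, 9)):
    """Map each letter of `text` to a MIDI pitch on a pentatonic scale.
    A hypothetical mapping -- not the processing used in #Speak BMJ."""
    pitches = []
    for ch in text.lower():
        if ch.isalpha():
            idx = ord(ch) - ord("a")
            # Spread the 26 letters across four octaves of the scale.
            octave, degree = divmod(idx % (len(scale) * 4), len(scale))
            pitches.append(base_pitch + 12 * octave + scale[degree])
    return pitches
```

In a live setting, each tweet returned by a search query would be pushed through a function like this and the resulting pitches sent on to a synthesizer.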
The Joystuck is a new instrument designed and built by William Thompson. It has been realized using a Pure Data patch that manipulates the speed and playback direction of audio recordings. The PD patch is loaded onto a Raspberry Pi single-board computer encapsulated by a custom laser-cut pine box with its own built-in speaker.
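The PD patch is not shown, but variable-speed, bidirectional playback of this kind reduces to reading through a sample table at a fractional, possibly negative, rate. A minimal numpy sketch using linear interpolation (PD's `tabread4~` actually uses 4-point interpolation; this is a simplified stand-in):

```python
import numpy as np

def varispeed(audio, speed):
    """Resample `audio` at `speed` times normal rate; a negative speed
    plays the recording in reverse. Linear interpolation between samples,
    a simplified sketch of table-reading playback as in a PD patch."""
    if speed < 0:
        audio, speed = audio[::-1], -speed
    positions = np.arange(0, len(audio) - 1, speed)  # fractional read head
    idx = positions.astype(int)
    frac = positions - idx
    return audio[idx] * (1 - frac) + audio[idx + 1] * frac
```

Driving `speed` from a joystick axis in real time would give the kind of continuous speed-and-direction control the instrument description implies.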
JellyRoll On Arron Harris.
Here’s an example of a new composition style I’ve been working with. Each of these compositions was created from (or inspired by) three short speech recordings, in this case a recording of Jelly Roll Morton. The melodic and rhythmic material has been taken from these speech recordings; you can hear this material at the beginning (and a few other spots) of each composition. The rest of the material was created with machine learning, implemented as a recurrent neural network. The RNN attempts to create piano music that resembles the musical material found in the specific speech examples.
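The architecture and training details of the author's network are not given, so the following is only a skeletal illustration of the general approach: a recurrent cell that is primed with a seed melody (here standing in for pitches extracted from speech) and then samples one note after another. The weights are random and untrained; every name and size is my own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyNoteRNN:
    """Minimal recurrent cell predicting the next MIDI pitch from the
    previous one. An untrained architectural sketch, not the author's model."""
    def __init__(self, n_pitches=88, hidden=64):
        self.Wxh = rng.normal(0, 0.1, (hidden, n_pitches))
        self.Whh = rng.normal(0, 0.1, (hidden, hidden))
        self.Why = rng.normal(0, 0.1, (n_pitches, hidden))
        self.h = np.zeros(hidden)
        self.n = n_pitches

    def step(self, pitch):
        x = np.zeros(self.n)
        x[pitch] = 1.0                                 # one-hot input note
        self.h = np.tanh(self.Wxh @ x + self.Whh @ self.h)
        logits = self.Why @ self.h
        p = np.exp(logits - logits.max())
        p /= p.sum()                                   # softmax over pitches
        return int(rng.choice(self.n, p=p))            # sample the next pitch

def generate(seed_pitches, length=16):
    """Prime the hidden state with a (non-empty) seed melody, then
    free-run the network to continue it."""
    net = TinyNoteRNN()
    p = None
    for s in seed_pitches:   # seed = pitches extracted from speech
        p = net.step(s)
    out = []
    for _ in range(length):
        p = net.step(p)
        out.append(p)
    return out
```

A trained version of such a network, fit to the author's MIDI scores, would continue the speech-derived opening material in a related style.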