April 7th, 2018

Studio 4 Northampton

www.studio4noho.com/hut


Excerpt of a ~25-minute improvised set. None of the material is pre-recorded. The audio and video are generated using a webcam, video feedback, and oscillators driven by the video's luminosity.


More details: A webcam points at the projection screen, which displays that same webcam feed mixed with video feedback and pushed around in the XY plane based on the amplitude of the audio; a mixture of edge detection, fractional Brownian motion (FBM), and a polynomial function determines the direction of that push (see the first sketch below). The luminosity of the projected video, after it is downsampled, drives oscillators and modulates FM parameters to generate the audio (see the second sketch). That audio in turn distorts the video, which generates more audio, and so on, closing the feedback loop. I performed with a small MIDI controller mapped to 20 parameters across the video and audio domains.
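
Below is a minimal Python sketch (not the actual Max patch) of the displacement step: the audio amplitude sets how far the frame is pushed, while edge detection, FBM, and a polynomial together pick the direction. Every function, coefficient, and range here is an illustrative assumption.

    import numpy as np

    def fbm(x, y, octaves=4):
        """Cheap FBM-style layered noise (an illustrative stand-in)."""
        total, amp, freq = 0.0, 0.5, 1.0
        for _ in range(octaves):
            total += amp * np.sin(freq * x) * np.cos(freq * y)
            amp *= 0.5
            freq *= 2.0
        return total

    def edge_angle(luma):
        """Direction of the mean luminance gradient (simple edge detection)."""
        gy, gx = np.gradient(luma.astype(np.float32))
        return np.arctan2(gy.mean(), gx.mean())

    def displacement(luma, amplitude, t, poly=(0.0, 1.0, -0.3)):
        """XY offset for the next frame: edges plus FBM choose an angle,
        a polynomial warps it, and audio amplitude scales the distance."""
        theta = edge_angle(luma) + fbm(t, 0.7 * t)
        theta = np.polyval(poly[::-1], theta)  # assumed polynomial shaping
        r = 8.0 * amplitude                    # louder audio -> bigger push (pixels)
        return r * np.cos(theta), r * np.sin(theta)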

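The audio path can be sketched the same way: the frame is downsampled to a small grid, and each cell's brightness drives one FM voice, with overall brightness setting the modulation depth. The grid size, frequency ranges, and modulator ratio below are assumptions, not values from the patch.

    import numpy as np

    SR = 44100  # sample rate, Hz

    def downsample(luma, n=4):
        """Average-pool a grayscale frame down to an n x n grid of luminosities."""
        h, w = luma.shape
        luma = luma[: h - h % n, : w - w % n]   # crop so blocks divide evenly
        h, w = luma.shape
        return luma.reshape(n, h // n, n, w // n).mean(axis=(1, 3))

    def fm_block(lum_grid, dur=0.05):
        """Render a short audio block: brightness -> carrier pitch and FM depth."""
        t = np.arange(int(SR * dur)) / SR
        out = np.zeros_like(t)
        index = 1.0 + 6.0 * lum_grid.mean()     # brighter frame -> deeper FM
        for lum in lum_grid.ravel():
            fc = 80.0 + 800.0 * lum             # brightness -> carrier freq (Hz)
            fm = 0.5 * fc                       # fixed modulator ratio (assumed)
            out += np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
        return out / lum_grid.size              # normalize the mix

    # One step of the loop with a fake frame, tying the two sketches together:
    # frame = np.random.rand(240, 320)
    # dx, dy = displacement(frame, amplitude=0.4, t=1.0)
    # audio = fm_block(downsample(frame))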

Made in Max 7