The generative computer video uses a combination of edge detection, fractional Brownian motion, and polynomial functions to determine the two-dimensional direction of video warping. The luminosity of the projected video (after downsampling) drives oscillators and modulates FM parameters to generate the audio. That audio in turn sets the magnitude of the warping factor, which generates more audio, and so on. There are multiple levels of feedforward and feedback in both the colorspace and the sample coordinates, using an algorithm based on a damped chaotic oscillator. A webcam captures the projected image to introduce direct video feedback. In the live performance, a small MIDI controller was used to modify 20 parameters across the video and audio domains.
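One pass of the loop described above can be sketched roughly as follows. This is an illustrative NumPy reconstruction, not the piece's actual code: every function name, constant, and the Duffing-style choice of chaotic oscillator are assumptions made for the sketch.

```python
import numpy as np

def fbm(shape, octaves=4, seed=0):
    """Cheap fractional Brownian motion: sum of upsampled value-noise octaves."""
    rng = np.random.default_rng(seed)
    h, w = shape
    out = np.zeros(shape)
    amp, freq = 1.0, 4
    for _ in range(octaves):
        grid = rng.random((freq, freq))
        ys = np.arange(h) * freq // h      # nearest-neighbour upsample
        xs = np.arange(w) * freq // w
        out += amp * grid[np.ix_(ys, xs)]
        amp *= 0.5
        freq *= 2
    return out / out.max()

def edge_angle(lum):
    """Edge detection via image gradients; returns per-pixel edge direction."""
    gy, gx = np.gradient(lum)
    return np.arctan2(gy, gx)

def warp_field(lum, t):
    """2-D warp direction from edges, fBm, and a polynomial in time."""
    theta = edge_angle(lum) + 2 * np.pi * fbm(lum.shape)
    poly = 0.3 * t + 0.1 * t**2            # illustrative polynomial term
    return np.cos(theta + poly), np.sin(theta + poly)

def luminosity_to_audio(lum, sr=8000, dur=0.05):
    """Downsample the frame; mean luminosity drives an FM oscillator."""
    coarse = lum[::8, ::8]
    carrier = 110 + 880 * coarse.mean()    # brighter frame -> higher pitch
    mod_idx = 5 * coarse.std()             # contrast -> FM index
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * carrier * t
                  + mod_idx * np.sin(2 * np.pi * carrier * 0.5 * t))

def chaotic_damp(state, drive, damping=0.1, dt=0.05):
    """Damped chaotic (Duffing-type) oscillator step, used to shape feedback."""
    x, v = state
    a = -damping * v + x - x**3 + drive
    return x + v * dt, v + a * dt

def warp(lum, mag, t):
    """Displace sample coordinates along the warp field (audio sets magnitude)."""
    h, w = lum.shape
    dx, dy = warp_field(lum, t)
    ys, xs = np.mgrid[0:h, 0:w]
    ys = np.clip((ys + mag * h * dy).astype(int), 0, h - 1)
    xs = np.clip((xs + mag * w * dx).astype(int), 0, w - 1)
    return lum[ys, xs]

# one pass of the audio <-> video feedback loop
frame = fbm((64, 64), seed=1)              # stand-in for a camera frame
audio = luminosity_to_audio(frame)
mag = 0.05 * np.sqrt((audio**2).mean())    # audio RMS -> warp magnitude
fb_state = chaotic_damp((0.5, 0.0), drive=audio[0])
frame = warp(frame, mag, t=0.1)
```

In the real piece this loop would run per frame, with the webcam image replacing the synthetic `fbm` frame and MIDI parameters scaling terms like `mod_idx` and `mag`.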