This has led me to a two-tier application.
To activate the system, the user will speak into the microphone in front of the pendulum. This will trigger the system to begin recording sound into the 16 buffers. Once the voice has stopped, the 16 buffers will be played back in sequence with the MIDI In, coupled with the "Glitch" VST cyclically stepping through its effects; this should be enough to keep the public entertained.
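Since the real thing will be a Max/MSP patch rather than text code, here is a minimal Python sketch of the control flow I have in mind: a level threshold gates recording into 16 buffers, and once the input goes quiet the buffers are stepped through in order. The block sizes, thresholds and function names below are illustrative assumptions, not the actual patch.

```python
import numpy as np

SAMPLE_RATE = 44100
NUM_BUFFERS = 16
BUFFER_LEN = SAMPLE_RATE // 4          # quarter-second slices (arbitrary choice)
VOICE_THRESHOLD = 0.02                 # RMS level that counts as "someone speaking"

def is_voice(block: np.ndarray) -> bool:
    """Treat a block as voice if its RMS level exceeds the threshold."""
    return np.sqrt(np.mean(block ** 2)) > VOICE_THRESHOLD

def record_buffers(input_blocks) -> list:
    """Fill up to NUM_BUFFERS slices while the speaker keeps talking."""
    buffers = []
    for block in input_blocks:             # each block is BUFFER_LEN samples
        if not is_voice(block):
            break                          # voice stopped: end recording
        buffers.append(block.copy())
        if len(buffers) == NUM_BUFFERS:
            break
    return buffers

def play_back(buffers, output):
    """Step through the recorded buffers in sequence (playback / FX stage)."""
    for i, buf in enumerate(buffers):
        output(i, buf)

# Simulated run: a burst of "voice" followed by silence.
rng = np.random.default_rng(0)
blocks = [rng.normal(0, 0.1, BUFFER_LEN) for _ in range(10)] + \
         [np.zeros(BUFFER_LEN) for _ in range(6)]
recorded = record_buffers(iter(blocks))
play_back(recorded, lambda i, b: print(f"playing buffer {i}, {len(b)} samples"))
```

In the installation itself this logic would live in the patch, with the playback stage handing each buffer on to the "Glitch" VST as it cycles through its effects.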
With this done and under my belt, I can start thinking more about chaos (in the hope that I can produce some truly algorithmic sounds for the project!)
Note: Max/MSP and LabVIEW are definitely NOT the same thing, although they may look like it... I spent a lot of the day getting my syntax out of a mess! ;)