Tenterhooks

The source material for this piece was created in January of my first year of PhD research and consists entirely of short recordings, between 1 and 4 seconds in length, of me playing the disembodied comb from an old music box. Each of the 20 small clips was performed using a screw as an excitation source.

A file player, which I had created when composing the Ouroborus pieces, was then used to trigger these sound files. The exact timing and duration of each sound file were controlled using a pair of GUI sliders within the CsoundQt front end, each of which was in turn modulated by a dedicated pseudo-random counting device. The level, attack and decay characteristics of the amplitude envelope for each sound were also determined through virtual chance operations. The audio output from this file-playing instrument was then bussed to a set of auxiliary devices consisting of a reverb unit, a delay unit and a Csound version of an algorithmic beat-processing device developed by Nick Collins for the audio programming software SuperCollider.
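The triggering logic might be sketched as follows. This is a minimal Python analogue of the chance-driven file player described above, not the Csound instrument itself; the file names, timing windows and envelope ranges are hypothetical stand-ins rather than the values used in the piece.

```python
import random

# Hypothetical stand-in for the CsoundQt file player: each trigger
# chooses a clip, an onset gap, a duration and an amplitude envelope
# by bounded pseudo-random selection (all ranges assumed for illustration).

CLIPS = [f"comb_{i:02d}.wav" for i in range(20)]  # the 20 source clips

def next_event(now):
    """Return one chance-determined playback event."""
    return {
        "time":   now + random.uniform(0.1, 2.0),  # onset gap (s), assumed
        "file":   random.choice(CLIPS),
        "dur":    random.uniform(1.0, 4.0),        # playback duration (s)
        "level":  random.uniform(0.2, 1.0),        # envelope peak amplitude
        "attack": random.uniform(0.01, 0.5),       # envelope attack time (s)
        "decay":  random.uniform(0.1, 2.0),        # envelope decay time (s)
    }

if __name__ == "__main__":
    t = 0.0
    for _ in range(8):
        event = next_event(t)
        t = event["time"]
        print(event)
```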

In each of these auxiliary devices, the values for the majority of parameters were chosen using indeterminate processes. The range of values that each of these parameters could be assigned was gradually refined through many meticulous experiments with each device. The large number of pseudo-random processes at work within this system provides countless permutations, creating a gamut of idiosyncratic and nuanced sonic possibilities at the micro level; the total systemic output, however, when viewed at the macro level, is measured and circumscribed to a preconceived idea. Through subtle tweaking of the maximum and minimum bounds for certain parameters, a nebulous prediction could be made as to the general sonic output of the system.
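The bound-refinement idea can be expressed compactly. In the sketch below, the parameter names and (min, max) windows are invented for illustration, not the actual ranges arrived at through those experiments; the point is that narrowing a window constrains the macro-level character without removing micro-level variation.

```python
import random

# Each effect parameter is drawn pseudo-randomly, but only from a
# hand-tuned (min, max) window, so every draw differs in detail while
# the overall output stays within a preconceived region.

BOUNDS = {                      # example windows, assumed for illustration
    "reverb_size":  (0.6, 0.9),
    "delay_time":   (0.125, 0.5),
    "stutter_rate": (2, 16),
}

def draw_parameters():
    """Draw one chance-determined parameter set from the current windows."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

# Refining a bound circumscribes the result without fixing it outright:
BOUNDS["reverb_size"] = (0.8, 0.9)
print(draw_parameters())
```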

I rediscovered the source material generated through this system while searching through an old hard drive and decided to reimagine it using techniques that I had recently been experimenting with in Ableton Live. I had begun using Live instead of Pro Tools because of its OSC compatibility, and I was having some success mapping VST parameters to incoming data from the LEAP motion sensor. I refined over two hours' worth of material into a source file of fifteen minutes. I then prepared five iterations of that source audio, repitching the material in octaves and minor thirds above and below the original. A range of devices, including resonant filters, vocoders, and various reverb and delay units, was inserted into the signal chain of each of these five tracks. A mixture of parameter automation and gestural input from the LEAP was used to control the behaviour of each device in each track's signal chain.
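The five-track transposition scheme reduces to simple playback-rate ratios. The layout below (the original plus octave and minor-third shifts in each direction) is an assumption consistent with the description above, and the ratios follow from the standard equal-tempered relation 2^(semitones/12).

```python
# Assumed arrangement of the five repitched iterations, expressed as
# playback-rate ratios via the equal-tempered formula 2 ** (st / 12).

OFFSETS = {                # semitone offsets, assumed layout
    "original":        0,
    "octave_up":      12,
    "octave_down":   -12,
    "minor3rd_up":     3,
    "minor3rd_down":  -3,
}

for name, semitones in OFFSETS.items():
    ratio = 2 ** (semitones / 12)
    print(f"{name:>14}: x{ratio:.4f} playback rate")
```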

Composed in 2018.