conatus

Conatus is one of a series of compositions written as part of my PhD research into composing electroacoustic music with the body. In this context I use motion and gesture to trigger and articulate sound objects in real time. My desire to compose electroacoustic pieces informed by corporeal input stems from the disconnect I have observed between the audience and performers of electronic music.

This physicality of performance is something that I want to explore in electronic music performance, with a view to achieving a stronger connection between the performer and the audience. To this end I have been exploring intuitive ways of interacting with computers and computing devices using motion sensors and wearable technology. The goal of this interaction is to create a synergy between the physical actions of a performer and the musical output of the computing device being used in the composition of the material.

Linking musical gestures to the physical movements used to create them is, in my experience, vital to the appreciation of electronic music performance, and this is something that I wish to highlight through the use of new forms of interaction. The use of computers has also significantly reduced the possibility for serendipitous events to occur in the composition and performance of electronic music. I am of the opinion that these chance events are valuable to the inspiration for, and realisation of, artistic works. As a result, I purposefully invite elements capable of introducing chance and uncertainty, such as human interaction and environmental influences, to be major actors within my compositions.

Conatus represents the next step in my research into gesturally driven electroacoustic composition. Two of my previous compositions used the Xbox Kinect to track the corporeal movements of a performer. These movements could then be mapped to different musical functions, such as triggering synthesised instruments or processing incoming audio. While this approach to gestural performance worked quite well, I found there were issues with the robustness and reliability of the infra-red camera, especially when operating under theatre lighting. To address this I have added a homemade glove controller to my compositional system. This controller grants me far more responsive and robust control over the digital audio processes that I use to construct my music.

Data pertaining to the status of the glove is sent via OSC from a small microcontroller on the back of the glove to a server running the audio-processing software Csound. Gestural information collected by the Kinect is also sent to Csound via OSC and is used to complement the gestural data collected by the glove.
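As a rough sketch of this architecture (the port number, OSC address patterns, and channel names below are illustrative assumptions, not the actual ones used in the piece), incoming glove and Kinect data can be received in Csound with the OSCinit and OSClisten opcodes:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

; Listen for OSC packets on port 7770 (the port number is an assumption)
gihandle OSCinit 7770

instr ReceiveGestures
  kflex init 0      ; finger-flex value from the glove
  kx    init 0      ; hand position from the Kinect
  ky    init 0

  ; Hypothetical address patterns: one float per flex message,
  ; two floats per Kinect hand-position message
  knew1 OSClisten gihandle, "/glove/flex", "f", kflex
  knew2 OSClisten gihandle, "/kinect/hand", "ff", kx, ky

  ; Publish the latest values on named channels so that the
  ; audio-processing instruments can read them with chnget
  chnset kflex, "flex"
  chnset kx,    "handx"
  chnset ky,    "handy"
endin
</CsInstruments>
<CsScore>
i "ReceiveGestures" 0 3600
</CsScore>
</CsoundSynthesizer>
```

Each OSClisten call updates its output variables only when a new packet arrives at its address, so the channels always hold the most recent gesture values.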

Within Csound the incoming audio, sampled live from the percussion, is stretched, frozen, granulated and resynthesised in real time, facilitating a dialogue between the percussion and the live electronics. Musical parameters are still manipulated through the movements of a performer's body in the performance space, as in my previous compositions Kinesia and Proprioception, but the compositional system is now bolstered by the use of the glove.
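The sketch below illustrates one way such processing can work; it is my own minimal illustration, not the actual instruments used in Conatus. Live input is written continuously into a function table, and a gesture-driven time pointer then stretches or freezes the captured audio via phase-locked vocoder resynthesis (the mincer opcode). The "flex" and "handx" channels are assumed to be filled by an OSC-receiving instrument like the one sketched above.

```csound
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

; Zeroed capture buffer, ~23.8 s at 44.1 kHz (GEN02, no normalisation)
giBuf ftgen 0, 0, 1048576, -2, 0

instr Capture
  ain  inch 1                        ; live percussion input
  andx phasor sr / ftlen(giBuf)      ; sweep the buffer once per buffer length
  tablew ain, andx, giBuf, 1         ; write with a normalised (0-1) index
endin

instr StretchFreeze
  ; Hypothetical gesture mappings: finger flex scales playback speed
  ; (0 = frozen, 1 = real time); hand position sets transposition
  kspeed chnget "flex"
  kpitch = 0.5 + chnget:k("handx")

  ; Integrate speed into a time pointer so a gesture can slow or freeze it
  ktime init 0
  ktime += kspeed * ksmps / sr
  ktime wrap ktime, 0, ftlen(giBuf) / sr

  atime interp ktime
  asig  mincer atime, 0.8, kpitch, giBuf, 1   ; phase-locked stretch/freeze
  outs asig, asig
endin
</CsInstruments>
<CsScore>
i "Capture"       0 3600
i "StretchFreeze" 0 3600
</CsScore>
</CsoundSynthesizer>
```

Holding the speed value at zero freezes the spectral frame at the current pointer position, while intermediate values time-stretch the captured percussion against the live source.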

Conatus is an electroacoustic composition built around a dialogue between percussion and live electronics. A gesturally informed compositional interface is used to capture chunks of live audio and process them in real time, creating a texturally rich granular landscape which serves as both a counterpoint and an accompaniment to the source material.


Percussion: Ror Conaty

Live Electronics: Shane Byrne

Composed 2018