Words Trigger

Michaella Moon

This project connects RunwayML, Processing, Max/MSP and TouchDesigner to create a performance rich in audio and visuals.
The audio and visuals are all triggered by the text typed into the program. As audience members type words in at random, RunwayML’s AttnGAN takes those words and generates a collaged image. Depending on the number of words typed in, Processing samples that same number of random points on the collaged image and reads their colour values. These colour values are then converted into MIDI notes and notated on a readable score that the cellist can play in real time; the score is updated every time new text is submitted into the program. As the cellist improvises off the notes she is given, the Max/MSP patch pitch-detects the 36 possible notes she can play (ranging from C2 to B4) and triggers a TouchDesigner particle system to change shape, colour and position according to the detected pitch.
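A minimal Processing sketch of one possible version of the image-sampling step, assuming the AttnGAN output has been saved locally as "attngan.png" (a hypothetical filename) and that the summed RGB value is folded into the cello's C2–B4 range; the exact conversion used in the piece may differ:

```java
PImage collage;
int wordCount = 5;   // hypothetical: in the piece this comes from the submitted text

void setup() {
  size(256, 256);
  collage = loadImage("attngan.png");   // hypothetical filename for the AttnGAN output
  image(collage, 0, 0, width, height);

  for (int i = 0; i < wordCount; i++) {
    // pick a random point on the collaged image
    int x = int(random(collage.width));
    int y = int(random(collage.height));
    color c = collage.get(x, y);

    // sum the RGB channels (0-765) and fold the sum into the
    // 36-note range C2 (MIDI 36) to B4 (MIDI 71)
    int rgbSum = int(red(c) + green(c) + blue(c));
    int midiNote = 36 + (rgbSum % 36);

    println("point (" + x + ", " + y + ") -> MIDI " + midiNote);
  }
}
```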

A demo video showing how I used the text input of RunwayML’s AttnGAN to create an AI-generated image. I took random coordinates from the generated image (via Processing) and added together the RGB colour values to convert them into MIDI values. In Max/MSP, I turned the MIDI values into notes on a score for the ease of the performer.
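For reference, a small helper (written here in Processing/Java rather than as a Max patch) showing one way a MIDI number can be turned into a readable note name for the score; the function name and mapping are illustrative:

```java
// Turn a MIDI number into a note name, with middle C (MIDI 60) written as C4.
String midiToNoteName(int midi) {
  String[] names = { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };
  int octave = (midi / 12) - 1;
  return names[midi % 12] + octave;
}

// e.g. midiToNoteName(36) returns "C2" and midiToNoteName(71) returns "B4",
// the two ends of the cello range used in the piece.
```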

A demo video showing how each string affects the TouchDesigner particle system. Using OSC between the programs, each string triggers a change in the particle system’s parameters: the cello’s G string triggers a burst of lines, the D string changes the shape of the particles, the A string alternates between four colours, and E4 (on the A string) triggers an amplitude change in the shape. Although only briefly demonstrated in this video, the pitch-detected notes also move the centre of the particle system when the previous note and the current note are a Major Second or a Perfect Fifth apart.
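A rough Processing-style sketch of this mapping logic, using the oscP5 library to send messages to TouchDesigner. The port numbers, OSC addresses and the onPitchDetected hook are assumptions for illustration only; in the actual piece the pitch detection and triggering happen in the Max/MSP patch:

```java
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress touchDesigner;
int previousNote = -1;

void setup() {
  oscP5 = new OscP5(this, 12000);                     // local listening port (assumed)
  touchDesigner = new NetAddress("127.0.0.1", 7000);  // TouchDesigner address (assumed)
}

// called whenever the pitch tracker reports a new MIDI note (hypothetical hook)
void onPitchDetected(int midiNote) {
  // forward the detected note to the particle system
  OscMessage msg = new OscMessage("/particles/note");
  msg.add(midiNote);
  oscP5.send(msg, touchDesigner);

  if (previousNote >= 0) {
    int interval = abs(midiNote - previousNote) % 12;
    // Major Second = 2 semitones, Perfect Fifth = 7 semitones:
    // shift the particle system's centre when either interval occurs
    if (interval == 2 || interval == 7) {
      OscMessage move = new OscMessage("/particles/center");
      move.add(random(-1, 1));   // new x offset
      move.add(random(-1, 1));   // new y offset
      oscP5.send(move, touchDesigner);
    }
  }
  previousNote = midiNote;
}

void draw() { }
```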