
Experimenting has been a crucial part of developing the system: it helped to better define what was needed from the groovebox to make it more user-friendly, and it helped to fine-tune the parameters that were already available.


The short film features a wide variety of morphing images with a common theme that can be described as “waves”. I associated these shapes with sound waves, although the relationship between the sound and the image is not literal. When the distance between the waves stretches and shrinks, rather than changing the pitch, I preferred to vary the LPF cutoff to translate the images’ movement. This way, the relationship between the moving image and the sound is maintained. However, since the LPF has a strong resonance, it effectively creates a musical tone. This change in tone can also be thought of as a change in pitch, resulting in a double link to the image: one “fictitious” and the other more “natural”, though still not literal.
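The idea of mapping the visual wave spacing to the filter rather than to pitch can be sketched as follows. This is a minimal illustration, not the actual Max/MSP patch: the function name, the frequency range and the exponential mapping are all assumptions chosen for the example.

```python
def spacing_to_cutoff(spacing, min_hz=200.0, max_hz=8000.0):
    """Map a normalized wave spacing (0.0 = fully shrunk, 1.0 = fully
    stretched) to an LPF cutoff frequency.

    The mapping is exponential, so equal visual changes produce
    perceptually even sweeps. With a strongly resonant filter, the
    peak at the cutoff rings like a pitched tone, which is why the
    sweep is heard as a quasi-melodic movement rather than a pitch bend.
    The range 200-8000 Hz is a hypothetical choice for illustration.
    """
    spacing = max(0.0, min(1.0, spacing))          # clamp to [0, 1]
    return min_hz * (max_hz / min_hz) ** spacing   # exponential sweep
```

A spacing of 0.5, for instance, lands at the geometric mean of the two extremes rather than the arithmetic midpoint, which matches how we hear frequency.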


Since the video has many elements (and it is not always possible to react to all of them), a random event generator is built into the system, both to keep the user engaged and to reflect the video’s content. The generator offers multiple choices, from adding a single hit/note to adding 16 of them at once. Users can also save their own presets and recall them either randomly or specifically.
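The random event generator and the preset bank described above could be modelled roughly like this. This is a hedged sketch of the behaviour, not the groovebox’s implementation: the class and function names, and the 16-step pattern representation, are assumptions.

```python
import random

def random_events(steps=16, max_hits=16, rng=random):
    """Generate a step pattern with a random number of hits (1..max_hits)
    placed at random step positions, mimicking the generator's range
    from 'add a single hit' up to 'add 16 at once'."""
    n = rng.randint(1, max_hits)
    pattern = [0] * steps
    for pos in rng.sample(range(steps), min(n, steps)):
        pattern[pos] = 1
    return pattern

class PresetBank:
    """Save named patterns and recall them specifically or at random."""

    def __init__(self):
        self._bank = {}

    def save(self, name, pattern):
        self._bank[name] = list(pattern)

    def recall(self, name=None, rng=random):
        # With no name given, pick a stored preset at random.
        if name is None:
            name = rng.choice(list(self._bank))
        return list(self._bank[name])
```

A user might save a favourite generated pattern under a name, then later either recall it directly or let `recall()` surprise them with any stored preset.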

20Hz

 

For this project, two other sound-driven people and I built our own Digital Audio Instruments using Max/MSP and jammed on a chosen video, following the theme of "Pattern".


My digital instrument was very well received, both for its versatility and for its added functionality: I was able to implement automation, envelope modification, randomization of steps and multiple waveforms for the synth engine to work on.


The main part of the research focused on the sound and, in particular, on how to synthesize a drum set. The sounds of the drum kit can resemble both electronic and analogue drum sounds; these hybrid sounds were designed to reflect the video’s imagery, which is at once “digital” and “organic”. The synth was chosen for its capabilities, which were expanded in order to obtain a wider range of controllable parameters and timbres.
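As one illustration of hybrid drum synthesis, a kick drum is classically built from a sine wave whose pitch glides down while its amplitude decays exponentially. The sketch below shows that generic recipe only; it is an assumption for illustration, not the actual patch, and the parameter values (150 Hz to 50 Hz over 0.4 s) are hypothetical.

```python
import math

def kick(sr=44100, dur=0.4, f_start=150.0, f_end=50.0, amp_decay=8.0):
    """Synthesize a simple kick drum: a sine oscillator whose frequency
    glides exponentially from f_start down to f_end while the amplitude
    decays exponentially. Returns the samples as a list of floats."""
    n = int(sr * dur)
    out = []
    phase = 0.0
    for i in range(n):
        t = i / n                                  # normalized time, 0..1
        freq = f_start * (f_end / f_start) ** t    # exponential pitch drop
        phase += 2 * math.pi * freq / sr           # advance oscillator phase
        amp = math.exp(-amp_decay * t)             # exponential amplitude decay
        out.append(amp * math.sin(phase))
    return out
```

Shortening the decay and raising the start frequency pushes the result toward an “electronic” click, while slower glides sound more “analogue”, which is one way such a recipe can cover both families of sounds.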

S.T.R.E.A.M.


Due to the ever-growing schism that seems to pervade the audio-visual post-production environment, this project caters to some of the needs of the audio professionals in this field, providing them with a tool that makes it possible to connect their world to that of the image.

To investigate the implications of this possibility, I created a program using Max/MSP. Starting from the audio of a soundtrack, the system creates musical ambiences and soundscapes, adding a new layer of sound that resides in its own dimension. The very nature of this layer is uncertain and deeply related to the video: S.T.R.E.A.M.’s output can be sonically altered by the properties of the video, basing some of its settings on the video’s RGB channels and luminosity.

This way, the system creates a musical ambience that can act as a bridge, not only between the different families of sounds present in a movie, but also between the audio and video components. Several examples generated by the program are included in this research to illustrate some of its possible uses.

 

Previous works, from early studies of the 1930s to film soundtracks and digital projects, have inspired this project, which furthers concepts introduced by Benjamin Fondane, Brian Eno and Larry Sider. S.T.R.E.A.M. responds to the growing nexus of Sound Design, Composition and Audio Engineering, offering these figures a tool to help them create and process audio content, or simply to inspire them.


The birth and evolution of the project have been documented at https://branded.me/ferdinandovalsecchi/posts

The system is made of four feedback engines, each composed of feedback loops and effects. The engines’ parameters react in real time to changes in the video (the RGB channels and luminosity of the image). The audio produced by the engines is directly related to the video’s pre-existing audio, since it is created by the feedback loops from the input they are fed.
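The kind of video-to-parameter mapping described above could look like the following sketch. The source only states that the RGB channels and luminosity drive the engines’ parameters; the specific parameters chosen here (feedback amount, delay time, filter cutoff), their ranges, and the channel assignments are all hypothetical.

```python
def frame_to_params(r, g, b):
    """Map the mean RGB channel values (0..255) of a video frame to a
    hypothetical set of feedback-engine parameters.

    Luminosity is computed with the Rec. 709 luma weights. Each channel
    is assigned to one parameter purely for illustration.
    """
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b     # Rec. 709 luminosity
    return {
        "feedback": 0.5 + 0.45 * (r / 255.0),      # red -> feedback amount
        "delay_ms": 20.0 + 480.0 * (g / 255.0),    # green -> delay time
        "cutoff_hz": 200.0 + 7800.0 * (b / 255.0), # blue -> filter cutoff
        "level": lum / 255.0,                      # luminosity -> output level
    }
```

Running such a mapping once per frame would keep the engines’ settings continuously tied to the image, which is the real-time behaviour the text describes.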
