Background & Concept
The experiences that can occur on a bed are both great in number and dynamic in intensity. Sleeping alone can range from a calm, blissful slumber to a chaotic, hellish nightmare. These experiences, however, are largely silent: there is no way for a person to play a sound that immediately corresponds to what he or she is going through.
Our “Storm Bed” aims to give the experiences that can occur on a bed a simple, yet descriptive sonic language. The basic requirement for communication is a steady, calm environment – in this case, the sound of a forest. On top of this baseline, we add three additional layers: the sound of rain, the sound of wind, and the sound of lightning. Thus, our sonification takes a scene that anyone ought to be familiar with, and uses it as a language to convey a physical experience.
We used a pair of accelerometers, placed at either end of the bed, to obtain a numerical representation of a physical interaction. The sensors connect to an Arduino that takes 1,000 readings over 3 seconds and increments a counter in one of four “bins,” each corresponding to a different level of intensity. Based on a set of threshold values, we determine what our environment should “say” and send data to a Max/MSP patch, which triggers samples, decodes the Arduino’s data stream, and outputs the audio.
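The binning scheme above can be sketched as follows. This is a minimal illustration, not our actual Arduino sketch: the threshold values, the `binFor` and `dominantBin` names, and the magnitude sampler are all hypothetical stand-ins for the real accelerometer readings.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kNumBins = 4;

// Hypothetical thresholds separating the four intensity levels
// (calm / light / moderate / intense motion).
constexpr double kThresholds[kNumBins - 1] = {0.1, 0.5, 1.0};

// Sort one motion-magnitude reading into an intensity bin.
std::size_t binFor(double magnitude) {
    for (std::size_t i = 0; i < kNumBins - 1; ++i)
        if (magnitude < kThresholds[i]) return i;
    return kNumBins - 1;
}

// Tally a run of readings (1,000 in our setup) into the bins,
// then report the most-filled bin as the overall intensity.
template <typename Sampler>
std::size_t dominantBin(Sampler readMagnitude, int samples = 1000) {
    std::array<int, kNumBins> counts{};
    for (int i = 0; i < samples; ++i)
        counts[binFor(readMagnitude())]++;
    std::size_t best = 0;
    for (std::size_t i = 1; i < kNumBins; ++i)
        if (counts[i] > counts[best]) best = i;
    return best;
}
```

The dominant bin is what gets sent over serial to the Max/MSP patch, which maps it onto the rain, wind, and lightning layers.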
The group consensus is that the project came out very well. Hearing your actions sonified in real time is highly entertaining, and certainly gives the project a strong “toy” appeal. We also feel that with more time we could build a richer sonic language, and both tune our existing sensors and add new ones to obtain more robust measurements. All in all, we are excited about the results and are considering using the concept as a starting point for our final project.
You can view a video here.
You can view our code here. (Note: the environment samples were excluded; they exceed 100 MB.)