In ‘AMYGDALA,’ Twitter Emotions Bathe Viewers in Sounds and Light

The Italian digital arts group fuse* returns with a quadrophonic, circular multimedia spectacle.
Screencaps by the author

In the human brain, there are two twin regions known as the amygdalae. Shaped like almonds, each amygdala is composed of a cluster of nuclei, one sitting in the brain’s left hemisphere and one in its right. Together, they are said to play roles in memory processing, emotional development, and decision-making. The Italian digital arts studio fuse* explores these twin regions of the brain in their aptly named new installation, AMYGDALA, which uses sentiment analysis (opinion mining and emotion recognition) to generate the audiovisual spectacle currently on view at FLUX-US as part of this year’s edition of Arte Fiera in Bologna, Italy.


The AMYGDALA installation consists of 125,952 LEDs on 41 columns in the CUBO’s Media Garden, and is designed to work 24 hours a day for an entire year. The ever-evolving results of the sentiment analysis are then sent from the Media Garden to the Mediateca to be “archived” on 12 video walls in the form of “generative emotional graphics” that will, as fuse* explains on its website, “form the emotional memory of the three months in which AMYGDALA will be deployed.”

Built on an open-source library for sentence-based emotion recognition, the sentiment analysis algorithm splits emotions into six types: happiness, sadness, fear, anger, disgust and amazement. Fuse* explains that it carries out a text analysis of each individual tweet, at a rate of about 30 tweets per second.

“The text analysis elaborates the tweet word by word using a dictionary of over 5,000 lexical items, each of which has a score for each emotion on the basis of its meaning,” fuse* says. “What’s more, during the analysis of a tweet there are also heuristic rules, for example checking for any negatives in the text, or doubling the score if a word is written in capitals to increase its importance.”
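The approach fuse* describes can be sketched in a few lines of Python. The tiny dictionary, the scores, and the exact rule details below are illustrative assumptions, not fuse*’s actual data or code; they only show the shape of a word-by-word lexicon lookup with the two heuristics the studio mentions, negation handling and doubling the weight of all-caps words.

```python
EMOTIONS = ["happiness", "sadness", "fear", "anger", "disgust", "amazement"]

# Each lexical item carries a score per emotion (fuse* uses over 5,000 items;
# these three entries and their scores are made up for illustration).
LEXICON = {
    "love":  {"happiness": 0.9},
    "awful": {"sadness": 0.4, "disgust": 0.6},
    "scary": {"fear": 0.8},
}

NEGATIONS = {"not", "no", "never"}

def analyze(tweet: str) -> dict:
    """Score a tweet word by word: negations flip the sign of the next
    scored word, and an all-caps word counts double."""
    scores = dict.fromkeys(EMOTIONS, 0.0)
    negate = False
    for token in tweet.split():
        word = token.strip(".,!?").lower()
        if word in NEGATIONS:
            negate = True          # affects the next word found in the lexicon
            continue
        entry = LEXICON.get(word)
        if entry:
            weight = 2.0 if token.strip(".,!?").isupper() else 1.0
            sign = -1.0 if negate else 1.0
            for emotion, value in entry.items():
                scores[emotion] += sign * weight * value
            negate = False
    return scores

print(analyze("That movie was not scary, I LOVE it"))
```

Here “not scary” subtracts the fear score instead of adding it, and “LOVE” contributes twice the happiness score of a lowercase “love.”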

AMYGDALA’s quadrophonic (four-channel surround sound) component metaphorically represents the process of analysis and emotion recognition. This part of the installation is built on Max/MSP, with six sound textures representing the six emotions, all mixed once the sentiment analysis data is received via the Open Sound Control (OSC) communication protocol.
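An OSC message carrying the six emotion values to a Max/MSP patch could be built as follows. The address “/amygdala/emotions” is an assumption (fuse* does not publish its OSC namespace), but the encoding follows the OSC 1.0 specification: strings null-terminated and padded to 4-byte boundaries, a type tag string, then big-endian 32-bit float arguments.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes (OSC string rule)."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, floats: list) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    type_tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for value in floats:
        msg += struct.pack(">f", value)   # big-endian float32
    return msg

# Six values, one per emotion, e.g. normalised scores from the text analysis.
packet = osc_message("/amygdala/emotions", [0.4, 0.1, 0.2, 0.1, 0.1, 0.1])
```

The resulting packet would then be sent over UDP to the machine running the Max/MSP patch, where a [udpreceive] object can unpack it.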


“In the first stage of data gathering, distortions, minimal reproduction delays (varying from 0.1 to 100 msec.) and strong decay effects are applied, thus obtaining coarse and barely recognisable sounds,” fuse* explains. “[A]s the emotions are slowly identified, the sound becomes clearer, revealing a melody corresponding to the resulting emotional percentages.”
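Fuse* does not publish how the delay time shrinks as an emotion is identified, but a plausible sketch is an exponential interpolation between the two endpoints it quotes (0.1 to 100 ms), which suits a range spanning three orders of magnitude; the mapping itself is an assumption.

```python
MAX_DELAY_MS, MIN_DELAY_MS = 100.0, 0.1   # endpoints quoted by fuse*

def delay_for(progress: float) -> float:
    """Delay in ms for progress in [0.0, 1.0]: 0.0 = raw, coarse data,
    1.0 = emotion fully identified (hypothetical exponential mapping)."""
    return MAX_DELAY_MS * (MIN_DELAY_MS / MAX_DELAY_MS) ** progress

for p in (0.0, 0.5, 1.0):
    print(f"progress {p:.1f} -> delay {delay_for(p):6.2f} ms")
```

At the midpoint this gives roughly 3 ms rather than 50 ms, so the sound clears up quickly once recognition gets underway.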

Through a Max/MSP patch, the quadrophonic sound system revolves the six tracks around spectators. The idea, according to fuse*, is to disorient the spectator while also marking the end of AMYGDALA's cycle—which, of course, is a bit like the experience of social media and the Internet itself: a digital landscape where sight and sound overwhelm and confuse.
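Revolving a sound around listeners in a four-speaker setup is typically done by constant-power panning between the two speakers adjacent to the source's current angle. The speaker layout and panning law below are assumptions for illustration, not fuse*'s actual patch.

```python
import math

# Four speakers at 0, 90, 180, 270 degrees around the listeners (illustrative).
def quad_gains(angle_deg: float) -> list:
    """Constant-power gains for four speakers for a source at angle_deg."""
    a = angle_deg % 360.0
    gains = [0.0] * 4
    sector = int(a // 90)              # which adjacent speaker pair is active
    frac = (a % 90.0) / 90.0           # position between the two speakers
    gains[sector] = math.cos(frac * math.pi / 2)
    gains[(sector + 1) % 4] = math.sin(frac * math.pi / 2)
    return gains

# Sweeping angle_deg over time revolves the sound around the room.
print(quad_gains(45.0))
```

The cosine/sine pair keeps the summed power constant at every angle, so the sound's loudness stays steady as it circles the space.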

Despite its intended disorienting effects, AMYGDALA looks and sounds rather beautiful, as fuse*’s video illustrates. And it almost looks like a futuristic reinterpretation of ancient stone circles, which carried, if not emotions, at least their own sets of associations.

AMYGDALA from fuse* on Vimeo.

Click here to see more of fuse*’s work.
