The Works of

i❤computers

Drawing by Bryan Jackson

Twitter Ikebana

Summer of 2012

I programmed three Max/MSP patches that collectively served as concept software for this planned art piece by Bryan Jackson. Bryan’s artificial Ikebana was designed around the traditional format of three parts: heaven, earth, and human. The Ikebana was planned to be a thin aluminum tube with earphones sticking out of it, an artificial representation of a budding cherry tree branch. Depending on which of the three sections the earphones belonged to, specific tweets would be spoken by the computer and output to that group of earphones through a multi-channel audio system.

My friend Nic worked on the electronics and structure of the Ikebana, while I worked on the software, programming a Max patch for each of the three sections. We only built a proof of concept for this project in order to get it started for future developers.

Process

The first challenge was figuring out what form of information to mine from Twitter. Initially I planned to sonify tweets, and I researched three ways to do this: by directly transforming text to music based on the techniques provided in this research paper [PDF], by borrowing the association method used in TweetDreams, or by taking the emotion-based mining techniques from the WeFeelFine project and mapping those results to melodies of my choosing.

However, Bryan just wanted the tweets to be spoken by a speech synthesizer. We decided on the following types of tweets for each section:

Heaven

The Heaven section called for the most straightforward way to analyze Twitter data: get the most popular tweet of the day and have it spoken by multiple speech synths, so it sounds like a group of people saying the tweet in unison.

Unfortunately, Twitter didn’t have a service that kept track of the most popular tweets of the day. Since I couldn’t query Twitter for this, I used Topsy’s API instead. The API provided a JSON file of Topsy’s top 100 trending tweets through the following request URL:

http://otter.topsy.com/top.json?thresh=top100&type=tweet&locale=en

It was at this point that I decided to use Max/MSP as my development platform. I was already familiar with it from my Music 152 class, and it is well suited to the rapid development of multimedia projects. Using Max, I’d make an HTTP request to the above URL, get my JSON file, and extract the first tweet using the dict object. I love this object because it automatically creates structured data for me when given a JSON file.
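
For anyone who doesn’t read Max patches, the flow is roughly the Python sketch below. The Topsy Otter API has since been shut down, and the JSON field names ("response", "list", "content") are illustrative guesses rather than a verified schema.

import json
import urllib.request

# Topsy's Otter API (now defunct) -- the same request the Max patch made.
URL = "http://otter.topsy.com/top.json?thresh=top100&type=tweet&locale=en"

def top_tweet():
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)
    # The field names below ("response", "list", "content") are illustrative;
    # the real keys would need to be checked against Topsy's documentation.
    return data["response"]["list"][0]["content"]

print(top_tweet())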

The next issue was speaking the text. Now you’d think Max/MSP, a program popular with electronic musicians, would have a standard object for text-to-speech. Nope. So I resorted to a third-party object called aka.speech. I wanted to take the audio generated by each aka.speech object and send it to a specific audio channel (a specific earphone), as I could easily do with any other MSP object. Instead, aka.speech was no different from typing the say command in the OS X terminal: the resulting audio goes straight to the speakers. This one little caveat would become the biggest hurdle of the project.
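
To make that behavior concrete, here is a rough Python stand-in for three aka.speech objects firing at once: it just spawns three say processes (the voice names are stock OS X voices), and the audio goes straight to the default output device, which is exactly the routing limitation I’m describing.

import subprocess

tweet = "This is the most popular tweet of the day"  # placeholder text

# Three concurrent speech synths, like three aka.speech objects firing together.
# Each is just OS X's `say` command, so the audio goes straight to the default
# output device -- there is no way to route it to a specific channel from here.
voices = ["Alex", "Victoria", "Fred"]
procs = [subprocess.Popen(["say", "-v", v, tweet]) for v in voices]
for p in procs:
    p.wait()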

Despite this setback, I had three aka.speech objects playing at the same time. Here’s what this first pass looked and sounded like:

Once I exposed my code to the madness that is Twitter literature, I noticed which patterns caused my text-to-speech to talk gibberish. I pulled the regexp object into my patch to strip out URLs, the most obvious culprit, using John Gruber’s popular regular expression pattern. Other text cleanup included, but was not limited to, removing the ‘@’ symbol and transforming HTML entities.
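
The cleanup itself is plain string munging. Here is the same set of steps sketched in Python, with a simplified URL pattern standing in for Gruber’s far more thorough one:

import html
import re

# A simplified URL pattern -- a stand-in for John Gruber's much more thorough regex.
URL_PATTERN = re.compile(r"https?://\S+")

def clean_tweet(text):
    text = html.unescape(text)          # &amp; -> &, &lt; -> <, and so on
    text = URL_PATTERN.sub("", text)    # drop links the synth would mangle
    text = text.replace("@", "")        # "@user" reads better as "user"
    return " ".join(text.split())       # collapse leftover whitespace

print(clean_tweet("RT @someone: loving this &amp; more http://t.co/abc123"))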

Here’s the result after cleaning up tweets:

When the time came to add multi-channel audio functionality to my project, I put considerable time into recording each aka.speech output, one at a time, and then playing all the recordings simultaneously on their respective audio channels. However, this hack proved too buggy and slow (since I had to wait for each speech to finish during its recording) to be worth using (I will spare you the messy details). Since the hack was fruitless, I knew I would have to resort to making my own MSP object, but first I had to finish the other two sections of the Twitter Ikebana.
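
For what it’s worth, the cleaner route I never got to would be rendering the speech to files first and then treating them like any other samples. OS X’s say command can already do the rendering half; something like the sketch below (one possible approach, not what my patch did) would pre-render each tweet so the audio could be routed freely.

import subprocess

def render_speech(text, path, voice="Alex"):
    # `say -o` writes an AIFF file instead of playing through the speakers,
    # which sidesteps aka.speech's routing problem entirely.
    subprocess.run(["say", "-v", voice, "-o", path, text], check=True)

for i, tweet in enumerate(["first tweet", "second tweet", "third tweet"]):
    render_speech(tweet, "tweet_%d.aiff" % i)
# The rendered files could then be loaded into MSP buffers (or any sampler)
# and sent to whichever output channel each earphone group needs.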

Earth

The Earth section of the Twitter Ikebana would speak news tweets posted by official sources such as BBC’s Twitter account. I had a list of news accounts to search through, but which tweets would I pick to say? Would I just say the tweets as they showed up?

I asked Bryan about this, and he told me that the final Ikebana piece should have an interface that lets the artist choose a specific topic of tweets for all three sections to reflect. This search-query spec simplified my work. I didn’t have to data-mine Twitter or look for some ambiguous category of tweets that represented the zeitgeist of the moment. Instead, I’d just let Twitter’s search feature do the mining for me, which also meant I could drop Topsy, since Twitter will give you the most retweeted tweet of the day for a specific search term.

Given the modularity of my work, this search interface task was left for future developers so that I could focus on my main objectives. As a substitute, I used the top keyword from Jonathan Harris’ 10x10 project, which mines news sources for the word that best describes the zeitgeist at the moment. I would send that query to Twitter to search my pool of news accounts (Reuters, AP, etc.) using the following URL:

http://search.twitter.com/search.json?q=$1%20AND%20from%3AAP%20OR%20from%3ABreakingNews%20OR%20from%3AReuters%20OR%20from%3ABBCBreaking%20OR%20from%3AAJEnglish&result_type=recent

The $1 in the URL is replaced with the search term. Also notice that result_type=recent, so that the tweets are listed chronologically instead of by popularity.

The resulting JSON file is sent through the same patch as the Heaven section, but modified so that the results are queued up and spoken one at a time.
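
Outside of Max, the Earth patch boils down to something like the Python sketch below. The old v1 search API it relied on has long been retired, and the "results"/"text" field names are from memory, so treat this as illustrative.

import json
import urllib.parse
import urllib.request

ACCOUNTS = ["AP", "BreakingNews", "Reuters", "BBCBreaking", "AJEnglish"]

def earth_results(term):
    # Build the same query as above: term AND from:AP OR from:..., newest first.
    query = term + " AND " + " OR ".join("from:" + a for a in ACCOUNTS)
    url = ("http://search.twitter.com/search.json?q="
           + urllib.parse.quote(query) + "&result_type=recent")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("results", [])

# Queue the results and hand them to the speech synth one at a time.
for result in earth_results("election"):   # "election" is just an example term
    print(result["text"])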

Human

Now what to do for the Human section? I still had no idea how to find personal tweets. The question as posed was just too vague. I needed a specific approach, so I read the WeFeelFine research paper [PDF]. The paper revealed that their core technique was looking up sentences containing the phrase “I feel” or “I am feeling,” a technique originally used in the installation piece “Listening Post.” Looking for this simple two-word pattern, where the first word is “I” and the second word is a copula (“I am” or “I feel”), reveals intimate posts. At last I had a precise method for my Human section. This part of the project felt something like an anthropological study, and I wrote a short piece about this intimate side of Twitter.
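
The filter itself is almost trivial once you know the trick. Here is a sketch of the idea in Python, my own paraphrase of the technique rather than WeFeelFine’s actual code:

import re

# "I" followed by a copula or feeling verb -- the WeFeelFine / Listening Post trick.
PERSONAL = re.compile(r"\bI\s+(am|feel|love|hate|like|think)\b", re.IGNORECASE)

def is_personal(tweet):
    return bool(PERSONAL.search(tweet))

print(is_personal("I am feeling oddly hopeful today"))  # True
print(is_personal("Breaking: markets close higher"))    # False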

Given the search term and these phrases, I simply edited the URL in the Earth patch to create my Human patch:

http://search.twitter.com/search.json?q=$1%20AND%20%22I%20am%20feeling%22%20OR%20%22I%20love%22%20OR%20%22I%20hate%22%20OR%20%22I%20like%22%20OR%20%22I%20don%27t%20like%22%20OR%20%22I%20am%22%20OR%20%22I%20feel%22%20OR%20%22I%20think%22&result_type=recent

Because the URL was so long, I had to use a different Max object to replace that $1 with the search query, since the object I had been using had a character limit.
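
If I were rebuilding this outside of Max, I’d assemble and percent-encode that long query in code instead of by hand, which avoids the substitution limit altogether. A rough sketch (not part of the actual patch):

import urllib.parse

PHRASES = ['"I am feeling"', '"I love"', '"I hate"', '"I like"',
           '"I don\'t like"', '"I am"', '"I feel"', '"I think"']

def human_query_url(term):
    # term AND "I am feeling" OR "I love" OR ..., percent-encoded in one go.
    query = term + " AND " + " OR ".join(PHRASES)
    return ("http://search.twitter.com/search.json?q="
            + urllib.parse.quote(query) + "&result_type=recent")

print(human_query_url("election"))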

The End

With my first draft of the three sections completed, it was time to create my own text-to-speech MSP object so that I had control over which audio channel the synthesized speech would go to. However, the summer came to a close and school was starting. Even though I had gotten started learning the Max SDK and OS X’s Speech Synthesis API, the workload of school put an end to the project for Nic and me. I turned in my code and developer notes to Bryan and focused back on class assignments and DATspace.