Project 2: Judgmental Robot

Group members: Matthew Conlen, Michael Gisi, Lauren Korany

Context

For project 2, our goal was to comment on the practice of judging people by their physical characteristics. Starting from the cultural norm of ‘attractiveness’, we explored the nature of ratings through performance/interactive art. By using machine-learning classification code, we can create a systematic judge, much like the judges in our culture. Unlike the rating websites that inspired it, this program performs the act of judgment in a real-life setting (as opposed to on the internet), set up in an area already known for judgments of physical appearance. The program becomes an embodiment of our willingness to reduce people to their ‘attractiveness’, and it blatantly states its reading for the public to hear.

The classification training code is based on the ratings of images from the website hotornot.com, a site that lets users upload images to be rated on appearance.

Technical

The technical aspects of the project can be broken down into several sections.

Facial feature detection: We use the OpenCV library’s detect() function to find faces in an image. If a face is larger than a certain threshold (i.e., the person is close to the camera), we run a feature detection function on it.
This function breaks the area identified as a face into five horizontal sections and runs blob detection on each. We take the average locations of the blobs in each half of each section, which gives us, fairly accurately, the locations of the eyes, nose, and mouth.
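
As a rough illustration, the detection loop might look like the following. This is a simplified sketch assuming the hypermedia.video OpenCV wrapper for Processing; the size threshold, blob parameters, and per-band bookkeeping are illustrative, not our exact values.

  import hypermedia.video.*;    // OpenCV wrapper for Processing (assumed)
  import java.awt.Rectangle;

  OpenCV opencv;
  int MIN_FACE_WIDTH = 150;     // hypothetical "close enough" threshold, in pixels

  void setup() {
    size(640, 480);
    opencv = new OpenCV(this);
    opencv.capture(width, height);
    opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);
  }

  void draw() {
    opencv.read();
    image(opencv.image(), 0, 0);
    Rectangle[] faces = opencv.detect();
    for (Rectangle face : faces) {
      if (face.width > MIN_FACE_WIDTH) {
        findFeatures(face);     // only analyze faces close to the camera
      }
    }
  }

  // Split the face into five horizontal bands, run blob detection in each band,
  // and average the blob centroids in the left and right half of every band.
  void findFeatures(Rectangle face) {
    int bandHeight = face.height / 5;
    for (int i = 0; i < 5; i++) {
      opencv.ROI(face.x, face.y + i * bandHeight, face.width, bandHeight);
      opencv.threshold(80);     // illustrative threshold before blob detection
      Blob[] blobs = opencv.blobs(10, face.width * bandHeight / 2, 20, false);
      PVector left = new PVector();
      PVector right = new PVector();
      int nLeft = 0, nRight = 0;
      for (Blob b : blobs) {
        // assumes blob coordinates are relative to the current ROI
        if (b.centroid.x < face.width / 2) { left.add(b.centroid.x, b.centroid.y); nLeft++; }
        else { right.add(b.centroid.x, b.centroid.y); nRight++; }
      }
      if (nLeft > 0)  left.div(nLeft);    // e.g. the left eye in the eye band
      if (nRight > 0) right.div(nRight);  // e.g. the right eye in the eye band
      // store the per-band left/right averages; eyes, nose, and mouth come from these
    }
    opencv.ROI(0, 0, width, height);      // reset the region of interest
  }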

Facial Representation: Once we have found the locations of the eyes, nose, and mouth, we look at the ratios of the distances between these features. We use ratios instead of absolute distances to minimize the impact of faces being at different distances from the camera. A combination of these ratios is then fed into a WekaData object to represent the face.
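
The ratio computation itself is straightforward; a minimal sketch is shown below (the particular feature points and ratios are illustrative, and the WekaData wrapper is omitted).

  // Compute scale-invariant ratios from the detected feature locations. The
  // specific ratios below are illustrative, not necessarily the ones we used.
  float[] faceRatios(PVector leftEye, PVector rightEye, PVector nose, PVector mouth) {
    PVector eyeMid = new PVector((leftEye.x + rightEye.x) / 2, (leftEye.y + rightEye.y) / 2);
    float eyeDist     = leftEye.dist(rightEye);   // normalizing distance
    float eyesToNose  = eyeMid.dist(nose);
    float noseToMouth = nose.dist(mouth);
    float eyesToMouth = eyeMid.dist(mouth);
    // Dividing by the eye-to-eye distance removes overall scale, so the
    // person's distance from the camera (mostly) cancels out.
    return new float[] { eyesToNose / eyeDist, noseToMouth / eyeDist, eyesToMouth / eyeDist };
  }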

Facial Clustering: Our original idea was to classify faces based on their ratings from hotornot.com. Unfortunately we got poor results with this, so we switched to a clustering method instead. We cluster all of the faces into three clusters; the data includes one reference face that we consider the “perfect face”. If a user’s face falls into the same cluster as the “perfect face”, we consider them “hot”. Of the two remaining clusters, the one with the lower average hot-or-not rating is considered the unattractive group, and the remaining cluster is considered average.
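
A clustering step along these lines, written directly against Weka’s API (the 3.6-era API current at the time) rather than our WekaData wrapper, might look roughly like this; the dataset handling and attribute names are illustrative.

  import weka.core.Attribute;
  import weka.core.FastVector;
  import weka.core.Instance;
  import weka.core.Instances;
  import weka.clusterers.SimpleKMeans;

  // Cluster the ratio vectors into three groups and note which cluster the
  // reference "perfect face" lands in; that cluster becomes the "hot" group.
  int findHotCluster(double[][] ratioVectors, double[] perfectFace) throws Exception {
    FastVector attrs = new FastVector();
    for (int i = 0; i < perfectFace.length; i++) {
      attrs.addElement(new Attribute("ratio" + i));
    }
    Instances data = new Instances("faces", attrs, ratioVectors.length + 1);
    for (double[] v : ratioVectors) {
      data.add(new Instance(1.0, v));
    }
    data.add(new Instance(1.0, perfectFace));   // the reference face goes in last

    SimpleKMeans km = new SimpleKMeans();
    km.setNumClusters(3);
    km.buildClusterer(data);

    // Whichever cluster holds the "perfect face" is the "hot" cluster; the other
    // two are split into "average" and "not hot" by their mean hot-or-not rating.
    return km.clusterInstance(data.instance(data.numInstances() - 1));
  }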

Interaction: We use the say command that is built into the OS X operating system to give the computer a voice. It can be called from within Processing using the following syntax:

  try {
    // Launch the OS X "say" command; the text after "say" is spoken aloud.
    Process p = Runtime.getRuntime().exec("say phrase");
  } catch (Exception e) {
    println("Unable to run say: " + e.getMessage());
  }

There are five distinct situations in which the program will talk; in each situation it picks from a list of appropriate phrases.
The five situations are:
• Classify a face as “hot”
• Classify a face as “average”
• Classify a face as “not hot”
• A face is detected but is too far away (tell the person to come closer)
• No faces are detected (ask for attention)

We had to build in some logic to determine which phrase to use, to make sure that each face is only evaluated once, and so on. This involved a number of timers and Boolean flags and is not really that exciting to talk about, but a simplified version is sketched below.
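
  // A stripped-down version of the phrase logic: speak only if enough time has
  // passed, and give each face one verdict. The phrase lists and timing values
  // here are made up for illustration.
  String[] hotPhrases     = { "Well, aren't you a treat." };
  String[] averagePhrases = { "You are thoroughly adequate." };
  String[] notHotPhrases  = { "Perhaps try a different angle." };

  int lastSpokeAt = 0;
  int SPEAK_COOLDOWN = 8000;          // milliseconds between utterances
  boolean faceAlreadyJudged = false;  // cleared when the face leaves the frame

  void judge(String label) {
    if (faceAlreadyJudged) return;    // evaluate each face only once
    if (label.equals("hot"))          speakFor(hotPhrases);
    else if (label.equals("average")) speakFor(averagePhrases);
    else                              speakFor(notHotPhrases);
    faceAlreadyJudged = true;
  }

  void speakFor(String[] phrases) {
    if (millis() - lastSpokeAt < SPEAK_COOLDOWN) return;   // don't talk over ourselves
    String phrase = phrases[(int) random(phrases.length)];
    try {
      Runtime.getRuntime().exec(new String[] { "say", phrase });
    } catch (Exception e) {
      println("say failed: " + e.getMessage());
    }
    lastSpokeAt = millis();
  }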

Final Results

The installation consisted of a video recorder, tripod, platform, speaker, and mirror. The mirror reflected the user’s image back to them while the video recorder acted as a webcam and became the judging eye. The speaker was placed behind the mirror to carry the program’s voice.

The program was given condescending phrases to speak depending on the rating it settled on. When no one was around, the computer would beg for attention: “Why won’t you pay attention to me? I am lonely, I need to look at someone, Everyone has left, I deserve respect”. This was added because many people would walk right by the installation, and it effectively caught their attention. The program drew a lot of interest. Many people were afraid to step within the radius the lens could see; they asked how it worked and what its purpose was. As an interactive installation, the project was successful in drawing in the public. Some people felt insulted, while others found the program humorous. It was set up in Mason Hall because of the flow of people between classes and the difficulty of getting permission from independently owned buildings on campus.

Code

Download the code here.

References

Hot Or Not. http://www.hotornot.com

Hefner, Jim, and Roddy Lindsay. “Are You Hot Or Not?” http://www.stanford.edu/class/cs229/projects2006.html

Eisenthal, Yael, Gideon Dror, and Eytan Ruppin. “Facial Attractiveness: Beauty and the Machine.” Neural Computation (2006).

Beauty Check. http://www.uni-regensburg.de/Fakultaeten/phil_Fak_II/Psychologie/Psy_II/beautycheck/english/index.htm

More media

http://adaptiveart.eecs.umich.edu/2010/blog/wp-content/uploads/2010/11/DSC02947.jpg

http://adaptiveart.eecs.umich.edu/2010/blog/wp-content/uploads/2010/11/DSC02941.jpg

http://adaptiveart.eecs.umich.edu/2010/blog/wp-content/uploads/2010/11/M4V02958.mov
