Kevin Cheng  

Weird Science

December 12th, 2003 by Kevin Cheng :: see related comic

Affective computing is the fusion of emotion and interaction. Try saying that out loud really quickly. It’s like a tongue twister for a geek mecca. Currently, the MIT Media Lab is working primarily on the detection of human emotions (especially ones such as anxiety, frustration or boredom) and on applications that use this data. In some areas, they are also looking into synthesizing (or imitating) emotion. Tim Bickmore recently developed a prototype to look into what types of bonds humans can form with their machines. Tom will discuss the everyday bonds that already exist with machines like robotic pets. I’d like to look at how emotions can or cannot be applied to existing applications.

Emotion is a very subjective term. Merriam-Webster defines it as:
a : the affective aspect of consciousness : FEELING
b : a state of feeling
c : a psychic and physical reaction (as anger or fear) subjectively experienced as strong feeling and physiologically involving changes that prepare the body for immediate vigorous action

A few words strike me immediately in that definition: “state”, “feeling”, “psychic”, and “subjectively”. All of them suggest aspects that a computer would have difficulty analyzing in any useful form. Currently, the predominant method of detecting emotion is biometric: measurements of your sweat, heartbeat and other subtle biological signs that even we ourselves are not aware of. The idea is similar to that of old lie detectors and, like lie detectors, these systems are mostly accurate but make mistakes.

This inaccuracy immediately places affective interfaces outside the realm of high-stakes systems such as military applications or aircraft, where the cost of a single mistake can be quite high, both monetarily and in lives. Imagine, if you will, a military aircraft with a heads-up display (HUD) that reacts to your stress level, or even tries to keep you alert when it detects boredom. Even today, numerous accounts exist of commercial pilots fighting with fly-by-wire cockpits for control of the machine when the situation at hand is misinterpreted. Like Game 3 of the Kasparov vs. X3D Fritz chess match a few weeks ago, machines sometimes cannot see the danger in a situation that is plainly apparent even to an untrained human eye.

So if military applications are not appropriate, what about the other extreme? Entertainment, and video games specifically, could probably use affective computing in a much more feasible way. Experiments have already been attempted, again at the MIT Media Lab, that feed the player’s startle response into their avatar in a modified version of Quake II: when the player is startled, their in-game character jumps back. Creatively speaking, the possibilities are vast. From creating more intelligent agents that interact with you in a virtual world, to simply modifying your character’s appearance, affective computing has the potential to enrich the gaming experience.
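To make the idea concrete, here is a minimal sketch of how such a biometric-to-game mapping might work. This is purely illustrative, not the Media Lab’s actual implementation: the signal, window size and threshold are all invented assumptions, with a spike in a skin-conductance-style reading standing in for the player’s startle response.

```python
# Hypothetical sketch: map a spike in a biometric signal (e.g. skin
# conductance) to an in-game avatar reaction, in the spirit of the
# modified Quake II experiment described above. The window size and
# z-score threshold are assumptions chosen for illustration.

from statistics import mean, stdev

def detect_startle(samples, window=10, threshold=3.0):
    """Return True if the newest sample spikes above the recent baseline.

    `samples` is a list of biometric readings, oldest first. The last
    reading is compared against the mean and standard deviation of the
    preceding `window` readings; a z-score above `threshold` counts
    as a startle response.
    """
    if len(samples) < window + 1:
        return False  # not enough history to establish a baseline
    baseline = samples[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False  # flat baseline; no meaningful z-score
    return (samples[-1] - mu) / sigma > threshold

def avatar_reaction(samples):
    """Translate the biometric stream into a game event."""
    return "jump_back" if detect_startle(samples) else "idle"
```

Even in this toy form, the design question the article raises is visible: the threshold decides how often the game misreads an ordinary fluctuation as shock, which is exactly the kind of error that is tolerable in a game and intolerable in a cockpit.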

The applications between the extremes are always the more difficult ones to evaluate. Some have looked at using affective agents as teachers, using biometric data to determine whether you are bored or frustrated and adjusting the content accordingly. Similarly, the idea has been put forth to integrate the technology with help agents like Microsoft’s Office Assistant. I see a number of potential problems with affective computing in day-to-day applications.

Firstly, as Webster explained quite succinctly above, emotions are very subjective. I have enough trouble understanding the people around me. Even if I knew, with absolute certainty, how a person was feeling, I wouldn’t know what the right response was. “I’m bored,” says my girlfriend. “Let’s go to a movie,” I respond. “What? I don’t feel like a movie at all. I’m sick of watching movies,” she exclaims. You get the idea. How an affective agent could presume to know my emotions AND respond to them appropriately, I don’t know.

Emotions are also often tied to a number of factors. When I am stressed, it’s rarely due to one specific source. I’m stressed because a paper is due, my bills are unpaid, a comic strip needs drawing and my mother’s birthday is approaching, all at the same time. Thus, when I work on a computer, it’s quite easy for a system to detect frustration and wrongly assume causality between the application and my emotions. Even without external factors, we like to think of ourselves as multi-taskers despite being notoriously bad at it. Suppose you are working on two Word documents, reading e-mail and responding to three instant messages while running an affective French tutorial. The affective agent detects a level of frustration, because you just received an instant message informing you that you missed a meeting you were never told about, and assumes you’re frustrated because you didn’t quite understand that last grammar tip. It proceeds to go back and explain the concept to you again, further increasing your frustration.

Another factor that comes into play is the existing relationship one has with computers. Desktop and laptop computers in particular have a very different relationship with the user than embedded computers such as the one inside Aibo. Many of us treat our PCs as tools, much like a hammer or a drill: I use mine to get my work done. With the possible exception of iMacs, computers generally don’t form the kind of bond with the user that inspires them to give the computer a name, for example. Reading emotions that we cannot even read ourselves could be viewed as a breach of this gulf we have set up. Users may feel their privacy has been violated, because emotions are often the one thing you can absolutely keep to yourself. As some products have illustrated, this gulf can sometimes be overcome with design, but until the majority of the market views computers as more than the tools they are, a trust barrier will always exist.

Affective computing is certainly an interesting field of study. Even if affective agents never see commercial use, the research could give incredible insight into ourselves. Like most technologies, I imagine the entertainment sector will be the first to effectively use any innovations that come from the research. Whether they can be used for more everyday applications depends on whether we are able to change our mental model of how we view computers and the way we work with them.

One Response to “Weird Science”
Peter Centgraf wrote:

A side comment for sure, but I can’t resist. I’ve named every computer I’ve owned since my first Mac at age 15. Somehow I could never ascribe a personality to my PCs like I did with my Macs, though. I could easily project emotional affectations upon my touchy PowerBase clone (Uranium) or my spunky white iBook (Shiny). Perhaps one reason that Apple’s machines have maintained their mystique is because they seem capable of mirroring their owners in ways that PCs cannot. My Mac provides a space for my tools, but it also is an environment in which I operate and a context for my creative endeavors. In that way it is much more like a home than a hammer.

Personally, if my machine really had the personality traits I imagine for it, the situation would quickly go from comfortable to creepy. Can you imagine living in a bipolar apartment? No thanks.


OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.