Agents and Decisions
I’ve been reading a thesis on Clippy, the Microsoft Office Assistant, written by an ex-freshman dormmate of mine, Luke Swartz (not to be confused with the paper on folksonomies written by another ex-freshman dormmate, Adam Mathes…it was quite the dorm). Luke offers a detailed critique both of Clippy and Microsoft’s other agents and of computer agents in general.
He begins by explaining CASA (Computers As Social Actors) theory–the finding that “people instinctively treat computers…as if they were real people”. Luke argues that Clippy, as an explicit representation of an unconscious response to computers, fails by trying too hard. My reaction is that people want to talk to computers, not to little people _inside_ a computer.
Next I noticed his discussion of “anthropomorphic dissonance”–the gap between “expected behavior given an agent’s appearance and its actual behavior”. Because the assistant accepts natural language, a user expects it to understand every sentence they enter, when the technology is simply not up to the task. The same thing happened with the Polar Express movie, where the characters were _close_ to human-looking but not quite right, which made them creepy.
However, while the technology may not have been up to snuff for Microsoft’s Clippy, it has progressed significantly since then, and new technologies promise to make natural language worth attempting again. Consider Google Suggest, which draws dynamically on a much bigger and smarter computer to guess what you’re trying to say–the same approach could be used to interpret language in all its permutations. Even the Polar Express could be rescued with some simple tweaking and a little understanding of what scares people and what makes them comfortable (most animated movies apply this kind of human post-processing; Polar Express was an exception).
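To make the Google Suggest idea concrete, here’s a toy sketch of suggest-as-you-type: rank previously seen queries that share the user’s prefix by how often they occur. The query log and the `suggest` helper are hypothetical stand-ins, of course–the real system works against a vastly larger corpus and smarter ranking.

```python
# A toy sketch (not Google's actual system) of suggest-as-you-type:
# rank known queries that share the user's prefix by how often they occur.
from collections import Counter

# Hypothetical query log standing in for the "much bigger and smarter computer".
query_log = [
    "how do i make a table of contents",
    "how do i make a table",
    "how do i mail merge",
    "how do i make a table of contents",
    "insert page numbers",
]

counts = Counter(query_log)

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return the most frequent logged queries that start with `prefix`."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(suggest("how do i ma"))
# ['how do i make a table of contents', 'how do i make a table', 'how do i mail merge']
```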
The most interesting part of the paper was the discussion of agent behavior and etiquette. Under CASA theory, a conversational interface must act like a kind, conscientious friend if we are to accept it as a work partner (see Milton Glaser’s #1 lesson: “You can only work with people you like”). The Office Assistant fails in several ways here–it looks over your shoulder constantly while you’re trying to work; it interrupts you mid-task; it asks you the same questions over and over; and it doesn’t learn from its experiences. A human with these traits in the assistant role wouldn’t last long…
An agent with real manners, however, could be very helpful. I’m tempted to buy a “Miss Manners” book and read it to my sites; when they do something “rude” or “uncouth”, it’s time to redesign! I’m also newly interested in those “a friend is…” email forwards–cheesy as they are, a friend like the one they describe would be much more likable than Clippy.
A final observation came from a seeming dichotomy: users like interfaces that are easy to use, but they don’t like interfaces that try too hard to help them. The two sound similar, yet an interface that offers help implies we don’t know much, while an interface that feels “easy” boosts our self-esteem–we must be smart if it’s so “easy” for us. Luke notes that “most advanced users…don’t need an ever-present help agent, and thus they may perceive the Office Assistant as trying to lower their status.” No one needs their computer insulting their intelligence.
This field is fascinating to me because it relates to my recent interest in “web site personalities”. A web site should act like a good friend, I’ve argued, and a visit to the site should be like a conversation with that friend–if it asks you a question, it should ask in a friendly tone; if something goes wrong, it should apologize and explain itself; if it recommends something, it should do so in a selfless and generous way.
This is an extension of an idea from the guys at 37signals, who say that “the site should be like a member of our team; it should be someone we would want to hire”. Check out the sign-up page for their new web app, Ta-Da List, to see what they mean. It uses a very conversational, friendly, and explanatory tone: “What’s your full name? And your email address (We’ll never share, sell, or use your email address in irresponsible ways.); Enter your email address again; Now pick a password; Type that password again; You’re done!” Agents seem like a powerful interaction tool, especially when expressed in an implicit, non-anthropomorphic way.
The most damaging attack I’ve seen on agents as an interface came from Clay Shirky in an article called “Why Smart Agents are a Dumb Idea”. Shirky argues that agents are bad because they 1) get worse as the task gets bigger, 2) “ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking)”, and 3) interfere with an efficient market of information (agents can’t keep up with dynamic changes).
But hearing Malcolm Gladwell speak today, I kept thinking “maybe we’re not so good at ‘thinking’ and making decisions after all”. Gladwell kept citing examples where our oh-so-weak human biases affect our decision-making in incredible ways, and I felt myself longing for an impartial computerized agent to make the decisions for me. Shirky’s example of a computer not being able to decide “why 8 hours between trains in Paris is better than 4 hours between trains in Frankfurt but 8 hours in Peoria is worse than 4 hours in Fargo” carried less weight once I realized our bias toward Paris or against Peoria could be completely unfounded. Gladwell’s point was not that subjective choices should be eliminated, just that they shouldn’t be the _only_ factor in our decisions. An agent could help with that.
I could be off on my own here, as I’ve often been alone in my embrace of an autotelic, freeform, flexible, and seemingly schizophrenic lifestyle–the kind that would be mediated by a third-party computer agent. But just as constraints can set us free from chaos, a well-designed, non-anthropomorphic computer agent that respects the conventions of friendship and has impeccable etiquette could be an incredible aid to life in this new working world. Luke concludes his paper by saying that “by better understanding how we interact with agents, we may better understand how we interact with each other”. That’s a great reason to keep working in this field.