Kevin Cheng  

Computer Knows Best

April 21st, 2005 by Kevin Cheng :: see related comic

I could spend time talking about in-car navigation, but I feel like we've [done that][1] to some degree. So instead, I'll talk about iVan's mention of "Computer Knows Best".

Several months ago, I was at a clothing store buying some clothes, as one is prone to do in clothing stores. At the cashier, the total came out to some number lower than I expected. The sales clerk looked at the total and shrugged: "It says it's on sale, so I guess it is."

This little episode got me thinking about the reverse case. If you've ever taken an HCI class, you'll likely have encountered either a power plant or an aircraft failure case study. In the majority of these studies, somewhere along the way is an operator not believing the computer: "My fuel is dropping much faster than I'd expect; it must be a faulty gauge."

How is it that, in one scenario, there is complete faith in the computer system’s validity and in the other, a complete lack of faith even in the face of potential disaster?

Perhaps it is the severity of the situation. Obviously, an airborne plane dropping fuel by the buckets is much more serious than the Gap losing $10 per shirt because of a misconfigured item. For the clothing store, even if the computer was mistaken, little harm is done. But then if the outcome is potentially severe, wouldn’t it be wise to assume the computer is accurately alerting you?

This leads me to think the severity is indeed a factor, but not in the way we might expect. The more severe a situation, the more control we feel we need. Relinquishing control to a computer in such a situation, even for reporting errors, makes us uncomfortable. So rather than paying more attention to computers in a severe situation, we may actually pay less.

On the way to CHI this year, I had an opportunity to chat with a pilot. He asked what kind of conference I was going to, and soon after I explained the field, he got to talking about the systems he uses in the cockpit. One part of the system I was particularly curious about was the autopilot, or more specifically, auto-landing. Landing is one of the more difficult parts of flying a plane, especially in challenging weather conditions, and I'd heard that modern systems are capable of landing without any pilot assistance. The pilot agreed, and added that the system could land planes under dangerous conditions such as dense fog, when no manual landing would be possible.

When asked how often the auto-landing is actually used, however, he told me it is engaged about once every six weeks to satisfy a regulatory maintenance check of the system - and only on a clear day.

Once again, relinquishing control is hard when the task is so crucial. Despite knowing, logically, that the plane could land itself, the pilot chooses never to use the system unless he absolutely has to.

Another potential reason for the apparent difference in reaction between the pilot and the clothing store clerk is laziness. One blindly trusts the computer; the other refuses to trust it, even with potentially dire consequences. Both are choosing the path that lets them do nothing.

Knowing when people are more likely to trust computer systems is of critical importance in HCI. We don't need users to trust computer systems completely, but we should always keep a few key questions in mind:

- How much do I need my users to trust this system for it to be effective?
- How can the system facilitate that level of trust?
- Are my users lazy bums?

[1]: http://www.ok-cancel.com/comic/28.html “iFly”

8 Responses to “Computer Knows Best”
Sjoerd Visscher wrote:

It’s not about trust, but about responsibility. People prefer the responsibility to be in the hands of a person instead of a computer, except when that person is yourself.

The sales clerk loves his computer because he is no longer responsible for calculating the correct price. It's not that he trusts the computer, but that he is confident he won't get fired.

Bob Salmon wrote:

Something that I think might be an interesting application of "computer knows best" in car navigation systems is handling partial blockages due to accidents. For instance, if an accident has closed 2 lanes of a 3-lane road, some traffic can still get through, but the rest could be diverted onto smaller roads.

If all the cars, lorries etc. had some kind of in-vehicle system, and these systems were centralised in a Big Brother kind of way, X% of the vehicles could be directed onto the alternative route and the remaining 100-X% could stay on the main road. This might be possible in a low-tech way with a poor police officer waving at passing vehicles, or maybe Big Brother is controlled by the police anyway.
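As a rough illustration of the X% idea - not anything from the comment itself - here is a minimal Python sketch of a hypothetical central dispatcher randomly assigning each arriving vehicle to the main road or the diversion. The "2 of 3 lanes closed, so divert roughly 2/3 of traffic" policy is invented purely for the example.

```python
import random

def assign_route(diversion_fraction: float) -> str:
    """Assign an arriving vehicle to a route.

    With probability `diversion_fraction` the vehicle is sent onto the
    alternative route; otherwise it stays on the partially blocked main road.
    """
    return "diversion" if random.random() < diversion_fraction else "main road"

# Invented policy: an accident closes 2 of 3 lanes, so divert roughly 2/3 of traffic.
counts = {"main road": 0, "diversion": 0}
for _ in range(10_000):
    counts[assign_route(2 / 3)] += 1
print(counts)  # roughly {'main road': 3300, 'diversion': 6700}
```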

When people get into cars, they change their behaviour (or perhaps driving just brings out traits they always had), e.g. slowing down to stare at a crash as they pass, which increases the chance of another crash and holds up the traffic unnecessarily. They also have a herd mentality, and either all turn off or all stay on when there's an accident.

Sylvie Noel wrote:

There's also a huge difference between the amount of training required to become a store clerk and the amount required to become a pilot or a power plant operator. For the latter, knowing that they are experts may make them overly confident that they know better than the computer.

Pilots may continue to land the planes themselves because they relish the challenge (here we also have the problem of taking away too much complexity from a person’s job and making it dull).

Power plant operators may not believe that a level is dropping too fast because they've been exposed to literally hundreds of hours of that gauge working normally, and it is easier for them to believe that the reading is somehow wrong than that the level really is dropping that quickly.

Wundt wrote:

There is one thing common to all of the scenarios you describe: the tendency we, as humans, have to take the easier path and avoid looking foolish (e.g. Cooper's concept that 'not looking stupid' is a driving goal for every user). The clerk did not want to question the computer because that would require more work and might result in the boss saying, "Of course it's on sale! Why are you bothering me?" Likewise, for the pilot or the power plant worker, believing the computer would mean having to 'step up' and fix a problem; it was easier to believe the computer was in error or that the problem would fix itself. Add to that the fear that if they made a fuss, they might look like alarmists.

History is full of people who have watched as the world burned around them. Think back to your Social Psychology classes, and the research on group dynamics and jury deliberations. This is not a new phenomenon, only a new medium.

DAvid Loiue wrote:

another point to remember from HCI classes is 'joint cognitive systems' - that humans and machines are working together here and should do so in a human-centered fashion. in both the scenarios you talked about (sales clerk / dropping fuel gauge) the confidence in the information communicated by the system is binary - it's either all wrong or all right. this isn't really how humans work - we talk things out and express degrees of confidence. what if both systems provided some indication of the level of confidence in their information - 'fuel leak | confidence: 90%' or 'price $14.99 | source: authoritative corporate database'? This would make the interaction more human-centered and provide more information from which operators could make their decisions.
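As a rough sketch of what such confidence-annotated output could look like - assuming a hypothetical Reading structure, with the confidence values and sources invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One piece of information the system reports, plus how sure it is and where it came from."""
    message: str
    confidence: float  # 0.0 - 1.0
    source: str

    def display(self) -> str:
        # Render the reading the way the comment suggests: value | confidence | source.
        return f"{self.message} | confidence: {self.confidence:.0%} | source: {self.source}"

# The two examples from the comment, with invented confidence and source values.
alerts = [
    Reading("fuel leak", 0.90, "flow-rate sensors"),
    Reading("price $14.99", 0.99, "corporate database"),
]
for alert in alerts:
    print(alert.display())
# fuel leak | confidence: 90% | source: flow-rate sensors
# price $14.99 | confidence: 99% | source: corporate database
```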

julian wrote:

Another part of the problem may simply be how often the users actually witness computers malfunction.

It seems to me you're assuming that the power plant operators had never encountered a gauge malfunction before. Chances are the operators had in fact encountered gauge malfunctions, but had never encountered disasters leading to near-meltdown conditions.

Expert users of computer systems would draw upon their prior experience with computer malfunctions and not trust the computer. Non-expert users would assume any computer malfunction was simply their fault anyway, and when it's not mission-critical, they won't bother to investigate further.

Until everyday computers are more reliable, I’m not surprised that experts in mission-critical situations don’t trust computers. I don’t think I would, either.

daniel wrote:

I'd just like to add that another reason why the automated landing system on commercial jets isn't used very much is that there might be certain rules about when a pilot may use it (either from the airline or from the FAA). It's not necessarily the pilot's decision, unless the situation is critical and automated landing is required.
I've heard that some fighter jets have similar systems, which can perform difficult carrier landings, but that it's been decided higher up that pilots just shouldn't use them.

In situations where the risk is great and the computer is optional - like landings, and unlike the Gap's faulty price readings - it's nice to know that the pilot will be able to land the plane on his/her own, because the pilot hasn't gotten used to letting the computer do it. It reminds me of a quote from an old movie:
"Oh, great. Computers will start thinking and the people will stop."
It's not so much that trust between the user and the computer is lacking, but rather that trust between people is simply greater, no matter the computer, its system, or its purpose.

Rob wrote:

Excuse the comment from the future; I'm reading back through the archives. An interesting analogy to the aircraft autoland is the sophisticated automatic docking system built into Soviet/Russian Progress (unmanned) and Soyuz TM and TMA (manned) spacecraft. It was designed, installed… and then not used; instead the Soyuz was docked to Mir by the pilot, and the Progress was docked by an operator on the space station. Then, a few years ago, a Progress hit Mir while trying to dock, doing some damage. Since then, all Progress craft visiting Mir and the International Space Station have used the autodock.

On a related note, the only time the Soviet space shuttle went into space, it was unmanned. I find the prospect of something like THAT autolanding a bit terrifying, to be honest.


OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces - usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.