Kevin Cheng  

Testing the Test

June 4th, 2004 by Kevin Cheng :: see related comic

As I was completing a design project a couple of months ago, I wanted to conduct a few tests. In return, I also participated in several tests. As the tests were being conducted, I often found myself struggling to refrain from providing feedback about the testing methodology, wording and techniques instead of concentrating on the tasks administered.

In academic psychology programs, where more of the experiments involve people, the majority of subjects are other students in the psychology department, presumably due to convenience and cost. Our discipline is closely tied to psychology, and often we fall into a similar trap. The problem is especially evident in academic environments, or in corporate ones with heavy time and budget constraints.

We often talk about budget usability testing, and the difficulties of finding representative users to test for us. As HCI experts, we are regarded as the “user representative” in some environments because we’ve actually interacted with users through contextual inquiries, interviews or some other means of research.

Doesn’t that make us good usability test candidates when we fail to find suitable users? After all, we know the users best. What about when we don’t know the users - when we participate in a test for a project we are unfamiliar with?

It’s true that we often do have the most insight into the users’ context. We can assist a development team in making decisions without making a phone call to a user every 15 minutes. However, an expert can only see the world through the eyes of an expert. When was the last time you criticised the interface of a software application, or a ticket machine, or the placement of signage? Probably not too long ago.

That’s not to say we can’t evaluate a system, of course. That’s part of our expertise. We use heuristics, we can perform cognitive walkthroughs of a system with the user context in mind, and we can run a plethora of other analyses. But sitting as a subject in a usability test is something we’re not that good at, even for projects we are unfamiliar with.

One situation where it’s useful to involve a peer is piloting a usability test. In this case, we actually want the meta-feedback on the structure, style, order, and content of the test in addition to ensuring that the test is feasible.

In most cases, however, usability people make bad usability participants.

6 Responses to “Testing the Test”
jmalm wrote:

If you find yourself too much of an expert, try doing something to make yourself dumber — get loaded, go without sleep for 4 days, or wear out-of-focus glasses while taking the test. I think you’ll find your “expertise” quickly deteriorates into something much more usable by the tester.

This behaviour can of course be simulated. Some HCI and usability people at my company use the “act like a dumbass” technique to elicit answers to obvious questions that would not normally be asked.

Jay Zipursky wrote:

As a usability guy, I find I can point out potential problem areas, but I cannot predict how actual end-users will use or mis-use the system. They always surprise me in the end.

I would never participate in a usability test as a user-rep. I don’t fool myself into thinking I know exactly how they think, especially when approaching something new.

As for “acting like a dumbass”, I guess it depends on what kind of product you’re developing. However, it’s insulting to your users (who should have your respect) to frame it that way. My users are experts at their jobs no matter what their IQ is.

jmalm wrote:

In my line of work (which is the design and manufacture of industrial equipment), the approach most often taken is: if anything can go wrong, it often does. We have had some of our equipment returned from field sites with blatant disregard for the field training or the labels on the product itself. Untrained people are responsible for many of the problems we see (our products require training, given the specificity of their intended usage and the potential hazards involved in improper usage).

My comments were not meant as an insult to either test subjects or usability experts. I do not think our users are dumb; that is a preconception to which many engineers in my field fall victim (myself included, in some situations during the long and often late-started usability design process). My use of “dumb” stems from the times when I do something unexpected to a machine and the design engineer runs over and says to me, “What, are you stupid or something?” I of course say no, but that doesn’t change his/her opinion right away.

RoskeHF wrote:

When not to use yourself as a test subject in usability tests:
(1) When the Application Interface is intended for domain experts of a field that you are not an expert in.
(2) When you as a “normal population” usability expert, like the other 85% of professionals, suffer from an inability to see the world from the perspective of others.
(3) When you do not fit the general demographic characteristics of the expected user population, e.g. where most expected users will be 35-to-45-year-old Hispanic, female, high-school-educated, working mothers of 2, and you are a highly educated Asian male aged 26 to 35.
There is no substitute for the REAL thing. Usability experts as ‘proxy users’ are only better than nothing at all if they resemble the end-user population in as many important facets as possible. The most important of these are type of work or profession, age (experiences and expectations with technology and services), educational and cultural background, and language context.

Rachel B wrote:

There are many areas where our profiles match, or at least overlap, with the user group — my own profile would fit consumer, spouse, mother, and grandmother consumer profiles, as well as consumer for all sorts of “boomer-aged” products, services, and technologies. As a sometimes custodial grandparent, I have insights into toddler/pre-schooler/grade schooler computing habits and expectations, as well.

Does that make me a good tester? Well, yes and no. In some cases, I can definitely fill the role of part of a market segment. In other cases, I’m definitely the anomaly. At the risk of sounding sacrilegious, what can make an HCI professional a good tester is the intuitive sense of what “should” be. When someone drawing on decades of workforce experience senses that a site won’t support business processes, that’s a good “first-round” test. When the mouse keeps wandering to a certain area of the screen because the answer should be found there, or when you think “it must be here somewhere” when information seems to be missing, that becomes a de facto test. It becomes a problem when we stop the test process there, and have only ourselves as subjects.

Kevin Cheng wrote:

At the risk of sounding sacrilegious, what can make an HCI professional a good tester is the intuitive sense of what “should” be. When someone drawing on decades of workforce experience senses that a site won’t support business processes, that’s a good “first-round” test.

Rachel, I agree, but this isn’t really user testing. What you’re talking about is more of an expert evaluation, which, as you mention, is a valuable filter prior to user testing: discover the obvious issues before you spend money testing users, only to have them run into the same brick wall in every test and never go deeper. I still feel that as actual test participants, that very same ability to evaluate expertly is what hinders us from becoming representative users, even when our backgrounds align.

OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces - usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.