Nothin’ But a UCD Thang Pt.2

February 13th, 2004 by Kevin Cheng :: see related comic

HCI is not a unique discipline when it comes to guru disagreements. Psychology, a field from which we draw much of our expertise, is a great example of one rife with opposing views. Can personality be used to predict job satisfaction? Guion and Gottier say, “Not at all”; then Barrick and Mount come back a couple of decades later and say, “Yes we can.” Sound familiar? “Download times are important,” says Nielsen. “No they’re not,” responds Spool. In the end, we’re left to form our own opinions based on the data presented to us. At least in psychology, they seem to have agreed on the job satisfaction question.

Usability and HCI could learn something from that. We are in a young and developing field, one that deals intricately with the unpredictable facets of human thought, emotion, perception, and ultimately cognition. Ask an experienced HCI practitioner for advice and the answer is inevitably, “it depends”.

  • How many top level navigation items are optimal? It depends.
  • Should I use sound? It depends.
  • What colours are ideal for getting attention? It depends.
  • And so on and so forth.

With the exception of a scarce few rules such as Fitts’s Law, very little in HCI is absolute. Gurus try to answer the questions on everyone’s mind to the best of their ability and research, but because the right answer is “it depends”, they will inevitably fail to give a definitive response that works for everyone. However, this doesn’t mean what they’re doing is not useful.
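
As an aside for readers who haven’t met it, Fitts’s Law predicts pointing time from target distance and size. Here is a minimal sketch in Python using the common Shannon formulation; the constants a and b are device-specific and have to be fitted empirically, so the values below are illustrative only:

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time (seconds) for a pointing task.

    Shannon formulation: T = a + b * log2(distance / width + 1).
    The constants a and b must be fitted per input device; the
    defaults here are purely illustrative.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A wider target is faster to acquire, even without moving it closer:
print(round(fitts_time(distance=400, width=20), 2))  # 0.76
print(round(fitts_time(distance=400, width=40), 2))  # 0.62
```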

On the contrary, the gurus’ research has tremendous value precisely because it answers some of the questions that come after “it depends.” How many top-level navigation items are optimal? It depends: Are you building an intranet? How many users are there? Are they experienced users?

Answer these questions and you may find that someone has done research with the same, or at least similar, parameters. Perhaps a study UIE did on navigation involved intranets specifically, but their user sample was less experienced than yours.

But this brings us back to the problem discussed earlier: gurus are not sharing all the information. While they do publish articles, the lack of evidence I discussed raises issues not only of credibility but also of context. Let’s assume that all their data was scientifically researched and statistically significant. One could argue that publishing the research details is a waste of time because most readers (or Jakob might say 99% of readers) only want the snippets and don’t have time to read full publications.

However, publishing the methodology gives readers the means to judge how relevant the findings are to their particular problem. Given that there are no absolutes, we must formulate solutions based on everything we know. The more we know, the better. Better still would be to have all of this information together in one central repository.

We have the ACM Portal and the HCI Bibliography, both great resources. But suppose we had a site where these gurus and other researchers could post their findings on specific topics instead. Consider a use case where a practitioner needs to know about any research done on navigation, for example. Obviously, they could go to ACM and find all papers matching this keyword. ACM is a professional organization, however, and papers must meet a certain standard to be published. The quality is higher, but the barrier to creating such publications is also high. That’s partially why so many consultancies publish white papers on their own websites.

My solution would be a repository that includes three crucial facets (a rough sketch of what one entry might hold follows the list):

  1. Results and methodology are both made available. Results should be viewable on their own, much the same way UIE and Alertbox articles are now, but, if desired, the method behind the madness is accessible.
  2. Gurus are held accountable through a public feedback loop. The wonderful thing about the web is that you can always get the last word on your own website. We open our site and articles to comments from our readers because we enjoy the discussion, but people certainly think more about what they are saying when they are open to public responses. Further, a feedback loop would permit practitioners to ask for more details on a study: “Nielsen, I’d like to know if you have the demographic distribution of the users you sampled, as I’m working on a related project with a very specific audience.”
  3. Low barrier to publication. Let anyone post their study to the repository.
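
To make that concrete, here is a minimal sketch of what a single entry in such a repository might hold. Every name and field here is hypothetical illustration, my own invention rather than any existing system’s design:

```python
from dataclasses import dataclass, field

@dataclass
class Study:
    """One entry in the hypothetical research repository."""
    title: str
    author: str              # anyone may publish (facet 3)
    results_summary: str     # readable on its own (facet 1)
    methodology: str         # full details, one click away (facet 1)
    comments: list[str] = field(default_factory=list)  # public feedback loop (facet 2)
    ratings: list[int] = field(default_factory=list)   # community quality control, 1 to 5

    def average_rating(self) -> float:
        """Community-assigned usefulness/validity score."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

entry = Study(
    title="Top-level navigation on intranets",
    author="A. Practitioner",
    results_summary="Seven top-level items worked best for our sample.",
    methodology="Task-based tests, 12 novice users, three intranets...",
)
entry.comments.append("Do you have the demographic distribution of your sample?")
entry.ratings.extend([4, 5, 3])
print(entry.average_rating())  # 4.0
```

The ratings field is what would make the low publication barrier survivable, which brings us to the problems below.
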
Of course, there are inherent problems with this pipe dream. First, gurus would have to take the time to respond to feedback for it to be of any use; then again, if they posted all the relevant data in the first place, that should not be much of a problem. Secondly, a low barrier to entry also means low quality control. This problem might be mitigated with a rating system that permits the community to rate a study’s usefulness or validity. Equally, perhaps users could rate the gurus themselves on the quality of their studies and their responses.

Other ideas might include a subscription charge to help pay for the repository, and a commission to consultancies for each article read by users.

The studies that consultancies, gurus, and other organizations conduct are invaluable to HCI and usability specialists. Our field has few concrete answers and truths, and it’s left to us to discover the closest thing to the right answer for the problems we face. Alone, or even within a single firm, very little information can be collected in a limited project time frame. Yet basing our assumptions on studies we have little understanding of could be even more dangerous than acting on no data at all. A central repository, storing all studies and related details, could help create a more consistent landscape across the industry.

8 Responses to “Nothin’ But a UCD Thang Pt.2”
Chris McEvoy wrote:

Thank you guys, I am truly honoured. And it makes me look a couple of stone lighter!

Meri wrote:

Isn’t there an additional factor in whether or not such a repository would be feasible? Surely the gurus will have to assess whether or not it will affect their competitive advantage (discussed last week)? Do you think they might be worried that if they publish everything, another guru can just as easily hijack their research and draw their own conclusions? I suppose if everyone did this it could be a value-add, but if it starts to redistribute the advantages of increased funding, there might not be as many fans in the guru domain…

KC wrote:

Yes, that’s definitely an issue. Although at the moment, you can still buy research papers from your competition fairly easily, so if this repository were a subscription-based model, it wouldn’t really be all that different.

Is it realistic that gurus would ever agree to such a system? Unfortunately not. I called it a pipe dream and I do believe it is one. =(

Jakob Nielsen wrote:

(KC: In the interest of showing both sides of a story, here’s Jakob’s e-mail reply to me, reprinted with permission)

Thank you for putting me in a comic strip. I particularly liked the T-shirt you designed for me in the NN/g logo color.

I am not sure I agree with KC’s comment that there needs to be more documentation for my research. NN/g has published 3,326 pages of research reports, with 1,961 screenshots showing what 688 users in 7 countries did with the designs we tested (281 websites and intranets, as well as 151 emails). There’s always a methodology chapter and our customers would complain that we were wasting their time if we were any more verbose.

KC wrote:

And here was my response to him:

“Hi Jakob,

Thanks for your feedback. I have little doubt that the research performed for your customers is more thoroughly documented than the work I refer to (i.e., Alertbox). I understand the need for Alertbox articles to be more concise because they a) cater to a different audience and b) are more summaries than reports on single studies. However, very specific numbers are often quoted in those articles, and because you are an undisputed leader in the field, many take them as gospel truth.

Unfortunately, many do not have the ability to make correct judgments about how relevant your points may be to their specific issues. While this problem is no direct fault of your own, I feel more data would help people determine the context of your summaries.

Having said this, I invite you to stop by OK/Cancel next week, as I put the onus back on the readers and educational institutes as well. Not only do the answers have to be provided by authorities such as yourself, the readers need to know the right questions to ask.”

(I’m referring to Pt 3’s article, yet to come, btw. These e-mails were exchanged between Pt 2 and Pt 3.)

Tom Chi wrote:

Here is my response to both of them:

“As per KC’s article, I tend to see usability as more of an engineering discipline than a science. Clearly there are some underlying connections to cognitive psychology, but the day-to-day reality of creating interfaces is closer to the world of engineering trade-offs than it is to analytical scientific optimization.

Thus the conundrum: while scientific laws hold in all cases, engineering best practices often apply only to the task at hand. E.g., building a bridge over a river is significantly different from building one over a bay or over a lake of molten lava…

The web in 2004 is incredibly different from what it was in 1994, and both are incredibly different from 1984 (FidoNet, BBSs, floppies on sneakernet, etc.). Application development has changed in similar proportion. The upshot of all this change is that our engineering best practices can quickly fall by the wayside (e.g. optimizing for web-safe colors, 640x480, dialup), leaving even the most diligent practitioners in the dark as to how to proceed.

I think the best thing about your Alertboxes is that you’ve continued publishing them and changed the focus of your studies to address the changing environment over time. The danger is that people often trot out older studies and, brandishing the numbers as scientific fact, try to curtail new directions and ideas. While many of these ideas may be poorly conceived, an important fraction of them are reactions and adjustments to emergent engineering realities. To lose these ideas is a shame, and I believe KC’s article calls for more context in part to highlight that our work is part of an engineering continuum and that even amidst the small community of research professionals, we have not yet settled on many scientific facts.”

Ronnie wrote:

Great site Tom and Kevin; I was just turned on to this by a fellow UE colleague. I haven’t left my computer for two hours (great quote, and arguably the beginnings of a statistical study on ‘site stickiness’).

Having had some email interactions with both Jakob and Jared (both of whom undoubtedly have no idea who I am), I can’t help but comment that at the end of the day, their arguments around scientific/quantitative methodologies don’t really keep me up at night. I like to say that usability work is both art and science. If I had to pick one, especially when we are talking about the web, I’d take art over science any day.

You brought up in an earlier column Jakob’s rule of 5 users to discover 85% of usability problems (I think you said 80%, but I thought it was 85… again, whatever). Your point was that this rule doesn’t seem to account for the size issue (a one-page site vs. Amazon.com). My response to this argument between Jakob and Jared… usability kills by degrees (the PhD kind).
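
For context, the five-user figure traces back to Nielsen and Landauer’s problem-discovery model: the share of problems found by n users is 1 − (1 − λ)^n, where λ is the average probability that a single user encounters a given problem. A quick sketch with their published average of λ ≈ 0.31 shows where the oft-quoted 85% comes from; the site-size dispute is essentially a dispute over whether λ stays that high on large sites:

```python
def problems_found(n_users: int, detection_rate: float = 0.31) -> float:
    """Nielsen/Landauer model: share of usability problems found by n users.

    detection_rate (lambda) is the average probability that one user
    encounters a given problem. 0.31 is the published average across
    their studies, but it varies by site and task; a huge site with a
    lower lambda would push the whole curve down.
    """
    return 1 - (1 - detection_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(n, round(problems_found(n), 2))
# 1 -> 0.31, 3 -> 0.67, 5 -> 0.84 (the "85%"), 10 -> 0.98, 15 -> 1.0
```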

I’ve tested HUGE sites with 5 users and been very successful in identifying many problems. I’ve also had remote surveys of 400 people done on the same site, and discovered that 5 users pretty much matched the 400 folks. I’ve also tested with 10 users and learned more than with 5, but less than with 15. Ever a lover of analogies: the more monkeys that throw poo, the stinkier the school bus on the ride home. But it only takes a few well-flung turds to keep the kids away from the monkey cage.

I doubt an Amazon-size site would, after 91 users or whatever the real magic number is, toss in the whole enchilada and start from scratch. I’ve never seen a business come even close. Better to apply good usability methodology and ITERATE. Yeah, I know, chapter 1 of half of these goofy books I have on my shelf. But how about this novel concept: really do it, iterate, steward the process and make it happen. Don’t just write a nice ‘your site is 63% sucky’ report. Commit your client to making some decent changes, then come back and do it again. How many projects actually do that? OK, more than once?

Usability should be about making things easier to use, not clicking stopwatches, checking off this or that heuristic checklist, counting how many users you had, and waxing PhD credentials. Talk to people; have companies hear what their users are saying. Usability testing is very, very important (and still very underutilized), but let’s face it kids, it ain’t rocket science.

Thanks for the site… I’ll be back.

Adam wrote:

KC & Tom,

Thanks for the great discussion, as always.

To the topic at hand: I think the dream of an HCI resource like the one you’re speaking of is fantastic. I also think that it’s not an impossible dream, and that it might simply require a first step to occur, sort of like the first HCI comic strip.

I was actually searching for research just the other day to justify some development decisions I’m making right now, and I found myself wishing that just such a site existed. Your point about context sensitivity is very well taken, but I think there’s a huge amount of value to be gained from some sort of collective experience repository. I think wikis are often a usability nightmare, but the collective ownership/knowledgebase idea is sound, and very useful. If the gurus want to participate, so much the better! Serious research is always appreciated, and needed.

As you pointed out, there are a couple of sites out there that try to do some of these things, but I have yet to locate the Holy Grail.

In all honesty, I think OK/C could provide the user community needed to make a site like this a reality. Many of the discussions on this site are just what the doctor ordered, and it strikes me that most of the readers here would love to participate in such an experiment in emergent knowledge gathering.

Anyway, thanks again for the great content.

- Adam


OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.