Kevin Cheng  

Nothin’ But a UCD Thang Pt. 3

February 20th, 2004 by Kevin Cheng

I’ve been spending a lot of time talking about the usability and HCI gurus: about how they don’t show enough of their methodology for us to understand the context of their research, and how ideal it would be if we had access to all of this information in a subscription-based repository. Instead of harping on them some more, I thought I would turn the focus onto the gurus’ readers. We teach our children to interpret what they see on television or in movies, helping them differentiate what is real from what isn’t. Similarly, we should learn to interpret the articles and publications we read in a critical manner. Context is an important ingredient for gurus to provide when espousing their views, but the responsibility lies on both sides of the fence. As readers, we must learn to ask the right questions.

It has been said that university doesn’t just teach you formulas and facts. The important thing you learn is a way of thinking. I feel this is true in a more general sense. When you read a newspaper, depending on your education, you will interpret the reporting differently. The more educated and informed you are, the more likely you will be critical and analytical about the articles. Something or someone has taught you to ask the right questions and determine the newspaper’s relevance and validity.

Reading advice or guidelines from these gurus (and for that matter, reading any of Tom’s or my own ramblings here) should be accompanied by the same kind of caution and education. The right questions often seem obvious once stated, but are easily overlooked:

  1. When was this research performed? With the technology trends changing as fast as they do now, the date of a study is crucial.
  2. How many participants were in the study? If a study involved only a handful of users, that needs to be stated up front.
  3. What kind of demographic information is available? Computer expertise, average age, education level and many more factors could play a role in the results.
  4. Where was the study conducted? Was the study in the laboratory or at the participant’s machine or off-site somewhere?
  5. What order was the test performed in? The ordering of tasks may predetermine or affect some results. Most researchers vary the order between participants to minimize this problem, but make sure the study you’re reading is one of those.
  6. How did the researcher define specific terms? In many studies, specific terms are used frequently but may never be formally defined. If someone said, “we measured satisfaction,” what does that mean exactly?
Many other questions exist, and more specific ones can be formulated once the actual study is taken into consideration.
In the forums, Jon brought up the question of what HCI/usability education programs do and should cover. I feel that, in addition to teaching tools, techniques and theory (or perhaps through teaching those very aspects), educators need to ensure their graduates can ask the right questions. Not only would this skill help them differentiate the useful from the fraudulent, but any one of these graduates could become the next industry spokesperson whose writing is read by tens of thousands.

Articles like those from Jared Spool’s UIE and Jakob Nielsen’s Alertbox are not read only by industry-trained professionals. Casual readers interested in the field, or a project manager or programmer looking for evidence to support an interface decision, all seem to be frequent visitors (though I have no traffic data to prove this).

If done right, our next gurus will be even more aware of the right questions to ask, and in doing so, maybe they will also know the right answers to give.

9 Responses to “Nothin’ But a UCD Thang Pt. 3”
ronee wrote:

I just finished the HCI MSc program at UCL last year… and we actually had a few lectures on using existing HCI research, in which we were encouraged to ask these types of questions.

peterme wrote:

What you’re asking for, essentially, is for guru output to look more like academic papers. If you’ve read proceedings from CHI or the HCI journals, you’ll know that authors are required to make such context explicit.

While I agree in principle with your desires, I can tell you in practice that, unless you’re in an academic program, it can be quite time-consuming to put all that stuff in there.

One thing that’s worth noting is that in the for-pay publishing that the gurus do, they are often quite careful to describe methodology and context.

Mark Hurst wrote:

To add to what Peter wrote - there’s almost always a confidentiality issue around disclosing project details. The client paid for the work, so they have veto power on any specifics in case studies.

kasnj wrote:

I’d be happy if they’d just learn to clearly identify that they are talking about a static content/ecommerce/entertainment/web-based application/whatever site. When they make these booming “this is how things should be” statements without that clarification, we end up fighting the uphill battle to explain that, yes, vehicles should have four tires and at least 2 doors - if the vehicle is a car. However, you have asked us to build you a unicycle, so…


Todd Warfel wrote:

We need balance
We could learn from some of the methods used in HCI in standardizing report structures. Rolf Molich has been doing this with CUE over the past couple of years. Personally, I think it’s a great idea - to an extent.

Consider the audience
It’s important to keep in mind that academia is a bit different from the real world. At the same time, much of what the journals require for publishing isn’t appropriate for most client consumption: lots of data points, t scores, etc. Trim the fat.

What can we do
We’ve been working lately to improve our reports by including some of the structure suggested by CUE (summary, table of contents, introduction (purpose), method, participant criteria, results, recommendations, and appendix).

We feel it’s important to have a report that, first and foremost, clients can digest, but also one that other industry professionals can look at and use to replicate the research if necessary.

christina wrote:

It strikes me that it isn’t the gurus that need fixing, it’s academia. The reason the gurus get read is that they write in normal human English; it’s part of their sales job. However, no matter what they may say, it isn’t good business sense to give it all away.

Meanwhile, those whose mission is to give away knowledge give it away in an indecipherable fashion. When I started on search, I read through a dozen issues of JASIS that had been lying around neglected from my ASIS membership. These magazines were full of fantastic information that I needed, except I kept stopping to try to chew my arm off to escape the language (it might have also had something to do with drinking so much coffee to stay awake).

We blame the gurus because they are the ones we know, but it’s academia that’s to blame.

Moreover, I just realized how much academic (nonprofit) knowledge is locked behind pay-for-access. Isn’t that a bit counter-mission?

Joshua Kaufman wrote:

christina: It strikes me that it isn’t the gurus that need fixing, it’s academia. The reason the gurus get read is that they write in normal human English; it’s part of their sales job.

Yes, it’s also the reason that advertisements are designed as they are. They know what will appeal to the audience, so that’s what they give them.

I wholly agree with Todd’s comments. Balance is going to be the key to answering some of KC’s original questions.

Moi wrote:

Are you aware of ?

Tommy wrote:

I think it falls on the heads of the readers, in many circumstances, to make critical judgments about the material they are reading. If the material is corporate “we are great, and you should purchase our services based upon these results,” it is up to the HCI practitioner to realize this and accept that it is propaganda, not an actual research paper.

Additionally, it is up to us to dive deeper into the material to answer the other questions we as practitioners should take into consideration. For example, while reading a book of design guidelines, it is up to the reader to make sure the underlying research is either referenced or described, and to evaluate that research before blindly accepting the guidelines. If we are unable to get answers to these questions, then we must realize the information we are reading may be bogus, and the author may not be as much of a “guru” as they claim to be.

In other words, it is up to us as HCI practitioners to weed out the true “gurus” from the others who claim to have the best guidelines but fail to back them up with solid research. By doing so, we can actually distinguish the good advice from the bad and apply best practices. We should force the “gurus” to back up their claims by not accepting them or applying their techniques until they prove their advice to us (i.e. don’t reward potentially bogus, unsupported results by buying their books, attending their workshops, or advocating them as gurus to others).

Off my soapbox, and back to my cubicle.


OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.