Kevin Cheng  

Nothin’ But a UCD Thang Pt. 4

February 27th, 2004 by Kevin Cheng :: see related comic

Ah, the conclusion to our first story arc, and also to our double-sized comics. Good riddance, I say, for I have lost enough sleep drawing twice as much as usual. I hope you've enjoyed our little fun with some industry players. The nice thing about having a fairly tight community is that you have a specific set of people that most will recognize. For those who were in the comics, please don't sue us. We'll buy you a drink at one of the conferences we'll see you at.

Peter Merholz, Mark Hurst and Christina Wodtke, considered by some to be gurus themselves, all chimed in last week on the general thread thus far. Each brought up interesting points on why HCI and Usability gurus cannot or should not offer more evidence in their publicly digestible advice columns and articles. For my part, I'll conclude our discussion with my take on these reasons and how valid I feel they are.

It's Too Time-Consuming

What gurus offer on their websites is free advice, and free means they're not going to do as good a job on it. While that's true, I find it hard to believe that the data and methodology backing the results they publish aren't already documented in some way. Can Jakob Nielsen really tell me exactly how long my first usability test should be without already having the supporting data documented? Unlikely. Further, giving even the slightest context may be sufficient. I don't necessarily need all the gory details, but at times it really does seem like numbers and statistics are appearing from some nether region.

Competitive Advantage

Publishing the methods and raw data could take away your competitive advantage, as others will use your data. Firstly, I would argue that publishing your methodology should increase your report's credibility and hence your competitive advantage. Secondly, putting my academic hat on for a second, wouldn't it be good to have others build on your research? From a business perspective, more people will hear about your research. From an academic standpoint, you're at a minimum preventing research from being performed on misconstrued data, because people will use your research as their basis whether you publish the methodology or not.

Confidentiality Issues

Many research studies are performed for companies under sensitive confidentiality agreements. Mark Hurst's point here is probably the most valid. If you're not allowed to talk about it, there's really not much you can do to get around that. I would hope, if this were the case, that the consultancies would try to get permission to at least post summaries of details such as the demographics of the users tested.

The Paid Reports Have Details

Most of the gurus run consultancies and have white papers available for purchase. These reports have much more detail than the snippets online. I have no doubt this is true, but I'm really only referring to the publicly available articles. Gurus have a great deal of influence, not only over those within the industry but also over those who work with us or those who cannot afford a professional HCI or Usability specialist. Too often, I hear a developer or even a novice HCI person quote a guru's article completely inappropriately. If even a little more data could be provided to give context in these free reports, the findings could be applied better. Why should they do it for the free reports? Call it responsibility.

Academics Obfuscate their Papers

Academic research papers are deliberately made difficult to understand. Honestly, I have no idea whether this is true. For one, academic papers are meant for a different audience in general, so they tend to be wordier. Also, those reviewing papers for publication have certain criteria, and it may be in the author's interest to sound more intelligent by using more confusing vocabulary. Then we can start pointing our fingers at review panels instead. It's true that gurus write in much more straightforward English. My point was not about how easy or difficult a particular paper is to understand, however. Granted, an indecipherable paper with good data is next to useless, but I'd argue that an easy-to-read paper with insufficient data is much more dangerous.

The free research tidbits from the gurus out there have been invaluable. While there are factors that may hinder them from providing more, even a slice more context could serve to help their own credibility, aid in furthering valid research and, ultimately, ensure proper application of their findings.

Gurus have power. Just like the Amazing Shneiderman learnt early on before donning his costume, "With Great Power Comes Great Responsibility."

12 Responses to “Nothin’ But a UCD Thang Pt. 4”
lbeau wrote:

Academic papers are *not* deliberately made to be difficult to understand. This is a ridiculous assertion. There are two main reasons why this may, however, *appear* to be the case:

1) Many academics simply can’t write well so their work is unreadable.

2) Much academic work is written for a particular audience who share a vocabulary and set of references. If you are outside this community then you won’t understand, and you never were meant to understand - this is not deliberate obfuscation, just efficient communication in a peer group.

Jake Cressman wrote:

I would add one more point to this list. So-called usability gurus don't owe us anything. For the most part, this isn't academia, and the only people Nielsen owes citations to are his clients.

By the way, do you think the term "guru" means a practitioner with a great deal of knowledge, or a practitioner who tends to evangelize about their work regardless? I think it's the second.

KC wrote:

From the first article in this four parter:

Guru: n.

1. somebody who is prominent and influential in a specific field and sets a trend or starts a movement
2. a spiritual leader of or intellectual guide for a religious group or movement especially one being described as non-mainstream

So they only need to be prominent, as you said, Jake.

Regarding your point, I'd say you are simply restating a combination of the points in different wording. "Time Consuming" and "Paid Reports Have Details" basically combine to say, "If you paid, you'd get the goods." You're right, gurus don't owe anything, but if they are going to choose to publish some articles for free, a little care to ensure context couldn't hurt.

In the past, research has been performed that built on that of Nielsen and others, only for it to be found later that the original research was not valid (again, I refer to Gray's "Damaged Merchandise"). While there is no legal responsibility, I think there is a professional responsibility to give enough context. Again, I'm not talking full academic rigor. ANY context is better than none.

It also makes business sense, as I mentioned. Context gives credibility. Credibility brings in business. It’s amazing how many people just swallow information without any verification/validation and then pay for services based on this.

As I mentioned in last week’s article, the onus is just as much on the readers to recognize when an article has insufficient facts backing the claims.

Jakob Nielsen wrote:

It’s worth pointing out that Gray didn’t actually prove anything about the findings of early usability research. He published a paper that attacked various details in research performed in the early 1990s at Bell Communications Research, Hewlett-Packard Laboratories, and the IBM T.J. Watson Research Center. The main bone of contention was the difference between industrial research and a purist academic approach. The several industrial researchers that Gray attacked worked in environments that afforded less tightly defined experimental controls but also allowed them to study much more important issues relating to design projects in the real world.

Dr. Bonnie John (now Director of the Masters of Human-Computer Interaction Program, Carnegie Mellon University) said it best in her rejoinder: “case studies are not just small experiments.” Or, as Dr. John M. Carroll (previously manager of user interface research for IBM, now a chaired professor at Penn State) said in his rejoinder: “perhaps broad scope is a critical attribute of influential HCI methods studies.”

Leaving aside debates between researchers in papers published in the 1990s, the real proof comes from the fact that now, more than ten years later, the industrial researchers have mainly been proven correct. Virtually everybody who works in usability today has accepted the conclusions from Bell, HP, and IBM and found them to work well for a broad variety of projects.

Chris McEvoy wrote:

KC wrote:


You're right that Gray was more pointing out potential flaws in industrial research. His paper simply served as a wake-up call to be more careful. As you said, this was more than ten years ago, yet today I feel some of the points are still valid. That is, readers need to be aware of whether what they read is applicable to what they are working on. To do that, they first need to be able to ask the right questions, but for the moment, those who DO ask the right questions are not always getting the answers, and that can be dangerous.

I agree with Chris, however. We are all united under a common cause and hopefully all aim to improve our industry as well as make a few bucks. We started this four-parter not with the intent to attack academics or gurus, but to push the envelope and issue a call to arms to be more vigilant in how we provide, and how we interpret, information.

Jared M. Spool wrote:

First, let me say that I’m thrilled to be part of the comic and part of this discussion. It’s very cool. My kids got a huge kick out of seeing me in a comic strip.

I can't speak for anyone else, but I understand the desire to have access to our data. If the goal is for you to decide if we're right or wrong, you'll need the data to do that.

That being said, none of the reasons stated so far is why we don’t release our data to the general public. Most of the time, our data isn’t bound by client contracts and we don’t save it for sale. (In fact, in those rare occasions when we do release our data, it’s usually to grad students for their thesis projects.)

The main reason we don't release our data is that it's always missing a very critical element: your site. The data we collect from a handful of sites, while interesting, is incomplete. Without your site's data, you won't know whether the trends extend to the user behavior we'd witness if we watched your users on your site.

The purpose of our research isn't to establish rules or guidelines for design. When we observe that users don't mind long pages, or that perceived download time doesn't correlate with actual download time, it isn't because we expect designers to act on these findings.

Instead, we just want to point out that designers should be thinking about what happens on *their* site. They should use our work as a starting point on where to look for the specific user behaviors they need to design for. However, if they find that their users behave differently than those in our studies, they should go by what’s happening on *their* site, not by anything *we* found.

The web is really in an immature state. It’s way too soon to be saying that any definitive finding is applicable to all sites in all contexts. Designers need to be aware of the general findings in research like ours, but then do their own investigations to ensure that the findings are applicable to their situation.

You really don't want our data. It's messy, complicated, and, most importantly, incomplete. It's really just a window into the design of a site. It's a tool for the designer to know where to look for what's really happening on their site.

Keep up the good work and keep asking the hard questions.

LL Spool J

Ron Zeno wrote:

The gurus' priority is to make money. That means the gurus must generate a great deal of self-hype, including arguments that influence people into believing enough of this hype that the gurus actually get work.

It is up to every potential client, current practitioner, and information-seeker to decide how to interpret all the guru-produced hype. Perhaps some critical thinking would be appropriate to separate likely claims from the unlikely? Or perhaps we should just be swayed by fallacious arguments from authority, widespread belief, or fear?

My concern is not so much about what the gurus are saying as about why people are swayed by them. Anyone want to buy a bridge? Perhaps some swampland? This way to the egress!

KC wrote:

I agree, Ron. Which is why I mentioned the need to educate people. Sadly, we could get political and talk about how the American public seem to have lost their ability to be critical as well … but that’s another story.

Jared’s answers are very insightful and I appreciate the input. I think using UIE, Useit, etc as starting points in answering your own questions is a great idea. I’m glad you agree that they are not rules or guidelines. Alas, many people treat them that way.

"So-and-so said 76% of users do that."

I would suggest more disclaimers to say how incomplete the data is, but then that's not really good for business. Perhaps something at least to say, "Hey, this is really specific to our case and may not apply to you"?

Jared Spool wrote:

We try to say as often as possible that you need to see what is happening on your own site when interpreting our results. We probably could say it more — in fact, we probably can’t say it enough.

Ron's criticism of 'gurus' is something I've often thought about myself. First, I don't like to be thought of as a guru. I'm just a guy running a research company that is trying to figure out how the web really works. A lot of people like what we find, but I wouldn't consider it the be-all and end-all.

I've never liked the term guru and cringe whenever I'm referred to as one. I think that once you start believing you're a guru, you stop questioning everything you are doing. I question what we're doing all the time — I think that's how one makes sure one is still doing quality research.

Second, at UIE, we do need to make money. We don't see money as the end goal. It's purely the instrument we use to fund more research. Without the funding, our research efforts will dissipate, which I believe would not be good for the community. So, we work to raise as much funding as we can.

Ron is right that when you are trying to make money, for whatever reason, you have to produce a certain amount of hype. We're always shocked when the hyped-content messages produce 2-3 times as many registrations as the useful-content-without-hype messages do. So, you'll continue to see hype from us — only because it works.

I know that our audience is very smart. I know they question the things we say. I get a ton of email with really interesting, hard questions all the time. I answer as many as I can and some of them we turn into future research projects.

I assume that our audience is going to take the knowledge and put it to the right use. And, for the most part, they do.

LL Spool J (I’m really liking that.)

christina wrote:

"In a similar vein of cautious investigation, one should also interpret with caution the reports of gurus and consulting agencies that necessarily must survive on commissions. Their bias can be to bolster research that supports their methods and dismiss data that discounts their methods."

This is an insight that has not played out in these conversations.

Jared Spool wrote:

Christina’s point is a reason why UIE is structured the way that it is. Only about 15% of our annual revenue comes from consulting to clients.

The rest comes from our events and publications. The bulk of our income isn’t based on any particular methodology or research result. This removes the pressure from us to always be right.

I can't speak for others, though.

Leave a Reply

OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.