Tom Chi  

The Jef Raskin Challenge

August 27th, 2004 by Tom Chi :: see related comic

Anyone who’s been following the IxDG list over the past couple of weeks has probably caught Jef Raskin taking the interaction design world to task on a number of points. Here are some excerpts from a recent discussion on UI guidelines:

“The Windows UI guidelines are self-contradictory at points (e.g. recommending noun-verb interaction but going the other way at times), and lead to the kind of interfaces we all hate. To be an HCI expert and to adhere to the Windows guidelines is, in my opinion, unethical.”

and also:

“The Mac interface is hardly better than Windows. The differences are mostly trivial. I have written a number of critiques of it. You ask for alternative guidelines: Have you seen my book?”

I had mixed feelings while reading these comments. Certainly, I share some of Raskin’s frustrations about the slow pace of innovation in interaction design, but I also recognize the very real pressures that lead us to stick to standards. While most designers see standard interaction metaphors as helping to make an interface “intuitive”, Raskin feels that being “intuitive” simply means being “familiar” and is ultimately an unfortunate indicator that a product is merely status quo.

Nearly a decade ago he put forward this thesis in his paper Intuitive Equals Familiar. Looking around ten years later, we find that interfaces really haven’t changed that much. We still work with a WIMP (windows, icons, menus, pointer) based desktop metaphor, and still have the same input devices. The only real changes have been the addition of web interaction metaphors (e.g. linking, paging, submitting to a server, etc.), which some consider a step backward.

While I have some ideas of my own, I’d like to pose the question to the readership. Are we really in a design doldrum? What are the right directions to take interaction design? Are we trapped because of hardware? User expectation? Risk-averse project process?

How can we respond to the decade-old Raskin Challenge?

15 Responses to “The Jef Raskin Challenge”
Andrei Sedelnikov wrote:

Concerning UI controls: they do change. At the beginning of the GUI era we had only a limited set of controls with limited abilities - namely an edit box, a combo box, a list, etc. Soon we discovered the need to extend them and to create new ones, in order to enrich the interaction. And then we got into trouble, since our established guidelines simply began to lose their sense, failing to cover all possible control types and behaviors.

Though chaos is, of course, not an answer. In my opinion, not the controls themselves, but rather the aspects of their behavior and look, will become the subject of guidelines. These aspects will again be idiomatic by nature, but they will be very simple idioms (they can be learned once), so you can construct different controls out of them, and the behavior of each control will still be clear to the user, being the sum of its aspects.
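The “sum of aspects” idea can be sketched in a few lines. This is a toy illustration of my own (the class names and methods are invented, not any real toolkit’s API): each aspect is a tiny, separately learnable behavior, and a compound control just mixes them together.

```python
class Control:
    """Base for all controls: every control holds a current value."""
    def __init__(self, **kw):
        self.value = None

class Editable(Control):
    """Aspect: the control accepts typed text."""
    def type_text(self, text):
        self.value = (self.value or "") + text

class Droppable(Control):
    """Aspect: the control offers a list of choices to pick from."""
    def __init__(self, choices=(), **kw):
        super().__init__(**kw)
        self.choices = list(choices)

    def pick(self, index):
        self.value = self.choices[index]

class ComboBox(Editable, Droppable):
    """A combo box is nothing new: just the sum of two familiar aspects."""
    pass
```

A combo box built this way needs no guideline of its own: a user who has already learned the “editable” and “droppable” idioms separately knows everything it does.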

There are some illustrations to what I’m talking about in my blog:

User Interface Elements of the New Generation

Reed wrote:

Yes, we have been repeating the status quo. But I think Raskin is too deprecating of familiarity, and throws out another meaning for “intuitive”: the idea that something unfamiliar can be learned very quickly; that its function is apparent.

Folders and windows are great at providing an interface to a hierarchical filesystem. Separate windows are not good for managing lots of web pages – the popularity of “tabs” in browsers shows this. Folders nested in windows break down when you have a non-hierarchical filesystem, and when you have too many files and folders. An interface (such as a file manager) becomes hard to understand (to intuit) when you start adding mysterious objects to the system whose function is only obvious if you already know how the underlying system works.

As new underlying technologies and program designs are invented, we need to find some new interfaces to reflect to us what is going on under the hood (to the extent that we need to be able to monitor and control what’s going on under the hood).

We can hang on to “intuitive” even if we aren’t interested in “familiarity” though– IMO “intuitiveness” can be achieved most easily by finding a balance between making functions visible (and related to the actual workings of the program), and keeping the interface simple and small.


Matthew Oliphant wrote:

Are we really in a design doldrum? Yes. No. It Depends.

What are the right directions to take interaction design? In whatever direction those who pay us want us to go.

Are we trapped because of hardware? Yes, and always will be.

User expectation? No.

Risk-averse project process? The number one biggest reason we are still designing for box-shaped, electronic things.

Make it easy and cheap to port the current heaping pile of data to a new OS platform and/or piece of hardware and interaction design can innovate.

Make it easy and cheap to train all the people who are required to deal with that heaping pile of data on a daily basis and interaction design can innovate.

Tom Chi wrote:

As new underlying technologies and program designs are invented, we need to find some new interfaces to reflect to us what is going on under the hood (to the extent that we need to be able to monitor and control what’s going on under the hood).

But we also have the capability to create new interaction metaphors even before technology changes. We needn’t wait for things to change “under the hood” per se. I think back to an old version of Kai’s Power Tools which applied genetic algorithms to provide thumbnails of different parameter tweaks, so the user arrived at the final settings by giving aesthetic approval, which defined a path through parameter space. All this ran on 1996 machines, so it’s not that we are technology-limited. Another idea that comes to mind is HyperCard on the old-school Macs. This provided a lightweight and speedy programming language with interaction capability reminiscent of Macromedia Flash, but over a decade earlier and two orders of magnitude easier.
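That KPT-style loop of “show variants, let the user pick” is small enough to sketch. This is my own rough illustration of the general technique (interactive evolutionary exploration), not KPT’s actual code; every name and parameter here is invented:

```python
import random

def mutate(params, scale):
    # Jitter each parameter, clamping to a valid [0, 1] range.
    return [min(1.0, max(0.0, p + random.uniform(-scale, scale)))
            for p in params]

def next_generation(favorite, n=8, scale=0.2):
    # Keep the user's pick and surround it with mutated variants;
    # each variant would be rendered as a candidate thumbnail.
    return [favorite] + [mutate(favorite, scale) for _ in range(n - 1)]

def explore(start, pick_favorite, generations=5):
    # Each round, the user's aesthetic choice among the thumbnails
    # defines the next step of the path through parameter space.
    favorite = start
    for _ in range(generations):
        favorite = pick_favorite(next_generation(favorite))
    return favorite
```

Because the previous favorite stays in each generation, the user is never forced away from a result they liked; the walk through parameter space only moves when some variant looks better to them.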

Make it easy and cheap to port the current heaping pile of data to a new OS platform and/or piece of hardware and interaction design can innovate.

Is it the OS that is keeping us from advancing? Like I mentioned just above, KPT was messing with graphically navigating genetic algo spaces, and others have been building interesting things (Gelernter’s Lifestreams, Card’s digital books, etc.), all on existing OSes.

Perhaps the problem is that we have just snapped to a comfortable node where we can get things done, and the risk of moving in entirely new interface directions is beyond our available time and budget. Or perhaps many of the problems that we solve don’t require a “revolutionary” solution; we simply need a workable one.

While these arguments are all very familiar, it seems strange that in an industry (the tech industry) that prides itself on constant innovation, this very forward-facing aspect has somehow languished.

Adam Barker wrote:

Perhaps the problem is that we have just snapped to a comfortable node where we can get things done, and the risk of moving in entirely new interface directions is beyond our available time and budget. Or perhaps many of the problems that we solve don’t require a “revolutionary” solution; we simply need a workable one.

I’ve been following the debate this week, and I’ll have to admit that I was a bit consternated by Raskin’s comments. There are certainly things about computing that are frustrating– there always will be. There are certainly interactions that can be improved upon– again, there always will be.

What I fail to understand is the perception that we need to make some sort of revolutionary leap from where we are to “The Right Thing.” Even assuming that there was such a solution, it’s sort of silly to think that it could be pushed out and adopted overnight.

Improvement happens iteratively. Each new generation of software, and each new generation of software practitioners, provides (hopefully) better interaction. Sometimes this isn’t the case, but I would argue that daily interaction with computers is far less frustrating than it was 10 years ago. For instance, Microsoft’s OneNote has finally provided a save-button-free system of recording text. I don’t remember if Raskin was the big proponent of this, but it’s good to see such ideas percolating into mass-use software.

From my perspective, we’re moving in the right direction.

This isn’t to say that criticism and curmudgeonly commentary are not useful, rather that the situation is rarely as drastic as the vocal minority might have you believe.

As far as “revolutionary” solutions go, I think there’s a real danger in attempting to pursue new and exciting directions simply because they’re new and exciting. Developers, in particular, are quite often drawn to bright, shiny bits of ephemera that represent interesting bits of complexity. The majority of hyped, innovative interactions from the past 10 years have proven to be pretty much worthless. Two, in particular, leap immediately to mind:

“We should make the desktop 3D!”

“People should be able to talk to their computers!”

Granted, these are examples of interaction design that was done The Wrong Way, but I think the desire to be revolutionary is often mutually exclusive with doing things The Right Way.

I also have to grin every time someone mentions Kai’s Power Tools. I know the software is revered within certain groups, but I personally found it to be the hardest-to-use thing I’d ever encountered. Also, stating that it ran well on the hardware of the era is being rather generous. Obviously, my assessment of its usability is subjective, but I do believe pretty strongly that KPT could have been a much better product if they hadn’t tried so hard to be revolutionary. The capabilities are amazing, but is there some reason that the controls need to look like something out of an alien spaceship?

Wayne Greenwood wrote:

A standard should be followed unless there is a compelling reason to disregard it. A radically new control that might be just a teensy bit better than the existing standard control usually isn’t worth the time and effort it will take to design and ‘sell’ the feature.

On the other hand, if a new design blows an existing standard into the weeds–by virtue of providing significant and compelling real-world benefits to the people who will use the product–then you should delight in hammering the first nail into the coffin of the old standard.

I find it helpful to recall that the list of once unassailable standards includes rotary telephones, medicinal leeches, 6-volt positive ground car electrical systems, and the concept of a flat earth. (Although the leeches appear to be making a comeback.)

Hal Taylor wrote:

I have a tremendous amount of respect for Jef Raskin, and have been working on reading his book (though I confess I’m not yet very far). I had some one-to-one exchange with him last week, as well. In the end, I found myself only partially in agreement with the “intuitive” vs. “familiar” debate. I agree that the term “intuitive” is a little problematic, although part of Jef’s linguistic argument (that proper usage should anyway be “intuitable”) would seem to be countered by Webster’s primary definition for “intuitive” of

1 a : known or perceived by intuition : directly apprehended b : knowable by intuition.

Note that dictionaries are generally descriptive as opposed to prescriptive, so Webster is likely indicating how they believe the term to be used, and not necessarily what it should mean, in a perfect world.

I argued with Jef that while “familiar” would seem to represent much of what people interpret as “intuitive”, the former term does not capture things like affordances, logical mappings and a clear, consistent paradigm. Jef remarked that my “clear, consistent paradigm” was just something which increased familiarity. I used his mouse example to suggest that using a mouse is easily understood because of the analogous movement between cursor and hand. He countered that this is a form of familiarity, based on using one’s hand to point in other contexts. I feel that this stretches the concept of “familiar” so far that it begins to unravel.

I also felt that, while claiming not to place any value judgement on the idea of “familiarity”, he then claims that a “superior” interface could not be “familiar”, which indeed suggests value judgement to me. But if “familiarity” can be so abstract a thing as “analogous to pointing with your hand” and can be used to describe the way an application’s consistent interface paradigm (note I do not say metaphor) makes the application more easily learnable, how is this a trap, and how else does one go about designing something which is easily learnable?

I ended with a Norman-like example: if a switch controlled a projection screen such that pushing the switch up made the screen come down (and vice versa), I would, through repeated use, eventually become “familiar” with the control and would eventually stop making mistakes — I would learn the control. However, I would never begin to consider its backwards mapping “intuitive”. The possibility of this dichotomy indicates to me that “familiar” cannot adequately substitute for all aspects implied by current usage of “intuitive”.

Hal Taylor wrote:

Oops - how did I manage to write so much and not address your questions?!? Sorry, Tom, and fellow readers. I guess I was still looking for an opportunity to resolve some of my thoughts on the “intuitive vs. familiar” debate.

Anyway: yes, I think we are in a little bit of an interaction design doldrum. I’m not sure it’s really such a traumatic thing; really, the overall pace in technology is probably too fast in some ways, and has doubtless begun to color our expectations.

That said, my suspicion is that the biggest impediment to the process is a risk-averse project process, which seems to be one of Jef’s chief complaints — he mentioned that clients want a “superior” interface but are unwilling to stray from the existing formula.

I think that if new interaction approaches properly balance learnability and productivity, and support user goals, then user expectations will not be a limiting factor. However, lack of understanding about interfaces, quality, usability and potential among users may be preventing the market from pushing for better-designed interfaces.

So, why do we have such a risk-averse process stifling interaction design? Perhaps partly because people are generally pretty conservative, despite our obsession with “new and improved” (which is often neither, in actuality). Partly because interaction design is still relatively nascent and its value has not yet gained significant recognition, even among those responsible for software products. And partly because, despite what I referred to above as too fast a pace in technology, it is in fact still a major undertaking to develop a software project, and those responsible are hesitant to gamble the high costs of development on something not well understood, especially when the market seems to be satisfied with “good enough”.

Kevin wrote:

It appears that the discussion is getting towards the point where “Intuitive” is defined by a behavior and not a concept on its own. In other words the argument appears to be moving towards the destination where intuitive means that the user grasps the interaction without training.
This is much like the psychology precept of salience. Those things that people tend to attend to naturally are salient things. How do we know if something is salient? People pay attention to it.
We seem to be saying the same about intuitiveness: if something is intuitive then people will “just know” how it works. If they know how it works without training, then it must be intuitive. By that definition, then, familiar is intuitive. At least it seems that way in my mind; or should I go back and eat my Wheaties some more?

Kevin D

Tom Chi wrote:

I also have to grin every time someone mentions Kai’s Power Tools. I know the software is revered within certain groups, but I personally found it to be the hardest-to-use thing I’d ever encountered.

Yeah, it’s true that they were a little wacky at times. I mostly brought it up as an example of the fact that even old technology can support numerous interaction metaphors.

Reading through the comments, I’m reminded of the saying: “unless you are going to make it 10 times better, don’t touch it!” Which is not a bad mantra if you have a very established product with a large install base. It also, however, creates the possibility of enormous inertia.

Perhaps we could approach the question from the other side. We know that theoretically there must be superior metaphors that are as yet undiscovered… so what are the right steps to discover them?

This question can be broken into a number of questions which I feel might cut closer to the heart of the issue. For example — what is *superior*? Superior could mean more flexibility and control once an interface is understood, OR it could mean extremely low barrier before it is understood. Superior in other cases might mean little or *no* flexibility (e.g. in applications where there are only a small number of things that a user wants to do and a million ways to mess up.)

Perhaps if we step back and consider what “superior” means for that action, we can make bolder steps forward. On the other side of the table, there are usability tests — which provide absolutely essential feedback to our product development, but which will also tend to skew toward favoring learnability over flexibility, and limited, familiar interfaces over novel ones.

Hm, the novelty might actually kill your product in the marketplace depending on the degree of unlearnability. Then the only recourse might be to make it look really really cool (like the apple dock) so that people will use it regardless of whether it is an improvement.

Ron Zeno wrote:

Are we really in a design doldrum?

No. There have been larger opportunities in the past than there are now. The easy work has been done, and there aren’t many capable enough to take on the harder work.

What are the right directions to take interaction design?

Improve the abilities of interaction designers if they are going to take on the harder work.

Are we trapped because of hardware?


User expectation?

Not expectations, but abilities.

Risk-averse project process?

Absolutely, but it’s relative. The benefits are too small, mostly because of the abilities of the designers.

“unless you are going to make it 10 times better, don’t touch it!”

Especially when you don’t understand the risks or costs of change, or overestimate the value of the change.

what is *superior*?

Yes, that’s the real question, but I think we first need to learn what is good, then what is superior. Then we can tackle the bigger question: how do we ensure what we produce is superior?

Dave wrote:

It’s interesting that there seems to be a consensus that we will always be locked into the box/mouse/keyboard (and/or that this is a horrible thing in and of itself).

The PC itself is a revolutionary object, and I know that there are organizations that are working towards different form factors for computation other than the classic PC. Some of these are slight, like an iMac (basically the same 3 components); some slightly more task/context specific, like the Tablet PC; and there are others that we need to consider as well. Then there are the strong research laboratories of the big companies and big universities that are doing interesting tangible and virtual interface work too.

That being said, even Xerox PARC had to inherit something and then change it into what they have now. They also had to wait for certain things to happen beyond their control before Lisa could become Apple could become Mac … blah blah blah.

My point is that as transistors become smaller and memory cheaper (both random and stored) we have more and more variability to work with.

But let’s even take the other side for a minute … let’s say we are stuck w/ “The Box.” I agree with Tom that this is not as limiting as I feel I’m hearing from many. Some of the variations might seem too constrained, but I think they are quite radical in their own right: the dock in OS X, or adaptations that merge web and desktop metaphors like those in Flash MX components.

I do think that as interaction designers we have a responsibility to challenge the hardware we work with. Yes, in our practical day-to-day we can’t, but in our community, academic, and corporate research we should definitely be asking for more, and using our expertise to spin things.

Will these changes be instant? Adopted immediately? Of course not, but that doesn’t mean that there aren’t markets of early adopters, or niche task contexts (like the Tablet PC example), that we can exploit for the purpose of exploring new models and metaphors.

As to the question of what is intuitive: is “intuitive” really something we should be striving for, or is it in some way related to “familiar”?

My example comes from my own experience: the iPod. There is NOTHING familiar about the iPod interface. It follows no desktop or other method of navigation, and yet when I picked it up (after trying an iRiver, which does try to use standard folder metaphors) I “knew” exactly what to do. Yes, I stumbled here and there, but even the stumbles led to a level of enjoyment of their own. I just “got it”. Now, my wife, on the other hand, who is pretty good at grasping things, had a bit more trouble than I did, but still learned it and got it pretty quickly. In the end, “intuitive” to me had nothing to do with familiar, but everything to do with behavior … that is, could I use it without outside intervention.

The problem w/ intuition is that I feel there are too many social-psychological parameters in achieving it for it to be anything other than a bell-curve type achievement. Just go for the bubble, but you will always have people outside the bell that you will need to accommodate in other ways (i.e. w/ outside assistance). To me this bell is less than 50% (but I’m making that up).

Steve wrote:

Are we not in danger of getting ‘interaction design’ confused with ‘the design of interaction mechanisms/objects’?

As an interaction designer you have to work within all sorts of boundaries/limitations. Some of these will relate to the task/business process, whilst others will relate to the available tools/widgets.

‘Industry standards’ provide a framework that can be both positive and negative - standards can always be followed incorrectly!

‘Familiar’ implies that you have come across something similar before - and it is this familiarisation that helps you to suss out an interaction mechanism.

‘Intuitive’ implies that the interaction mechanism is so obvious anyone can suss it out.

I think that interaction design evolution involves both familiar and intuitive mechanisms - after all, there is still some commonality between Win 3.11 and Win XP . . .

So where could we see interaction design going from here? We have seen voice technology take some leaps forward, but voice alone will not be the holy grail. Perhaps voice with a touch screen to eliminate the mouse (plus a few new widgets better suited to touch screens)? So are we talking about hardware changes or widget changes as the next evolutionary step?

Who knows what the future will bring, eh? What I do know is that I can already produce easy-to-use UI using the current crop of widgets - my solutions make use of both familiar and intuitive mechanisms . . .


Jens Reineking wrote:

Just from the view of a user: what I’d like to have is an actual desktop. A table, perhaps with an angled surface, which provides me with the equivalent of at least 4 monitors’ worth of screen real estate. Perhaps the angle could be adjustable so you can change from desktop to whiteboard.
And on that, I just want to work with tools like a ruler and a pen. Touchscreen. The ability to move documents around and arrange them. Take scribbles. Drag a document to a normal monitor or some kind of ePaper or perhaps some handheld device for reading.
And perhaps even an integrated scanner (with OCR) on the left for the times you have to bring in something from paper.
You could define any number of customized icons/shortcuts/menus anywhere on this desktop and activate them with the touch of a finger/pen/whatever.
The keyboard could be displayed on the desktop or be a wireless one.
Oh, and the surface would have to be pretty much scratchproof. Or have an easy way to replace the upper layer (or have it be some kind of self-regenerating material).

Hm, voice? Not as a standard input when working with people in the same room. The constant chatter would drive me crazy. Or everyone would have to learn how to sub-vocalize and have this picked up by something that is definitely not a microphone.

Just the views of a user, feeling limited.


Andrei Sedelnikov wrote:

The iPod. There is NOTHING familiar about the iPod interface. It follows no desktop or other method of navigation and yet when I picked it up I “knew” exactly what to do.

The whole iPod interface may really be unfamiliar. But if we break it down into the small elements it consists of, we’ll find that most of them are already familiar to us, won’t we? And the rest can be understood by the power of constraints: when there is no other way, we easily figure out the single existing one.

Leave a Reply

OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces - usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.