Tom Chi  

Meet me by the Spreadsheet at Noon

February 4th, 2005 by Tom Chi :: see related comic

So last week the news was that Jef Raskin secured $2 million in funding to build a radical new approach to the user interface. Poking around, we were able to find the AZA Demo (8 MB Shockwave file), which features some of the concepts of the new interface.

In particular there is the concept of leveraging the natural human understanding of physical space using a data ‘geography’ to arrange and store your work. I found the demo speedy to learn, fun to navigate, and generally quite interesting. But what struck me soon afterwards was that such interfaces re-create many of the problems that physical spaces have.

The working area is an infinite plane filled with complex nooks and crannies where data is nestled. While humans do have a natural sense of space and geography, sometimes we still get lost. We also tend to misplace things. Over time, these two shortcomings will complicate the interface significantly. The interface starts to become like my physical desk — which has a couple of loose piles, some bills, some coupons, CDs, pens, knick-knacks, etc. While I do make some effort to keep things from getting mixed up, within a month the desk is invariably messy. I can see my AZA interface starting to decay in the same way. Granted, there are people out there who would be very systematic and neat about using the interface, and would succumb to messiness only after many more months (if ever), but these sorts of people will be neat no matter what sort of program or desk you give them.

Even if you have kept a pretty orderly digital desk, there is always the chance that you have misplaced or misfiled something. In such cases, you will find yourself hunting around where the data *should* be, and potentially never finding it. It’s like when you lose your keys and you keep hunting around the kitchen table because that’s where they are *supposed* to be. This is an interface you could lose your keys in.

For these reasons, I don’t see the concept taking over the desktop just yet.

28 Responses to “Meet me by the Spreadsheet at Noon”
Irek Jozwiak wrote:

There are (at least) two methods of managing the virtual space.

The first I would call browsing, which includes

  • finding information (files, etc.) whose location is known to us,
  • finding out what can be found in folders, containers, places (whatever we choose to call them).

The second is searching.

Your point refers to browsing, and I agree with your arguments. However, if we imagine a virtual space with searching capabilities, the result will be very interesting. You would just clap your hands and see all the keys you have in your home. According to the physical space metaphor, searching would be able to rearrange the space temporarily.

I think we are not bound to recreate the problems of the physical space. After all, we use computers, don’t we?
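
To make Irek’s distinction concrete, here is a minimal sketch (item names and coordinates are invented for illustration) of the two access styles over a spatial data plane: browsing returns whatever happens to lie inside the current viewport, while searching temporarily gathers matches from the whole plane regardless of where they sit.

    # Minimal sketch of browsing vs. searching a data 'geography'.
    # Item names and coordinates are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        x: float
        y: float

    def browse(items, left, bottom, right, top):
        """Browsing: return only the items lying inside the visible region."""
        return [i for i in items if left <= i.x <= right and bottom <= i.y <= top]

    def search(items, keyword):
        """Searching: 'clap your hands' and collect matches regardless of location."""
        return [i for i in items if keyword.lower() in i.name.lower()]

    plane = [
        Item("budget-spreadsheet.xls", 14.0, -2.5),
        Item("house-keys-photo.jpg", 812.0, 455.0),
        Item("tax-return-2004.pdf", 12.5, -3.0),
    ]

    print(browse(plane, 0, -10, 20, 10))  # what I can see from where I'm standing
    print(search(plane, "keys"))          # where the keys actually are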

Chris McEvoy wrote:

But it’s something that you can actually explore and that’s fantastic for me.

Death to the desktop and bring on the Data Plain.

Chris McEvoy wrote:

And while I’m here, do you know that you could have been calling this site “Do It/Cancel”?

Axel wrote:

I actually thought of this style of interface some time last year. AutoCAD has the same sort of directed zooming capability, though it uses the scroll wheel rather than the arrow keys, which makes it much more intuitive, especially since both the arrows and the mouse are designed for use with the right hand. But it’s cool to see the professionals have the same idea.

That said, the interface is better suited to being a desktop (a large scrollable desk as opposed to multiple virtual desktops) than a file management scheme. My ‘My Documents’ has, at last count, 35 thousand files sitting in 1600 folders. To put that into usable order in a zoomable environment, a size/location hierarchy would naturally emerge, placing titles in spots which, when zoomed in, would resolve into files and more titles and so forth, in the end mimicking traditional folders, essentially losing any usability advantage this environment offers and making it just eye candy.

Dave Huston wrote:

I don’t understand why people feel a need for a new kind of interface for computers. Simplification of the current style of interface is all that’s needed. Adding more logical organization is fine. But I think it’s going to be a very long time before a better interface can be imagined.

John Blake wrote:

www.relevare.com is a site that uses a similar navigation. In their case, though, the amount of content to organize is much less than on most individuals’ desktops.

I liked the Relevare implementation of the idea because it allows the user to see the site’s structure at a glance. As the amount of information grows, however, I’m not sure how well it would hold up.

usabilist wrote:

The whole interface should definitely not be built upon the zooming paradigm. On his site, Raskin even writes:

… a demonstration of some ways that zooming can be useful in an interface.

But zoomability can really be useful to some extent. A “zoom in” metaphor can be used as a way…
… to get more detailed info about an object
… to get “into” a document
… to naturally view a bigger size image of a product in an online-store
The same with “zoom out”.

But I suppose we need a dedicated control gesture to make zooming effective. One way is to extend the mouse by placing a zooming control (like the one on digital cameras/camcorders) in front of the scroll wheel.
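
One way to read the “zoom in for more detail” idiom usabilist describes is as level-of-detail rendering: the same object draws itself as a label, a summary, or its full contents depending on how close you are. A minimal sketch (thresholds and fields invented for illustration):

    # Minimal sketch of zoom-dependent level of detail: the same document renders
    # as a label, a summary, or its full body depending on the current zoom level.
    def render(doc, zoom):
        if zoom < 1.0:
            return doc["title"]                          # far out: just a label
        if zoom < 4.0:
            return doc["title"] + "\n" + doc["summary"]  # closer: title plus summary
        return doc["body"]                               # zoomed in: the document itself

    doc = {"title": "Q4 budget", "summary": "Totals and variances", "body": "Full text of the budget..."}
    for zoom in (0.5, 2.0, 8.0):
        print("--- zoom", zoom, "---")
        print(render(doc, zoom))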

Dave wrote:

I guess I just don’t get it.
Zoom is a metaphor? Metaphor usually implies some type of analogy to the analog world. Like a desktop. Desktops were everywhere before the computer was fashioned.

But on a desktop everything has the same relative size. Or should I say everything is in the same scale.

Now a magnifying glass is a metaphor for enlarging things that are tiny, but even then you don’t move past a covering element to go deeper, as “zoom” implies.

Also, this sort of information space navigation is not new. In the ’90s, David Small was using zooming relationally a lot more effectively than what Jef put here.

My main thing is that zooming in generally might be serendipitous, but it is not tied to a metaphor.

I am, however, not a complete nay-sayer, because I know that Jef is incredibly diligent about his research. I see he talks more about cog-sci and comp-sci research than about usability studies, at least in the limited documentation made available. I am curious whether a usability validation of some sort was done on this with applications in practice.

usabilist wrote:

Dave, you’re right, magnifying glass is a metaphor. But the process of magnification can be called “zooming”, too. From this point of view I consider “zooming” as a metaphor, simply not of a physical object, but of a real-world process.

Anyway, if we use the magnifying glass metaphor to look inside the document, we get a new interface idiom, and a rather great one, which can be learned once. We should not follow the constraints of the physical world; let’s take advantage of the digital one.

Anonymous wrote:

What we need instead of an empty infinite data plain (plane) is a memory palace: a space with form and structure that can be learned and easily navigated. However, you’re correct: if you’re creating a virtual space, why not try to solve some of the problems you get in real space?

Dave wrote:

Usabilist,
I guess I question the “learnability” aspect. My experience with novice users (who actually remain “novices” for decades) is that the more you create levels of abstraction from the physical world the harder it is for them to learn and adapt.

I’d be interested in people’s examples of far reaching technological abstraction that have been successfully adopted to an almost ubiquitous level.

usabilist wrote:

Dave,

I suppose you’re right, but I’m afraid we cannot avoid it.

I’d be interested in people’s examples of far reaching technological abstraction that have been successfully adopted to an almost ubiquitous level.

Mouse?
“Click” on something?

Actually, I did not quite understand what you mean by “far-reaching abstractions”.

Noah wrote:

Hmm. I still can’t find Waldo.

Tom Chi wrote:

Alright, I’ll grant that this is more of a magnifying glass metaphor. Combine that with the sense of space and geography, and you end up with an interface which works something like a map. Maps are pleasing and useful, but if you gave me a map of Bulgaria and told me to find some obscure pond, I’d be lost. The only hope is the index which will give me the coordinates to take my magnifying glass to.

Now, I’m assuming that the final version of this interface will have some sort of text-based indexing capability. And to be fair, if I were the one who created the geography (I, the creator of Bulgaria), then I might need the index a little bit less. Still, the geography of my apartment has been created by me, and even at this small scale (~1000 objects), things most definitely get lost.

Now give me an index for my apartment… that would be sweet.
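
The index Tom wishes for amounts to a lookup from names to coordinates, so that a text query can fly the viewport straight to the right spot in the geography. A minimal sketch (entries invented for illustration):

    # Minimal sketch of a text index over a data geography: each entry maps a
    # name to the (x, y, zoom) needed to point the 'magnifying glass' at it.
    index = {
        "budget spreadsheet": (14.0, -2.5, 6.0),
        "obscure pond, somewhere in Bulgaria": (4312.0, 988.0, 12.0),
    }

    def locate(query):
        """Return the (x, y, zoom) of the first entry whose name contains the query."""
        q = query.lower()
        for name, coords in index.items():
            if q in name.lower():
                return coords
        return None

    print(locate("pond"))  # -> (4312.0, 988.0, 12.0): jump the viewport there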

Dave wrote:

Actually, that is more like looking at a map of Bulgaria where, as you zoom, you get to places where you can read the history, poli-sci, and anthropology articles.

Admittedly it would be interesting to see demos that are more generalized to a real world context from entry to disparate tasks/functions.

Shadow of Herb Simon wrote:

I also don’t get it.

I remember using Flash about 6 years ago in grad school to create 3-D prototype UIs (almost identical to this demo) that took physical space as a model for the users’ browsing/searching behaviour. And I know there were many folks who had done similar work long before that because I had to research their work before producing my own prototypes. So what’s the news here?

This little demo is akin to someone having recreated nested folders and point-and-click and then claimed to be on to some new UI model.

Sorry, this is all very old news with (at least at my grad school) very well-known usability, cognitive and utility problems that, as long as we’re still using flat screens and physical input devices, reduce this model to little more than a tired gimmick from late-’90s UI design.

So have I missed something? I’m still scratching my head.

Bob Salmon wrote:

Whoa! My poor ancient under-spec etc. Windows machine crawled when attempting to show me the demo. I guess if I turned it into a Linux box with no X it would fly. Bah humbug! Seriously - are all the bells and whistles worth the CPU cycles and the memory? Are people appreciably more productive in this environment?

With the data plane etc, what happens if the user is blind? Could you use something like (background) music or sounds to give a sense of location, separation between different things etc? Has anyone done any research on this kind of thing? I imagine it’s easy for sighted people to make well-meaning speculation that’s completely bogus (such as mine above).

Sorry if this is a beginner’s question, but how customisable are data visualisation systems usually? For example, if I’d like to create a virtual world where all my status reports were by a waterfall, and all my working documents were in an orchard, would that be possible? One (poor) approximation to this might be a virtual desktop with different wallpaper per screen - is a closer approximation possible?

I agree with earlier posts about the limitations of current input/output devices, a 2D plane, etc. Until you get more immersion there’s going to be only so much you can do. But then, as one of my lecturers at college said in the early ’90s when VR was the Great Shining Hope of computing: is a secretary going to put on a headset just to type a letter?

X wrote:

I already have a natural data ‘geography’. However, instead of crazy arbitrary distances, every distance is fixed, so when I zoom one tier in, I’m inside a “folder” viewing things too small to see before. Actually, call me crazy, but isn’t the toggle to disable this behavior labelled “zoom”?

Is there something to be achieved in this beyond the actually quite scary idea of scrolling so far left in the nation of MyPicturesFolderatoria that I will end up in MyMoviesFolderistan? Another usability funspot will be pondering how to properly notify me that I’ve crossed the line. The current fixed tier-depth system has a neat way (although it lacks the laterality that I FEAR)… the containing ‘box’ shrinks from the previous ‘concept’ or ‘folder’ I was in, and the new containing box zooms.

Maybe I just use a mouse differently than your average user (ha ha ha), but I had a horrible time trying to achieve self-set navigational goals. I easily got lost, and progress towards goals was quickly mislaid and then reset because of my CRAZY MOUSING. Of course, if you think my mousing is crazy, remember someone is going to try and manipulate a single click interface holding down both mouse buttons and dragging everywhere.

Additionally, Windows (gasp!) seems to have a tough enough time rendering my simple fixed-depth desktop all the time (and I have a gaming PC). The response time for some tasks is simply unacceptable. A sprite-based, static system is poorly responsive for me… so a dynamically generated, rendered system is going to be… phenomenally responsive to John Q. Professional how, exactly? With the 10 GHz PCs that Moore is hiding in his closet?

Something new and something FLASHy does not a better UI paradigm make. Weren’t most of the early website horrors the result of people going, “Oh, look, <frames/animated GIFs/MIDIs>!” because they were the definition of new and flashy? Not to say there isn’t some better way to interface, but I think this interface is just deconstructing problems that have already been solved to come up with… well, the solutions that are already here.

X wrote:

My previous comment’s last paragraph had “Oh, look, <frames/animated GIFs/MIDIs>!” but without the correct ensymbolification, so the examples were apparently mistaken for HTML tags. A usability issue! Another is the preview button not properly representing the previewed text (in its fixed-width box versus the full-page box) in Firefox.

My end user expectation was broken. Now I’m afraid to press cancel in the fear it may do something like ‘accept but try and say takebacksies.’

Bob Salmon wrote:

I don’t think I made my point about blind users particularly well. I think it’s similar to what X said about being technology driven rather than user driven i.e. just because you can doesn’t mean that you should.

Just because increased computer horsepower and groovy software lets you create a new thing, is it an advance? Disabled users already have a hard enough time using the systems we inflict on them (and other users), so I get quite cross when new developments make it even worse.

Most (if not all) visitors to this site are in the IT high priesthood, and while we might have our religious wars about languages, tools or job titles, we’re still in the club of the included.

There are many types of people who are excluded. For some it’s by things we can’t control, like not having enough money to afford even the cheapest PC and slowest internet connection, and no access to public computers such as in libraries. We have only tenuous control over the situation these people are in via how we vote, or how we give to charity.

There are others excluded by language - not everyone speaks English well enough to use software designed by and for a UK/US population. We start to have some influence here - how easily translatable is your application? How tied is it to text or other cultural conventions such as red for danger?

The last group of the excluded I can think of (there may be many I’ve omitted) can get access to standard PCs, understand the (human) language the PC assumes, but still can’t use the *@($£ thing because they can’t see as well as the hardware and / or software designers assumed, have worse motor control than they assumed, worse memory, worse ability to plan or whatever.

You’re a computer geek and so can’t do the doctor thing and make people well, or the engineer thing and provide people with clean water, but you can at least make the world a better place by leaving your vanity projects on the shelf and making something useful. That doesn’t just mean useful to you, but to as many people as possible.

Sorry to rant, but HCI can get up its own backside sometimes.

Reed wrote:

This discussion about ZUIs is great, but we should recognize that it’s one tiny aspect of Raskin’s new interface.

The more important aspects are in how you interact with a text document using the keyboard. I think he’s got some good ideas, but I wonder if the same principles can be expanded beyond the keyboard.

Eitanko wrote:

I think the demo was moved here
http://www.raskincenter.org/main2/img/zoomdemo.swf

Tom Chi wrote:

Wow. A lot of people posted while I wasn’t watching. To respond to the shadow of Herb Simon — this is news not because it is a completely new UI idea… it is news because an interface guru has been given 2 million dollars to productize and popularize this with real people. (alternate example: in 1984, the WIMP metaphor on Mac was not new to PARC researchers — but bringing such an interface to the masses *is* significant)

To respond to some of Bob’s concerns, an interface like this seems easily internationalizable/localizable, but pretty much a non-starter for blind users. It is also not great for low vision users since knowing which data nubs to zoom on requires the ability to ‘recognize’ nubs.

To respond to X: yes, I also had some trouble with how the mouse worked on this interface, especially when zooming out. I also agree that folders are an abstracted kind of geography and that this more literal geography is not quite there yet. Still, despite the success of files and folders, I’m always ready to look at new approaches. The rise of search has been an interesting one that does not destroy hierarchies, but often makes them irrelevant as a navigational aid. Hopefully in the future we’ll keep pushing for new strategies to make it easy for the right data to be around at the right time.

Tom Chi wrote:

Oh… one more thing. I totally understand that this zooming bit is just one piece of a bigger interface picture in Raskin’s new approach. I just wrote about this one because there is a demo for people to try. So it is easier to get a feel for than a description of how things *might* work.

Shadow of Herb Simon wrote:

Tom said:

this is news not because it is a completely new UI idea… it is news because an interface guru has been given 2 million dollars to productize and popularize this with real people.

Fair enough. But this 3D spatial navigation model is fraught with many well-known inherent cognitive and usability problems, and there was nothing in the demo to even suggest (not even a teaser) how these problems will be overcome or even addressed.

So yes, it is nice that he got $2 million (a shoestring for new product development, BTW). However, from the information available (perhaps he’s keeping many details quiet), he is simply flogging an old, failed idea, and for some reason folks are reacting as if it’s a stroke of brilliance.

Honestly, it’s time the design community started reacting with more healthy skepticism and becoming more critical; it’s the only way to help the genuinely great ideas rise to the top, as well as for us to be taken more seriously as professionals.

Shadow of Herb Simon wrote:

I totally understand that this zooming bit is just one piece of a bigger interface picture in Raskin’s new approach

I am very anxious to see what the bigger idea is. I believe we are approaching the limits of WIMP and current information visualization models.

Alan Hogan wrote:

I tried the THE demo. That’s awesome. I can’t see it “taking over” soon, but it is a great idea IMO and I can’t wait to see the first real-world result. In fact, the only thing I didn’t really like about the demo is that when zooming out, the ‘vanishing point’ is right under the mouse. That means if you wanted to zoom out to see something on the far right, you would need to put your mouse to the left! That’s a minor quibble, but something that would need to be fixed before I’d want to use that type of interface.
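
The behavior Alan describes falls out of the usual zoom-about-a-point transform: if each zoom step is anchored at the cursor, the world point under the cursor stays fixed on screen, and everything else converges toward it when zooming out (or spreads away from it when zooming in). A one-dimensional sketch (names and numbers invented for illustration):

    # Minimal sketch of zooming anchored at the cursor. The view maps world
    # coordinates to screen coordinates as: screen = world * scale + offset.
    def zoom_about(offset, scale, anchor_screen, factor):
        """Rescale the view by `factor`, keeping the world point currently under
        `anchor_screen` at the same screen position (the 'vanishing point')."""
        new_scale = scale * factor
        new_offset = anchor_screen - (anchor_screen - offset) * factor
        return new_offset, new_scale

    offset, scale = 0.0, 1.0
    # zoom out by half, anchored at screen x = 100 (where the mouse sits)
    offset, scale = zoom_about(offset, scale, anchor_screen=100.0, factor=0.5)
    print(offset, scale)            # -> 50.0 0.5
    print(100.0 * scale + offset)   # the point under the mouse is still at 100.0
    print(300.0 * scale + offset)   # everything else slides toward the anchor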

Gabriel wrote:

I think that Flickr’s tags page is an implementation of the demo in the real world.




OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces - usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.