Since I’m busy relaxing during Thanksgiving, I’ve got time for just one anecdote about Notes. When I worked with KC I did a decent amount of consulting, so being on the road I would oftentimes use the Notes Web Client. Back then it tried to load all your mail at once and render it on one LONG page. Well, if you have a couple thousand mails, we’re talking maybe 20,000–50,000 table cells to render. Only Internet Explorer could actually do this, and it took about 20 minutes.
I couldn’t help but wonder whether anyone at Notes had ever tried this on a realistic data set. Even with a couple hundred mails it would be super sluggish. I mean, we’re not talking rocket science: just start up the web client and point it at your real email account.
Anyway, after some time they did eventually fix this - making it paginated. The interface was still suboptimal, but I remember being overjoyed that I could check my mail in less than 20 minutes! Score one for customer delight.
We’ve had similar experiences with our (usually back-end engine) code: nasty behaviour that only becomes obvious when the input data is large. Things like non-linear (e.g. quadratic) processing times caused by inserting N things into a linear list rather than into a tree. Simple computer science, but there’s a temptation to think “Oh, this isn’t worth doing something tricky about, as there’ll never be much data in it.” Then customer X or consultant Y works around some problem by creativity / abusing the software (depending on your point of view), and hey presto! Sloooooowww.
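The linear-list-versus-tree point can be shown with a tiny sketch. This is a made-up illustration, not anyone’s actual engine code; a Python set stands in for the tree here (hash lookup rather than O(log n) tree lookup, but the same cure for the quadratic blow-up):

```python
def dedup_list(items):
    # Membership test scans the whole list: O(n) per item, O(n^2) overall.
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedup_set(items):
    # Hash lookup per item (a balanced tree would be O(log n)): roughly O(n) overall.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Same answer either way, but the gap in running time grows
# quadratically with the input size.
data = list(range(2000)) * 2
assert dedup_list(data) == dedup_set(data)
```

Both versions look fine on a ten-item test case; only a large, realistic data set exposes the difference.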
(Of course, this is all fixed now.)
It raises another point, which is code that changes its behaviour as the volume of data increases. If you’ve got tree-structured data to display in a GUI and the tree is small enough (where small means “the value of parameter P is less than N”), it’s helpful to display the whole tree, as that’s what the user expects and it won’t take too long.
If the tree then gets too big (P >= N), you’ll never get anything done as the data retrieval / screen update takes forever. Then you might do something like allow a query for a node, and then display the path from the root down to the node. Hardly any big picture stuff, but at least you can use it.
Trouble is: what is P and what is N? Are they constant, or should they be customisable? Also, if the user doesn’t know about this adaptability and adds the one node that breaks the camel’s back, suddenly everything changes to a new display format, new ways of interacting etc. Should you explain what’s going on, and if so, how?
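One way to read the P/N idea is as a threshold switch in the rendering code. A minimal sketch, with made-up names, where P is the node count and N is a configurable threshold (one possible answer to “should they be customisable?”), and the large-tree fallback shows the root-to-node path described above:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

def count_nodes(root):
    """P: total number of nodes in the tree."""
    return 1 + sum(count_nodes(c) for c in root.children)

def find_path(root, name):
    """Root-to-node path for the fallback 'query a node' display."""
    if root.name == name:
        return [root.name]
    for c in root.children:
        sub = find_path(c, name)
        if sub:
            return [root.name] + sub
    return None

def render(root, threshold=1000):
    """Full tree when P < N; otherwise fall back to a path-per-query view."""
    p = count_nodes(root)
    if p < threshold:
        return ("full_tree", p)   # draw everything, as the user expects
    return ("path_view", p)       # show only root-to-node paths on demand

root = Node("r", [Node("a"), Node("b", [Node("c")])])
render(root)                 # full tree: only 4 nodes
render(root, threshold=2)    # same tree, but now over the threshold
```

Note the cliff: adding the one node that takes P from N-1 to N flips the whole interface into the other mode, which is exactly the surprise the user would hit.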
It’s true. All the processor speed in the world will not help you if you have coded up a lousy algorithm. What has befuddled me is that in daily usage computers continue to be so slow. In microcontroller-based hardware design, you run a 4 MHz processor and everything happens so fast that you need to slow it down 100x for users to interact with the system. Of course, you also program in assembly or C, and the types of problems you solve are more targeted.
Still, modern processors are 500x ‘faster’ — you figure with all that processing headroom that computers at least wouldn’t feel ’slow’. I mean I implemented a complex motion control system on that micro and slowed it 100x. Is modern software really 50,000x more complicated than my control system code? And if it is, why does it need to be?
OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.