I don’t like coining new terminology when the old did just fine, but I think we should consider one particular change in terminology when talking about designing interfaces for complex systems. I have no idea what actually constitutes a complex system, or when a system turns into one, but once it does, we as interface designers need to draw a distinction between the activities of those who interact with it. Plenty of people have drawn the distinction between users and administrators or operators. However, I believe we need to draw a further distinction between operator tasks, separating those who observe from those who repair: System Observers and System Administrators.
Interface designers need to think about ways of empowering users to keep a watchful eye on the technology, since their natural senses aren’t of any use. This sensor-disconnect becomes profound for users who must monitor truly complex systems. By thinking of them not as administrators who must diagnose and repair a problem, but rather as observers who must be able to perceive a problem, or an opportunity for that matter, we can focus our interface design efforts. In other words, I believe we should stop tying the activity of identifying a problem to the activity of diagnosing it.
For example, parents know that they can let the children play, and even roughhouse, in another room while they work on something else. They can hear the children playing, and even fighting, but don’t necessarily drop what they’re doing to make sure everything’s all right. But depending on how the children were already acting that day, the parent’s mood, and the unique character of the sounds coming from the other room (yelling, shrieking, laughing, thumps and thuds, or even a crash), the parent will first go check on the kids before running to get bandaids or trying to break them up. There’s a distinction in those activities between perceiving the situation, diagnosing it, and ultimately mitigating it.
However, for complex systems, we do not enable users to do the same. The tools we provide are designed to inform them of every little parameter we can create sensors for. And then we compound the problem by linking those sensors, via thresholds, to alarms that force them to disrupt their work. As a result, we are forced to rely on simple perceptual design methods to improve dashboards, such as highlighting and prominence techniques, or we have to rely on automation to rapidly analyze all those sensors to recognize problems – essentially moving the observation task away from the human to the machine.
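To make the pattern concrete, here’s a minimal sketch (my own illustration, with made-up sensor names and limits) of the threshold-to-alarm design described above, where every reading is checked against a fixed limit and any breach becomes an interrupting alert:

```python
# Hypothetical sensor readings and per-sensor thresholds.
readings = {"cpu_temp": 82, "disk_io": 450, "queue_depth": 12}
thresholds = {"cpu_temp": 80, "disk_io": 500, "queue_depth": 10}

def alarms(readings, thresholds):
    """Return the names of sensors whose readings breach their thresholds."""
    return [name for name, value in readings.items()
            if value > thresholds[name]]

for sensor in alarms(readings, thresholds):
    # Each breach interrupts the operator -- the disruption that forces
    # perception and diagnosis into a single, alarm-driven activity.
    print(f"ALARM: {sensor} exceeded threshold")
```

Notice that nothing in this design lets a user passively sense that the system sounds “off”; the only signal it can produce is an interruption.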
Dashboards, reports, and alarms are all tools that enable an administrator to diagnose a problem, or know that a particular problem demands their attention. But that’s not what they need – they first need a better way to perceive the system itself. I believe that if we, as interface designers, separate the tools that we create into those that enable a user to observe and those that enable them to administrate and diagnose, we can ultimately create a stronger connection between the user and the complex system.
The Bugatti Veyron Grand Sport might just be the most beautiful car in the world, and almost certainly is the most technologically advanced production car in the world, but its temporary roof, used when it starts to rain unexpectedly, is what impresses me the most. Just take a look at it. The roof quite literally unfolds like an umbrella! I suppose it’s probably too big to be used as a pedestrian umbrella, but what a remarkable and elegant solution.
Update: there’s a video of it (turn your speakers down first – includes loud music for some reason). The roof is visible at about 1:00 in the video.
One of the great Internet-related pastimes is making out-of-the-blue predictions about what some upcoming product will be. I’m guilty of this armchair fortune-telling too. Engadget’s reporting that HTC is holding a press event on June 5th, so I’m gonna take a stab at what the device is:
I’m guessing it’ll be a hockey-puck-shaped device that uses an accelerometer to detect what direction it’s facing. I bet it’ll be something that you spin to select menu items.
Why do I think that’s the case? The invitation posted in the article certainly suggests that. But really, the better question is, why has it taken so long for acceleration-based handheld navigation to come about?
It’s not because the technology was lacking – digital accelerometers have been around for at least ten years, and I remember a UIST paper by Jun Rekimoto from 1996 that covered just that sort of interaction.
Could it be cost? Not likely, I think. At least seven or eight years ago, I remember coming across Till Harbaum’s guide to adding a tilt sensor to a PalmPilot. I built one, and it cost me just a couple of bucks. I used it to experiment with Rekimoto’s ideas back then.
So, why haven’t we seen more devices that exploit accelerometers for navigation? I wish I knew. It’s always seemed to me that there was an untapped wealth of interactions waiting for a physical object that knew which direction it was facing. Pairing that knowledge with a reconfigurable display that rendered menus and information in the correct orientation, I sketched up many interfaces that I thought were exciting and natural. But in the end, what always turned me off of designing portable devices around this sort of interaction is that I could never come up with a configuration that didn’t demand the user’s full attention.

I’ve always believed strongly that the measure of any portable device’s interface is how well a user can operate it while only glancing at it in brief intervals, or not at all. It’s been my biggest complaint about the iPod’s interface; it’s why I’m not optimistic about the iPhone, and why I like the Treo so much. I can’t wait to see what HTC has up its sleeve, and I’m looking forward to seeing more acceleration-based interfaces on portable devices. I just hope no one tries to use them while driving.
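For what it’s worth, the spin-to-select idea I’m guessing at could work something like this sketch (entirely my own speculation, with a made-up menu; a real device would derive the rotation angle from its accelerometer or gyroscope): accumulate rotation around the device’s vertical axis and map the angle onto a ring of menu items.

```python
# Hypothetical ring menu; each item occupies an equal slice of the circle.
MENU = ["Calls", "Messages", "Music", "Settings"]

def select_item(rotation_deg, menu=MENU):
    """Map an accumulated rotation angle (degrees) to a menu item.

    Spinning the device moves the selection around the ring; negative
    angles spin the other way, wrapping around the circle.
    """
    slice_size = 360 / len(menu)
    index = int((rotation_deg % 360) // slice_size)
    return menu[index]
```

With four items, each quarter-turn advances the selection by one, so `select_item(100)` lands on the second item and a small backward spin wraps to the last.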