Note: I still feel that this post is as relevant as ever.
Suppose you're a surgeon, soon to conduct a delicate operation on a heart patient. You hold tightly to your rusty and trusty chain saw as you climb a rope into an operating room located on the open roof of the hospital. It's windy and there are a lot of birds, but that's the way it's always been, so you're sort of used to it already - why change it?
Or, suppose you're a painter, soon to paint a house. You grab some blueberries for the colour and a tiny sponge with which you will slowly wipe the house blue. It's a bit awkward, not to mention terribly slow, and the quality is bad, but that's the way it's always been, so why change it? "But that's absurd! Nobody does things in such a difficult way!", you say. Well, suppose you're a software developer...
You grab your keyboard, which is a direct, only slightly evolved electronic descendant of the mechanical typewriters of around the year 1900. The key layout is deliberately awkward and slow for typing: it was originally designed to keep the mechanical parts of early typewriters from jamming when keys were struck in quick succession. This was achieved by spreading the most often used keys far apart from each other, creating a non-optimal key layout and slowing down typing speed as a side effect.
You sit next to a screen on which the computer displays an interface based on a largely unchanged windowing paradigm from the 1970s. It may look a bit neater and cleaner, but it is the same thing. You have a powerful machine, capable of creating virtual worlds in more or less photorealistic 3D, with surround and even three-dimensional sound, yet you only utilize clumsy 2D interfaces.
Instead of your fingers, your voice or your whole desk, you interact with the computer using a pointing device which resembles a rodent. You don't touch the screen, you touch the rodent-like apparatus.
When you create a program, you write it as if you were writing a novel on a sheet of paper, using a mechanical typewriter. You command the computer by laying out explicit instructions in linear, time-increasing order, in essentially one dimension (like one tape of a Turing machine), even though the presentation is two-dimensional (like a page in a word processor).
You do not utilize even a fraction of the visualization and usability capabilities the computer could offer you. Of course things could be even worse. You could still be entering data into the computer using paper cards with holes punched into them.
All this is partly due to the burden of the past influencing the thinking of today: we're so used to the way things are that we don't stop to think how things could be if they were re-invented, if everything were to become an upgraded version of what it is now. This problem is especially visible in the software industry, where one has to keep moving continuously, like a shark that must swim all the time in order to stay alive. There is so much motion and overall hoopla, but few occasions to just slow down, sit down, and think about things without hurry. These idle moments are the ones that create innovations, but unfortunately there are very few of them available.
I say it's time to stop for a moment and think about how to shed that burden of times gone by, and how to create something new: something which heeds the great pioneering work of the past but does not just blindly repeat it for tradition's sake, something which is not merely a living piece of history in a new, shinier package.
Some researchers, especially in the ubiquitous computing field, seem to feel the same frustration with having to use the "wrong tools for the right job". As a result, they have approached the aforementioned problems with fresh ideas, especially for human-computer interaction. Here are some neat and stimulating examples of such research, to show you that there is indeed hope for better tools!