Human-computer interfaces are all about “bandwidth”, i.e. the rate at which users can communicate with their computers and the richness of the information they can convey.
The keyboard was a hugely important innovation because it upped the speed at which people could interact and convey thoughts and intentions. The mouse was another enormous gain in UI bandwidth, allowing direct manipulation of shapes on a 2D surface. Touch interfaces were a gain too, though a somewhat smaller one than the mouse. On our phones we now have sensors for sound, vision, position, orientation and movement, and these have also upped the UI bandwidth considerably.
But we always need more, much more. What will make new media such as VR work in a productive way is modelling the user’s body accurately and recognizing its movements, and therefore the user’s intentions. Super-fine-grained modelling of hands and gestures is a big one, and potentially a huge gain in UI bandwidth.
Via Emlyn O’Regan
Originally shared by Kevin Kelly
Extremely fine gesture control via micro radar chips. This is the input device after keyboards. https://www.youtube.com/watch?v=0QNiZfSsPc0
Bandwidth and fitness for purpose: for example, I die a little every time I discover that information I need is locked in a rambling 20-min, 100GB video instead of 100K of text and PNGs that I can scan in 30 seconds.
I’m referring primarily to outgoing bandwidth from mind to computer. Our modes of input going the other way are already fairly rich and varied.
In that direction, the old 1980s mouse vs keyboard-shortcut choice is still an interesting model. Using Ctrl-V (or Ctrl-Y in Emacs) to paste text is more economical both for bandwidth and for human motion, but requires advanced training; reaching for the mouse, running it up, opening the “Edit” menu, and clicking on “Paste” requires more bandwidth and more human effort—including a serious risk of repetitive-strain injury if repeated hundreds of times per day—but requires less training and skills retention. Each pattern is optimized for a different goal.
Similar tradeoffs will apply to future human->computer interactions: there will always need to be a powerful, low-bandwidth expert mode and a restricted, high-bandwidth n00b mode.
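To make the two modes concrete, here is a minimal AppKit sketch (the menu setup is illustrative, but NSMenuItem and its key-equivalent mechanism are the standard macOS API): one paste action, reachable through a discoverable menu path for the novice and through Cmd-V for the expert.

```swift
import AppKit

// One action, two access paths: a visible menu entry for the restricted,
// high-bandwidth "n00b mode", and a key equivalent for the low-bandwidth
// expert mode. Both paths invoke the same selector.
let editMenu = NSMenu(title: "Edit")

let pasteItem = NSMenuItem(
    title: "Paste",                       // discoverable: listed in the Edit menu
    action: #selector(NSText.paste(_:)),  // the shared action either way
    keyEquivalent: "v"                    // expert path: Cmd-V by default
)
editMenu.addItem(pasteItem)
```

The point of the design is that neither path is privileged: the menu teaches the shortcut (macOS even displays it next to the item), so users can migrate from the high-bandwidth mode to the low-bandwidth one at their own pace.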
The mind has plenty of capacity to learn actions which are non-obvious and not really discoverable without training or documentation.
Touch was an interesting innovation in UI because it introduced a range of gestures which I think of as the equivalent of keyboard shortcuts. Some are obvious (once seen) and therefore “intuitive”, pinch-to-zoom being the classic one, while others are completely arbitrary and can’t easily be stumbled across or learnt. These tend to vary across platforms, for example the two-, three- or four-finger swiping actions in iOS. Each has its own subtle variation in meaning, a bit like pressing various combinations of the Ctrl, Shift and Alt keys.
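For illustration, here is a minimal UIKit sketch (the view controller and handler names are mine; the gesture-recognizer APIs are the standard iOS ones): the “intuitive” pinch and an arbitrary three-finger swipe are registered the same way, each bound to its own handler, much as different modifier-key combinations are bound to different commands.

```swift
import UIKit

class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // The "intuitive" gesture: pinch to zoom.
        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        view.addGestureRecognizer(pinch)

        // An arbitrary, platform-specific gesture: a three-finger swipe left,
        // the touch equivalent of an undocumented keyboard shortcut.
        let swipe = UISwipeGestureRecognizer(target: self,
                                             action: #selector(handleThreeFingerSwipe(_:)))
        swipe.direction = .left
        swipe.numberOfTouchesRequired = 3
        view.addGestureRecognizer(swipe)
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        // gesture.scale grows and shrinks with the distance between fingers.
        view.transform = view.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1.0   // reset so each update applies an incremental scale
    }

    @objc func handleThreeFingerSwipe(_ gesture: UISwipeGestureRecognizer) {
        // Whatever the platform convention assigns to this combination.
        print("Three-finger swipe left")
    }
}
```

Nothing about the three-finger swipe is discoverable from the code’s point of view either: the mapping from finger count and direction to meaning is pure convention, which is exactly why it behaves like a keyboard shortcut.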