Human–computer interfaces are all about “bandwidth”, i.e. the rate at which users can communicate with their computers and the richness of the information they can convey.
The keyboard was a hugely important innovation because it increased the speed at which people could interact and convey thoughts and intentions. The mouse was another big gain in UI bandwidth, allowing direct manipulation of shapes on a 2D surface. Touch interfaces helped too, though somewhat less than the mouse. Our phones now carry sensors for sound, vision, position, orientation, and movement, and these have raised UI bandwidth considerably as well.
But we always need more, much more. What will make new media such as VR work productively is modelling the user’s body accurately and recognizing its movements, and therefore the user’s intentions. Super-fine-grained modelling of hands and gestures is a key part of that, and potentially a huge gain in UI bandwidth.
Via Emlyn O’Regan
Originally shared by Kevin Kelly
Extremely fine gesture control via micro radar chips. This is the input device after keyboards. https://www.youtube.com/watch?v=0QNiZfSsPc0
