If you’ve been keeping up with technology, you may have heard the term Natural User Interface, or NUI. Its use in marketing material has somewhat blurred its definition, but it’s probably best described by its objective: to let humans interact with computers using instinctive, or natural, actions. The success of any NUI can be measured by how quickly a user moves from novice to expert; the best designs require little to no learning before the user can interact with the computer effectively. Pinching to zoom and panning with your fingers on a touch screen are good examples.
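The arithmetic behind a gesture like pinch-to-zoom is refreshingly simple, which is part of why it feels so natural. Here's a minimal sketch (the function name and structure are my own illustration, not any particular touch framework's API): the zoom factor is just the ratio of how far apart the two fingers are now to how far apart they were when the gesture began.

```python
import math

def pinch_zoom_scale(start_touches, current_touches):
    """Return the zoom factor implied by a two-finger pinch gesture.

    Each argument is a pair of (x, y) touch points. The factor is the
    ratio of the current finger spread to the starting spread, so
    spreading the fingers apart yields a factor > 1 (zoom in) and
    pinching them together yields a factor < 1 (zoom out).
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    start = spread(start_touches)
    if start == 0:
        return 1.0  # fingers started at the same point; no meaningful scale
    return spread(current_touches) / start

# Fingers move from 100 px apart to 200 px apart: content zooms in 2x.
print(pinch_zoom_scale([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # 2.0
```

A real touch pipeline adds smoothing, anchoring the zoom around the gesture's midpoint, and handling fingers lifting mid-gesture, but the core mapping from "natural" motion to application input is this one ratio.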
Now that the stuffy definitions are out of the way, we can talk about what NUI really means to us. Since the early days of computing, humans have been abstracted away from the computer by peripherals like keypunch machines, keyboards, and mice. The use of these devices cannot be considered “natural” to a human being. After all, Mavis Beacon has made a killing teaching people how to type. A natural user interface aims to eliminate this layer of abstraction. Doing this well is not simple: it requires both hardware and software that let the computer constantly monitor its interaction environment and recognize the input its human user is giving it. Effectively, we need to teach the computer “how to type.”
In the span of just a couple of years, several devices have emerged to take on the NUI challenge. A stand-out in the NUI world has been the Microsoft Kinect. Originally relegated to sports and exercise games on the Microsoft Xbox 360 at its release in 2010, it was soon opened up by the open source community, and later by Microsoft with the official release of the Kinect for Windows SDK. Now the Kinect is being used in applications spanning many industries, from medicine and science to grocery shopping.
I first became involved with the Kinect while trying to come up with an idea to present at an Accusoft Innovation Challenge in late 2011. I had been passively following the Kinect “hacking” stories, and the Kinect for Windows SDK beta had just been released, so I decided to jump in and give it a try. My first couple of attempts were visually exciting but technically boring. Eventually, though, I was able to create an application that combined a natural user interface with the imaging power of ImageGear for .NET. Take a look at the ImageGear Demo Using Microsoft Kinect.
The NUI world is expanding quickly, with each new player taking a slightly different perspective. The soon-to-be-released Leap Motion promises to be a close-range, high-resolution device that could eventually replace the keyboard and mouse, tracking hands and fingers with millimeter precision. I see it almost as a complement to the Kinect; both could be used in the same application. The possibilities are exciting!
What new technologies in the NUI world are you looking forward to in 2013? Share in the comments section below.