Note that the Oculus Rift is still for developers only and won't be sold to consumers until there are applications available and the system itself is further refined.
Augmented reality (where virtual objects are superimposed on a view of the real-world environment) and Oculus Rift-style virtual reality (where the user is immersed in an artificial but lifelike world and can interact with hands and feet) will be used for gaming, socializing and professional applications. One example of the latter is medicine, where physicians could use the technology to remotely control robots performing surgery on patients in other locations.
Extremely accurate motion control like what Leap Motion offers isn't just a winning application for in-the-air gestures; for virtual and augmented reality, it's a necessary and inevitable one.
I expect Facebook to acquire Leap Motion and permanently build it into the Oculus Rift goggles.
Even if that doesn't happen, in-the-air gestures will go mainstream as soon as augmented and virtual reality go mainstream.
But that's not the only natural fit for in-the-air gesture technology.
The biggest technological change of the next five years will be the rise of ubiquitous virtual assistants: services like Siri, Google Now and Cortana will become the primary user interface for interacting with our apps, the Internet and one another.
Providers of those systems will engage in an arms race to improve their services. Two key areas of differentiation will be detecting your mood as you use the assistant and automating communication. Let's look at those one at a time.
All three major virtual assistants are self-learning to some degree. They pick up facts about you and your life here and there, scanning your calendar, email and contacts.
This getting-to-know-you capability will improve: The assistants will try to learn more about you by offering you things and seeing whether you like them. Integrated in-the-air gesture technology will help them do that.
The assistants will assess your reactions to things through a variety of inputs, including your tone of voice, the actual words you say and your facial expressions. And with the help of in-the-air gesture technology, they'll even read your hand gestures and body language.
In other words, they'll perceive the nature of your reactions to things just like people do.
If they detect that something they do delights or frustrates you, they'll adjust what they do in the future. Today's in-the-air gesture technology will play an integral role in how virtual assistants "read" you.
In-the-air gesture technology will also be deployed to help people communicate with one another.
You'll notice a trend in messaging apps: the input required of the user keeps getting simpler. Some messaging apps accept only emoji (small pictographs that replace or abbreviate language), or even just the word "yo."