Did you know the first “brain-tweet” was sent out this year? How about that we may someday be customizing windshields with widgets? In the not-too-distant future, we may be interfacing with computers in exciting and innovative new ways.

In the grand scheme of history, it wasn’t long ago that the first telephone conversation took place. Relatively speaking, that makes the personal computer an invention of yesteryear, and social networking only a blink of an eye later. Just imagine what’s coming in the near future…

The future of how we interact with computers is exciting to say the least. What once seemed like nonsense outside of Hollywood and science fiction is now starting to find its way into reality, and some of the technology is a bit overwhelming. Have a taste of what the future of interface design has to offer:

Heads Up Displays

Although Heads Up Displays (or HUDs) were originally developed for military aviation so that pilots could keep their heads up, HUDs have found their place in many more applications. Today they can be found in many cars and in a wide variety of experimental scenarios.

Some of today’s cars already offer HUDs that display information such as speed or RPM directly onto the windshield. There are even helmet-mounted Heads Up Displays available for motorcyclists now. So far, though, we’ve only dabbled in the field of vehicular HUDs.

A patent from Microsoft reveals that the company may look into creating windshield HUDs for cars that display all sorts of information, from temperature to email. Maybe someday we’ll even have windshields with enhanced night vision, or even customizable widgets.

There are many new eyewear HUD products on the horizon including a pair of specs being developed by Brother, and eye-gesture glasses developed by German researchers at the Fraunhofer Institute for Photonic Microsystems.

Applications for these devices include navigating with augmented reality software, assisting engineers and doctors, or even something as simple as watching a movie or browsing the internet… only you get to do it hands free.

It appears that many of the newer eyewear HUD products may start emerging as soon as 2010, but the exact specifications and pricing are a bit blurry. Until then, if you’re a DIY kind of person you might be able to hack together a wearable computer with a heads-up display of your own like this guy.

Gesture-based Interfaces

Gestural Interfaces allow computers to recognize natural human idiosyncrasies and actions. For example, there are quite a few gesture-based systems that decipher emotions in human faces or the “hidden” language of hand motions.

Oftentimes, gestures act as a more seamless way to communicate with machines. For example, the iPhone Bump application allows two users to exchange contact information by literally “bumping” their phones into each other. Such an action could be compared to bumping into someone, or swapping business cards, and feels more natural to the end user.
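At its simplest, detecting a “bump” comes down to spotting a sudden spike in the phone’s accelerometer readings. Here’s a minimal sketch of that idea in Python; the threshold and sample format are illustrative assumptions, not the actual Bump implementation (which also matched bump events between phones over the network):

```python
import math

def is_bump(samples, threshold=2.5):
    """Return True if any accelerometer reading spikes past the threshold.

    samples: list of (ax, ay, az) readings in g-units.
    At rest, gravity keeps the magnitude near 1 g; a sharp physical
    bump briefly pushes it well above that.
    """
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold:
            return True
    return False

quiet = [(0.0, 0.0, 1.0)] * 5        # phone sitting still: ~1 g from gravity
jolt = quiet + [(2.4, 0.3, 1.2)]     # sharp spike when the phones collide
print(is_bump(quiet), is_bump(jolt))  # False True
```

A real implementation would also timestamp the spike so a server can pair up two phones that bumped at the same moment, which is what makes the contact exchange feel like magic.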

Likewise, the Palm Pre has a “gesture pad” that recognizes basic thumb swipe patterns: swipe from right-to-left to go back, throw an application off the screen to exit, or slowly drag up to bring up a global navigation menu.
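Under the hood, recognizing a basic swipe like these is mostly a matter of comparing where a touch starts and ends. The sketch below shows the general idea; the gesture names and distance threshold are my own illustrative assumptions, not Palm’s actual gesture-pad code:

```python
def classify_swipe(x0, y0, x1, y1, min_distance=30):
    """Map a touch's start and end coordinates to a gesture name.

    Touches shorter than min_distance pixels count as taps; otherwise
    the dominant axis of movement decides the gesture.
    """
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "tap"                        # too short to be a swipe
    if abs(dx) >= abs(dy):                  # mostly horizontal movement
        return "back" if dx < 0 else "forward"
    return "menu" if dy < 0 else "dismiss"  # mostly vertical movement

print(classify_swipe(200, 100, 40, 105))    # right-to-left swipe -> "back"
print(classify_swipe(150, 300, 155, 120))   # upward drag -> "menu"
```

Real gesture recognizers also look at velocity and timing (a slow drag and a quick flick can mean different things), but the start-versus-end comparison is the heart of it.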

You may already be aware that there are tablets that can learn your unique handwriting patterns and transcribe text written with a pen into plain text for use in computer documents. What may blow your mind though is that a group of scientists have a working model of a new system that does this without the pen. That’s right, you scribble your thoughts into thin air, and a computer transcribes it into editable text.

Whatever the application, gesture interfaces that recognize human body language instead of archaic data entry are here to stay. They’re intuitive, user friendly, and it may even be appropriate to call them “fun”. Have you seen the latest iPods? Just shake them to shuffle your music library!

For more information, you can pick up a copy of Designing Gestural Interfaces: Touchscreens and Interactive Devices, by Dan Saffer.

Spatial Motion Interfaces

There have been a couple of very promising developments coming from the entertainment industry for spatial motion interfaces: interfaces that translate movement captured in a three-dimensional space into inputs on a device. Almost everyone knows about the Nintendo Wii’s motion controllers. Sony and Microsoft are also hopping on board, introducing their own technologies in the coming years.

The PlayStation Motion Controller is Sony’s response to market demand for a motion controller, one-upping the Wii’s motion controller by tracking distance on top of motion and rotation. Perhaps even more exciting in the field of spatial motion interfaces is Microsoft’s Project Natal for the Xbox, which uses no controller whatsoever, instead tracking the human body itself as the means of control.

Outside of the gaming industry, Toshiba has been developing their own hardware that appears to be taken straight out of Minority Report. They hope that someday their technology will become more available in the mainstream markets.

Augmented Reality

GPS systems, though useful, have begun to lose their luster as they find their way into more devices. What if instead of showing an overhead map of the area with an overlaying route, your GPS revealed directions directly on a live video feed of your current location?

That would be cool, huh?

Such is one of many potential applications of augmented reality systems: live views of real-world environments combined with computer-generated imagery. It’s not just your imagination. In fact, some devices, including a hefty number of smartphones, already play host to AR software (maybe you’ve heard of the Wikitude Travel Guide).

Augmented reality isn’t limited to navigation of course. There are already applications like Yelp for the iPhone, which streams user reviews of restaurants over the camera feed; Nokia’s Point and Find, which lets users find relevant information about objects simply by pointing their phone’s camera at them; and many other practical ideas that may become a reality in the near future.
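The core trick behind AR browsers like these is deciding where on screen to draw a point of interest, given the phone’s compass heading and the bearing from the user to that point. Here’s a hedged sketch of just the horizontal placement; the field-of-view and screen width are illustrative assumptions, not any particular app’s values:

```python
def poi_screen_x(heading_deg, bearing_deg, screen_w=480, fov_deg=60):
    """Return the POI's horizontal pixel position, or None if off-screen.

    heading_deg: compass direction the camera is pointing.
    bearing_deg: compass direction from the user to the point of interest.
    """
    # signed angle from the camera's center line to the POI, in (-180, 180]
    offset = (bearing_deg - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None                      # outside the camera's field of view
    # map [-fov/2, +fov/2] linearly onto [0, screen_w]
    return round((offset / fov_deg + 0.5) * screen_w)

print(poi_screen_x(90, 90))    # POI dead ahead -> center of a 480px screen
print(poi_screen_x(90, 105))   # 15 degrees to the right of center
print(poi_screen_x(0, 180))    # behind you -> None
```

A full AR overlay does the same thing vertically using the phone’s tilt sensor, and scales the label by GPS distance so nearby restaurants look closer than far-off ones.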

Other Sensory-Based Interfaces

Telepathy may be the stuff of science fiction, but with the use of new neural computer interfaces, there may come a time when sending thoughts becomes common practice. It was actually earlier this year that the first tweet was sent via brain from the University of Wisconsin’s Neural Interface Lab.

Another company, BrainGate, has developed a similar technology that has allowed paralyzed participants to check email, or even play a game of Pong, using only their minds.

The technology works by implanting a small microchip in the user’s brain, which analyzes pulses as inputs for the device being used. Of course, the technology is still in its infancy, allowing the average user to write at approximately 10 characters per minute, but the applications for such a technology are limitless. Disabled users who previously have had little or no access to email or the internet can use this technology to communicate like never before.

It is hoped that someday this technology will go beyond the trivial game of Pong and even help those who are paralyzed by creating a connection between the brain and muscles where a spinal cord injury otherwise prohibits communication. Such a connection may allow paralyzed users to someday move certain muscles again, and perhaps even walk.

Vocal interfaces aren’t exactly new, but we keep finding new applications for them. From cell phones that recognize basic commands and names, to video games that respond to speech (such as “Tom Clancy’s EndWar”, which can be controlled entirely by voice commands), we’ve seen some innovative applications thus far.

MIT recently developed a wheelchair with a voice interface that not only responds to speech, but also saves detailed maps in memory and can take the user to their desired location via a simple voice command. Another relatively new application of voice interfaces is Google Mobile’s “Search by Voice” feature.

Surfaces Become Smart

Last but not least, interface designers are tapping into something almost as ubiquitous as air itself: surfaces.

If you want to see a truly inspiring look at what the future may hold, you’ve gotta take a minute to watch Microsoft’s vision of the future. If it doesn’t make you want to live in the future, nothing will.

Okay, so maybe we’re a ways off from this, but there are definitely a few conceptual ideas worth getting excited over. For one, CRISTAL is a smart surface that takes the form of a common table. What’s not so common, however, is that this table can control many of the electronic devices in your room, such as TVs, sound systems, lights, radios, and even DVD players.

There’s also a group of MIT students who have developed a prototype system that could potentially turn any surface into a smart surface using a webcam and projector. Pick up a newspaper, and watch a video of the headline news directly on the paper. Need to dial a friend? Hold out your hand to let a number pad appear before your eyes. It’s a concept of course, but definitely one I could get behind.