Augmented Reality as documentation and the "context" button

I've been a little skeptical of many of the augmented reality apps I've seen, feeling they were mostly gimmicks and not actually useful.

I'm impressed by this new one from Audi, where you point your phone (iPhone only, unfortunately) at a feature on your car and get documentation on it. It's an interesting answer to car user manuals as thick as the glove compartment, and to the complex UIs they describe.

Like so many apps, however, this one will suffer the general problem of how long it takes to fumble for your phone, unlock it, invoke an app, and then let the app do its magic. Of course, fumbling for the manual and looking up a button in the index takes time too.

I've advocated for a while that phones become more aware of their location, not just in the GPS sense, but in the sense of "I'm in my car," and that they know which apps to make very easy to access, even streamlining their use. This can include putting these apps right on the lock screen -- there's no reason to need to unlock the phone to use an app like this one. In fact, all the apps you use frequently in your car that don't reveal personal info should be on the lock screen when you get near the car, and some others just behind it. The device can know it is in the car via the car's Bluetooth. (That Bluetooth can even tell you that you're in another car of a different make, if you have a database mapping MAC addresses to car models.)
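To sketch how that detection could work: the first three octets of a Bluetooth MAC address identify the vendor, so a phone could match the head unit it just connected to against a prefix database. This is only an illustration -- the table entries and function names below are hypothetical, and a real database would have to be compiled from the IEEE registry plus crowdsourced data:

```python
# Hypothetical sketch: map the Bluetooth MAC of a car's head unit to a
# make/model. OUI_TO_CAR is illustrative; these prefixes are not real assignments.
OUI_TO_CAR = {
    "00:1D:86": ("Audi", "MMI head unit"),
    "A0:56:B2": ("Toyota", "Entune head unit"),
}

def identify_car(connected_mac: str):
    """Return (make, model) for a connected Bluetooth device, if known."""
    oui = connected_mac.upper()[:8]   # the first three octets name the vendor
    return OUI_TO_CAR.get(oui)

def on_bluetooth_connect(mac: str):
    car = identify_car(mac)
    if car:
        make, model = car
        # Promote car apps (nav, manual, music) to the lock screen.
        print(f"In a {make} {model}: promoting car apps to the lock screen")
```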

Bluetooth transmitters are so cheap, and with Bluetooth Low Energy they can run for a year on a watch battery, that one of the more compelling "Internet of Things" applications -- a term that is also often a gimmick -- is to scatter these devices around the world to give our phones this accurate sense of place.
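Here's a minimal sketch, in the same spirit, of how a phone could turn beacon sightings into a sense of place. The registry and UUIDs below are illustrative; in practice you'd key off iBeacon or Eddystone identifiers and a shared database:

```python
# Sketch: infer "where am I?" from BLE advertisements heard nearby.
BEACON_PLACES = {
    "f7826da6-4fa2-4e98-8024-bc5b71e0893e": "Joe's Cafe, front door",
    "b9407f30-f5f8-466e-aff9-25556b57fe6d": "Meeting room 4B",
}

def infer_place(sightings):
    """sightings: list of (beacon_uuid, rssi_dbm) pairs.
    The strongest known beacon wins; higher (less negative) RSSI ~ closer."""
    known = [(rssi, BEACON_PLACES[uuid])
             for uuid, rssi in sightings if uuid in BEACON_PLACES]
    if not known:
        return None
    _, place = max(known)
    return place

print(infer_place([("b9407f30-f5f8-466e-aff9-25556b57fe6d", -48),
                   ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", -81)]))
# -> Meeting room 4B
```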

Some of this philosophy is expressed in Google Now, a product that goes the right way on many of these issues. Indeed, the Google Now cards are one of the more useful aspects of Glass, which is otherwise inherently limited in its user interface, making it harder to ask Glass for things than it is to ask a phone or desktop.

The car app has some wrinkles, of course. Since you may not have an iPhone (or may not have your phone with you even if you own one), you still need the thick manual, though perhaps it can live in the trunk. And I will wager that some situations, like odd lighting, will make it slower than it looks in the video.

By and large, pointing your phone at QR codes to learn more has not caught on very well, in part, again, because it takes time to get most phones to the point where they are scanning the code. Gesture interfaces can help there, but you can only remember and parse a limited number of gestures, and many applications call out to be the special one. Still, a dedicated shake could mean "Look around in every way you can to figure out if there is something in this location, time, or camera view that I might want you to process." Constant looking eats batteries, which is why you need such a shake.
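As a sketch of what that trigger might look like (all the thresholds here are assumptions to be tuned): detect a burst of strong jolts from the accelerometer, and only then turn on the expensive sensing for a moment:

```python
import math

GRAVITY = 9.81          # m/s^2
SHAKE_THRESHOLD = 2.5   # in multiples of gravity; a tuning assumption
SHAKES_REQUIRED = 3     # distinct jolts needed within the window
WINDOW_S = 1.0          # seconds

def is_context_shake(samples):
    """samples: time-ordered (t_seconds, ax, ay, az) accelerometer readings.
    True if enough strong jolts land inside WINDOW_S -- the cue to briefly
    wake the camera, BLE scan, and location lookup, instead of running them
    constantly and eating the battery."""
    jolts = [t for t, ax, ay, az in samples
             if math.sqrt(ax*ax + ay*ay + az*az) > SHAKE_THRESHOLD * GRAVITY]
    return any(jolts[i + SHAKES_REQUIRED - 1] - jolts[i] <= WINDOW_S
               for i in range(len(jolts) - SHAKES_REQUIRED + 1))
```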

Even though phones have slowly been losing their physical buttons, I've proposed adding one back: a physical button I call the "context" button. Pressing it means "Figure out the local context, and offer me the things that might be particularly important in this context." This could offer many things:

  • Standing in front of a restaurant or shop, the reviews, web site, or app of the shop
  • In the car, all the things you like in the car, such as maps/nav, the manual, etc.
  • In front of a meeting room, the schedule for that room and the ability to book it
  • At a tourist attraction, information about it
  • In a hotel, either the ability to book a room, or, if you have a room, hotel services

There are many contexts, but you can usually sort them so that the most local and the most rare come first. So if you are in a big place you visit frequently, such as the office complex where you work, the general functions for your company would not be high on the list unless you manually bumped them.
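A small sketch of that ordering rule: score each context by how local and how rare it is, with a manual pin to bump things up. The weights below are arbitrary assumptions:

```python
from dataclasses import dataclass

@dataclass
class Context:
    name: str
    radius_m: float          # how tightly localized the context is
    visits_per_month: float  # how familiar it is
    pinned: bool = False     # a manual bump overrides the heuristic

def context_score(c: Context) -> float:
    """Most local and most rare come first: a smaller radius and fewer
    visits both raise the score."""
    locality = 1.0 / (1.0 + c.radius_m)        # a storefront beats a campus
    rarity = 1.0 / (1.0 + c.visits_per_month)  # a tourist spot beats your office
    return locality * rarity

def rank_contexts(contexts):
    # Pinned contexts first, then by descending score.
    return sorted(contexts, key=lambda c: (not c.pinned, -context_score(c)))

places = [
    Context("Office complex where you work", radius_m=500, visits_per_month=20),
    Context("Restaurant you're standing at", radius_m=15, visits_per_month=0.1),
    Context("Meeting room you booked", radius_m=5, visits_per_month=2, pinned=True),
]
for c in rank_contexts(places):
    print(c.name)
# Meeting room you booked / Restaurant you're standing at / Office complex ...
```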

Of course, one goal is that car UIs will become simpler and self-documenting as cars get screens. Buttons will still do the main functions you perform all the time -- and which people already understand -- but screens will handle the more obscure things you might otherwise need to look up in the manual, and document them as they go. You obviously can't safely do something you need to look up in the manual while driving anyway.

There is probably a trend that the devices in our lives with lots of buttons, complex controls, and modes -- home electronics, cars, and some appliances -- will move to having screens in their UIs, and thus not need the augmented reality.
