Augmented Reality Ten Years Out

I just looked up and realized it’s been almost exactly ten years since I did the augmented reality experiment I called Installation at the Media Lab. The future isn’t here yet, but I’m told it’s coming.

[vimeo http://vimeo.com/32844341 w=480&h=360]


Nothing of note happened for a long time.

Then in 2008-9 there was a brief explosion of interest in augmented reality apps like Layar and Yelp’s Monocle, once people noticed that their phones could tell which direction they were pointing. That was kind of neat. There was also an app you could point at the sky to see which planes were overhead.
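
The mechanics behind those compass-and-GPS apps are simple enough to sketch. Here’s a rough illustration (not anyone’s actual code) of how a GPS fix plus a compass heading puts a point of interest on screen; the coordinates, field of view, and screen width below are made up.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, screen_w=480):
    """Horizontal pixel position of a point of interest, given the phone's compass
    heading. Returns None if the POI is outside the camera's field of view."""
    offset = (poi_bearing - heading + 180) % 360 - 180   # signed angle, -180..180
    if abs(offset) > fov_deg / 2:
        return None
    return screen_w / 2 + (offset / (fov_deg / 2)) * (screen_w / 2)

# Example: phone at a hypothetical location, facing roughly north, a cafe ~200 m away.
phone_lat, phone_lon = 42.3601, -71.0942
cafe_lat, cafe_lon = 42.3618, -71.0930
b = bearing_deg(phone_lat, phone_lon, cafe_lat, cafe_lon)
print(screen_x(b, heading=10.0))   # pixel column where the label would be drawn
```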

Probably the coolest is WordLens, which translates foreign-language text it finds in the environment. It’s interesting that the neatest of the bunch is also the one that depends on the camera alone.

But none of it has stuck. Why not?

It’s not that useful (yet?). Beyond the technical challenges, it’s worth asking what we would actually want overlaid on video of our environment, even in the best of circumstances. I often wish I had people’s names available, but that would have to be discreet, so holding up a phone is out of the question. Directions work best on a map.

For places that want you to be able to see information about them, tagging makes sense. I see tags and QR codes in a lot of windows now, but I have never once witnessed anyone use one.

Some of the most interesting applications of AR would require it to operate well in close quarters. You can imagine being able to preview art, furniture, or remodeling in your home. You could leave notes in very specific locations.

Phones can’t support that right now. The near-field resolution they offer isn’t good enough for tight work: GPS won’t help at all, and the combination of accelerometer and compass isn’t quite enough to register against the local environment without feeling sloppy. Orientation is OK, but position is nearly impossible. Convincing solutions are going to have to extract more information from the video itself.
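
To make that concrete, here is a toy simulation, with assumed numbers rather than real sensor specs, of why position is so much harder than orientation: a small accelerometer bias, double-integrated into position, wanders off by meters within seconds, while a comparable compass error stays a bounded few degrees.

```python
import numpy as np

dt = 0.01                  # assumed 100 Hz sample rate
t = np.arange(0, 10, dt)   # ten seconds of samples

accel_bias = 0.05          # m/s^2, an assumed modest bias for a phone MEMS part
velocity = np.cumsum(np.full_like(t, accel_bias)) * dt
position_error = np.cumsum(velocity) * dt        # double integration of the bias

heading_error = np.full_like(t, 2.0)             # a compass that is simply 2 degrees off

print(f"position error after 10 s: {position_error[-1]:.1f} m")   # ~2.5 m, growing quadratically
print(f"heading error after 10 s:  {heading_error[-1]:.1f} deg")  # stays at 2 degrees
```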

What I was working with in 2001 was a magnetic-transducing sensor system called Flock of Birds. It had a range of about +/- 3 feet, and within that range it was pretty accurate in both position and orientation. Still lacking was any depth sensing of the environment, which is essential for applications like virtual surface tagging, so that your tags stick to surfaces instead of passing right through them.
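
To be concrete about what depth buys you for surface tagging, here is a minimal sketch, with made-up intrinsics and a fake depth frame, of how a tap on the screen gets back-projected onto the nearest surface so a tag can be anchored to a real 3D point.

```python
import numpy as np

def backproject(u, v, depth_map, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a depth map into a 3D camera-space point.
    fx, fy, cx, cy are the camera intrinsics (focal lengths and principal point)."""
    z = depth_map[v, u]              # metric depth at that pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical 640x480 depth frame with everything 2 m away, and assumed intrinsics.
depth = np.full((480, 640), 2.0)
tag_anchor = backproject(320, 240, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(tag_anchor)   # the surface point the tag should stick to, instead of floating in space
```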

Cheap depth sensing has since become available for indoor environments with the Kinect. So maybe there’s an avenue there for using continuous depth data to keep pretty good track of local camera movements.
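
Here is a sketch of what that might look like, under the strong (and unrealistic) assumption that point correspondences between two depth frames are already known: the camera’s rigid motion falls out of a standard SVD-based alignment. A real system would also have to find those correspondences, which is the hard part, but this is the core calculation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst
    (Kabsch algorithm). src and dst are Nx3 arrays of corresponding 3D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Fake data: points seen in one depth frame, then the same points after the camera
# moves (a 3-degree rotation about the vertical axis plus a 5 cm sideways shift).
rng = np.random.default_rng(0)
frame_a = rng.uniform(-1, 1, size=(200, 3)) + np.array([0, 0, 2.0])
theta = np.radians(3.0)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
frame_b = frame_a @ R_true.T + np.array([0.05, 0, 0])

R_est, t_est = rigid_transform(frame_a, frame_b)
print(np.round(t_est, 3))             # recovers roughly [0.05, 0, 0]
```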

What the success of the Kinect does show is that augmented virtuality, in which we insert something from the real world (like ourselves) into a virtual environment (like a game), might have more immediate currency than AR.

Time will tell.
