I’m writing a book about augmented reality, which forced me to confront a central question: When will this technology truly arrive? I’m not talking about the smartphone-screen versions offered up by the likes of Pokémon Go and Minecraft Earth, but about that long-promised form that will require nothing more cumbersome than what feels like a pair of sunglasses.
Virtual reality is easier. It can now be delivered, in reasonable quality, for a few hundred dollars. The nearest equivalent for AR, Microsoft’s second-generation HoloLens, costs an order of magnitude more while visually delivering a lot less. Ivan Sutherland’s pioneering Sword of Damocles AR system, built in 1968, is more than a half-century old, so you might expect that we’d be further along. Why aren’t we?
Computation proved to be less of a barrier to AR than anyone believed back in the 1960s, as general-purpose processors evolved into application-specific ICs and graphics processing units. But the essence of augmented reality—the manipulation of a person’s perception—cannot be achieved by brute computation alone.
Connecting what’s inside our heads to what is outside our bodies requires a holistic approach, one that knits into a seamless cloth the warp of the computational and the weft of the sensory. VR and AR have always lived at this intersection, limited by electronic sensors and their imperfections—all the way back to the mechanical arm that dangled from the ceiling and connected to the headgear in Sutherland’s first AR system, inspiring its name.
Today’s AR technology is much more sophisticated than Sutherland’s contraption, of course. To sense the user’s surroundings, modern systems employ time-of-flight lidar, which clocks the round trips of individual photons, or process images from multiple cameras in real time. Both approaches remain computationally expensive, even now. But much more is required.
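The arithmetic behind a single time-of-flight measurement is simple, even if performing it for millions of photons per second is not. Below is a minimal sketch in Python of converting a photon’s round-trip time into a depth value; the function name and the nanosecond input units are illustrative assumptions, not any particular sensor’s API.

```python
import numpy as np

C = 299_792_458.0  # speed of light, in meters per second

def tof_depth(round_trip_times_ns: np.ndarray) -> np.ndarray:
    """Convert measured photon round-trip times (ns) to distances (m).

    Light travels out to the surface and back, so the one-way
    distance is half the round trip: d = c * t / 2.
    """
    t = round_trip_times_ns * 1e-9  # nanoseconds -> seconds
    return C * t / 2.0

# A round trip of ~6.67 ns corresponds to a surface about 1 m away.
print(tof_depth(np.array([6.67])))  # -> approximately [1.0]
```

The hard part isn’t this formula; it is timing photons precisely enough, across an entire scene, dozens of times per second, on a battery-powered device.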
Human cognition integrates various forms of perception to provide our sense of what is real. To reproduce that sense, an AR system must hitch a ride on the mind’s innate workings. In practice, AR systems focus on vision and hearing. Stimulating our eyes and ears is easy enough with a display panel or a speaker situated meters away, where it occupies just a corner of our awareness. The difficulty rises steeply as these synthetic sources move to within centimeters of our eyes and ears, where every flaw in resolution, latency, and alignment becomes glaring.
Although virtual reality can now transport us to another world, it does so by effectively amputating our bodies, leaving us to explore these ersatz universes as little more than a head on a stick. The person doing so feels stranded, isolated, alone, and all too frequently motion sick. We can network participants together in these simulations, the much-promised “social VR” experience, but convincingly embodying even a second person in a virtual world remains beyond the capabilities of broadly available gear.
Augmented reality is even harder. Unlike VR, it doesn’t ask us to sacrifice our bodies or our connection to others; instead, an AR system must measure and maintain a model of the real world detailed enough to enable a smooth fusion of the real with the synthetic. Today’s technology can just barely do this, and not at a scale of billions of units.
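One concrete piece of that world model is registration: every frame, the system must re-express its virtual anchors in the moving camera’s coordinate frame so that synthetic objects stay pinned to real surfaces. Here is a toy sketch of that single step, assuming a tracking system has already estimated the camera’s pose as a 4x4 rigid-body matrix; the names and conventions are hypothetical, not drawn from any real AR SDK.

```python
import numpy as np

def to_camera_frame(world_from_camera: np.ndarray,
                    anchor_world: np.ndarray) -> np.ndarray:
    """Map a world-anchored point into the current camera frame.

    world_from_camera: 4x4 rigid-body pose of the camera in world
    coordinates, as a tracking system might estimate each frame.
    anchor_world: 3-vector position of a virtual object's anchor.
    """
    camera_from_world = np.linalg.inv(world_from_camera)
    p = np.append(anchor_world, 1.0)        # homogeneous coordinates
    return (camera_from_world @ p)[:3]

# Example: camera sitting 2 m out along the world z-axis,
# looking back at an anchor placed at the world origin.
pose = np.eye(4)
pose[2, 3] = 2.0
print(to_camera_frame(pose, np.zeros(3)))  # -> [0, 0, -2]
```

The matrix multiply is trivial; what isn’t trivial is producing a pose estimate accurate to a fraction of a degree, with only a few milliseconds of latency, so the anchored object doesn’t visibly swim against the real scene.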
Like autonomous vehicles (another blend of sensors and computation that looks easier on paper than it proves in practice), augmented reality continues to surprise us with its difficulties and dilemmas. That’s all to the good. We need hard problems, ones that can’t be solved with a straightforward technological fix but require deep thought, reflection, insight, even a touch of wisdom. Getting to a solution means more than building a circuit. It means deepening our understanding of ourselves, which is always a good thing.