The future of reality didn’t look like all that much the last time I saw it, to be honest: two 3D stick figures dancing, one making faster and more excited movements, the other a slow, sensual hip swing. The two faced each other, trying to get in sync, in a sparsely decorated virtual room displayed on a smartphone screen, like a small, bare-bones, ultra-low-stakes version of Tron.
But looks can be deceiving. And it’s not so much the figures or their little virtual world, but the team of real people who made them, and the underlying technologies, that could change how the rest of us experience reality.
That team is part of New York University’s (NYU) Future Reality Lab, a group of around 40 people, including NYU graduate and undergraduate students, who are developing a whole host of new technologies at the frontiers of virtual reality, augmented reality, and mixed reality.
Their overarching goal is to understand and usher in an era when virtual characters, worlds, and objects are no longer confined to our smartphones and computers, but instead appear in front of us in the real world, like holograms, brought to life through a new class of wearable devices.
“The Future Reality Lab is answering the question of: What is the future of everyday, ordinary life when people’s senses are enhanced with technological, visual, audio, and sensory experiences?” said Ken Perlin, the lab’s founder, director, and a longtime computer science professor at NYU. “That’s really our mission: What’s the future of normal reality?”
Perlin and his other faculty colleagues guide their students and in-house researchers as they build cutting-edge demos of what our future computing and digital media experiences will be like. This year, graduate students in Perlin’s course will be building their own shared virtual reality spaces, for example.
It’s all new territory, not just for the researchers but for the world. The Future Reality Lab was established just over two years ago, funded primarily by philanthropic gifts from outside corporate benefactors including Verizon, Facebook, and Bose.
Perlin and his colleagues work out of the old Forbes magazine building in Manhattan, a low-rise gray box with a stately, almost bank-like columned entrance on the corner of Fifth Avenue and 12th Street. NYU took over the building after Forbes moved its headquarters to Jersey City in 2015, part of a wave of old print media titles vacating historic office spaces as circulation declined and advertisers moved online.
But the building’s history makes it a fitting locale for the Future Reality Lab because much of what its researchers are working on has to do with the evolution of media formats.
“The one superpower that people have is language,” Perlin said. “I say the word ‘elephant,’ and you see a picture of an elephant in your mind. We’re going to keep building tools to enhance that capability, and this is just the natural evolution of those tools.”
Or as the Future Reality Lab website describes it, rather headily: “In our optimistic view, the future can be a place where language itself will eventually take on new and rich visual dimensions, a sort of combination of Harry Potter and Harold and the Purple Crayon,” the former being the boy wizard who can conjure things out of thin air, the latter a children’s book about a child whose special crayon makes his drawings come to life.
It sounds far-fetched, but it may not be that far off. Perlin thinks that AR-optimized devices like Snap’s Spectacles (the company is taking preorders on the new $380 version, its third iteration, for fall shipping) or the rumored Apple AR glasses will go mainstream in the 2020s.
And there’s reason to believe his prediction: Perlin is a computer graphics and multimedia legend, having worked on the design of the original Tron movie from 1982. (The Future Reality Lab has a number of old-school 1980s coin-operated arcade games scattered around, including a Tron machine.) He also developed “Perlin noise,” a technique for giving computer-generated textures a more random, natural appearance, so that your video game’s mountains and leaves and grass look less Tron-y and artificially smooth, and more true-to-life: speckled, dirty, and irregularly patterned.
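The core idea behind Perlin noise is compact enough to sketch. What follows is a rough, illustrative 2D gradient-noise function in Python (NumPy assumed), not Perlin’s reference implementation: the gradient set and hashing are deliberately simplified, but the lattice-gradients-plus-fade-curve recipe is the one his technique popularized.

```python
# Illustrative 2D gradient noise in the spirit of Perlin noise (simplified sketch).
import numpy as np

def fade(t):
    # Perlin-style fade curve 6t^5 - 15t^4 + 10t^3: eases the blend so grid seams don't show.
    return t * t * t * (t * (t * 6 - 15) + 10)

def gradient_noise_2d(x, y, seed=0):
    """Smooth pseudo-random value (roughly in [-1, 1]) at the point (x, y)."""
    rng = np.random.default_rng(seed)
    perm = np.concatenate([rng.permutation(256)] * 2)  # hash table for lattice points

    def corner(ix, iy, dx, dy):
        # Pick one of four diagonal gradients for this lattice corner and dot it with the offset.
        h = perm[perm[ix % 256] + iy % 256] % 4
        gx, gy = [(1, 1), (-1, 1), (1, -1), (-1, -1)][h]
        return gx * dx + gy * dy

    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0

    # Contributions from the four surrounding lattice corners.
    n00 = corner(x0, y0, dx, dy)
    n10 = corner(x0 + 1, y0, dx - 1, dy)
    n01 = corner(x0, y0 + 1, dx, dy - 1)
    n11 = corner(x0 + 1, y0 + 1, dx - 1, dy - 1)

    # Blend them with the fade curve to get a smooth, organic-looking field.
    u, v = fade(dx), fade(dy)
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)

# Layering a few octaves at rising frequencies is what gives terrain and foliage their grain.
height = sum(0.5 ** o * gradient_noise_2d(3.7 * 2 ** o, 1.2 * 2 ** o) for o in range(4))
```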
Perlin told OneZero in a phone interview that he got the idea to create the Future Reality Lab in 2014, following the computer gaming giant Valve’s first demos of its prototype virtual reality headset technology. The tech would go on to be included in the consumer-grade HTC Vive VR headset, which was released to the public in 2016.
That 2014 demo was a seminal moment for Perlin because he learned that Valve’s VR technology included a positional tracking system: essentially a smaller-scale version of the motion capture technology developed for big-budget Hollywood films, and one that doesn’t involve covering the user in a suit of white marker balls.
Instead, Valve worked with the Taiwanese electronics giant HTC to develop a VR headset, hand controllers, and accompanying room sensors that can collectively tell where someone wearing the headset is moving and gesturing throughout the room.
This allows for not just more immersive virtual reality, but collaborative virtual reality, where multiple people wearing multiple headsets can inhabit the same shared virtual space, permitting them to work together to make 3D drawings in midair or manipulate and pass around 3D objects.
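The geometric bookkeeping behind such a shared space is straightforward. Here is a hypothetical sketch in Python (not the lab’s or Valve’s actual code): each headset reports its pose in its own tracking frame, and a per-device calibration transform maps those poses into one common room frame that every participant renders.

```python
# Hypothetical sketch of the coordinate bookkeeping behind a shared VR space.
# Each headset tracks itself in its own local frame; a calibration matrix per device
# maps those local poses into one common "room" frame that all participants render.
import numpy as np

def pose_matrix(position, yaw):
    """4x4 rigid transform built from a position (meters) and a yaw angle (radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    m = np.eye(4)
    m[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    m[:3, 3] = position
    return m

def to_room_frame(device_to_room, local_pose):
    # Compose the device's calibration transform with its locally tracked pose.
    return device_to_room @ local_pose

# Calibration for user B: their tracking origin sits two meters to the side of the
# shared origin and is rotated 90 degrees (numbers made up purely for illustration).
b_to_room = pose_matrix(position=[2.0, 0.0, 0.0], yaw=np.pi / 2)

# User B's head pose, as reported by their own positional tracking system.
b_head_local = pose_matrix(position=[0.1, 1.6, 0.3], yaw=0.0)

# The pose everyone else uses to draw user B's avatar (and hands, drawings, objects).
b_head_shared = to_room_frame(b_to_room, b_head_local)
print(np.round(b_head_shared, 2))
```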
In fact, the Future Reality Lab developed its own educational program that does exactly these things, called ChalkTalk. It’s still restricted to bulkier, more expensive, enterprise-focused VR and AR headsets, but the researchers think it’s only a matter of a few years before similar experiences trickle down to more affordable headsets and other devices.
They’ve already shifted much of their development away from the HTC Vive, which must be plugged into a nearby high-powered PC in order to work, and the bulky stand-alone Microsoft HoloLens headset, to the Oculus Quest, a new, lighter, self-contained VR headset from Oculus, the Facebook subsidiary that developed the Oculus Rift. (The lab has virtually every conceivable modern VR or AR headset, including the Magic Leap and even newer hardware from other electronics makers.)
“This is a very interesting time in history,” said Michael Gold, an entrepreneur in residence at the Future Reality Lab who works with Perlin and the other researchers and graduate students. “We have kind of a few things that are happening now that have never happened before.”
Those things include powerful microcomputers in our pockets, in the form of smartphones that now carry their own depth sensors; fast wireless connectivity, thanks to broadband Wi-Fi and the incoming 5G cellular network; and huge server farms that can connect to those smartphones and process the data the devices pass along.
This means that smartphones and other similarly compact “edge” devices, in computer-geek parlance, can get smaller, sleeker, more wearable, and more affordable, while still performing ever more powerful mixed reality computing tasks, leaning on the servers to get the work done.
For example, the stick figures I saw are far more complex than they appeared at first: They’re procedurally generated virtual characters. That means fictional characters made with computer graphics whose forms and movements are not pre-scripted, but which can generate unique responses to stimuli on the fly, much as we humans do.
The program in which they were developed is called “Autotoon” (a portmanteau of Auto-Tune, the pitch-correction software, and cartoons) and is the outgrowth of technology Perlin has been working on for over two decades, well before the Future Reality Lab was founded.
It’s also the name of a new spinoff company that Gold is currently founding, where he will serve as CEO, alongside Ben Ahlbrand, a graduate student at the Future Reality Lab who will serve as Autotoon’s chief technology officer, and Perlin, who will be its chief scientist.
Autotoon offers “the ability to be able to synthesize human motion in real time and then allow an artist to — with a number of sliders — tune that,” Gold said.
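In rough terms, that can be pictured as driving every joint of a character with a smooth, noise-like signal whose feel is shaped by a handful of artist-facing parameters. The sketch below is a hypothetical illustration of that idea in Python, not Autotoon’s actual code; the parameter names and the simple sine-based signal are invented for the example.

```python
# Hypothetical sketch of slider-tunable procedural motion (not Autotoon's code).
import numpy as np

def smooth_signal(t, seed):
    # Stand-in for a noise function: a few incommensurate sines blended per joint.
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(0.3, 1.5, size=3)
    phases = rng.uniform(0, 2 * np.pi, size=3)
    return float(np.sum(np.sin(freqs * t + phases)) / 3.0)

def joint_angles(t, energy=1.0, tempo=1.0, sway=0.5, n_joints=12):
    """Joint angles (radians) for a stick figure at time t, shaped by three 'sliders'."""
    angles = []
    for j in range(n_joints):
        wiggle = energy * smooth_signal(tempo * t, seed=j)   # excited vs. calm
        drift = sway * np.sin(0.5 * tempo * t + j)           # slow, swaying component
        angles.append(0.4 * (wiggle + drift))
    return np.array(angles)

# The same generator yields an "excited" dancer and a "slow, sensual" one,
# depending only on where the sliders sit:
fast_dancer = joint_angles(t=2.0, energy=1.5, tempo=2.5, sway=0.2)
slow_dancer = joint_angles(t=2.0, energy=0.4, tempo=0.6, sway=1.0)
```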
Indeed, once Autotoon is further developed and refined beyond simple stick figures, the Future Reality Lab intends to release it to artists and other researchers sometime in 2020. It will allow a wider community of creators and researchers around the world to make their own procedural characters, who can then appear across all sorts of media and devices, from computers to smartphones, in virtual reality and augmented reality headsets and glasses, and in any similar future wearable devices.
“Once you step outside of the privileged classes of people and let everyone create and share, society benefits,” said Ahlbrand.
Ultimately, Gold and Ahlbrand see a near future of intelligent virtual beings, holograms of a sort, who can appear in front of us wherever we go in the real world and with whom we can interact, not just through our voices or our motions, but through our very emotional states, how we are feeling and acting.
Imagine an Amazon Alexa with a virtual body, one that could see and interact with multiple people at once — and rarely answered “I’m sorry, I don’t know what you mean by that.”
But before Alexa and others like her can run, they first need to walk, or, in this case, dance.