With help from artificial intelligence and special sensors, next-generation robots will be able to perform tasks ranging from understanding speech in a noisy environment to sniffing out phony wine
Even the smartest computers cannot fully understand the world without the ability to see, hear, smell, taste or touch. But in the decadeslong race to make software think like humans—and beat them at “Jeopardy!”—the idea of endowing a machine with humanlike senses seemed far-fetched. Not anymore, engineers and researchers say.
Capabilities powered by artificial intelligence, like image or voice recognition, are already commonplace features of smartphones and virtual assistants. Now, customized sensors, machine learning and neural networks—a subset of AI that mimics the way our brains work—are pushing digital senses to the next level, creating robots that can tell when a package is fragile, sniff out an overheated radiator or identify phony Chardonnay.
Hype around AI is running high, and much of the research is in early stages. Here, we look at 10 working models and prototypes of AI with sensory abilities.
Robots aren’t good at handling glass bottles or clear plastic cups. That is because most robotic vision systems determine the shape of objects with infrared depth sensors, whose beams shine right through transparent materials, capturing only vague shadows. Engineers at Carnegie Mellon University paired a depth sensor with a standard color camera to fill in the data gaps, catching hues of red, green and blue around the edges of see-through objects. They then retrained the system to recognize these subtle visual cues and enable a robotic arm to adjust its grip. “Your vision is more similar to the way the color camera works,” says David Held, an assistant professor at Carnegie Mellon’s Robotics Institute. “You don’t send out lasers and see how long they take to bounce back.”
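The fusion idea behind the Carnegie Mellon work can be illustrated with a toy sketch: fill holes in a depth map (zeros where the infrared beam passed through glass) by averaging nearby valid depth readings, weighted by color similarity so the fill respects object edges seen by the RGB camera. This is a classic joint-bilateral-style heuristic, shown here only to make the sensor-fusion concept concrete; the researchers' actual system is a retrained learned model, and every name and parameter below is illustrative.

```python
import numpy as np

def fill_depth(depth, rgb, radius=2, sigma_color=20.0):
    """Toy depth completion guided by a color image.

    depth: 2-D array, 0.0 marks a missing reading (e.g. through glass).
    rgb:   H x W x 3 color image aligned with the depth map.
    Missing pixels are filled with a weighted average of valid neighbors,
    where weights favor neighbors with similar color (so fills do not
    bleed across an object's edges).
    """
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:
                continue  # reading is valid; keep it
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                        # weight neighbor by color similarity to this pixel
                        cdiff = np.linalg.norm(
                            rgb[ny, nx].astype(float) - rgb[y, x].astype(float)
                        )
                        wgt = np.exp(-(cdiff / sigma_color) ** 2)
                        num += wgt * depth[ny, nx]
                        den += wgt
            if den > 0:
                out[y, x] = num / den
    return out
```

A real pipeline would replace this hand-tuned filter with a neural network trained on pairs of color images and corrected depth maps, but the input-output contract is the same: a depth map with gaps plus an RGB image in, a completed depth map out.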