Quick-Thinking AI Camera Mimics the Human Brain – By Dhananjay Khadilkar on June 22, 2017


The device will use artificial neurons and synapses to improve self-driving vehicle and drone performance

A Google self-driving SUV on the streets of Mountain View, California. Credit: Kim Kulish/Getty Images

Researchers in Europe are developing a camera that will literally have a mind of its own, with brainlike algorithms that process images and light sensors that mimic the human retina. Its makers hope it will prove that artificial intelligence—which today requires large, sophisticated computers—can soon be packed into small consumer electronics. But as much as an AI camera would make a nifty smartphone feature, the technology’s biggest impact may actually be speeding up the way self-driving cars and autonomous flying drones sense and react to their surroundings.

The conventional digital cameras used in self-driving and computer-assisted cars and drones, as well as in surveillance devices, capture a lot of extraneous information that eats up precious memory space and battery life. Much of that data is repetitive because the scene the camera is watching does not change much from frame to frame. The new AI camera, called an ultralow-power event-based camera, or ULPEC, will have pixel sensors that come to life only when the camera is ready to record a new image or event. That memory- and power-saving feature will not slow performance—the camera will also have new electrical components that allow it to react to changing light or movement in a scene within microseconds (millionths of a second), compared with milliseconds (thousandths) in today’s digital cameras, says Ryad Benosman, a professor at Pierre and Marie Curie University who leads the Vision and Natural Computation group at the Paris-based Vision Institute. “It records only when the light striking the pixel sensors crosses a preset threshold amount,” says Benosman, whose team is developing the learning algorithms for an artificial neural network that serves as the camera’s brain.

An artificial neural network is a group of interconnected computers configured to work like a system of flesh-and-blood neurons in the human brain. The interconnections among the computers enable the network to find patterns in data fed into the system, and to filter out extraneous information via a process called machine learning. Such a network “does away with not only acquiring but also processing irrelevant information, thus making the camera faster and requiring lower power for computation,” Benosman says.
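To make the thresholding idea concrete, here is a minimal sketch in Python of how an event-based pixel array can be simulated. This is an illustration of the general principle Benosman describes, not the ULPEC design itself: real event cameras fire each pixel asynchronously in hardware, whereas this toy version compares successive frames. The threshold value, function names, and the log-intensity representation are all assumptions chosen for clarity.

```python
# Minimal sketch (not the ULPEC design): per-pixel event generation.
# A pixel reports an event only when the change in log-intensity since its
# last report crosses a preset threshold; unchanged pixels stay silent,
# so static parts of a scene produce no data at all.
import numpy as np

THRESHOLD = 0.15  # illustrative log-intensity change needed to fire an event

def events_from_frame(reference_log, new_frame, threshold=THRESHOLD):
    """Compare a new frame against each pixel's last-reported level and
    return (events, updated reference levels).

    events is a list of (row, col, polarity) tuples: +1 for brighter,
    -1 for darker. Pixels whose change stays under the threshold emit nothing.
    """
    new_log = np.log1p(new_frame.astype(np.float64))
    diff = new_log - reference_log
    rows, cols = np.where(np.abs(diff) >= threshold)
    events = [(r, c, int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]
    # Reset the reference level only for pixels that actually fired.
    updated = reference_log.copy()
    updated[rows, cols] = new_log[rows, cols]
    return events, updated

# Example: a mostly static 4x4 scene where a single pixel brightens sharply.
reference = np.log1p(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[2, 3] = 180.0  # only this pixel changes enough to cross the threshold
events, reference = events_from_frame(reference, frame)
print(events)  # -> [(2, 3, 1)]; the other 15 pixels report nothing
```

In this toy example the static background generates no output at all, which is the source of the memory and power savings the article describes: only the single changed pixel produces data to store and process.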
