
Google, A.I. and the rise of the super-sensor

Mike Elgan | May 22, 2017
Remember when the Internet of Things involved a trillion sensors? Turns out we may need only one.

These two projects -- Google Lens and the Google-funded CMU "synthetic sensor" project -- represent the application of A.I. to enable both far fewer physical sensors and far better sensing.

A.I. has always been about getting machines to replicate or simulate human mental abilities. But the fact is, A.I. is already better than human workers in some areas.

Think about a traditional office building lobby. You've got a security guard at a desk reading a magazine. He hears the revolving doors turn and looks up to see a man approaching the desk. He doesn't recognize the man. So he asks the visitor to sign in, then lets him proceed to the elevator.

Now let's consider the A.I. version. The sound of the revolving door signals that another person has entered the building (the system keeps an exact count of how many people are inside). As the man approaches, cameras scan his face and gait, positively identifying him and eliminating the need for antiquated "signing in." The microphone also picks up the full range of subtle sounds he makes while walking. That analysis, combined with readings from thermal, chemical and other sensors, establishes that he is unarmed, eliminating the need to run him through a metal detector. By the time he reaches the door, the A.I. has already sent a command to unlock it and let him into the building. The record of his visit is written not in ink on paper, but in a searchable electronic form.

The best part is, any number of new applications could be written to detect different things, all without changing the physical sensors. Those same sensors in the lobby could replace lighting controls, smoke detectors and thermostat controls. They could alert maintenance when the windows need cleaning or the trash needs emptying.
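To make the idea concrete, here is a minimal sketch of how such a setup might be organized: a single hub fans raw readings from general-purpose hardware out to any number of "virtual sensors" defined purely in code, so adding a new capability means registering new software, not installing new devices. The class names, fields and thresholds below are illustrative assumptions, not the CMU project's actual design.

```python
# Hypothetical sketch of the "synthetic sensor" idea: one set of raw signals
# (sound level, vibration, temperature) feeds any number of software detectors,
# so new "sensors" are added by registering code, not hardware.
# Names and thresholds are illustrative, not taken from the CMU project.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class RawReading:
    """One sample from the general-purpose sensor board."""
    sound_db: float
    vibration_hz: float
    temperature_c: float


class SyntheticSensorHub:
    """Fans raw readings out to virtual sensors defined purely in software."""

    def __init__(self) -> None:
        self._detectors: Dict[str, Callable[[RawReading], bool]] = {}

    def register(self, name: str, detector: Callable[[RawReading], bool]) -> None:
        # Adding a capability is a code change only; the hardware stays the same.
        self._detectors[name] = detector

    def process(self, reading: RawReading) -> List[str]:
        # Return the names of every virtual sensor that fired on this reading.
        return [name for name, fn in self._detectors.items() if fn(reading)]


hub = SyntheticSensorHub()
hub.register("door_opened", lambda r: r.sound_db > 60 and r.vibration_hz > 5)
hub.register("hvac_overheating", lambda r: r.temperature_c > 40)
hub.register("room_occupied", lambda r: r.sound_db > 35)

# One raw reading can trigger several virtual sensors at once.
print(hub.process(RawReading(sound_db=65, vibration_hz=8, temperature_c=22)))
# -> ['door_opened', 'room_occupied']
```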

It's easy to imagine applying this kind of all-purpose sensor-plus-A.I. system beyond the lobby to the boardroom, offices, factories, warehouses and shipping systems. Even as the cloud A.I. rapidly learns and increases its capabilities, companies will be able to build custom virtual sensors as the need for them arises.

Cameras can be deployed where appropriate, such as in the lobby, while spaces where cameras don't belong, such as the building's bathrooms, get only non-camera sensors.

Best of all, the super-sensor revolution will be widely distributed. The physical sensors are very inexpensive, and the A.I. is available through the cloud as a service, not just from Google but from a wide range of providers.

The rise of cloud-based A.I. as a service has been obvious for a couple of years now. But this month, we've seen one of the most profound changes this revolution will usher in. With a few inexpensive cameras, microphones and other sensors, we'll be able to create just about any sensor in software on the fly at low cost.
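As a rough illustration of what "creating a sensor in software on the fly" could look like, the sketch below defines new detectors by delegating recognition to a cloud A.I. service. The endpoint URL, payload format and labels are placeholders standing in for whatever provider a company actually uses; none of this reflects a specific vendor's API.

```python
# Hypothetical sketch: a new detector is defined by describing what to look
# for and delegating recognition to a cloud A.I. service. The endpoint,
# payload format and labels are placeholders, not any real provider's API.

import json
import urllib.request
from typing import Callable, List

CLOUD_VISION_URL = "https://ai.example.com/v1/annotate"  # placeholder endpoint


def cloud_labels(image_bytes: bytes) -> List[str]:
    """Send a camera frame to the (hypothetical) cloud service, return labels."""
    request = urllib.request.Request(
        CLOUD_VISION_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("labels", [])


def make_virtual_sensor(required_labels: List[str]) -> Callable[[bytes], bool]:
    """Create a new 'sensor' in software: it fires when the cloud sees the labels."""
    def sensor(image_bytes: bytes) -> bool:
        labels = cloud_labels(image_bytes)
        return all(label in labels for label in required_labels)
    return sensor


# Two brand-new sensors, created without installing any new hardware.
# Calling trash_full(frame_bytes) would POST the frame and return True/False.
trash_full = make_virtual_sensor(["trash can", "overflowing"])
window_dirty = make_virtual_sensor(["window", "smudged"])
```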

The old model of the "trillion-sensor" IoT is dead, killed by A.I. and the rise of the super-sensor.

 
