Connected devices at CES raise security, privacy and safety questions
It seems as if almost every exhibitor at CES was showing things that connect to other things.
LG showed off washing machines and kitchen appliances that send messages to smartphones. Schlage announced a Bluetooth-enabled smart lock that lets iPhone users unlock their doors with Siri voice commands. Kolibree and Oral-B both showed off connected toothbrushes, and there was even a baby pacifier called Pacif-i, billed as the “world’s first Bluetooth smart pacifier.”
Fitness bands like the Basis Peak that send your activity and pulse to your phone, and to the cloud, were all around. Vital Connect showed off a Band-Aid-size patch that can send your heart rate, body temperature, posture and EKG to health providers via smartphones. Automakers showed off cars that can “phone home” to transmit data that monitors systems in real time.
And, of course, drones were everywhere. These flying machines have wireless controllers and the ability not just to fly through real clouds, but to transmit data to virtual ones.
Together, these and thousands of other connected gadgets are referred to as the “Internet of Things,” or IoT. Eventually, the Internet of Things will be bigger than the Internet of people, since there are a lot more devices in the world than humans. (And by the way, humans aren’t the only living creatures to be connected — thanks to pet trackers like the Tagg GPS Plus, we also have an Internet of dogs and cats.)
Like the Internet of people, the IoT has its own privacy, safety and security risks, which are not lost on regulators from Washington, D.C., and individual states.
Federal Trade Commission Chairwoman Edith Ramirez was at the Consumer Electronics Show and pointed out that connected devices often share “vast amounts of consumer data, some of it highly personal, thereby creating a number of privacy risks.”
There are also heightened security risks. At last year’s Black Hat security conference, researchers demonstrated how it was possible to hack cars, energy management systems and smart locks.
Safety issues abound as well. A hacked car or drone or even a connected robot could wind up threatening life and limb. So far, the hacks against Sony, Target and thousands of other institutions have caused embarrassment and loss of money and privacy, but not physical injuries or loss of life. But if “things” are hacked, the stakes could be a lot higher.
Risks associated with the Internet of Things are not lost on Intel CEO Brian Krzanich. Intel is betting on IoT by creating chips and devices for drones, smartwatches, robots and other connected devices. During his CES keynote, the Intel CEO even showed off a button-sized wearable computer called Curie, which can be sewn onto clothing.
In an interview, Krzanich acknowledged the risks, but said “there’s a lot of research going into how to really improve security right now.” He pointed out that “every technology advancement brings great value and great potential, but brings some level of risk and our job is to manage the risk.”
When asked about the risk of drones, Krzanich said that some GPS-equipped drones carry software that prevents them from flying near airports, as well as cameras and sensors that help them avoid colliding with other drones or buildings. Still, some risks can’t be engineered away, such as someone flying a camera-equipped drone over a neighbor’s backyard, or drones that lack GPS or collision-avoidance software ending up in the hands of owners who use them irresponsibly.
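The airport restriction Krzanich describes is a form of geofencing: the drone compares its GPS fix against a list of restricted coordinates before it will fly. Here is a minimal sketch of the idea in Python, using made-up example coordinates and an assumed exclusion radius (real no-fly databases and radii vary by manufacturer and regulator):

```python
import math

# Hypothetical no-fly point and radius for illustration only.
AIRPORT = (37.6213, -122.3790)   # example lat/lon near an airport
NO_FLY_RADIUS_KM = 8.0           # assumed exclusion radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def may_take_off(lat, lon):
    """Refuse takeoff if the drone's GPS fix is inside the no-fly zone."""
    return haversine_km(lat, lon, *AIRPORT) > NO_FLY_RADIUS_KM

# A fix roughly 1 km from the airport is blocked; one ~17 km away is allowed.
print(may_take_off(37.6300, -122.3700))   # inside the zone
print(may_take_off(37.7749, -122.4194))   # well outside the zone
```

This also illustrates the limit Krzanich concedes: a drone with no GPS receiver has nothing to feed such a check, so the safeguard simply doesn’t apply.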
I wonder if police departments are gearing up for drone abuse enforcement. If not, they should be.
Certainly federal regulators — from the Federal Communications Commission to the FTC to Homeland Security to the Federal Aviation Administration — are looking at how to protect the public interest when it comes to the vast array of connected things.
The FCC needs to think about the use of radio spectrum because the IoT is competing with broadcast, data, voice and all sorts of other demands for the limited amount of available radio frequencies. The Department of Homeland Security is rightfully concerned about the potential of devices to be used to harm people or deliver explosives or other threats, especially if they get into the hands of terrorists. The FTC is responsible for helping to protect our privacy and has plenty on its plate when it comes to the potential abuse of all the data these “things” are collecting and transmitting. The FAA is working on how to regulate drones to make sure they don’t crash into airplanes, each other or people on the ground.
Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, worries that government regulation could go too far, especially at the early stages of technologies where over-regulation could wind up interfering with innovation.
“The better alternative to top-down regulation,” he argues, “is to deal with concerns creatively as they develop, using a combination of educational efforts, technological empowerment tools, social norms, public and watchdog pressure, industry best practices and self-regulation, transparency, and targeted enforcement of existing legal standards.”
In general, I agree with Thierer, but I still think there is a role for government to protect the public not against all these connected “things,” but against the people who misuse them.