
Can We Make the Internet of Things Secure?

Shoshana Bryen and Stephen Bryen
Source: American Thinker
Google Home (left) and Amazon Echo.

In the simplest terms, the Internet of Things (IoT) is the addition of internet connectivity to everyday objects.  Security cameras, for example, previously had to be hardwired.  Now they are generally WiFi-connected, allowing camera information to be transmitted to the security control system and allowing the security control system to broadcast its collected information to a remote command center or even to a tablet or smartphone.  Then, if the camera has PTZ (pan, tilt, and zoom) functions, the user can redirect the camera, zoom in on an anomaly, or follow an object.

There is hardly a new product that does not try in some way to offer IoT capability.  The simplest products gather information from the broader internet and relay it to the user.  A “smart” refrigerator can tell you when your grapes are getting low or close to spoilage.  It can order grapes for you and have them delivered, or tell you where grapes are on sale and how close to your house the sale is.  A “smart” TV can search out genres of programs for you based on preferences you pre-load, or by tracking your use behavior on the internet to derive recommendations.  A “smart” TV can become a point-of-sale device linked to Amazon, eBay, or other outlets, letting you order on impulse while watching your favorite sports or house-hunting program. (“We can deliver a pizza now!”  “How about calling Joe at Friendly Realty?  He can find you a great home at a terrific price.”)

As artificial intelligence (A.I.) gains ground, home and business assistants will answer your questions or even make suggestions.  Alexa from Amazon already has a large user base, with Google and Apple coming along.  “Would you like me to turn on the lights downstairs, as it is past 9 PM?”  “Can I recommend a really great restaurant that just opened near you?  I can make a reservation for you; just tell me when you would like to try it.”  Or “Keep in mind that you need to take into account local taxes when figuring prices for your latest product.  Do you want me to calculate that for you?”

Intelligent assistants will start doing a lot of the work that paid help once provided, will do it 24×7 without complaint, with minimal overhead, and will not only be cost-effective, but can also be a profit center.  For example, a really great sales digital assistant will not only call customers, but be capable of managing a conversation, promoting new offers, providing technical help, and even asking for customer opinions and integrating findings into a master package for the company.  These go far beyond current-day answering systems. (“Press 1 if you want to speak to a nurse, 2 to make an appointment, or 3 to collect the dead body.”)

This is an environment wide open to mischief, and the mischief is starting.  Suppose I turn on your smart TV camera (yes, you have one) and record activity without your knowledge.  Suppose I misdirect your GPS and send you off in the wrong direction or to the wrong destination.  Suppose I create a fake traffic jam ahead (this has already been done) and make you take a dead-end detour.  Suppose I order products you did not buy.  Or deliver a pizza, an Uber, or a new car to your front door.

And that’s only the beginning.  Suppose I invite you to a meeting at a certain time and use it to carry out a kidnapping or worse.  Now we are getting to the really dirty stuff.

The truth is that, aside from your common sense, there isn’t much to prevent the misuse of IoT.  In fact, most IoT devices are intentionally not secure.  They don’t require a user ID, and they have no built-in system to sense outrageous or fake commands.  There are no security standards for IoT devices, and none are known to be in the works.  Most of the hardware and local software for these devices is produced offshore, creating countless opportunities to plant bugs in IoT systems, as has already happened with smartphones.

But even lacking rudimentary security, IoT systems will gain significant market share.  People want them even though they are security hazards.  So how do you get security into devices that are inherently risky?

It is possible to create protections in hardware by introducing some biometric access tools – e.g., face or voice recognition.  This will make it harder to get into these devices locally (the place where they are actually used), but if the devices are used remotely (e.g., for turning on the heat or checking on the babysitter), biometric security won’t accomplish a great deal.

Because people using IoT are churning out large amounts of actionable data (like your pizza preference), and because that information can be captured and exploited without your authorization, there is an enormous privacy issue looming.  While they naturally claim to protect your privacy, companies including Google and Yahoo can scan your email to extract your preferences and then use that information directly for their own marketing or sell it to others.  This is called “monetizing privacy,” and there are no rules or standards that protect individuals.

It follows, then, that IoT providers have to become truly responsible for security.

The answer is not only in technology, although IoT clearly needs a verification system; the courts will be increasingly important in determining the rules for future internet privacy.  In general, American courts have not been friendly to privacy claims, partly because national security was at stake and the courts understood that the government needed information to prevent terrorism.  But with personal use of IoT exploding, there needs to be a rebalancing, and there has lately been a shift in court attitudes.

If the rules for IoT change, the technology will follow.