Algorithms are essential to IoT.
Connected devices autopilot our cars; control our homes' lighting, heating and security; and shop for us. Wearables monitor our heart rates and oxygen levels, tell us when to get up and how to move, and keep detailed logs of our whereabouts. Smart cities, powered by a host of IoT devices and applications, shape the lives of millions of people around the globe by directing traffic, sanitation, public administration and security. IoT's reach and influence in our everyday lives would be inconceivable without algorithms, but how much do we know about algorithmic function, logic and security?
Most algorithms operate at computational speeds and complexities that prevent effective human review. In effect, they work in a black box. On top of that, most IoT application algorithms are proprietary, which adds a second layer of opacity: a double black box. This status quo may be acceptable if the outcomes are positive and the algorithms do no harm. Unfortunately, that is not always the case.
When black box algorithms go wrong and do material, physical, societal or economic harm, they also hurt the IoT movement. Such mistakes chip away at the social and political trust that the industry needs to ensure the wider adoption of smart devices, which is key to moving the field forward.
Opaque algorithms can be costly, even deadly
Black box algorithms can cause significant real-world problems. For example, there is a nondescript stretch of road in Yosemite Valley, Calif., that consistently confuses self-driving cars, and we still don't know why. The open road is naturally full of risks and dangers, but what about your own home? Smart assistants are there to listen to your voice and fulfill your wishes and commands regarding shopping, heating, security and just about any other home feature that lends itself to automation. But what happens when the smart assistant starts acting dumb and listens not to you, but to the TV?
There is an anecdote circulating the web about many smart home assistants initiating unwanted online purchases because Jim Patton, host of San Diego's CW6 News, uttered the phrase, "Alexa ordered me a dollhouse." Whether this happened at such a grand scale is beside the point. The real problem is that the dollhouse incident sounds entirely plausible and, once again, raises doubts about the inner workings of the IoT devices to which we have entrusted so much of our daily lives, comfort and safety.
From the IoT perspective, the intangible damage of such occurrences is considerable. When one autonomous vehicle fails, all autonomous vehicles take a reputational hit. When one smart home assistant does stupid things, all smart home assistants’ intelligence comes into question.
The data elephant in the room
Every time an algorithm makes a wrong decision, its purveyors promise a thorough investigation and a swift correction. However, due to all these algorithms’ proprietary, for-profit nature, authorities and the general public have no way of verifying what improvements took place. In the end, we must take companies at their word. Repeat offenses make this a difficult ask.
One primary reason companies do not disclose their algorithms' inner workings (to the extent that they understand them themselves) is that they do not want to reveal everything they do with our data. Self-driving cars keep detailed logs of every trip. Home assistants track activities around the house; record temperature, light and volume settings; and keep a shopping list constantly updated. All this personally identifiable information is collected centrally so the algorithms can learn from it, and it then feeds targeted ads, detailed consumer profiles, behavioral nudges and outright manipulation.
Think back to the time Cambridge Analytica effectively weaponized 87 million unsuspecting users' social media profile information to misinform voters, and may have helped sway a US presidential election. If your friends list and some online discussion groups are enough for an algorithm to pinpoint the best ways to influence your beliefs and behaviors, what deeper and stronger manipulation could detailed logs of your heart rate, movement and sleep patterns enable?
Companies have a vested interest in keeping algorithms opaque because opacity lets them tune the algorithms to their for-profit purposes and amass enormous centralized databases of sensitive user data along the way. As more and more users wake up to this painful but necessary realization, IoT adoption and development risk grinding to a halt, and mounting skepticism stands in the way of further algorithmic progress. What are we to do?
The transition to the ‘internet of transparency’
The most urgent focus should be on making what algorithms do more understandable and transparent. To maximize trust and eliminate the adverse effects of algorithmic opacity, IoT needs to become the "internet of transparency." The industry can create transparency by decoupling AI from centralized data collection and open-sourcing as many algorithms as possible. Technologies like masked federated learning and edge AI enable these positive steps; we need the will to pursue them. It will not be easy, and some big tech companies will not go down without a fight, but we will all be better off on the other side.
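To make the decoupling idea concrete, here is a minimal sketch of federated averaging, the basic pattern behind federated learning: each device trains on its own private data and shares only model updates with a server, which averages them. The toy linear model, the function names and all parameters are illustrative assumptions for this sketch, not any vendor's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's training pass on its private data (toy linear model).
    Only the updated weights leave the device; X and y never do."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(updates):
    """Server step: aggregate device updates without seeing raw data."""
    return np.mean(updates, axis=0)

# Simulate three devices, each holding private data for the same task
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

# Training rounds: local updates on-device, averaging on the server
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = federated_average(updates)

print(global_w)  # converges near true_w, yet no raw data was pooled
```

In a production system, the shared updates would additionally be masked or encrypted (as in masked federated learning) so that even individual updates cannot be inspected by the server, but the core privacy gain is already visible here: the model improves while the data stays on the device.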
About the author
Leif-Nissen Lundbæk, PhD, is co-founder and CEO of Xayn. His work focuses mainly on algorithms and applications for privacy-preserving AI. In 2017, he founded the privacy tech company together with professor and chief research officer Michael Huth and COO Felix Hahmann. The Xayn mobile app is a private search and discovery browser for the internet, combining a search engine, a discovery feed and a mobile browser with a focus on privacy, personalization and intuitive design. Winner of the first Porsche Innovation Contest, the Berlin-based AI company has worked with Porsche, Daimler, Deutsche Bahn and Siemens.