IoT as an Ethical Challenge

This is the first in a series of posts addressing ethics in IoT through a range of domain-based case studies. It is based on preliminary research conducted by the VIRT-EU project (Values and Ethics in Innovation for Responsible Technology in Europe, ICT-732027) team from the IT University of Copenhagen (Rachel Douglas-Jones, Ester Fritsch and Irina Shklovski) and was developed in collaboration with Thor Hauberg, who works in Intelligence as a Principal Expert on IoT.

IoT as an Ethical Challenge
The rapid development of the Internet of Things is making existing ethical challenges of digital technologies more acute. In this blogpost series we discuss the unintended consequences of technology use and the hidden costs of technological development and adoption, and we end with a set of considerations that managers can employ. It would be misleading to say that IoT is a single revolutionary technology. It is more appropriate to describe IoT as a series of loosely related digital innovations which, as prices decreased, made it possible to imagine and realize new services and products at ever larger scales of adoption. It is fair to say, however, that IoT relies on pervasive data collection, storage and processing to function.

While pervasive and unobtrusive data collection, long-term data storage and unintended data usage are not completely new developments, they now emerge as concrete problems representing ethical challenges that IoT developers, as well as users and policy makers, have to confront. At the same time, most of the traditional strategies for dealing with these issues individually are either only partially effective or simply useless. Take for example the case of Garadget, which shocked its customers by bricking one user’s hardware after a negative online review, or the increasingly frequent cases of companies that make intimate toys also collecting data on their use “because of a minor security bug”.

This series of blog posts will explore the ethical challenges facing IoT. We start from specific cases that exemplify how, in different domains, IoT solutions confront us with potential ethical issues that require immediate attention.

How to Avoid Sleepwalking into the Future
The current political landscape in Denmark has a strongly positive orientation towards technology. During 2017, we have seen the introduction of the world’s first Technological Ambassador, cementing the gravitas of international technology companies relative to nation states. We have also seen the establishment of a Council on Disruption (Disruption Råd), ostensibly to guide future progress. Furthermore, Denmark now hosts a branch of Silicon Valley’s Singularity University, which celebrates companies that exploit the business potential of digital technologies.

However, while there is considerable optimism and excitement in Denmark and elsewhere around the potential for IoT within a broad range of industries, there are currently few fora in which its consequences and implications can be discussed. Equally, few tailored mechanisms exist to help organisations and designers think through the implications of the technologies they design and develop.

In 1986, political theorist Langdon Winner noted that “the puzzle of our time is that we so willingly sleepwalk through the process of reconstituting the conditions of human existence”. We argue that it is not sufficient to continue to sleepwalk into our technological futures. Instead, we need to grow our collective capacity to have local and sectoral discussions about both the short-term consequences of a newly introduced IoT device, as well as the longer term and broader, societal concerns.

What is IoT?
The phrase ‘internet-of-things’ does not refer to a specific technology, but to a type of digitization, where physical objects become digital and connected. When these products are commercialised they are not branded as “IoT”, and therefore much of what is commercially available as an IoT solution is not thought of as such, e.g. Tesla cars, iPhones, Amazon Echo, iPhone-connected intimate toys, and Bluetooth and wifi speakers.

For our purposes, an IoT solution consists of two main parts, an ‘edge’ part and a ‘core’ part. The ‘edge’ part is the physical object where technologies such as sensors, actuators, robots, drones, 3D printers etc. can be applied. The core (platform) is where the data is stored, processed and put to use. When misbehaving devices gain notoriety and public attention, these are problems of the ‘edge’, the most visible and easy to conceptualise dimension of IoT, the ‘things’ that work differently than previously imagined. However, the broader negative potentials and possibilities that we want to draw into the discussion here also arise from practices around the ‘core’. These concern the larger negotiations of data ownership and exchange, the power to control performance at the edge and the concerns around privacy and surveillance.

"Current discussions about ethical challenges in IoT typically focus on either one or the other, potentially missing how they are interconnected, how they implicate each other, and how ethical discussions need to happen at a range of sites within the full life-cycle of the solution"
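The edge/core split described above can be made concrete with a minimal sketch. This is our own illustration, not from the original post: all class, method and field names are hypothetical, and the sensor reading is a stub rather than real hardware. Note that the ethically loaded practices (long-term storage, aggregation, secondary use) live in the core, even though public attention usually lands on the edge device.

```python
# Illustrative sketch of the edge/core split. All names are hypothetical.

class CorePlatform:
    """The 'core': where data is stored, processed and put to use."""
    def __init__(self):
        self.store = []  # long-term storage; also where secondary uses arise

    def ingest(self, reading):
        self.store.append(reading)

    def average_temperature(self):
        temps = [r["temperature_c"] for r in self.store]
        return sum(temps) / len(temps) if temps else None


class EdgeDevice:
    """The 'edge': a physical object carrying sensors and actuators."""
    def __init__(self, device_id, core):
        self.device_id = device_id
        self.core = core  # the platform this device reports to

    def read_sensor(self):
        # A real device would sample hardware; here it is a fixed stub value.
        return {"device": self.device_id, "temperature_c": 21.5}

    def report(self):
        # The edge pushes raw readings to the core.
        self.core.ingest(self.read_sensor())


core = CorePlatform()
device = EdgeDevice("sensor-001", core)
device.report()
print(core.average_temperature())  # 21.5
```

Even in this toy version, the questions raised later in this post attach naturally to specific lines: what `read_sensor` measures and why, and what happens to everything accumulating in `core.store`.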

Beyond Compliance (Good Intentions That Might Pave Roads to Hell)
Why talk about ethics in relation to the IoT? Ethics, when the term is brought up, is typically associated with a complex field of philosophy, and it can be difficult to see how a discussion of ethics is relevant in a fast-moving business environment of technology development. Our point, however, is that making space for ethical reflection in the development of IoT technologies is important. Ethics concerns our sense of right and wrong, an everyday assessment familiar to us all. Ethical stakes in IoT development cannot be reduced to behavior and choices that comply with existing and upcoming regulations. Compliance is the ethical baseline, not the ethical ceiling, and in these posts we discuss matters beyond compliance that address urgent concerns and a sense of right and wrong.

We use the word ‘ethics’ to denote a broader conception of the kinds of potential outcomes and problems that might arise from design, development and business decisions in IoT innovation. We don’t just mean to draw attention to a problem and ways to solve it, but to dig into why the problem occurs and whether the proposed solution is addressing the right problem in the first place.

Over the past year alone, we have seen a large number of well-intended products having quite unintended consequences. Consider for example the recent case of Lockstate, which managed to brick hundreds of smart locks with a faulty firmware update and suggested that customers either break the locks to their homes or wait for a replacement, something that might take up to two weeks. Some of these consequences are amusing, some are startling, but more often than not they are concerning.

We argue that a broader focus on the implications of IoT is important, and aim to provide a set of initial questions that might support developing the capacity to see ethical issues before they become significantly problematic.

Competition, Costs and Consequences
IoT has captured industry imagination as a future market with enormous potential. Across a range of sectors, there are huge expectations for the effects emerging products could have, but the risk of failure is a risk not merely to an individual company but also to other actors who would wish to introduce products in that area. Failures in this arena, however, are quite frequent for two reasons: the pressure to launch new products, and the lack of digital expertise.

First, the pressure and competition to produce innovative technical solutions limit the kind of attention that might be paid to ethical concerns. There seems to be a broadly held anxiety that design decisions informed by ethical consideration are more complex, and therefore may slow down development processes in a fast-moving environment. Second, due to the complexity of the IoT space, when companies with a particular area of expertise move into connected products they often not only have problems attracting the relevant developers, but also have no existing capacity to identify all potential problems. For example, L’Oréal’s expertise in the beauty sector has provided it with a significant market share and a thorough understanding of the ethical issues associated with its traditional products (e.g. sourcing, types of testing).

However, they likely have little ground from which to understand what it means for a hairbrush to become a connected product, constantly used not only to brush hair but also as a mode and platform of communication. The range of issues they now need to take into consideration in shifting from a hairbrush to a connected hairbrush includes the dynamics of social practices and relational management associated with mediated communication that their users will have to navigate. They also need to consider what kinds of data these brushes are collecting, what portion of that data ought to be visible to the user, and how to manage these data ethically. Finally, they must now think about cybersecurity on top of how well the bristles massage the scalp.

The expertise required to consider these new issues is vastly different from that acquired over years of work with beauty products. The unintended consequences of launching a connected hairbrush can create significant public relations as well as security problems, leading to reputational damage or claims for financial compensation. While this may not seriously affect large companies such as L’Oréal, failure to manage risk of this sort could have disastrous consequences for startups or SMEs.

Financial compensation is particularly critical right now, with the GDPR coming into force in May 2018. The fines for operating at the edge of this regulation are high, up to 4% of annual global turnover or €20 million, whichever is higher, and it will be difficult to implement all of the mandated safeguards and reporting.
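To make the scale of those fines concrete: under Article 83(5) of the GDPR, the upper tier of administrative fines is capped at €20 million or 4% of annual global turnover, whichever is higher. A minimal sketch of that arithmetic (the function name and the turnover figures are our own, for illustration only):

```python
def max_gdpr_fine(annual_global_turnover_eur):
    # Upper tier of GDPR administrative fines (Article 83(5)):
    # up to EUR 20 million or 4% of annual global turnover,
    # whichever is higher.
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# For a company with EUR 1bn turnover, 4% dominates:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# For a smaller company with EUR 100m turnover, the EUR 20m floor applies:
print(max_gdpr_fine(100_000_000))  # 20000000
```

The point of the "whichever is higher" clause is that even small companies face a ceiling of €20 million, so turnover alone does not bound exposure.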

However, the GDPR raises a further important point. Although legislators are slow, they will eventually regulate to protect consumers and employees, and enforce this regulation financially. The costs companies incur in adapting to these new rules are significant, because most legacy systems did not consider the potential ethical issues of their personal data collection and processing practices. Had this been a consideration at the time of design and implementation, the total cost of subsequent compliance would likely have been much smaller.

Looking at the costs through a lens of ethics, however, reaches beyond mere financial challenges, law and compliance. Many more or less hidden costs are at play. These include human dignity, potential discrimination, privacy, and the environmental costs of development, to name just a few. Costs and potentials of new IoT technologies are inevitably interconnected and not straightforward to balance. For example, in the environmental domain some IoT inventions may hold the potential to reduce energy consumption while demanding substantial resources to be created. Some costs are strongly connected to unintended consequences of IoT technologies, such as the potential for a pacemaker to be hacked with fatal outcomes. Thinking about costs and consequences while pursuing the potential of IoT technologies is an imperative for both developers and adopters of new technology.

Looking for Ethics in IoT
The concerns that we discuss in this blogpost, and the following series of more specific examples, will provide a detailed view of the ethical challenges ahead. To help business executives navigate this space, we strongly encourage relevant reflection in a timely manner, both for companies developing new technologies and for companies considering the acquisition of new technologies. The argument that we make here is relevant not only to customer-facing products or services and the unintended consequences these might bring about, but also to the systems and services that companies themselves integrate into their value chain.

Across the cases that we go on to discuss in this series, there are a set of baseline questions that we pose. We have selected these questions based on the common blind-spots we have observed in the research we have conducted as part of the VIRT-EU project.

1. Measurement and Data Use: What is measured, and how is it measured? Why measure this? Does this really address the issue at hand, or is it measured just because it can be? What will happen to the data collected, how will it be used and stored, and what are its potential secondary uses?
2. Expertise: The IoT infrastructure spans the field from sensors to cloud computing analytics. It is often impossible to have expertise in the potential vulnerabilities in all of the domains involved across the chain. What knowledge is important to implement necessary assessments?
3. Consequences: What are the potential issues and potential responsibilities that emerge as a result of design and market decisions in the development of this product?
4. Futures: What kind of future do we want, and how do we make spaces for ethical discussion? Without spaces in which to ask questions that concern longer term and societal implications, devices enter the market with little consideration of their broader and societal consequences.

Tags: 
big data, data and society, IoT Ethics, privacy, social networks, social practice, technology