Human Rights and New Technologies: Setting the Agenda for Human Rights-Centred Innovation

15 Feb 2019

Author: Molly K. Land

This contribution is published as part of the UNRISD Think Piece Series, From Disruption to Transformation? Linking Technology and Human Rights for Sustainable Development, launched to coincide with the 37th Session of the UN Human Rights Council and the 70th anniversary of the Universal Declaration of Human Rights. In this series, experts from academia, think tanks and civil society engage with the topic of linking technology and human rights, and share their experience at the front lines of policy-driven research and advocacy aimed at leaving no one behind in an increasingly digital, automated world.

Molly K. Land is Professor of Law and Human Rights at the University of Connecticut School of Law and Associate Director of the Human Rights Institute at the University of Connecticut.

Technology and human rights?


It is a truism by now to say that technology presents both risks and opportunities for human rights. It does indeed do both. But it is far more accurate to say that without affirmative efforts on the part of all those interested in technology’s impact on the enjoyment of human rights, the deployment of new technologies will only serve to replicate and exacerbate existing inequality. For technology to have a transformative effect on human relations, we must be far more mindful of who builds it, for what purposes, and what kinds of power and privilege are embedded within it.

The problems that technology presents for human rights can be well illustrated by Lea Shaver's case study on prepaid water meters installed in Phiri, a poor neighbourhood in Johannesburg, South Africa, where residents often lacked the resources to pay water bills. The new meters were presented by Johannesburg Water as a technological solution to a financial problem. Each household was allocated a small monthly ration of free water, which could be supplemented by buying tokens for the meter. Although the water meters did conserve water and increase payments to the water company, they also led to intense hardship for residents whose scarce resources were still not sufficient to buy the tokens they needed and who consequently ended up going without water for days or weeks at a time. Despite its progressive jurisprudence on social and economic rights, the South African Constitutional Court dismissed the legal challenge to the water meters brought by local activists.

The case of the Phiri water meters illustrates three important lessons we must learn in order to reorient the development and deployment of technology toward human rights goals.

Technology replicates and reinforces the status quo


First, even seemingly neutral technology will replicate preexisting inequalities and marginalization. It does so because technology is generally designed and implemented by those with privilege, and those privileges are built into the technology itself. For example, Johannesburg Water allocated households in Phiri 6 kilolitres (kL) of free water per month, assuming households of eight residents, and hence 25 litres of water per person per day. In reality, the average property in Phiri housed not eight but sixteen residents, effectively cutting the per-person allowance in half. Those who decided at what point the meter would cease dispensing water made assumptions based on their own experiences, which reflected environments of wealth.
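The arithmetic behind these figures is simple, and a quick calculation shows how sensitive the allowance is to household size. The sketch below is purely illustrative (it assumes a 30-day month, which is not specified in the original case study) and simply reproduces the per-person figures implied above.

```python
# Illustrative calculation of the Phiri free-water allowance per person.
# Figures from the text: 6 kL of free water per household per month,
# an assumed household of 8 residents, and an actual average of 16.
# A 30-day month is assumed here for the daily breakdown.

FREE_ALLOWANCE_LITRES = 6_000   # 6 kL per household per month
DAYS_PER_MONTH = 30             # assumption for the per-day figure


def litres_per_person_per_day(household_size: int) -> float:
    """Daily free-water allowance per person for a given household size."""
    return FREE_ALLOWANCE_LITRES / DAYS_PER_MONTH / household_size


print(litres_per_person_per_day(8))   # 25.0 -> Johannesburg Water's assumption
print(litres_per_person_per_day(16))  # 12.5 -> the average Phiri household
```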

Technology built by private actors will favour commercial interests


Second, technology that is built by private actors for commercial gain will tend to favour those commercial interests. The rationale for installing the meters seemed, I am sure, reasonable and appropriate to those who implemented it—to limit waste and ensure that residents pay their water bills. The free monthly allowance was set at approximately one third of what residents had used prior to the meters, with the explicit goal of limiting consumption.

But when residents could not afford the prepaid tokens for additional water, they suffered extraordinary hardship. Those already living on meagre resources were forced to make impossible choices, limiting basic hygiene for themselves and their families to ensure that the free water allowance lasted a full month. When water provision was oriented toward cost recovery rather than meeting basic needs, the poor inevitably suffered.

Commercialization of technology adds a further dimension of risk. Introducing new technology in a commercial context typically requires investment, which can then “lock in” errors. For example, in the Phiri case the significant cost to the company of installing the meters became a legal rationale in favour of retaining the new system. During litigation, the City of Johannesburg argued that uninstalling prepaid meters and replacing them with other mechanisms would have a substantial impact on the water company’s “finances and sustainability.” Thus, the technology became a self-fulfilling prophecy in part because of its cost. Once installed, it was locked in, along with any harms to rights that it caused.

Technology undermines accountability


Third, technology makes it much harder to hold duty bearers accountable because it obscures who is doing what. Technology makes agency invisible and “naturalizes” activity that might otherwise give rise to concern. In the water meters case, for example, access to water was not “disconnected” by the utility company—rather, the water meter simply required payment prior to access. This obscuring of agency is even more apparent with technologies like algorithms employed in the context of bail determinations or social media content moderation.

Technology also replicates existing inequality because poor and vulnerable populations are less likely to challenge its effects. Conventional water meters, still found in white gated communities in Johannesburg, had built-in protections against hardship: water did not have to be paid for in advance, so a household that exceeded its supply could continue drawing water on credit. A household could be disconnected only after legal process and an opportunity to contest the decision. One might ask why prepaid meters, which do not include such safeguards, were installed in a neighbourhood as vulnerable as Phiri. My guess is that prepaid water meters in rich neighbourhoods would not have been tolerated; public outcry and political pressure would have prevented their installation. There was public resistance in Phiri, but Johannesburg Water found it far easier to ignore.

Conclusion


In thinking about the relationship between technology and human rights, I want us to take seriously the General Assembly’s pledge, part of the 2030 Agenda for Sustainable Development, to ensure that “no one will be left behind.” Ensuring that no one is left behind in innovation requires sustainable practices. But sustainable rights-based development requires sustainable rights-based technology. And that requires changes in how we incentivize, develop, and implement new technologies.

Among other things, we need to institutionalize the practice of technology impact assessments by public and private actors alike. Risk assessment must be integrated into processes of technological development and design, so that risks to human rights can be addressed before they are locked in. We also need to build technology with, rather than for, practitioners. Too often, those who design technology do not represent the diversity of the populations who will be affected by their products. Sustainable technology must also meet the specific needs of practitioners and human rights defenders, particularly their needs for privacy, security, and protection from harm.

I believe technology does indeed hold great promise. I am deeply skeptical, however, about whether technology will serve as a catalyst for challenging the fundamental power structures in the world. And the risks and benefits of innovation are unlikely to be shared equally. For technology to promote human rights objectives we must build it for the world we want, not the one we have.

This article reflects the views of the author(s) and does not necessarily represent those of the United Nations Research Institute for Social Development.