The Ethics of Invention by Sheila Jasanoff is not a book on ethics so much as a book about the complicated relationship between technology, law, and policy. From the title, one might expect yet another "techlash" book written for tech Luddites. Instead, the book argues for a middle ground between uncontrolled enthusiasm for technology and the timeless, often understandable, hostility toward technological progress. Its main focus is the human tendency to delegate power to technological systems, which end up governing human behavior without us even noticing it happening under our noses.
Examples and case studies are abundant in this book. To make the point, she takes us through how traffic signals came to dictate the laws of the road: "Inanimate lights backed by invisible experts and unseen electrical circuits have stepped in to discipline behavior that was once risky". No one questions it; we simply go about our day being governed by these systems as they reshape our world over time. It is left to the reader to ponder whether it is truly possible to build a democratic consensus around such issues. She compares the power given to technological systems with legal constitutions: we can comprehend our delegation to lawmakers, but our delegation to technological systems often does not compute.
She highlights three major fallacies that we fall prey to when designing policies, and how over- or under-reliance on them affects people.
- Technological determinism: "Once a new technology is invented, it possesses an unstoppable momentum, reshaping society to its insatiable demands". This is the belief that innovation is always beneficial for society and should be pushed as far as possible. She takes the example of refrigerants, which were beneficial in the short term but caused ozone depletion in the long term, or how automobiles brought enormous progress for humanity in the short term but likely produced many unaccounted-for externalities in the long term.
- Myth of technocracy: "only those with specialist knowledge and skills can manage and control technology". She makes two points here. The first is a critique of technical risk assessment. She argues that technology is value-laden from start to finish: the inventor's desired end clouds our judgment and often forces externalities to be classified as "unintended". Expanding on this, she writes that "Experts' imaginations are often circumscribed by the nature of their expertise." It is often our first instinct to rely on experts, but they frequently "overestimate the degree of certainty behind their positions" on a matter and "blind themselves to knowledge coming from outside their closed ranks." She illustrates this with the Challenger shuttle disaster and the 2008 financial crisis.
- Unintended consequences: "If technological mishaps, accidents, and disasters seem unintended, it is because the process of designing technologies is rarely exposed to full public view". She asks whether it is fair to use such a fuzzy word as "unintended": can there ever be intended consequences of a technology, and if so, shouldn't policy be designed to tackle them in the first place? Another problem she raises is that intention is fixed morally at a specific moment in time, yet morality is not static in the long term and is bound to change, which raises the question of who is responsible for tracking those changes.
The rest of the book examines problems of "risks, inequality, and human dignity" that need to be addressed if our society is to progress and grow responsibly alongside our technological innovations. It ends with a series of questions: do we exist to further technology, or does technology exist to further our goals and ambitions? Do we rely on technology to solve climate change, or do we accept and deal with the fact that we mismanaged it in the first place? We need to free our thinking from the three fallacies and start agreeing that "ethical analysis, political supervision and long impeded systematic thinking" can be applied to technological innovations.
She argues that we need to rethink tech policy: rather than treating risk assessment as mere anticipation after the fact, we should bring values into our decision-making at an earlier stage. At the same time, she asks us not to rely on "trickle-down innovation" but to look toward collective ownership and collective good, rather than treating all technology as extractive and solely for profiteering. A further caution is to keep ethical discourse public rather than confining it to private settings, so that it does not deepen the alienation between the views of experts hired by private bodies and those of democratic institutions.