Tuesday 28 August 2018

Unintended Consequences, Part 2: Papers on the Collingridge Dilemma

It's been over six months since I threatened to start reading on the Collingridge Dilemma. It's always a bit of a challenge: this is rather outside the scope of my day job (despite my various cross-disciplinary links) and must therefore take a back seat to more pressing activities, and it's also outside my sphere of expertise, which makes it harder going than, say, reviewing prehension literature. Anyway, after a short search of the Library Website, with the terms "Collingridge Dilemma", "Technology Control Dilemma" (a variant name for the Collingridge Dilemma, apparently) and "Constructive Technology Assessment" (a solution that a few of the papers referred to), restricted to the last five years, I picked out 17 papers that seemed appropriate. Not exhaustive by any means - but a first cut to get a feel for what's being said in the area.

Caveat Lector
The standing warning applies, as always: I'm talking here rather outside my discipline, so this is a lay perspective. There are plenty of people writing on Responsible Research and Innovation (RRI) or Engineering Ethics and the like who could give you chapter and verse on this far more effectively. Let's just take this for what it is: an engineer's response to a short survey of the literature on topics around the Collingridge Dilemma.

Outcomes
For the most part, the papers I came across were an interesting bunch, if sometimes prone to the hypothetical. Sometimes, engineering projects take on a life of their own, because of the complexity and emergence involved in their development - when running engineering projects, I tend to feel like something closer to a shepherd or farmer than a machinist. Possibly because I'm in academia, of course, but there's certainly a difference between sitting down to a well-defined task where you have a very good idea of what the solution will look like, and the sort of snaking path of trying to address an ill-defined problem with no idea of what the outcome will actually be. It's like trying to plan your route from a map: you theorise the ideal plan, then you get on the ground and discover the ford swollen by rainfall, the bull in the field or the collapsed rock face (does it show that I'm a rambler?), and suddenly you have to rework your whole route. What I don't like about abstract discussions of engineering projects is that they miss this out: you end up with neat-looking diagrams of stage-gates and the assumption that every decision is drawn from a rational and exhaustive analysis of all possible options, rather than from back-of-an-envelope calculations and whatever happened to be handy when you needed a fallback - because someone was ill, or you had to leave work early, or a decision had to be made quickly because someone was going to be away.

Anyway, here are a few of the more interesting ones that I've had a chance to read:

Stilgoe, Owen and Macnaghten [1] provide an interesting discussion that focuses largely on the abandoned field trial from the SPICE geoengineering project. This is particularly interesting not just for the framework (being one of the few examples I could find of a case study of ethical assessment of a developing technology), but because the concerns expressed about the wider project included concerns about the consequences of doing the research at all - that the very existence of the project might produce a "moral hazard", encouraging people to stop worrying about climate change on the basis that geoengineering would just solve the problem. And then there's the counterargument that not doing the research carries an opportunity cost - that we might not buy vital time to reduce greenhouse gas emissions. This tension goes to the heart of a lot of innovation problems, of course - the dangers if the door is opened, the lost potential if it is left closed. They go on to consider a framework for RRI based on four key concepts - anticipation, reflexivity, inclusion and responsiveness. Perhaps most interesting is that, by tying this to a case study, the paper provides some very concrete reflections on the process of trying to apply RRI frameworks and the attached stage gates.

Genus and Stirling [2] offer an interesting critique of Collingridge's work - including the observation that many of those discussing the Collingridge Dilemma don't engage with all of Collingridge's writings on the matter. Which is a helpful reminder to me that I haven't actually read Collingridge's The Social Control of Technology, which feels like a bit of an oversight on my part.  Anyway, one of the things I most liked about this paper was that they brought out the messy nature of product development:

[T]o the extent that RRI approaches can fully embrace Collingridge’s contributions, they will need to grapple not only with contending qualities and principles for rationalistic decision making but also with the fundamental realities (foundational for Collingridge) that the governance of research and innovation are fundamentally about ‘muddling through’ in the presence of steep power gradients and strongly asserted interests. [p67]
They don't go as far as putting forward a practical framework for RRI, but do make some recommendations, most notably that:
His [Collingridge's] prescriptions of inclusion, openness, diversity, incrementalism, flexibility and reversibility might all now be better expressed in terms of qualities other than ‘control’–including care, solidarity, mutualism, non-consequentialist notions of accountability and responsibility itself [p67]
Van de Poel [3] takes the idea of engineering as a large-scale social experiment literally, and considers assessing experimental technologies through the "bioethical principles for experiments with human subjects: non-maleficence, beneficence, respect for autonomy, and justice". He contrasts this with the "technologies of hubris" (a phrase credited to Jasanoff [4]):
Jasanoff speaks of ‘technologies of hubris’, i.e. those "predictive methods (e.g., risk assessment, cost-benefit analysis, climate modelling) that are designed, on the whole, to facilitate management and control, even in areas of high uncertainty" [p668]
Which is a good point: like much engineering decision-making, a lot of these tools rest on the requirement for a (reliable!) crystal ball and a limitless capacity to process information. I'll come back to this in a moment.

Finally, Pesch [5] discusses engineers and active responsibility, noting that:
Engineers have to tap on the different institutional realms of market, science, and state, making their work a ‘hybrid’ activity combining elements from the different institutional realms. To deal with this institutional hybridity, engineers develop routines and heuristics in their professional network, which do not allow societal values to be expressed in a satisfactory manner. [p925]
This is again an interesting point, and one which chimes with my thinking about the significance of engineering institutions as the interface between their members and society.

This is a small selection of the papers I've found - just those I've had time to read to date. I'll cover more in due course, I daresay (give me another six months...). But the main question is: what have I learned? And does it have any bearing on matters like the Tracking People network?

Conclusions... To Date
Collingridge's Dilemma interests me because it relates to the whole issue of engineering failure and its inevitability when dealing with experimental and novel designs. The problem is always the same: absent precognition, predicting how a system will behave becomes increasingly challenging as the complexity of that system and its influence increases. Even modelling the mechanistic behaviour of a system becomes difficult as it gets more complex - uncertainties begin to add up, to the point where extensive testing or redundancy are required before unleashing the system upon the public. 
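To make that "uncertainties add up" point slightly more concrete, here's a toy sketch (entirely my own illustration, with purely hypothetical numbers - not drawn from any of the papers above): a chain of subsystems, each comfortably within a 2% tolerance on its own, whose combined behaviour spreads out as the chain grows.

```python
# Toy illustration (hypothetical numbers): how small, independent uncertainties
# compound as a system grows. Each stage in a chain has a nominal gain of 1.0
# with a 2% standard deviation; the overall behaviour is the product of the
# stage gains, and its spread grows with the number of stages.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

for stages in (1, 5, 20, 50):
    gains = rng.normal(loc=1.0, scale=0.02, size=(trials, stages))
    overall = gains.prod(axis=1)             # behaviour of the whole chain
    spread = overall.std() / overall.mean()  # relative spread of outcomes
    print(f"{stages:2d} stages: relative spread ~ {spread:.1%}")
```

With independent errors the spread grows roughly with the square root of the number of stages; with correlated errors or feedback between subsystems it can grow much faster - hence the testing and redundancy.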

This is even more problematic when the consequences we're talking about aren't just the physical behaviours of designed and natural systems, but the responses of individuals and groups who will react to the way things evolve, and may deliberately obfuscate their behaviours or hide information. Predicting the social consequences of technologies is fraught with peril, and I don't have any good solutions to that. There's a link here with John Foster's excellent The Sustainability Mirage [6], in which he notes that the uncertainty in scientific modelling and in the future itself gives us the wiggle room to equivocate - we just adjust the underpinning assumptions until we get the outcome that we want. I guess this applies to technology and social impact as well. It's not hard to get responses ranging from "this will save the world" to "this will destroy the world", the SPICE project being an interesting case study for that very reason.

Responsible Research and Innovation, Value Sensitive Design, and Constructive Technology Assessment are all posited as potential solutions - I'll need to look further into these. I suppose they are a variant of Design For X - looking ahead to the virtues a design should embody (quality, accessibility, cost, environmental impact), or modelling the potential consequences of design decisions at a given lifecycle stage (manufacture, assembly, use, disposal). I guess the same major challenge must apply - we are at best "boundedly rational", and can only cope with so much information. Throwing more and more information at designers facing greater and greater complexity doesn't guarantee better decisions: it just increases the risk that something will get dropped.

With the Tracking People network symposium coming up in a few months, it makes me wonder whether Tracking People really is concerned with a Collingridge Dilemma. I mean, we aren't really talking about the development of new tracking technologies, are we? Electronic tagging, GPS location, face recognition on CCTV, tracking via internet data (with all its privacy concerns) - these are all established methods of tracking: we're long into the power side of Collingridge's Dilemma. We can see the consequences - the challenge is how to deal with them.

Which raises another challenge - how do you know when a technology is still "new"? I don't think it's as obvious as it sounds. Did Twitter or Facebook or Google know what they were getting into when they started out? Maybe they did. It's just that sometimes the disruptive innovation doesn't become apparent until well after the fact. Then again, perhaps there's an argument that while we didn't know how disruptive those technologies would be, we can see how disruptive - for example - AI or robotics are going to be. But even these are very broad categories. Any given researcher or engineer is probably working on a very incremental step within the field.

There are, of course, a few issues around tracking that arise from all this. There is the potential moral hazard of tracking: the risk that we over-rely on the technology and use it as a cheap alternative to incarceration or to the supervision of patients or children; or that we create a data free-for-all that allows all kinds of unexpected and nefarious uses, whether through breaches, obscure data management practices or unexpected big-data mash-ups of information from multiple sources. And there is the opportunity cost of not tracking: that we fail to give people the increased independence it could offer, or continue to carry the cost of a large prison population and so deprive other potential recipients of taxpayers' money.

This is about to get rather more practical for me: in the SUITCEYES project (http://suitceyes.eu) we are talking about using technologies like location tracking and facial recognition, which raises exactly these issues. What I don't have is a sound solution! Still, at least it's given me some points to reflect on...

References 

1. Stilgoe J, Owen R, Macnaghten P.  Developing a framework for responsible innovation. Research Policy. 2013 Nov;42(9):1568–80.

2. Genus A, Stirling A. Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy. 2018 Feb;47(1):61–9.

3. van de Poel I. An Ethical Framework for Evaluating Experimental Technology. Science and Engineering Ethics. 2016 Jun;22(3):667–86.

4. Jasanoff S. Technologies of Humility: Citizen Participation in Governing Science. In: Bogner A, Torgersen H, editors. Wozu Experten? [Internet]. Wiesbaden: VS Verlag für Sozialwissenschaften; 2005 [cited 2018 Aug 24]. p. 370–89.

5. Pesch U. Engineers and Active Responsibility. Science and Engineering Ethics. 2015 Aug;21(4):925–39.

6. Foster J. The Sustainability Mirage: Illusion and Reality in the Coming War on Climate Change. Routledge; 2008.
