Saturday 10 June 2017

Methodological Challenges in (the Design of Devices for) Tracking People

I'll be giving a talk at a workshop for the AHRC Tracking People Network next week, so - as ever - I thought I'd sketch it out up here first and get my thoughts down. Whereas the first two workshops focused on scoping the landscape and legal issues respectively, this one concentrates on technological and methodological challenges. So, we'll be hearing about the technologies used, how errors can arise, and some of the methodological challenges of doing research in this area.

I'm taking a slightly different angle: the methodological challenges in designing tracking devices. These are what make the difference between a technology that works in the lab and a product that is successful in the field. I apologise if this comes off as a bit of a brain dump - my aim is to get the thoughts down here, and then trim them back for the presentation.

Let's start with the complexity of tracking systems; the problems this presents to designers and engineers; and then three design tools that might help: user-centered design; sociotechnical systems analysis; and critical design. It's worth saying that I don't think any of the issues discussed here are unique to tracking devices: many will apply to almost any product. But by their nature, tracking devices are complex systems with multiple, and sometimes unwilling, stakeholders.

Tracking Systems as... well, Systems

Perhaps the best way to explain this is with a systems view of the problem. Any tracking technology is a system - GPS, for example. You have an electronic system that sits and listens out for the time signals from the GPS satellites, and by comparing the differences works out how far it is from each satellite, then trilaterates these distances to identify a location. So the little GPS unit that sits in your phone or atop your Arduino (to show my colours!) is already just a subsystem in a larger system. If those satellites go down, you've got a problem.
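As a toy illustration of the trilateration principle, here's a two-dimensional sketch with made-up beacon positions: real GPS works in three dimensions and must also solve for the receiver's clock error, but the "known distances to known points pin down a location" idea is the same.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate (x, y) from distances to three known points (2D toy version).

    Subtracting the circle equations pairwise cancels the squared terms,
    leaving two linear equations in x and y, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at known positions; distances measured to an unknown point at (3, 4)
print(trilaterate((0, 0), 5.0,
                  (10, 0), math.hypot(7, 4),
                  (0, 10), math.hypot(3, 6)))  # → roughly (3.0, 4.0)
```

In practice a receiver has noisy measurements from more satellites than unknowns, so it solves a least-squares version of this rather than an exact system.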

Moreover, the GPS unit just returns co-ordinates (more or less). It has no idea what they mean and doesn't transmit them anywhere except to its output pins. So you need a microcontroller to interface with it, perform operations on this data and decide what to do with it. This will, of course, depend on the application. Maybe you want co-ordinates broadcast continuously. Maybe you want to offload them once a week via USB. Maybe all you want is an alarm broadcast if the device's co-ordinates are outside a given range. And if you're broadcasting, then there needs to be something to broadcast *to*: some system to receive, and store the data. And unless you happen to be broadcasting to a system that is also in your direct possession, then you're relying on communications infrastructure to transfer that data for you.
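The "alarm if the device's co-ordinates are outside a given range" case might be sketched as a simple geofence check. The names, co-ordinates and thresholds below are purely illustrative; a real device would also have to cope with GPS error, dropped fixes and spoofing.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def outside_geofence(fix, centre, radius_m):
    """True if a GPS fix falls outside a circular zone around `centre`."""
    return haversine_m(fix[0], fix[1], centre[0], centre[1]) > radius_m

home = (53.8067, -1.5550)  # illustrative centre point
print(outside_geofence((53.8068, -1.5551), home, 200))  # a few metres away: False
print(outside_geofence((53.9000, -1.5550), home, 200))  # ~10 km away: True
```

Even this trivial logic embodies design decisions - the shape and size of the zone, what counts as a breach, how often to check - each of which trades off battery, false alarms and the tracked person's freedom.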

Yet there's more: this device will need power. A battery, probably - for tracking applications, I doubt you'd want to plug into the mains - and batteries can run down, and need to be charged up. So the unit isn't going to be entirely self-contained and independent of its user. That's not necessarily bad, but it does mean that you've got to worry about battery life, whether the user can be relied on to do the charging, and what happens if they don't.
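The battery-life worry can at least be roughed out with back-of-envelope duty-cycle arithmetic. All the figures here are made up for illustration; real current draws depend heavily on the GPS module, the radio and how aggressively the firmware sleeps.

```python
def battery_life_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Rough battery life from a duty-cycled average current draw.

    duty_cycle is the fraction of time in the active state (GPS fixing,
    radio transmitting); the remainder is spent in low-power sleep.
    """
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_ma

# Illustrative figures: 1000 mAh cell, 80 mA while fixing/transmitting,
# 0.5 mA asleep, waking for a fix 2% of the time.
print(battery_life_hours(1000, 80.0, 0.5, 0.02))  # → roughly 480 hours (~20 days)
```

The interesting design point is how sensitive the answer is to the duty cycle: track continuously and the same cell lasts barely half a day, which is exactly the kind of trade-off (coverage versus charging burden on the user) that has to be settled early.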

Which brings us to another issue: you need to have some sort of system outside the device itself to do something in response to the data. I mean, unless you're a homebrew hacker who is just playing around to learn how to use GPS, you're tracking for a reason, and you generally want behaviours to change in relation to tracking - either so that people will contain themselves within an area; or so that support can go out if they stop moving around or end up somewhere unexpected; or so that they will exercise more because they know they haven't walked far enough, and so on and so forth. So, the success of a device doesn't just depend on the tracking technology, but on the user's behaviour, and the external systems it fits into.

This applies to almost any product, of course, but it poses a particular problem in this case because for applications in criminal justice or health-related tracking (particularly of people with dementia), the user may not be a willing part of the system. Here the Deleuzian concept of assemblages and the language of "territorialisation" are particularly apt. Being tracked means being colonised by a system, whose broader elements you don't control, whether you like it or not.

It gets worse: who's the user? The person or organisation doing the tracking? The person being tracked? And if you're not paying out of your own pocket, then what about the funder? What about family, friends, and relations who might immediately be affected by the presence of tracking - for better or for worse? And then you run into the classic problem of user-centered design: designing a bespoke system for one set of users is challenging, but you can potentially sit down with them and thrash out an agreeable solution. But bespoke design is expensive, and there is no guarantee that that will be a great solution for anyone else. In most cases, you want economies of scale, and that means trying to grasp preferences across populations and demographics and even borders.

The issue here is that tracking technologies inevitably entail a complex network of systems interacting, and they all raise the potential for things to go wrong. Designing a tracking device has a lot of the characteristics of a "Wicked Problem": lots of complicated interacting parts and parties whose interests don't necessarily align. Which is an uncontroversial conclusion: I mean, we wouldn't be running this network if these issues didn't crop up. But it does give us a particular handle on the challenges faced by engineers and designers. So let's dig into those a little.

Bounded Rationality: An Awful Lot to Think About

There are a few useful perspectives we can use to think through this issue.

First up, we have the Dual Nature of Technical Artefacts: that artefacts have an objective, measurable physical nature and a subjective intentional nature. It is the fit between this physical nature and the design's environment that determines how well it meets the intentional nature - and therefore whether it is a "successful" design. Of course, with the intentional nature being subjective, the same object may be a great design for one person, and a terrible design for another, even when used in the same environment. And this is one of the reasons why we have so many different makes of car, or mobile phone, or computer: different people have different needs and different priorities. It also means that a design that looks great to you as a designer or engineer may be awful for your intended users. And of course, when you have multiple users all with their own priorities and points of view, you may find that a design that is great for one of them is awful for another.

Inevitably, you have to make trade-offs, and not just between stakeholders, either. For any given person, the ideal system will probably be lighter, stronger, more comfortable, more beautiful, more functional and cheaper than can be achieved in real life. Sometimes you're lucky, and you can get a Pareto improvement, and make every important characteristic for everyone better, and only give up on some of the less important characteristics, but that's the exception, not the rule. Usually, you have to decide which takes priority. Will you sacrifice functions for low price? Will you pay more to keep some functions in? And what happens when the priorities of different users conflict?

This brings us on to the next problem: specialisation. The days of the artisan in product development - where one person worked with the end user and crafted a product from start to finish - are long gone. This may be the norm in the Maker community, but most product development involves discipline specialists who each work on their own aspects of the design. This is something highlighted beautifully by Louis Bucciarelli in Designing Engineers, when he points out that in the design of solar panels, the electrical engineers view the product as a series of flows and components, with no mass or physical existence, while the structural engineers view them purely as blocks of material with mass, needing to be held in a given position against given forces due to gravity, the wind, movement to track the sun, and so forth - with no consideration of the flows between them or the electrical considerations. This isn't a bad thing - it's an inevitable part of developing complex products and allows people to play to their strengths. Yet it also means that the physical and the intentional nature are ever more fragmented. You can see this in the V-model of systems engineering: start with the overall needs, and translate these into requirements; divvy these requirements up amongst the relevant subsystems; and once you've designed the subsystems, start to combine and test them.



This means that each subsystem is designed with only a subset of the overall intentional nature in mind. That's not necessarily a problem, but it does make it difficult to trace through the potential consequences of decisions.

The decisions made early on in design impact everything downstream in the product's life - manufacturing, assembly and distribution costs and processes; environmental impact; ease of disposal; ease of use for different demographics; ease of maintenance and repair; robustness and resilience to changes in other systems - and these impacts are often uncertain; and changes get harder and harder to make (or at least, more and more expensive) the further you get into the process:



Ethically, this manifests itself in the Collingridge Dilemma: the consequences of a new technology are difficult to foresee until it has become widely used, by which stage it is very difficult (not to mention costly) to change because it has become entrenched. This is true with tracking technologies: you won't know how they will be used or misused until they're in widespread use, by which time it is very difficult to put the genie back in the bottle. Moreover, there are interactions here: the very complexity of the systems that contribute to the "success" of the tracking device means that they may change long after the initial design is complete, and it's very difficult to predict how they will move on.

Finally, there is the problem of Bounded Rationality: individuals only have limited mental processing power and can only attend properly to so much information at any given time. The idea of performing optimal trade-offs in your head between hundreds of competing requirements is naive at best. And asking designers and engineers to just think about more things makes this worse, particularly when you're dealing with a complicated network of interacting systems being designed by different people, with different experiences and priorities, trying to balance the conflicting needs of multiple stakeholders.

So, there are a lot of challenges here. How do designers address them? Well, there are a few tools in the designer's arsenal.

User-Centered Design

User-Centered Design (UCD) is one approach to this. UCD is more of a philosophy and a set of tools than a single approach, but it emphasises understanding the user and placing them (rather than, say, the technology) at the heart of the design process. It *isn't* asking users what they want: that might be part of it, though users often don't know what they want, or can only give responses by reference to existing products. In most cases, users don't have the technical skills to develop the product themselves (especially given all of the issues raised above).

Rather, UCD is about developing an understanding of your range of users - their habits, tastes, aspirations, environment - and how whatever is being designed will fit into it. At one end of the scale, this can mean forming fictional personas and use cases representing typical scenarios based on interviews, surveys and direct observation. This gives the designers something logical to think through: "how would this user respond if the design does this?" At the other end of the scale, it can be participatory design: actually involving users in design decisions or ideation. Somewhere in the middle sits consulting users for evaluation purposes - getting feedback on ideas. It's generally represented as an iterative loop, as specified by ISO 9241-210:


Ideally, users will be directly involved in every stage: observed and interviewed to get requirements, involved in discussions to weigh trade-offs. This creates its own challenges: recruiting and getting time with users can be time-consuming and expensive, especially if the design keeps changing. Plus, as we noted above, "users" are a diverse bunch. By "users" we really mean stakeholders, and the opinions of different individuals may conflict even when they represent the same "class" of stakeholder (in this case: tracker, tracked, outside user of data, funders, etc.).

This dovetails very easily with the V-model of Systems Engineering we saw above (which, after all, also involves identifying needs, specifying requirements, generating designs and testing them as they are integrated), though as you can imagine, with an iterative loop for every subsystem in the architecture and for the system as a whole, this can get very cumbersome. Of course, with a good grasp of the users' needs, you don't need their input to evaluate every requirement. Provided you've broken down the requirements among the architecture correctly, you're sorted.

It's even more challenging when working with vulnerable populations such as children or dementia patients. It's one thing to work with users on the design of a new mainstream health tracking app, where the target users are mobile, able to come to you, and generally have no communication difficulties; it's quite another when your users struggle to travel, to communicate, or to engage with the process at all.

Our own experiences engaging children in the Together Through Play and MyPAM projects highlighted this issue. Children's designs were strongly anchored in existing devices, and feedback was generally very positive - they didn't like to be too negative, and in general they expect the adult designer to know the answer. People like Janet Read have spent a lot of time dealing with this sort of issue, and developing methods for engaging children, which are well worth looking into.

Personas are a way of addressing this, though since the people they represent aren't present to make their points or argue their case, it's easy to "fudge" your assumptions to get your favourite idea through. Proxy users are another approach - asking parents or carers to get involved, though even they may not be able to give a direct answer.

Of course, what we're trying to do is iterate early, when change is cheap, rather than waiting until we've got 10,000 units in assembly to suddenly have to make changes, so some user involvement is better than no user involvement. You also need to recognise that users may not know what problems will arise: something that looks great and feels comfortable in a two-hour focus group might be excruciating after being worn for twelve hours, for example.

Sociotechnical Systems Analysis
Given the challenges of involving users, you want to make sure that you're getting valuable information from them. That means ironing out the problems you could work out in other ways (through anthropometrics, for example), but it also means trying to make sure that you discuss all the important angles with them. One approach that I value is Clegg and Challenger's Sociotechnical Framework:



This was intended for analysing organisations, but applies pretty well to any sociotechnical system. It identifies six pillars, each of which can affect the way a system behaves - as can the interactions between them. For example, in the context of tracking, say, a patient with dementia:


In this case, we can see that the system has some technical goals: identifying the patient's location at any given time; and alerting the caregiver if they move away from the area they should be in (a hospital ward, for example). Of course, the Goals will differ for different stakeholders. For the patient's family, the goal may be to ensure the patient's safety, or to reassure themselves of it; the organisation housing the patient (be it a hospital or a care home) may wish to minimise the cost of care or the risk of embarrassing headlines; the police may wish to reduce the resources spent searching for missing patients; the patient's goal may be to walk as often and as freely as they like.

Of course, this still depends on having access to stakeholders: you can use these six "pillars" and still come up with findings that are based on erroneous assumptions, but it provides a useful structure for checking that you're capturing all the main issues when setting your requirements. After all, setting the requirements is in many ways the most important part of product development - develop to the wrong set of requirements, and you'll only have a product that "works" by fluke. And remember that what "works" is defined by that subjective, intentional nature of the product, and may be different for different people.

Critical Design
I'll finish up with a tangent, partly because it's a thought that keeps coming back to me, and partly because it relates to the workshop that I'll be running on Thursday. One of the problems in thinking through user needs and the possible impacts of technologies is that we get bogged down in the practicalities of what can be achieved, or how easy something will be to design, and so on and so forth. An interesting approach to design has been developed by Dunne and Raby in the form of "Critical Design" - designs intended to evoke debate and discussion, rather than for sale. Similar approaches are Design Fictions or Speculative Design. These provide an interesting way of exploring issues. Hence, one of the ways to understand requirements is to generate an "idealised" object - and then try to understand what makes it "ideal". Equally, you can use these approaches to look at what might happen - something that often occurs with science fiction. We thought we'd give this a go, by asking attendees to come up with their ideal tracking device - not to specifically design it, but to at least conceptualise what would make such a device ideal for them. Hopefully, it'll encourage some interesting discussion. I'm keen to see how it goes.

In Summary
So, what does all this mean? Well, to recap: like many complex products, tracking devices present not just technological challenges but the challenge of managing the diverse (and sometimes conflicting) requirements of multiple stakeholders. Making trade-offs between those requirements is difficult, particularly given the need to manage them across multiple subsystems and potentially multiple teams or suppliers, each with their own view of the product and what it needs to achieve. This is made all the harder when the users themselves are difficult to access or to engage fully in the process - and sometimes the very features that make them hard to engage are the reasons we wish to track these people (for example, those with dementia). And we'll find out on Thursday whether we can use some "Design Fictions" to explore people's concerns and interests in tracking.
