Friday 28 September 2018

Month in Review: September 2018

September signals the approach of teaching - maybe less so this year, since term starts late, but it has still been the dominant feature of the month. Students are back, I've run the welcome-back talks (as I'm stepping back in as Programme Manager while my successor goes on secondment), and lectures begin on Monday. So if things aren't ready now, it's too late.

This is the time of year when you're glad you sent things off to print in July and wrote the exam well in advance. I also have a new module proposal that I'd hoped to clear off my desk before teaching hit; it isn't quite finished, but it's mostly there.

And this is important, because research doesn't go away just because of teaching. We have a SUITCEYES review meeting with the European Commission next month, and deliverables in November, so there can be no slacking off! Still, things are moving: Yang is hard at work developing our sensor systems, technical architecture details are being agreed with CERTH, and I've been developing the new iteration of haptic display drivers (now with added solenoids and I2C communication - a version using feedback for position control is due soon), but most of all - our Work Package 2 research fellow, Adriana, has now carried out the first three interviews for the project, with more to come. An important milestone for the project, and great work from Adriana! Here's to the next round...
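(For the hardware-inclined, the I2C side of the haptic display is conceptually simple: write a drive level to a register on a driver chip, one register per channel. Here's a minimal sketch of the idea in Python using smbus2 - the device address and register map are invented for illustration, not the actual SUITCEYES hardware.)

    # Sketch: driving solenoid actuators over I2C.
    # The address and register map are hypothetical, for illustration only.
    from smbus2 import SMBus

    DRIVER_ADDR = 0x40       # hypothetical I2C address of the solenoid driver
    REG_CHANNEL_BASE = 0x06  # hypothetical base register for channel outputs

    def set_solenoid(bus, channel, level):
        """Write an 8-bit drive level (0-255) to one solenoid channel."""
        bus.write_byte_data(DRIVER_ADDR, REG_CHANNEL_BASE + channel, level)

    with SMBus(1) as bus:          # bus 1 is the usual choice on a Raspberry Pi
        set_solenoid(bus, 0, 255)  # fire channel 0
        set_solenoid(bus, 0, 0)    # release it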

Thursday 27 September 2018

Thinking out loud: Tracking People and Sociotechnical Systems

One of the things on my mind around the Tracking People network is the sociotechnical framework of Rose Challenger and Chris Clegg, laid out in [1]. So I thought I’d sit down and just think aloud about the whole thing for a bit. Tracking involves sociotechnical systems, of that I’m sure: so what can we learn from Challenger and Clegg?

The paper I’m taking this from is about crowd disasters, but it gives a good overview of their framework and some associated systems design principles. I’ll start by discussing my thoughts on general systems issues in tracking, and then work through each of Challenger and Clegg’s points.

General Systems points:
Any technological approach to tracking involves a system made up of subsystems. It must: partly because everything is made up of systems, but more pragmatically because it necessarily involves a mix of software, hardware, at least one person (we are concerned with tracking people, after all - and while I suppose self-tracking of some form is feasible, most of the cases we've discussed involve at least two: one to track and one to be tracked), often some amount of infrastructure (GNSS, telecommunications networks), and some degree of legal and organisational process (since most tracking we have discussed involves criminal justice or healthcare organisations).

It’s also interesting to consider the extent to which designers of such systems design them from scratch, or select them from pre-existing solutions (most applications we came across were RFID or GNSS based, and the developers of the devices involved clearly hadn't invented those technologies themselves - it would be interesting to know how far the developers of the underlying technologies considered these applications when developing them).

Which raises an interesting question about system boundaries, and the extent to which the design of any new tracking system encompasses the design (rather than the selection) of underpinning technologies and the design (rather than accommodation) of organisational processes (and maybe social processes, though I’m not entirely convinced you can design those). All in the hope of getting the emergent property you want (tracking, for it is an emergent property - though even then the tracking is never an end in itself), without any that you don't want, but might arise from the complex interactions of the system.

This inevitably means that tracking is a sociotechnical process. Hence, in this post, I wanted to consider the implications of Challenger and Clegg’s sociotechnical framework for tracking applications.

First, let's consider some terms of reference for this discussion. Most notably, this is concerned with the deliberate design of systems for tracking people. It is not about the design of the underlying location technologies, which I think is a different matter: the downstream consequences of being able to locate something in space as a function of time are removed from the development of location methods by so many layers of other decisions, and the general utility of being able to do so is valuable enough, that I’m happy to treat them separately. If we want to have a debate about whether we should be able to use technology to track anything at all, then that's a very different issue. Likewise, this is about tracking people - not autonomous vehicles or other robots, the movement of goods round a store or factory, or your laptop if it gets stolen.

With that defined, let's move on to looking at the issues that Challenger and Clegg raise, and their implications for tracking people.

The Sociotechnical Framework

Challenger and Clegg’s framework [1] has six pillars, which interact with each other.

Technology:
This is the bit that we tend to think of in tracking:
   Location - GPS/RFID
   Wearables
   Cameras for face recognition

But also: the methods by which data is stored and transmitted. And probably more… some of these may cross over with the next point, which is:

Buildings/Infrastructure:
It can be hard to see how this differs from Technology in the tracking scenario. Is GPS technology (the receiver identifying where it is from transmitted timestamps)? Or infrastructure (the satellites beaming out those timestamps)? Or both?
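(A toy illustration of the technology half of that question: the receiver's job is to turn transmitted timestamps into distances - time of flight multiplied by the speed of light - and then solve for position. The numbers below are made up.)

    # Toy illustration: one pseudorange from a satellite timestamp.
    C = 299_792_458.0  # speed of light, m/s

    t_transmit = 0.000000000  # time encoded in the satellite's signal, s
    t_receive  = 0.072345678  # time the receiver heard it, s (made up)

    pseudorange = C * (t_receive - t_transmit)
    print(f"Apparent distance to satellite: {pseudorange / 1000:.0f} km")

    # With four or more such ranges, the receiver can solve for its position
    # and its own clock error. The satellites are infrastructure; the
    # receiver doing the solving is technology.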

Thinking in terms of buildings is helpful here. If I’m using a proximity detector (such as RFID) to detect something leaving a given area, I’m relying on physical walls (or similar boundaries) to ensure that things can only leave past the detector.

Also, in terms of data security, it is helpful to remember that all data exists physically somewhere - on a disk of some form, probably (but not necessarily) in a server in a data centre. That creates two key issues: 1) it may provide a physical route for data breaches (a lost memory stick or laptop), and 2) the functioning of the tracking system may depend on that infrastructure being in working order.

Goals
Note that these may be different for different stakeholders. The goal of the tracker may differ from the goal of the one being tracked (the trackee?). Note also that the goal of the one being tracked may not be “don’t be tracked”.

And then there are other goals in life, which may vary from moment to moment: “get my shopping”, “see my friends”, “hold down a steady job”, etc.

Culture
Again, different stakeholders need to be considered, each with their own culture. This may affect attitudes towards being tracked (is the tracker trusted or distrusted?), but also towards anyone being tracked (“If they have a tag, they must be a paedophile!”), as well as the cultures of the organisations involved (“We just do the bare minimum to get by…”, “I need to cover my tracks to make sure I don't get blamed”, “Attention to detail is vital”, “Deadlines can't be missed”). Culture definitely cannot be designed: influenced, maybe, but it inevitably evolves rather than being imposed.

Processes/Procedures
This, in many ways, is a key issue: tracking occurs for a reason, and there needs to be a use to which the tracking data is put. There need to be appropriate responses, but issues such as data management and maintenance also need to be considered. Unlike culture, processes can be designed - formal processes have to be, though they may not be within the remit of the designers of the tracking system (who have to accommodate processes designed by other people). Informal processes can also evolve, I guess.

People
The person (or people) doing the tracking, the person (or people) being tracked, but also those around them. How does the tracking impact them? Are they effectively tracked by associating with the tracked person? Do they need to provide assistance with the tracking process? Might they hinder it?

These, then, are the six pillars of Challenger and Clegg’s sociotechnical framework. All of them have a bearing on any process that involves tracking people, and all should be considered when designing a tracking system.
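(If it helps to see the framework as something you could actually work through, this is the sort of checklist shape I have in mind - the prompts are my own glosses, not Challenger and Clegg’s words.)

    # The six pillars as a design checklist; the prompts are my own glosses.
    PILLARS = {
        "Technology":               "What locates, stores and transmits the data?",
        "Buildings/Infrastructure": "What physical plant does the system lean on?",
        "Goals":                    "Whose goals matter, and do they conflict?",
        "Culture":                  "What attitudes will the system meet?",
        "Processes/Procedures":     "What happens, formally, when data arrives?",
        "People":                   "Who tracks, who is tracked, who is nearby?",
    }

    for pillar, prompt in PILLARS.items():
        print(f"{pillar}: {prompt}")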

Meta-principles of Sociotechnical Systems Design

In addition to the pillars of the Sociotechnical Framework, Clegg also sets out nineteen “Meta-principles” to “capture an overall view of systems design” (cited in [1, p346], which is where I’m quoting them from). I’ll consider each in turn, to assess their applicability and implications (if any) for developing tracking systems.

“1 Design is systemic A system comprises a range of interrelated factors and should be designed to optimise social and technical concerns jointly.”

This is straightforward: it’s more or less what I said above - tracking involves social elements and technical elements, and both (and their interplay) need to be considered.

“2 Values and mindsets are central to design Underlying values and mindsets strongly influence systems design and operation.”
This is an important point: designers and engineers are people, and their own cultures, goals and relationships will shape the system they design.

“3 Design involves making choices Design choices are interdependent and exist on many dimensions, e.g. how will the system be operated, managed and organised?”

This is something I wholeheartedly agree with. I did my PhD on decision analysis in Integrated Product and Process Design, and one of my stock phrases is “design proceeds as a series of decisions”. The key point here is interdependence - the sequencing and implications of design decisions need to be considered. This creates a complexity that needs to be kept in mind - although there is a danger in simply demanding that more and more information be considered, at which point you rub up against bounded rationality: the fact that humans can only deal with so much information at a given time. Sooner or later, providing too much information means that some of it will be ignored.


“4 Design should reflect the needs of the business, its users and their managers Systems should be designed to meet the needs of all relevant stakeholders”

This is an interesting one because, as noted above, tracking involves a range of stakeholders, not all of whom can be directly consulted in the design of the tracking system. And even if they can be, they aren't necessarily good at identifying their own problems.

“5 Design is an extended social process. Design continues throughout the life cycle of the system, as multiple stakeholders shape and reconfigure it over time.”

This is an important point: since any system encompasses not just hardware and software but also the processes (official and unofficial) around them, design doesn't necessarily cease once a product is released. Indeed, with the growth of firmware updates and the concept of product-as-service, even the hardware and software may continue to change.


“6 Design is socially shaped Design is a social phenomenon influenced by social norms, movements and trends.”

This was one of Bucciarelli’s greatest points: however much we may wish to argue that Engineering Design is a rational process of decision-making, in practice it is a social negotiation, shaped by beliefs and personalities as well as by objective information.

“7 Design is contingent There is no ‘one best way’; optimum design depends on a range of issues.”

This is fairly self-evident. The best design for a given situation is likely to vary and, given that designs are generally not bespoke, any one design will be optimal for only a subset of use cases. You just have to hope that it is near-optimal (or at least good enough) for the cases it is applied to.

Content principles (concerned with the content of new systems design):

“8 Core processes should be integrated Design should avoid splitting core processes across artificial organisational boundaries; people should manage complete processes”

That seems particularly pertinent to tracking where, by definition, processes are going to take place across multiple organisations. Clearly defined responsibilities and interactions are important. Who manages the equipment? Who manages the response? Who manages the data? Do they interact?

“9 Design entails multiple task allocations between and amongst humans and machines Tasks and roles should be allocated amongst humans or machines clearly, in an explicit, systematic way”

True for all systems, I guess, but in tracking we are perhaps looking at how far responses should be automated (a potential way to maintain some degree of confidentiality, for example, where a machine monitors the actual location and details are only divulged to a person in the event of an incident). But then we get into the dangers of black-box algorithms, false positives and false negatives.
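(To make that concrete: the automated layer might be nothing more than a geofence check, with the coordinates never shown to a person unless the boundary is crossed. A rough sketch - the centre, radius and escalation hook are placeholders, not a real deployment.)

    from math import asin, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in metres."""
        R = 6_371_000  # mean Earth radius, m
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2)**2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2)**2
        return 2 * R * asin(sqrt(a))

    CENTRE = (53.8008, -1.5491)  # placeholder geofence centre
    RADIUS_M = 500               # placeholder boundary

    def notify_responder(position):
        # Stand-in for a real escalation route (control room, phone call, etc.)
        print(f"Boundary crossed at {position}")

    def check(position):
        """Only a boundary crossing divulges the location to a person."""
        if haversine_m(*position, *CENTRE) > RADIUS_M:
            notify_responder(position)
        # Otherwise the coordinates are seen only by the machine.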

“10 System components should be congruent All system parts should be consistent with one another and fit with existing organisational systems and practices.”

This seems like plain common sense, but it is easily forgotten. If you design a system that doesn't fit with existing systems and practices, then you need to be sure that a) new systems and practices are actually put in place, and b) they will be followed, rather than just circumvented.

“11 Systems should be simple in design and make problems visible Design should maximise ease of use and understanding, learnability, and visibility of problems to allow quicker resolution”

I don't think there’s much to add to this one. Except maybe: visible to whom? If a system has stopped tracking, you may or may not want the tracked individual to know. The person or organisation doing the tracking certainly wants to know, but they may not be on the ground to address the problem. Therefore, you may wish to ensure that someone else (family members, perhaps, in the case of children or dementia patients) knows about the problem. Though that then raises the question of whether they are willing and able to resolve such problems.
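(In code terms, the failure itself is often just a stale timestamp; the design decision is who gets told. A sketch, with the recipients purely illustrative.)

    import time

    STALE_AFTER_S = 600  # no position fix for ten minutes = a visible problem

    def who_to_tell(last_fix_time, now=None):
        """Decide who should be told that tracking has stopped."""
        now = time.time() if now is None else now
        if now - last_fix_time < STALE_AFTER_S:
            return []  # tracking healthy: nobody needs telling
        # Who sees the problem is a design choice, not a technical given -
        # whether the tracked person appears in this list is exactly the
        # question raised above.
        return ["monitoring centre", "family member on the ground"]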

“12 Problems should be controlled at source Design should enable system problems to be controlled directly on the ground by end-users, as local experts”

This is particularly interesting in the case of tracking, and links to the previous point. Should the tracked person be able to rectify problems? Will they (or those around them, particularly in the case of, say, dementia patients or children) be able to? Will they know when an error has occurred, and what to do about it? And… will they actually do it?

“13 The means of undertaking tasks should be flexibly specified Systems should not be over-specified; end-users should be able to adapt processes to suit their needs better.”

This is true: the ability to adapt the responses and behaviours of the system as events emerge in practice is important, and it is better if this can be done by those on the ground, rather than having to go back to the designers every time.

Process principles (concerned with the process of systems design):

Again, I’ll take each in turn.

“14 Design practice is itself a socio-technical system Design processes are themselves complex systems involving an interdependent mix of social and technical subsystems”

This is really an extension of principles six and seven, I think. I’m not sure there's anything to add.

“15 Systems and their design should be owned by their managers and users Ownership of a system should be afforded to those who will use, manage and support it, rather than being fragmented”

This is slightly complicated, since the principle of systems being owned by users *and* managers implies some fragmentation of ownership, surely? I guess the point is that ownership should not be fragmented across groups who are not involved in the day-to-day running of the system. This is particularly pertinent where capabilities are bought in, I suppose: if you lease devices and storage space from a third party, then you’ve immediately created a problem, unless they are also the ones managing that process.

“16 Evaluation is an essential aspect of design System performance should be regularly evaluated against the goals of the organisation and its employees”

Well, yes, this stands to reason: design being a social process and all that, decisions can end up being driven by internal dynamics rather than end goals.

“17 Design involves multidisciplinary education Design should bring together knowledge, skills and expertise from multiple disciplines”
True. It’s why I work on a multidisciplinary Product Design course and spend so much time working across disciplines.

“18 Resources and support are required for design Design needs resource investment, e.g. time, effort and money; knowledge, skills and expertise; sociotechnical methods, tools and techniques.”
Very true: problems are easy to correct at the design stage, when iteration is cheap (not free, but cheaper than it is later on). Skimp on this, and you’ll be at high risk of expensive rework in the field - or of living with the consequences. Of course, huge investment in design doesn't guarantee freedom from problems: it's just that skimping raises the risk of them.

“19 System design involves political processes Complex systems design can be a political process; various stakeholders are affected by design, implementation, management and use.”

Again, very true - particularly in the emotive areas of tracking for criminal justice and dementia patients.

Anyway, brain dump done. Lots to chew over there - I feel like there are some helpful lessons in there that I can tease out. I’ve just got to actually get them down in a coherent form…

References

[1] Challenger, R. and Clegg, C. (2011) “Crowd Disasters: A Socio-technical Systems Perspective”, Journal of the Academy of Social Sciences, 6(3), pp. 343-360.

You’ll note that I use sociotechnical, rather than socio-technical.