Monday, 31 December 2018

An Unexpected End to the Year... and a blogging hiatus.

As you may have inferred from my last post, 2018 didn't end quite as I had hoped. I ended up being abruptly off work for three weeks, and working reduced hours for another four. It's been quite an experience. Abruptly being unable to do all the things you take for granted, and then living with the consequences of enforced rest (even after I was well enough to get back to work, commuting was a major challenge, and even now I'm nowhere near as fit as I was back in October), was certainly educational. I'm feeling much better now - better even than I did a couple of weeks ago - though I'm still not fully right.

It was not a great time to go off: November and December are accepted as "peak teach", when I would normally work longer hours than usual in order to complete my marking. I reckon that by the time you allow for sick leave, lost overtime, and just working a lot more slowly than usual, I've lost about 250 hours' work.

A sizeable chunk of that has fallen on other people: I owe a huge thanks to my Product Design colleagues who covered about 30 hours of teaching in my absence, and to my SUITCEYES colleagues at Leeds (especially Sarah Woodin) who held the tiller as we went through our first review and our first two major deliverables - all while I was either in hospital or recuperating. I'm not sure I could readily count how many hours they've put in on my behalf. There's a big Wellcome Trust bid being spearheaded by Stuart Murry, due for interview at the end of the month, to which I've been able to provide very little assistance. And then there was the Tracking People grand finale - the symposium organised by Anthea Hucklesby - which I ended up missing most of, and was no help at all in organising on the day. Still, I managed to present, so that's something.

The rest of the work must be made up: catch-up lectures have been delivered, and sixty hours of marking have been deferred to the New Year. And then there is just the sheer lost momentum. Since my return, the priority has been teaching, since that really had to be delivered by the end of term. I can barely remember where we were up to with SUITCEYES, never mind how it has moved on in the last two months. So, the start of 2019 will mostly be about catching up on missed work and trying to re-establish my momentum, while accepting that I probably still can't put in Herculean hours to make it all up.

Which essentially means that I'll be taking a break from the blog: it's always been a bit of a squeeze to fit it in, but now, even the odd hour or two per month for a blog post will be sorely needed for other things. I'm not sure how long that will last - I'll have a shot at getting the odd blog post up, but I'm making no promises. Hopefully, normal service will eventually be resumed.

In the meantime... have a good one!

Tuesday, 20 November 2018

Jeffrey Bernard is Unwell. Again.

And again, it's not drunk unwell. Unwell unwell. On the mend though: I'll try to get an update out in December.

Friday, 28 September 2018

Month in Review: September 2018

September signals the approach of teaching - maybe less so this year, since term starts late, but that has been the dominant feature. Students are back, I've run the Welcome Back talks (as I'm stepping back in as Programme Manager while my successor goes on secondment), and lectures begin on Monday. So if things aren't ready now, it's too late.

This is the time of year when you're glad you sent things off to print in July, and wrote the exam well in advance. I have a new module proposal that I'd hoped to clear off my desk before teaching hit; I haven't quite finished it, but still: it's mostly there.

And this is important, because research doesn't go away just because of teaching. We have a SUITCEYES review meeting with the European Commission next month, and deliverables in November, so there can be no slacking! Still, things are moving: Yang is hard at work developing our sensor systems, technical architecture details are being agreed with CERTH, and I've been developing the new iteration of haptic display drivers (now with added solenoids and I2C communication - a version using feedback for position control is due soon), but most of all - our Work Package 2 research fellow, Adriana, has now carried out the first three interviews for the project, with more to come. An important milestone for the project, and great work from Adriana! Here's to the next round...
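Out of interest, the core of that kind of solenoid-over-I2C driver boils down to packing taxel states into register writes. Here's a minimal Python sketch of the idea - to be clear, the device address, register map and FakeBus stand-in are all invented for illustration, and not the actual SUITCEYES hardware:

```python
# Hypothetical haptic display frame-packing sketch. The address and
# register below are illustrative placeholders, not a real device map.

DRIVER_ADDR = 0x40   # hypothetical I2C address of a solenoid driver board

def pack_frame(taxels):
    """Pack a list of 8 on/off taxel states into a single register byte."""
    if len(taxels) != 8:
        raise ValueError("expected 8 taxel states")
    byte = 0
    for bit, state in enumerate(taxels):
        if state:
            byte |= 1 << bit
    return byte

def send_frame(bus, taxels, register=0x00):
    """Write one packed frame to the (hypothetical) driver over I2C."""
    bus.write_byte_data(DRIVER_ADDR, register, pack_frame(taxels))

class FakeBus:
    """Stand-in for a real I2C bus object, so the sketch runs anywhere."""
    def __init__(self):
        self.writes = []
    def write_byte_data(self, addr, reg, value):
        self.writes.append((addr, reg, value))

bus = FakeBus()
send_frame(bus, [1, 0, 1, 0, 0, 0, 0, 1])  # raise taxels 0, 2 and 7
print(bus.writes)  # [(64, 0, 133)]
```

On real hardware, FakeBus would be replaced by something like an smbus2.SMBus instance; the packing logic stays the same.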

Thursday, 27 September 2018

Thinking out loud: Tracking People and Sociotechnical Systems

One of the things on my mind around the Tracking People network is the sociotechnical framework of Rose Challenger and Chris Clegg, laid out in [1]. So I thought I’d sit down and just think aloud about the whole thing for a bit. Tracking involves sociotechnical systems, of that I’m sure: so what can we learn from Challenger and Clegg?

The paper I’m taking this from is about crowd disasters, but it gives a good overview of their framework and some associated systems design principles. I’ll start with discussing my thoughts on general systems issues in tracking, and then move on to my thoughts against each of Challenger and Clegg’s points.

General Systems points:
Any technological approach to tracking involves a system made up of subsystems. It must: partly because everything is made up of systems, but more pragmatically, because it necessarily involves a mix of software, hardware, at least one person (we are concerned about tracking people, after all - and while I suppose self-tracking of some form is feasible, most of the cases we've discussed involve at least two: one to track and one to be tracked), often some amount of infrastructure (GNSS, telecommunications networks), and some degree of legal and organisational processes (since most tracking we have discussed involves criminal justice or healthcare organisations).

It’s also interesting to consider the extent to which designers of such systems design them from scratch, or select them from pre-existing solutions (most applications we came across were RFID or GNSS based, and the developers of the devices involved clearly hadn't invented those themselves - it would be interesting to know the extent that the developers of these technologies considered their applications when developing them).

Which raises an interesting question about system boundaries: to what extent does the design of any new tracking system encompass the design (rather than the selection) of underpinning technologies, and the design (rather than accommodation) of organisational processes (and maybe social processes, though I'm not entirely convinced you can design those)? All in the hope of getting the emergent property you want (tracking, for it is an emergent property - though even then the tracking is never an end in itself), without any properties you don't want but that might arise from the complex interactions of the system.

This inevitably means that tracking is a sociotechnical process. Hence, in this post I wanted to consider the implications of Challenger and Clegg's sociotechnical framework for tracking applications.

First, let's consider some of the terms of reference for this discussion. Most notably, this is concerned with the deliberate design of systems for tracking people - not with the design of underlying location technologies, which I think is a different matter. The downstream consequences of being able to locate something in space as a function of time are so far removed from the development of location methods by layers of other decisions, and the general utility of being able to do so is valuable enough, that I'm happy to consider them separately. If we want to have a debate about whether we should be able to use technology to track anything at all, then that's a very different issue. Likewise, this is about tracking people, not autonomous vehicles or other robots, or the movement of goods round a store or factory, or your laptop if it gets stolen.

With that defined, let's move on to looking at the issues that Challenger and Clegg raise, and their implications for tracking people.

The Sociotechnical Framework

Challenger and Clegg’s framework [1] has six pillars, which interact with each other.

Technology:
This is the bit that we tend to think of in tracking:
- Location (GPS/RFID)
- Wearables
- Cameras for face recognition

But also: Methods by which data is stored and transmitted. And probably more… some of these may cross over with the next point, which is:

Buildings/Infrastructure:
It can be hard to see how this differs from Technology in the tracking scenario. Is GPS technology (the receiver calculating where it is from transmitted timestamps)? Or infrastructure (the satellites beaming out those timestamps)? Or both?
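For what it's worth, the "calculating where it is from transmitted timestamps" part can be shown with a toy calculation. This is a deliberately simplified 2D version with exact distances - real GPS works in 3D, must also estimate the receiver's clock bias, and therefore needs at least four satellites - but it shows the core idea of turning ranges into a position:

```python
# Toy 2D position fix from three beacon distances (perfect measurements).
import math

def trilaterate_2d(beacons, dists):
    """Locate a point from three beacon positions and exact distances."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # Subtracting the circle equations pairwise gives a 2x2 linear system.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # solve by Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

beacons = [(0, 0), (10, 0), (0, 10)]
truth = (3, 4)
dists = [math.dist(b, truth) for b in beacons]
x, y = trilaterate_2d(beacons, dists)
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

With noisy measurements you'd solve the same system in a least-squares sense rather than exactly, which is where the real engineering starts.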

Thinking in terms of buildings is helpful here. If I'm using a proximity detector (such as RFID) to detect something leaving a given area, I'm relying on physical walls (or similar boundaries) to ensure that things can only leave through the points I'm monitoring.

Also, in terms of data security it is helpful to remember that all data exists physically somewhere - on a disk of some form, probably (but not necessarily) a server in a data centre. That creates two key issues: 1) that it may provide a physical route for data breaches (lost memory stick or laptop), and 2) that the functioning of the tracking system may depend on that infrastructure being in working order.

Goals
Note that these may be different for different stakeholders. The goal of the tracker may be different from the goal of the one being tracked (trackee?). Note also that the goal of the one being tracked may not be “don’t be tracked”.

And then there are other goals in life, which may vary from moment to moment: “get my shopping”, “see my friends”, “hold down a steady job”, etc.

Culture
Again, different stakeholders need to be considered, each with their own culture. This may affect attitudes towards being tracked (is the tracker trusted or distrusted?), but also towards anyone being tracked (“If they have a tag, they must be a paedophile!”), as well as the cultures of the organisations involved (“We just do the bare minimum to get by…”, “I need to cover my tracks to make sure I don't get blamed”, “Attention to detail is vital”, “Deadlines can't be missed”). Culture definitely cannot be designed: influenced, maybe, but it inevitably evolves rather than being imposed.

Processes/Procedures
This, in many ways, is a key issue: tracking occurs for a reason, and there needs to be a use to which the tracking data is put. There need to be appropriate responses, but issues such as data management and maintenance also need to be considered. Unlike culture, processes can be designed - formal processes have to be, though they may not be within the remit of the designers of the tracking system (who have to accommodate processes designed by other people). Informal processes can also evolve, I guess.

People
The person (or people) doing the tracking, the person (or people) being tracked, but also those around them. How does the tracking impact them? Are they being tracked by association with the tracked person? Do they need to provide assistance with the tracking process? Might they hinder it?

These, then, are the six pillars of Challenger and Clegg’s sociotechnical framework, and they all have a bearing on any process that involves tracking people, and should be considered when designing a tracking system.

Meta-principles of Sociotechnical Systems Design

In addition to the pillars of the Sociotechnical Framework, Clegg also sets out nineteen “Meta-principles” to “capture an overall view of systems design” (cited in [1, p346], which is where I’m quoting them from). I’ll consider each in turn, to assess their applicability and implications (if any) for developing tracking systems.

“1 Design is systemic A system comprises a range of interrelated factors and should be designed to optimise social and technical concerns jointly.”

This is straightforward: it’s more or less what I said above - tracking involves social elements and technical elements, and both (and their interplay) need to be considered.

“2 Values and mindsets are central to design Underlying values and mindsets strongly influence systems design and operation.”
This is an important point: designers and engineers are people, and their own cultures, goals and relationships will shape the system designed.

“3 Design involves making choices Design choices are interdependent and exist on many dimensions, e.g. how will the system be operated, managed and organised?”

This is something I wholeheartedly agree with. I did my PhD on decision analysis in Integrated Product and Process Design, and one of my stock phrases is “design proceeds as a series of decisions”. The key point here is interdependence - the sequencing and implications of design decisions need to be considered. These create a complexity that needs to be kept in mind - although there is also the danger of simply demanding that designers consider more and more information, at which point you run up against bounded rationality: the fact that humans can only deal with so much information at a given time. Sooner or later, providing too much information means that some of it will be ignored.


“4 Design should reflect the needs of the business, its users and their managers Systems should be designed to meet the needs of all relevant stakeholders”

This is an interesting one because, as noted above, tracking involves a range of stakeholders, not all of whom can be directly consulted in the design of the tracking system. And even if they can be, they aren't necessarily good at identifying their problems.

“5 Design is an extended social process. Design continues throughout the life cycle of the system, as multiple stakeholders shape and reconfigure it over time.”

This is an important point: since any system encompasses not just the hardware and software,  but processes (official and unofficial) around them, design doesn't necessarily cease once a product is released. Indeed, with the growth of firmware updates and the concept of product as service, even the hardware and software may continue to change.


“6 Design is socially shaped Design is a social phenomenon influenced by social norms, movements and trends.”

This was one of Bucciarelli’s greatest points: however much we may wish to argue that Engineering Design is a rational process of decision-making, in practice it is a social negotiation, shaped by beliefs and personalities as well as by objective information.

“7 Design is contingent There is no ‘one best way’; optimum design depends on a range of issues.”

Content principles (concerned with the content of new systems design):

This is fairly self-evident. The best design for a given situation is likely to vary, and given that designs are generally not bespoke, they will be optimal for only a subset of use cases. You just have to hope that it is near-optimal (or at least, good enough) for the cases it is applied to.

“8 Core processes should be integrated Design should avoid splitting core processes across artificial organisational boundaries; people should manage complete processes”

That seems a particularly pertinent point to Tracking where, by definition, processes are going to take place across multiple organisations. Clearly defined responsibilities and interactions are important. Who manages the equipment? Who manages the response? Who manages the data? Do they interact?

“9 Design entails multiple task allocations between and amongst humans and machines Tasks and roles should be allocated amongst humans or machines clearly, in an explicit, systematic way”

True for all systems, I guess, but in tracking we are perhaps looking at how far responses should be automated (a potential way to maintain some degree of confidentiality, for example - where a machine monitors actual location, and details are only divulged to a person in the event of an incident). But then we get into the danger of black box algorithms, false positives and false negatives.
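To make that automation idea concrete, here's a rough Python sketch of a monitor that keeps location to itself until an incident occurs. The geofence, units and alert format are all invented for illustration - nothing here reflects any actual tracking system:

```python
# Sketch of "machine watches, human only sees location on incident".
# Positions are (x, y) in metres in a local frame, purely for illustration;
# real systems would use geographic coordinates and proper distance formulae.
import math

class PrivateTracker:
    """Monitors positions; reveals one only when the geofence is breached."""
    def __init__(self, centre, radius_m):
        self.centre = centre
        self.radius_m = radius_m

    def check(self, position):
        """Return None inside the zone, or an alert dict on breach."""
        if math.dist(position, self.centre) <= self.radius_m:
            return None  # location stays with the machine, not a person
        return {"event": "zone_exit", "last_position": position}

tracker = PrivateTracker(centre=(0.0, 0.0), radius_m=100.0)
print(tracker.check((30.0, 40.0)))    # None: inside the zone, nothing divulged
print(tracker.check((120.0, 50.0)))   # breach: position released for response
```

Even in this toy form, the black-box worries surface immediately: a wrongly-sized zone or a noisy position fix turns directly into false positives or false negatives.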

“10 System components should be congruent All system parts should be consistent with one another and fit with existing organisational systems and practices.”

This seems like just plain common sense, but is easily forgotten.  The danger is that if you design a system that doesn't fit with existing systems and practices, then you need to be sure that a) new systems and practices are actually in place and b) you’re satisfied that they will be followed, rather than just circumvented.

“11 Systems should be simple in design and make problems visible Design should maximise ease of use and understanding, learnability, and visibility of problems to allow quicker resolution”

I don't think there's much to add to this one. Except maybe: visible to whom? If a system has stopped tracking, you may or may not want the tracked individual to know. The person or organisation doing the tracking certainly wants to know, but they may not be on the ground to address the problem. Therefore, you may wish to ensure that someone else (family members, perhaps, in the case of children or dementia patients) knows about the problem. Though that then raises the question of whether they are willing and able to resolve such problems.

“12 Problems should be controlled at source Design should enable system problems to be controlled directly on the ground by end-users, as local experts “

This is particularly interesting in the case of tracking, and links to the previous point. Should the tracked person be able to rectify problems? Will they (or those around them, particularly in the case of, say, dementia patients or children) be able to? Will they know when an error has occurred and what to do? And… will they actually do it?

“13 The means of undertaking tasks should be flexibly specified Systems should not be over-specified; end-users should be able to adapt processes to suit their needs better.”

This is true: the ability to adapt the responses and behaviours of the system as events emerge in practice is important, and it is better if this can be done by those on the ground, rather than having to go right back to the designers every time.

Process principles (concerned with the process of systems design):

Again, I'll consider each in turn.

“14 Design practice is itself a socio-technical system Design processes are themselves complex systems involving an interdependent mix of social and technical subsystems”

This is really an extension of principles six and seven, I think. I’m not sure there's anything to add.

“15 Systems and their design should be owned by their managers and users Ownership of a system should be afforded to those who will use, manage and support it, rather than being fragmented“

This is slightly complicated, since the principle of systems being owned by users *and* managers implies fragmentation of ownership, surely? I guess the issue is more that ownership should not be fragmented across groups who are not involved in the day-to-day running of the system. This is particularly pertinent where capabilities are bought in, I suppose: if you lease devices and storage space from a third party, then you’ve immediately created a problem, unless they are also the ones managing that process.

“16 Evaluation is an essential aspect of design System performance should be regularly evaluated against the goals of the organisation and its employees”

Well, yes, this stands to reason: design being a social process and all that, decisions can end up being driven by internal dynamics, rather than end goals.

“17 Design involves multidisciplinary education Design should bring together knowledge, skills and expertise from multiple disciplines”
True. It’s why I work on a multidisciplinary Product Design course and spend so much time working across disciplines.

“18 Resources and support are required for design Design needs resource investment, e.g. time, effort and money; knowledge, skills and expertise; sociotechnical methods, tools and techniques.”
Very true: problems are easy to correct in the design stage when iteration is cheap (not free, but cheaper than it is later on). Skimp on this, and you’ll be at high risk of expensive rework in the field - or living with the consequences. Of course, huge investment in design doesn't guarantee freedom from problems: just that skimping raises the risks of them.

“19 System design involves political processes Complex systems design can be a political process; various stakeholders are affected by design, implementation, management and use.”

Again, very true - particularly in the emotive areas of tracking for criminal justice and dementia patients.

Anyway, brain dump done. Lots to chew over there - I feel like there are some helpful lessons in there that I can tease out. I’ve just got to actually get them down in a coherent form…

References

[1] Challenger, R. and Clegg, C. (2011) “Crowd Disasters: A Socio-technical Systems Perspective”, Journal of the Academy of Social Sciences, 6(3), pp. 343-360.

You’ll note that I use sociotechnical, rather than socio-technical.

Friday, 31 August 2018

Month in Review: August 2018

It's been a slightly odd month: between bank holidays and annual leave, I've been out of the office for half of it. And not in one chunk - little bits here and there have made it a very stop-start month.
As a result, there's not a huge amount to report.

SUITCEYES ticks onwards: we've started trying to finalise our architecture and plan for the next round of psychophysical testing, as well as trying to recruit individuals with deafblindness to participate in our user interviews (if that's you, and you'd like to tell us about your experiences, the barriers you face or experiences with haptic technology, then do drop me a line on R.J.Holt@leeds.ac.uk!).

I've got my slides ready for the new term, and written my module's exam, so that I don't need to worry about teaching prep during teaching. Hopefully. And hopefully that gives me September to write the proposal for my expanded Mechanical Systems module, so that I don't need to worry about that once teaching starts, either. We'll see how that goes.

Anyway, onwards and upwards. The start of term is now only one month (and a day) away, and comes at you like a freight train... Something to look forward to!

Tuesday, 28 August 2018

Unintended Consequences, Part 2: Papers on the Collingridge Dilemma

It's been over six months since I threatened to start reading on the Collingridge Dilemma. It's always a bit of a challenge, since this is rather outside the scope of my day job (despite my various cross-disciplinary links) and must therefore take a back seat to more pressing activities, and since it's also outside my sphere of expertise, making it more challenging than, say, reviewing prehension literature. Anyway, after a short search of the Library Website, with the terms "Collingridge Dilemma", "Technology Control Dilemma" (a variant name for the Collingridge Dilemma, apparently) and "Constructive Technology Assessment" (a solution that a few of the papers referred to) from the last five years, I picked out 17 papers that seemed appropriate. Not exhaustive by any means - but a first cut to get a feel for what's being said in the area.

Caveat Lector
The standing warning applies, as always: I'm talking here rather outside my discipline, so this is a lay perspective. There are plenty of people writing on Responsible Research and Innovation (RRI) or Engineering Ethics and the like who could give you chapter and verse on this more effectively. Let's just take this for what it is: an engineer's response to a short survey of the literature on topics around the Collingridge Dilemma.

Outcomes
For the most part, the papers I came across were an interesting bunch, if sometimes prone to being hypothetical. Sometimes, engineering projects take on a life of their own, because of the complexity and emergence involved in their development - when running engineering projects, I tend to feel like I'm something closer to a shepherd or farmer than a machinist. Possibly because I'm in academia, of course, but certainly there's a difference between sitting down to a well-defined task where you have a very good idea of what the solution will look like, and the sort of snaking path of trying to address an ill-defined problem with no idea of what the outcome will actually be. It's like trying to plan your route from a map: you theorise the ideal plan, then you get on the ground and discover the ford swollen from rainfall, the bull in the field or the collapsed rock face (does it show that I'm a rambler?), and suddenly you have to rework your whole route. The thing I don't like about abstract discussions of engineering projects is that they miss this out: you end up with neat-looking diagrams of stage-gates and the assumption that every decision is drawn from a rational and exhaustive analysis of all possible options, rather than back-of-an-envelope calculations and the things that happened to be handy when you needed a fallback because someone was ill, or you had to leave work early, or a decision had to be made quickly because someone was going to be away.

Anyway, here are a few of the more interesting ones that I've had a chance to read:

Stilgoe, Owen and Macnaghten [1] provide an interesting discussion that focuses largely on the abandoned field trial from the SPICE geoengineering project. This is particularly interesting not just for the framework (being one of the few examples I could find of a case study of ethical assessment of a developing technology), but because the concerns expressed about the wider project included concerns about the consequences of doing the research at all - that the very existence of the project might produce "moral hazard", encouraging people to stop worrying about climate change on the basis that geoengineering would just solve the problem. And the counterargument that by not doing this research there is an opportunity cost - that we might not buy vital time to reduce greenhouse gas emissions. This tension goes to the heart of a lot of innovation problems, of course - the dangers if the doors are opened, the lost potential if the door is left closed. They go on to consider a framework for RRI based on four key concepts - anticipation, reflexivity, inclusion and responsiveness. Perhaps most interesting is that by tying this in with a case study, they provide some very concrete reflections on the process of trying to apply RRI frameworks, and the attached stage gates.

Genus and Stirling [2] offer an interesting critique of Collingridge's work - including the observation that many of those discussing the Collingridge Dilemma don't engage with all of Collingridge's writings on the matter. Which is a helpful reminder to me that I haven't actually read Collingridge's The Social Control of Technology, which feels like a bit of an oversight on my part. Anyway, one of the things I most liked about this paper was that they brought out the messy nature of product development:

[T]o the extent that RRI approaches can fully embrace Collingridge’s contributions, they will need to grapple not only with contending qualities and principles for rationalistic decision making but also with the fundamental realities (foundational for Collingridge) that the governance of research and innovation are fundamentally about ‘muddling through’ in the presence of steep power gradients and strongly asserted interests. [p67]
They don't go as far as putting forward a practical framework for RRI, but do make some recommendations, most notably that:
His [Collingridge's] prescriptions of inclusion, openness, diversity, incrementalism, flexibility and reversibility might all now be better expressed in terms of qualities other than ‘control’–including care, solidarity, mutualism, non-consequentialist notions of accountability and responsibility itself [p67]
Van de Poel [3] has a take on experimental technologies that takes the idea of engineering as a large scale social experiment literally, and considers assessing this through the "bioethical principles for experiments with human subjects: non-maleficence, beneficence, respect for autonomy, and justice". Van de Poel contrasts this with the "technologies of hubris" (a phrase credited to Jasanoff [4]):
Jasanoff speaks of ‘technologies of hubris’, i.e. those ‘‘predictive methods (e.g., risk assessment, cost-benefit analysis, climate modelling) that are designed, on the whole, to facilitate management and control, even in areas of high uncertainty’ [p668]
Which is a good point: like much engineering decision-making, many of these tools rest on the requirement for a (reliable!) crystal ball and a limitless capacity to process information. I'll come back to this in a moment.

Finally, Pesch [5] discusses engineers and active responsibility, noting that:
Engineers have to tap on the different institutional realms of market, science, and state, making their work a ‘hybrid’ activity combining elements from the different institutional realms. To deal with this institutional hybridity, engineers develop routines and heuristics in their professional network, which do not allow societal values to be expressed in a satisfactory manner. [p925]
This is again an interesting point, and one which chimes with my thinking about the significance of engineering institutions as the interface between their members and society.

These are a small selection of the papers I've found: those I've had time to read to date. I'll cover more in due course, I daresay (give me another six months...). But the main question is: what have I learned? And does it have any bearing on matters like the Tracking People network?

Conclusions... To Date
Collingridge's Dilemma interests me because it relates to the whole issue of engineering failure and its inevitability when dealing with experimental and novel designs. The problem is always the same: absent precognition, predicting how a system will behave becomes increasingly challenging as the complexity of that system and its influence increases. Even modelling the mechanistic behaviour of a system becomes difficult as it gets more complex - uncertainties begin to add up, to the point where extensive testing or redundancy are required before unleashing the system upon the public. 

This is even more problematic when the consequences we're talking about aren't just the physical behaviours of designed and natural systems, but individuals and groups who will tend to respond to the way things evolve, and may deliberately obfuscate their behaviours or hide information. Predicting the social consequences of technologies is fraught with peril, and I don't have any good solutions to that. There's a link here with John Foster's excellent The Sustainability Mirage [6], in which he notes that the uncertainty over the future and scientific modelling gives us the wiggle room to equivocate - we just adjust the underpinning assumptions until we get the outcome that we want. I guess this applies to technology and social impact as well. It's not hard to get responses ranging from "this will save the world" to "this will destroy the world", the SPICE project being an interesting case study for that very reason.

Responsible Research and Innovation, Value Sensitive Design, and Constructive Technology Assessment are all posited as potential solutions - I'll need to look further into these. I suppose they are a variant of Design for X - looking ahead to the virtues a design should embody (quality, accessibility, cost, environmental impact), or modelling the potential consequences of design decisions at a given lifecycle stage (manufacture, assembly, use, disposal). I guess the same major challenge must apply - we are at best "boundedly rational", and can only cope with so much information. Just throwing more and more information at designers, with greater and greater complexity, doesn't guarantee better decisions: it just leads to a greater risk that something will get dropped.

With the Tracking People network symposium coming up in a few months, it makes me wonder whether Tracking People really is concerned with a Collingridge Dilemma. I mean, we aren't really talking about the development of new tracking technologies, are we? Electronic tagging, GPS location, face recognition on CCTV, privacy concerns over internet data - these are all established methods of tracking: we're long into the power part of Collingridge's Dilemma. We can see the consequences - the challenge is how to deal with them.

Which raises another challenge - how do you know when a technology is still "new"? I don't think it's as obvious as that. Did Twitter or Facebook or Google know what they were getting into when they started out? Maybe they did. It's just that sometimes the disruptive innovation doesn't become apparent until well after the fact. Then again, perhaps there's an argument for saying that we didn't know how disruptive these things would be, but we can see that - for example - AI or robots are going to be. But even these are very broad categories. Any given researcher or engineer is probably working on a very incremental step within the field.

There are, of course, a few issues around tracking that arise from all this. There is the potential moral hazard of tracking: the risk that we over-rely on the technology and use it as a cheap alternative to incarceration, or to supervision of patients or children; or that we create a data free-for-all that allows all kinds of unexpected and nefarious uses of data, either through breaches or obscure data management practices. Or unexpected big data mash-ups of information from multiple sources. And there is the opportunity cost of not tracking: that we fail to give people increased independence, or continue to carry the cost of a large prison population and so deprive other potential recipients of taxpayers' money.

This is about to get rather more practical for me, as in the SUITCEYES project (http://suitceyes.eu) we are talking about using technologies like location tracking and facial recognition. All this raises related issues: what I don't have is a sound solution! Still, at least it's given me some points to reflect on...

References 

1. Stilgoe J, Owen R, Macnaghten P.  Developing a framework for responsible innovation. Research Policy. 2013 Nov;42(9):1568–80.

2. Genus A, Stirling A. Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy. 2018 Feb;47(1):61–9.

3. van de Poel I. An Ethical Framework for Evaluating Experimental Technology. Science and Engineering Ethics. 2016 Jun;22(3):667–86.

4. Jasanoff S. Technologies of Humility: Citizen Participation in Governing Science. In: Bogner A, Torgersen H, editors. Wozu Experten? [Internet]. Wiesbaden: VS Verlag für Sozialwissenschaften; 2005 [cited 2018 Aug 24]. p. 370–89.

5. Pesch U. Engineers and Active Responsibility. Science and Engineering Ethics. 2015 Aug;21(4):925–39.

6. Foster J. The Sustainability Mirage: Illusion and Reality in the Coming War on Climate Change. Routledge; 2008.

Tuesday, 31 July 2018

2018.58333333...: Slightly-Post-Mid-Year-Review

I should really have done this at the end of June, but between exhibiting Engineering the Imagination and preparing for the SUITCEYES meeting at Leeds, it all went a bit... manic.

Anyway: let's see how I'm doing:

On the Blog

I set three goals for the whole of 2018:
* At least 24 posts (pro-rata, this would be 12 to the end of June, 14 to the end of July)
* At least 2 posts per month.
* At least 1 non-review post per month.

I was on 9 posts at the end of June, and had caught up to 13 at the end of July. (EDIT: I tried to get clever by posting this on the morning the 1st of August so this would be my August non-review post, but Blogger seems to have recorded it as 31st July - it must run on US time, I guess? Anyway, I've actually just caught up at the end of July and now need a new non-review post for August). So I'm currently 1 post off my target, but I failed to post at all in June,  missed having a non-review post in April, and this will be my fifth post in as many weeks, so I've fallen into the feast/famine pattern I wanted to avoid.

That said, I think the tick-tock approach is working well, and for the most part I'm on schedule, so I'll stick with it.

Research
* Deliver the SUITCEYES and APEX projects.

I'm doing these - we've recruited staff for SUITCEYES and begun design and experimentation, so things are happening. APEX is virtually finished.

* Submit at least five grant applications as either PI or Co-I.
I'm on two at the moment, with two more in the pipeline. So, I need to find project number five!

* Submit at least two *more* papers to high-quality journals (resubmitting the ones under review doesn't count).
Ouch. I'm on zero, at the moment. There are two in draft, though, so I think I'm on target.

* Get the MagONE force sensor incorporated into FATKAT.
Done! Ish. Laidlaw Scholar Jamie Mawhinney has put the hardware and software together. There's just the small matter of calibration...

* Get BIGKAT (the new generation of PSAT that incorporates prehensile as well as postural measures) up and running.

Done! PhD student Awais Hafeez has done sterling work on getting this working and benchmarked. Field tests to begin this autumn... You'll notice that I'm taking the credit for a lot of other people's work here!

* Continue to develop the grip model to address feedback and corrections: This, having no direct funding attached to it, remains the poor cousin to other work.
Done, in the sense that I have continued to develop it, rather than it being finished, but we've demonstrated predictive value of the model, and I have a fancy new way of extracting features. All hush-hush til the publications come out, though...

Other
*  Make some inventions: And get back into Leeds Hackspace while I'm at it. I haven't been for about eighteen months.
* Formulate a reading list for the Engineering Imagination.
These are two that got dropped last year, and I think they're going to get dropped this year, too. It's shaping up to be a busy autumn with everything that's going on. Well, we shall see...

Monday, 30 July 2018

Month in Review: July 2018

July (like most months, now I think about it) is a funny month. A sort of liminal space: it doesn't have the teaching rush of May or June, and it isn't as full of family holidays and childcare commitments as August. Nor, however, is it a doss: especially if you don't want to be crunching come September.

I always set myself the target of having my handouts ready for the end of July. I haven't *quite* hit that target, but I'm pretty close (I have a new lecture this year, which still needs tidying up). The rationale behind this is to keep me from fiddling, and allow me to write my exam before term strikes. I also have to double the size of my Mechanical Systems module for 2019-20 delivery, which seems a whole year away, but the module proposal needs to go in in November, and if I don't want to be writing it and an exam while trying to teach, both need to be done this summer. It makes for a much smoother transition between term and "not term": it gives me more research time during term, and avoids wildly swinging between teaching and research. I prefer it this way.

The big news this month was the second SUITCEYES consortium meeting, which took place in Leeds. It was good: though we communicate frequently by email, Slack, and teleconference,  you can't beat face-to-face meetings.

One of the main aims of the meeting (other than planning - and the reason for holding it in Leeds) was to introduce consortium members to the Social Model of Disability. After all, Leeds is one of the leading Centres for Disability Studies. We had a session from Leeds Disabled People's Organisation on the Social Model itself, from Professor Anna Lawson (Director of the Centre for Disability Studies) on Human Rights and Legal Frameworks, and from Deafblind UK on working with people with Dual Sensory Loss. All valuable sessions, and very helpful for the stage of the project as we prepare to launch into user engagement and technical development.

Then we had a couple of extra days with Astrid Kappers (from VU Amsterdam), Nils-Krister Persson and Li Guo (both from Borås), to follow up on the thermal testing we did in April. This time we were looking at more conventional vibrotactile feedback, trying out different patterns. A good session, and we're going to be using this to start more formal data gathering in the autumn.

Other than that, both my Laidlaw students have finished - a 3-axis version of FATKAT based on the MagONE is now ready for calibration (barring a bit of resoldering to improve the part fit), and we've been designing some VR experiments on interaction with objects.

We've now rebranded from CAP (formerly PACLab) to ICON (Immersive Cognition), which has included some changes in the way we manage the group - and these seem (touch wood) to be working.

Now August beckons - always an even odder month. I'm off for half of it with annual leave, so we'll see how fitting in all the work that needs doing (that new handout, the exam, and planning that new module... Oh, and running SUITCEYES and writing publications, and at least one large grant proposal that needs doing) goes...

Engineering and the Posthuman

I'm co-writing a piece with Stuart Murray that touches at the end on posthumanism and engineering design. Writing across disciplines is always a challenge - so bound is each by its conventions and expectations - but there is nothing quite like it for really proving a collaboration! This is some way out of my comfort zone - you may recall I have some unfinished business with Rosi Braidotti's The Posthuman from 16 (!) months ago - so I thought the blog would make a good place to work through some of my thoughts.

Caveat Lector
I say this often, but the warning still holds: I'm an engineer trying to describe areas way beyond my competence. What you're getting are my raw, crudely thought out reflections on encountering new material: not the carefully constructed argument of an expert. Feel free to put me right.

Posthuman vs Transhuman
A complexity in dealing with the "posthuman" is the sheer range of ways that the term is used. One of my main lessons from reading Braidotti is that "posthumanism" is not about being post-human or the next stage in evolution or the perfecting of the species, or moving beyond human (that being the domain of transhumanism, and there I think the relationship of such engineered bodies to Engineering is quite clear). Rather, it is about moving beyond Humanist philosophy. That being the case, does it have any relevance to engineers?

A Note on Terminology
Given the variety of meanings for "posthuman", and the fact that while we might talk about "posthuman engineering" and "transhuman engineering", they don't really compare with "human engineering", I'm going to adopt a clunky solution and use the terms "humanist", "posthumanist" and "transhumanist" when describing styles of engineering. You will also notice a lot of use of quotation marks. Often inconsistently. That's how much I'm struggling with the terminology here.

Beyond Human
One of the major tenets of posthumanism seems to be the breaking down of barriers between human, animal and machine. This can be captured in the whole question of "Why should bodies end at the skin?". By this token "human" isn't an ideal to be refined and perfected (as it is in Transhumanism, I guess?), but a boundary to be questioned. Note also the assumptions it challenges: about who is allowed to be "fully human", and about whether humans are really the rational agents Humanism takes them to be.

This Deleuzian style of inquiry is one I have written about before, and the notion of system boundaries as something socially negotiated in any engineering project rings true. And there are times when the skin is a useful system boundary, and times when it is not. The question that is interesting is: what happens when the system boundaries are negotiated without engaging users in that discussion?

This is where we get into criticisms of "fixing" people. Or coming up with complex solutions to problems where simple solutions already exist. Or failing to consider the broader sociotechnical issues - who maintains the device when it goes wrong? What happens if it becomes obsolete? But recognising that a person should be considered as part of an "assemblage" (to use the Deleuzian phrase) and considering the social environment around them is hardly a radical new approach to design or Deleuzian experimentation. And I doubt it would qualify as being a "posthuman" position.

Does the Posthuman have any value in the discussion of assistive technology and prosthetics? Or does it run on a parallel track to engineering, with the two never actually intersecting? The difference between "humanist engineering" and "transhumanist engineering" seems reasonably clear: the former sees the human as a given; a constraint to be designed around, outside the scope of design; the latter sees the human as part of what is being designed, with form and structure, function and material all up for debate and redesign. What would the difference between "humanist engineering" and "posthumanist engineering" look like?

Is it an ethical position? Consideration of robot and animal rights, for example? Shifting away from viewing humans as the rightful rulers of the world who can do with it as they will to seeing us as only one part of the world, with corresponding moral obligations? Is it something else? And wouldn't that apply to almost everything, not just to medical devices? Of course, if one adopts the Social Model of Disability, then questions of access pervade every engineering decision. Who gets to use this device, and who is excluded? This brings us back to the idea of "selective enabling", and questions of what obligations the engineer has to society. But would this make inclusive design an example of "posthumanist engineering"?  That doesn't feel quite right.

Of course, one of the interesting properties of labels is that as a concept becomes more commonplace, it becomes absorbed into mainstream terminology. With time, aspects of "posthumanist engineering" could just merge into the accepted definition of engineering. Notions that designers need to avoid toxic materials, or think about how products will be disposed of, or consider the security of their devices have moved from being unusual ideas to just part of what it is to design.

Can Engineering be Humanist?

It only makes sense to talk of "posthumanist engineering" if engineering can also be "humanist". Still, I'm not entirely sure I would recognise a "posthumanist" or "humanist" design if I saw them. Or if designs can be "humanist" or "posthumanist" for that matter. We rub up against the dual nature of technical artefacts: it is very hard - maybe impossible - to reverse engineer the intentional nature of a product from its physical nature. Since humanism and posthumanism describe philosophies, they would manifest as different approaches to design decision-making (applying different values, even if the underlying process didn't change). Different approaches to design can converge on very similar - or even identical - outcomes. They can diverge wildly, as well, and changes of intent will change the "best" design for a given circumstance. It might be the case that actually, posthumanist designs end up being very different from humanist designs. It's just that it might not. Would the design process even be different? Or just the values underpinning choices? How would you know if your design process took a posthuman approach? This, by the by, probably applies to all engineering done across disciplinary boundaries. How would you know from a product whether it had been engineered ethically? Or critically? Or with the Social, rather than medical, model of Disability in mind?

An interesting comparison (well, I think it's interesting) is whether the same difficulty would apply to recognising  "transhumanist" design. There is a difference in intent (which would be difficult to confirm from the end product), but would there also be a difference in the scope and material of designs? Would "transhumanist" engineering be visible in terms of the incorporation of the human body as part of the material being designed? But then, would we class tissue engineering or the reconfiguration of muscles in amputees to improve the reading of EMGs for prosthetics as "transhumanist"? Probably not - again, the difference lies in intent.

That said, the phrase "Humanist Engineer" is not one that I have originated. You can find several references to it: a Twitter account, an interview with Lew Cirne of New Relic, a talk to the Royal Academy of Engineering from Janusz Kozinski of the New Model in Technology and Engineering. The thing that these have in common is a need for engineers to be "more human", take a broader view, and have a focus on the needs of humans, rather than just on developing technology. Of course, here the term "humanist" is probably used in the sense of a non-religious person who seeks to "live a good life" and "work together to improve the quality of life for all and make it more equitable" (see Humanists UK), which I'm not sure is quite the same thing that posthumanism critiques.

I mean, I think that telling an engineer working on prosthetics that "your users are no less human because they have had an amputation!" would probably elicit a puzzled look and the response "Why would I think they were any less human?". Though that in turn might lead to a debate about why they are designing prostheses - which is probably better taken by those who use them than by me. I guess, though, we run back to the matter of intent: if you're designing prosthetic hands (say) because anyone who doesn't have two hands is broken and needs to be restored to normality, then that's a different case from designing them because amputees find them a useful tool for picking things up.

Of course, perhaps the "posthuman" becomes more interesting in the context of artificial intelligence (AI) and robotics, where blurring the boundary between human and artefact becomes more significant. Are robots slaves, or do they have rights? And if robots have rights, then do other machines? Do we have obligations to the machines that we create? And if so, where is the line drawn? Do we have a moral obligation not to injure the pavement by walking on it? I don't think anyone would make that case. Would we only have such obligations to "strong" AI? 

The short answer is, I'm just not sure. Which is a slightly unsatisfying point at which to end this blog post, but greater minds than mine are grappling with these issues.  Perhaps a better question is: are these questions relevant to engineers while they are doing engineering?  As distinct from being relevant to engineers because they are relevant to everyone?

In Summary

So, after that long and rambling thought-piece: what (if anything) can we conclude? 

First and foremost that the effects of any philosophical intervention would have to be sought in the process, rather than the product of engineering. 

Following from this, we might ask: what would be different about this process? The steps involved might even be the same; the difference would lie in the values and attitudes brought to design decisions.

And what would be different about those values? Here the amorphous nature of the posthuman causes the detail to become blurry. Would it be more respectful of life as a whole? More reflexive, identifying and challenging biases? More open to exploring broader horizons? More focused on the needs of users rather than theoretical ideals? All of these have potential in helping to head off Collingridge's dilemma, and in countering the problem of microscopic vision - but none of these would be unique to the Posthuman.

But maybe that doesn't matter. Maybe the value of engineers engaging with the Posthuman is that it provides an avenue for raising these issues, rather than a magic bullet to solve them.

And that feels like a positive conclusion to me, so I think I will stop there.

Thursday, 5 July 2018

Month in Review: June 2018

Well, the wheels are definitely off in terms of my two-posts-a-month target! We'll see if I can catch up and do four posts in July!

It goes without saying that June was a busy month. It's not quite as brutal as May (particularly since I'd finished my marking), but it's exam board season. We're much more efficient than we used to be (what was about 14 hours of meetings has been trimmed to 7 or so), but the deadlines are absolute. Things have to be ready for the external examiners come hell or high water (did I say that in my last post? Well, it's true!). All marks need to be finalised, work from students with coursework extensions marked, marks uploaded, and coursework samples selected.

Of course,  it was made extra busy by the British Academy Summer Showcase preparations, and prep for New Designers and the upcoming SUITCEYES consortium meeting taking place in Leeds this month. Also our taught MSc conference, where we spend a day with every taught postgraduate presenting their dissertation. Busy and hard work, but a great way of finding out about the range of projects going on.

I have two Laidlaw Summer students, one returning to work on FATKAT (the Finger and Thumb Kinematic Assessment Tool), and another working on haptic feedback in VR and SUITCEYES. The Leeds SUITCEYES team is finally complete, with Zhenyang Ling (known as Yang), Research Fellow in Haptic Communication and Navigation starting.

There are some big changes afoot for CAP (the Cognition-Action-Planning lab I am part of), as we do our annual stocktake of where we are and where we're going. The most obvious change is our increasing focus on Immersive Cognition. It doesn't mean a lot of change in what we do, but reflects the fact that our work is increasingly oriented around Virtual, Augmented or Mixed Reality. The lab will be rebranded as ICon (for "Immersive Cognition" - ICogn seemed a bit opaque). It's exciting times with Leeds' new centre for Immersive Technologies coming online.

July promises to be another busy month! We have the aforementioned SUITCEYES consortium meeting taking place in Leeds, we are now ready to start conducting our first interviews, and with Yang getting stuck into the technology side,  we're really hitting our stride. And, of course, teaching prep. I always aim to have handouts ready by the end of July, so I can print in good time and can't make last minute changes.

Busy times - still,  it keeps me out of trouble, eh?

British Academy Summer Showcase

I nearly titled this "Too busy to blog", since it's been a fiendishly heavy duty month. That's partly due to exam boards - this is always quite a busy time of year, since marks need to be in and confirmed for the external examiners' visit come hell or high water - but this year things have been busier than usual, thanks to the upcoming SUITCEYES consortium meeting here at Leeds next month, and in particular the British Academy's first summer showcase, where Stuart Murray, Sattaporn Barnes (of Eatfish Design) and I were showing off our "Engineering the Imagination" project, and the resulting artificial hands that we developed.

It was a great time - if very busy (we spent about thirteen hours each over three days on our exhibition stand). Lots of good conversations! But let me back up a little: what is Engineering the Imagination? After all, you might have spotted a certain similarity to the title of this blog...

Engineering the Imagination is a year-long project funded by the APEX scheme, intended to bring together the sciences and the humanities. This particular project focuses on the design of artificial hands, and in particular the consideration of non-functional hands: which is hard for me, as an engineer, to get my head around. I suspect that Stuart and I have very different takes on the project. For Stuart, I think it's all about hands as metaphor, ideas of deficit and difference: what makes a hand 'disabled'? Why do we design artificial hands to be like 'normal' hands - and what makes a hand 'normal'? What do hands signify, and how does this change if the hand is artificial? Stuart would be better placed to explain his views.

For me,  it's about exploring ideas about what we can do with artificial hands. Why not have a sixth finger? Lights? If we can't replicate the human hand,  are there other ways an artificial hand could emote? Or function?

The designs we were showing off reflected this. There was the Empathy Hand: a powered hand that could adopt a range of poses; the three-fingered "Mudd Hand", based on the hand of our collaborator Andy Mudd (who was also there to show the original that inspired it!); and the six-digit "Lunate Hand", which had a second thumb, inspired by the work of Clifford Tabin and his comments about extra thumbs.

You can see images of all three, and the stand (for context!) below! Also, though we didn't have it ready in time for the Showcase,  the Empathy Hand now has a light-up palm which, when pressed, causes the hand to light up and close in response. It was a great three days, but I'm aware that I'm already five days late with this update, so I think I'll call it a day there, and let you enjoy the pics!



The Stand as a Whole!
A three-fingered artificial hand, shown with fingers closed.
The Mudd Hand: A three-fingered hand designed to mimic that of our collaborator, Andy Mudd

A six-digit hand: it has the normal five digits, plus an additional thumb extending from the palm to oppose the middle finger.
The Lunate Hand: A six-digit hand adding an extra thumb from the palm. Named Lunate because we reckon that the thumb is attached roughly where the lunate bone is in the wrist, and it sounded swish.


An artificial hand shown in an open pose, with fingers splayed.
The Empathy Hand: An artificial hand that can open and close in response to trigger signals. It is designed to be modular so that parts can be interchanged. Adding a light-up palm for example! At the moment it just has a range of poses triggered by button presses.


An artificial hand shown in a closed pose, grasping another hand from the exhibition.
The Empathy Hand getting to grips with the competition!


The Mudd and Lunate Hands in Situ


Friday, 25 May 2018

Month in Review: May 2018

It's not the end of the month, yet, but as it's half term next week, I'm off work, so this seemed like a good time to update. Rather than risk drifting into June.

It has (as always) been a busy old month. In many ways, May is exam month: vivas have been the main feature. I've marked portfolios, read and examined dissertations, and examined not one but two product design exhibitions! Everything else gets rather squeezed out. Still, I've managed to fit in a presentation at the Pint of Science Festival, which was good, and we've made some significant progress on the Apex project, so I'm awash with bits of 3D printed hands at the moment! I also managed a trip to Peterborough to visit Deafblind UK for the SUITCEYES project, which was very informative.

And last - but far from least - we welcomed aboard a new member of the SUITCEYES team: Adriana Atkinson, as Research Fellow in User Needs Analysis. She'll be looking after the interviews in the UK that will inform the SUITCEYES project. In fact, after four months of largely admin, recruitment and planning, with me doing a bit of technical development on Work Package 5 (Driver Control Units - the bits I took to Amsterdam last month), things have abruptly sprung into life. This is particularly true on Work Package 2, where we suddenly have a draft protocol (thanks in large part to Sarah Woodin), an application for ethical review for the protocol (thanks in large part to Bryan Matthews) and a good chunk of literature under review (thanks to Sarah, Bryan and Adriana).

I mention who's doing these things since, for the most part, I've ordered computers, booked rooms, organised meetings and run vivas - it feels almost unnerving to have so much happening without me being the one doing it! But it is also a huge relief to feel all the early work starting to pay off, and feel like we're actually getting into research and not just lots of planning and project management.

Next month is shaping up to be an even more exciting one: Jamie Mawhinney will be resuming his Laidlaw Scholarship on developing FATKAT; we have a second Laidlaw Scholar (one Erik Millar)  starting who will be looking at tactile feedback and VR; we have another SUITCEYES Research Fellow starting - looking after the sensing and technical developments and, of course, I will be down at the British Academy Summer Showcase with Stuart Murray and Eat Fish Design showing off our work on Engineering the Imagination. Also, there will be exam boards, so my teaching duties are not done yet.

Still, first, I'm off to see the Falkirk Wheel and the Kelpies at the back end of this month: I couldn't be more excited!

Talking through Touch: Pint of Science Festival

I was invited to participate in the Pint of Science festival this year - specifically at the "Harder, Better, Faster, Stronger" event on the 16th of May. As is my wont, I like to think out loud in writing a presentation, and the blog is a perfect place to do that, so here are my jottings - published retrospectively, in this case, largely because I've been so busy with examining duties that the blog has been a low, low priority!

This presentation is on "Talking through Touch", and it really relates to the work I'm doing on the Horizon 2020-funded SUITCEYES project. As always, I need to be careful because I am an Engineer - not a neuroscientist, or a psychophysicist, or even a philosopher of the senses. I know how to make things, but I can't give chapter and verse on - say - sensory illusions or the practicalities of multisensory integration or the merits of different haptic sign languages. I can parrot what I've read elsewhere and heard from others, I can give you a bit of an overview on these areas, but I'll never be anywhere near as good at them as those who specialise in them. But I can make stuff so, y'know - swings and roundabouts.

Anyway, it does imply the need for my customary "Caveat Lector" warning: you're about to read the thoughts of an engineer, and they need to be read in that context!

The Sense of Touch
Perhaps a logical place to start is with the sense of touch. And where better to start than by pointing you to people who are far more well-versed in these things than I am? A good place to start would be the recent Sadler Seminar Series "Touch: Sensing, Feeling, Knowing" convened here at the University of Leeds by Amelia De Falco, Helen Steward and Donna Lloyd. Sadly, the slides from the series aren't available - I might need to chase up on those to see if they or recordings will be made available, because they were very good talks. Particularly notable for my purposes - because they deal with crossing senses - were those from Charles Spence from the University of Oxford (noted for his work on multisensory integration - using sound and tactile stimuli to augment the sense of taste, for example) and Mark Paterson from the University of Pittsburgh, who deals with sensorimotor substitution and the problems thereof (which we will come back to later on).

A lot of my research is about prehension and grip - but hands are also used to explore the world around us (sensing hot and cold, rough and smooth, hard and soft, and so forth) and to communicate - through gestures or direct touch (India Morrison's presentation on Affective Touch at the aforementioned Sadler Seminar series was particularly illuminating in the latter regard). And of course, it is worth noting that touch is not a sense restricted to the hands, but present across the skin - albeit with different degrees of fidelity. Hence the classic "Cortical Homunculus" representations that you see:

Sensory homunculus illustrating the proportion of the somatosensory cortex linked to different parts of the body.
Cropped from image by Dr Joe Kiff taken from Wikipedia under creative commons licence CC BY-SA 3.0
This is the limit of my knowledge of the neurology of the somatic senses, so I'm going to leave it there. The key point for my talk is that we're interested in touch as a mode of communication, rather than, for example, as a way of exploring properties of the world around us. Of course, there is a link here: in order to communicate through touch, we need to be able to perceive the signals that are being sent! So let's have a think about what those signals might be.

Communicating Through Touch
Tactile communication takes many forms. The one we're probably most familiar with is the eccentric-rotating-mass motor that provides vibrotactile feedback on our phones - the buzzing when you set it to "vibrate". But there are lots of other examples. Braille is well known, and likewise you can often get tactile images (see this link for a nice paper on this from LDQR, who are partners in the SUITCEYES project), such that information can be presented in a tactile form. Tactile sign languages exist, and these take a variety of forms, from fingerspelling alphabets (often signed onto the hand) to more complex social haptic signals or tactile sign languages such as Protactile. This does highlight an interesting distinction - between signals (one-off, discrete messages) and language (assembling signals into complex messages - at least, to my engineering mind, language assembles signals: linguistics may take a different view!). You can see the fundamental difference between a simple buzz and - as an example - Protactile. Haptic sign languages have shape and movement, and involve proprioception. They aren't just Morse code that can be converted easily into vibrations.

Luckily, haptic feedback isn't restricted to vibrotactile feedback through eccentric-rotating-mass motors. One of the examples that I find really interesting is the Haptic Taco, which changes its shape as you get nearer to or further from a target point. And there are lots of examples of different modalities of haptic feedback - electrostatic, thermal, pressure, shape-changing, and so on - you can check out conferences such as Eurohaptics for the cutting edge in haptic feedback.

Sensory Substitution vs Haptic Signals vs Haptic Language
This brings us neatly onto the issue of what it is that we want to signal. After all, in the case of SUITCEYES, the aim is to "extend the sensosphere" by detecting information from the environment, and then presenting this to the user in a tactile form. This can take two forms that I can see: direct sensory substitution (transferring information from one modality to another - measuring distance with a distance sensor and then giving a related signal, as we did back in the WHISPER project) or by signalling - that is, interpreting the sensor data and sending a corresponding signal to the user.

A simple example, based on comments from the WHISPER project, might help to illustrate this. One piece of feedback we received was that the device we developed would be helpful for identifying doors, since it could be used to locate a gap in a wall. This suggests two different approaches.

The first is the sensory substitution approach: you measure the distance to the wall, and feed this back to the user through vibrations that tell them the distance to the item the distance sensor is pointing at. Close items, for example, might give a more intense vibration. The system doesn't know what these items are - just how far the signal can travel before being returned. In this scenario, the user sweeps the distance sensor along the wall until they find a sudden decrease in vibrations that tells them that they have found a hole. It would then be up to them to infer whether the hole was a door. Of course, this wouldn't work terribly well if the door was closed.
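As a minimal sketch of that first approach (the sensor range and the linear mapping here are hypothetical illustrations, not the actual WHISPER implementation):

```python
def distance_to_intensity(distance_m, max_range_m=4.0):
    """Map a distance reading to a vibration intensity in [0, 1].

    Closer objects give stronger vibrations; anything at or beyond the
    sensor's (assumed) maximum range gives no vibration at all.
    """
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - (distance_m / max_range_m)

# Sweeping along a wall: a sudden drop in intensity marks a gap (a
# possible doorway) - but the system itself has no idea what the gap is.
readings = [1.0, 1.0, 1.1, 3.8, 3.9, 1.0]   # metres; the gap is readings 3-4
intensities = [distance_to_intensity(d) for d in readings]
```

The interpretation - "that dip was a door" - stays entirely with the user.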

A second approach would be to use, for example, computer vision to interpret a camera feed and recognise doorways. Now, instead of sending a signal that is related to distance, the system would need to provide some sort of signal that indicated "Door". This might be in the form of an alert (if the system is just a door detector, it need only buzz when it sees a door!), or of a more nuanced signal (it might spell out D-O-O-R in fingerspelling, Morse code or braille, or it might use a bespoke haptic signal using an array of vibrotactile motors).
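In software terms, this second approach boils down to a lookup from a recognised label to a haptic pattern. A sketch, with entirely made-up pulse patterns (designing patterns that are actually distinguishable is the hard part):

```python
# Hypothetical lookup from a recognised label to a vibrotactile pattern,
# expressed as (on_ms, off_ms) pulses for a single motor.
SIGNALS = {
    "door":   [(200, 100), (200, 100)],   # two short buzzes
    "person": [(600, 0)],                  # one long buzz
}

def signal_for(label):
    """Return the pulse pattern for a recognised label, or None if unknown."""
    return SIGNALS.get(label)
```

The computer vision system produces the label; the table turns it into something the skin can receive.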

There is a third approach, which would be that of a haptic language - that is, combining multiple signals into a coherent message. "The door is on the left", for example, or "The door is on the left, but it's closed", or "The door is 3m to the left".

There is one further issue to consider (kindly highlighted to me by Graham Nolan from Deafblind UK), which is that of nuance: when we speak, we don't just state a series of words. We modify them with tone, gesticulation and body language, something that often gets lost in written text alone - see Poe's Law, or any of the many misunderstandings in email and on the internet arising from a failure to recognise sarcasm or a joke - it is, after all, one of the reasons that emojis have caught on. I imagine. The same problem applies in haptic communication: less so with our door example, which is largely functional, but let's take a different example.

If you signal distance, then you would know when something was in front of you. You might, using (let's say) our hypothetical computer vision system, give that thing a label. Is it a wall, a door, a person? Or your best friend? And what if it is your best friend giving a greeting, or your best friend waving a warning? Do they look happy or worried? Can we have empathetic communication and build relationships if our communication is purely functional?

I'm not the right person to answer that, but from a technical perspective, it does highlight the challenge. Do we need a stimulus that directly conveys a property (such as distance)? A signal that can be used to label something? Or a way of combining and modulating signals to carry richer, more nuanced messages?

So, there are a few things we can look at here: modulation of one property to represent another, a set of signals to label different things, combining multiple signals to create messages, and finally the challenge of modulating those signals, or messages, to capture nuance. But what properties do we have to play with?

Things to Consider
There are several modalities of tactile stimuli:

Contact - a tap or press, bringing something into contact with the skin.
Vibration - the classic vibration setting on mobile phones.
Temperature - not one that is well used, as far as I'm aware, since it's tricky to get things to heat up and cool down quickly.

Another interesting example is raised by the Haptic Taco: a device that changes shape/size to indicate proximity to a target. So, we can add shape and size to our list. There are others, too (electrostatic displays being the most obvious).

Then, we can modulate each of these in three ways - duration, location and intensity - and play around with ordering.
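As an engineer's sketch, you could think of each element of a signal as a token parameterised along exactly those axes, with a message being an ordered sequence of tokens (the modality names, body sites and value ranges here are illustrative, not a SUITCEYES specification):

```python
from dataclasses import dataclass

@dataclass
class HapticToken:
    modality: str      # e.g. "vibration", "contact", "temperature"
    location: str      # which actuator / body site, e.g. "left-shoulder"
    intensity: float   # 0.0 (off) to 1.0 (maximum)
    duration_s: float  # how long the stimulus lasts

# A "message" is then just an ordered sequence of tokens - ordering
# being the extra dimension we can play around with.
message = [
    HapticToken("vibration", "left-shoulder", 0.8, 0.2),
    HapticToken("vibration", "left-shoulder", 0.3, 0.5),
    HapticToken("contact",   "back",          1.0, 0.1),
]
```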

So, we have a toolkit of modalities and modulations that we can apply to create signals or more complex communication. Of course, we then have questions of discrimination - the ability to differentiate these elements - in time, location and space.

There is, finally, the question of efficiency: how quickly and reliably a message can be interpreted. After all, Morse code can be delivered readily through vibrotactile feedback, but compared to direct speech, it is relatively slow.
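To put a rough number on that, we can use the standard Morse timing rules (dot = 1 unit, dash = 3 units, with gaps of 1, 3 and 7 units within letters, between letters and between words respectively). The 100 ms unit length below is a hypothetical figure for a vibrotactile motor, not a measured one:

```python
MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def word_units(word):
    """Length of a word in Morse time units, including the 7-unit word gap."""
    total = 0
    for i, letter in enumerate(word):
        code = MORSE[letter]
        total += sum(1 if s == "." else 3 for s in code)  # dots and dashes
        total += len(code) - 1                            # gaps within a letter
        total += 3 if i < len(word) - 1 else 7            # letter / word gap
    return total

# "PARIS" is the standard 50-unit reference word. With a 100 ms unit,
# that's 5 s per word - about 12 words per minute, against roughly
# 150 wpm for ordinary speech.
units = word_units("PARIS")
wpm = 60 / (units * 0.1)
```

An order of magnitude slower than speech, before you even consider how reliably the buzzes can be discriminated.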

And... that's pretty much where my presentation runs out. No clear conclusions... no findings, because we're still very much at the beginning of the project. This is more about laying out my thoughts on haptic communication. Let's hope that doesn't bother the audience too much.

Monday, 30 April 2018

Month in Review: April 2018

I'm going to have to admit defeat on producing a non-Month-in-Review post this month: life and work have been too busy for blogging! Of course, paradoxically, that gives me a lot to talk about, but such is life.

Of course, this month has seen the Easter holidays, so I've had Bank Holidays, and a substantial chunk of annual leave taken up looking after children (including a trip to Manchester to visit the Robots exhibition at the Museum of Science and Industry, and a family holiday), which means that I've only actually been working for half the month.

Still, it's been an exciting half month, particularly on the SUITCEYES front. We've been interviewing for User Needs Research Fellows, and I've also had a trip to visit Astrid Kappers' lab in Amsterdam to test out the first prototypes with Nils-Krister Persson and Adriana Stöhr from Borås. And Astrid. It was a fascinating and very informative time. Lots to report on - in due course!

There was also a trip to Dundee with Stuart Murray to meet with Graham Pullin (the author of the excellent Design Meets Disability, and leader of the also-excellent Hands of X project) to talk Hands, which was also great. On top of that, we are at the end of teaching, have had a very productive workshop on Immersive Technologies (today!), and we've just managed to get revisions in on one of our grip modelling papers.

Lots of exciting stuff to report - but no time in which to report it, sadly! Such is the way!

I will be doing the Pint of Science festival in May on the subject of Talking Through Touch, so hopefully I'll get a chance to put up a post on my talk before I give it... here's hoping!

Sunday, 1 April 2018

Month in Review: March 2018

Ooops. I managed to miss my "two posts a month" target in March, albeit only by a day. Anyway, it's been a busy month with not a huge amount of specifics to report. As noted in my last post, I was down to Westminster for the All-Party Parliamentary Group on Assistive Technology: other than that there's been the usual end of term rush, planning for the Summer Showcase, advertising for Research Fellows, sorting out employment paperwork, and one PhD student (Awais Hafeez) has passed his transfer viva, while another has passed his final viva (Haikal Sitepu) - congratulations to both! The main thing that has kept me busy though is implementing the controller for SUITCEYES, which I'm pleased to say is coming along nicely and will be tested in Amsterdam next month. Sorry, *this* month, since it's now April. Of course, half of this month is School holidays, so I've a fair bit of annual leave, plus the two days in Amsterdam, plus a day's round-trip to Dundee. I'll keep you posted!

Friday, 30 March 2018

APPGAT: Assistive Technology and the Industrial Strategy

This week, I attended the All-Party Parliamentary Group on Assistive Technology's symposium on Assistive Technology and the Industrial Strategy. This was a new experience for me: policy and parliament are both rather outside my sphere of experience, but ever since Claire Brockett organised a Parliamentary outreach session on Science and Academia in UK Policy, I've been thinking about how I might engage more with Westminster, and this seemed like a good opportunity to get involved and keep my finger on the pulse. 

I went with two hats on (not literally) - representing both the Centre for Disability Studies, and the Institute of Design, Robotics and Optimisation - though I was there for both in very much a listening capacity. Just attending was an interesting experience - the format was very different from anything one experiences in academia. Each presenter got five minutes, the keynote got ten, and the timekeeping was absolutely dead on. The floor was opened to questions and comments, the questions were (more or less) answered by the panel, and that was the end of the session. By academic standards - where presentations are usually fifteen to twenty minutes and frequently overrun - this was lightning fast. Of course, the aim wasn't to describe a detailed piece of research, but to give high level comments, and make way for discussion.

The session was chaired by Lord Chris Holmes (Conservative Peer and noted Paralympian swimmer), and had contributions from Hazel Harper of Innovate UK, Bill Esterson MP (Shadow Minister for International Trade, and Shadow Minister for Small Business), Prof Nigel Harris (Director of the outstanding Designability), David Frank (Microsoft's UK Public Affairs Manager),
Dr. Catherine Holloway (Academic Director of the Global Disability Innovation Hub), Alex Burghart MP (Member of the Work and Pensions Select Committee), with the keynote coming from Sarah Newton MP, Minister for Disabled People. This was followed by questions from the floor - I won't go through a blow-by-blow account of what was said: rather, let me pull out the key themes.

Of course, there were two themes that were in some tension here - as always in assistive technology - the needs of disabled people to remove barriers and find solutions that enable them to do what they wish to do; and the needs of the designers and manufacturers of assistive technology to keep making new devices and thereby keep making money. This tension exists within academia, as well, of course - the REF requires me to produce new and cutting edge engineering (AI! Exoskeletons! Self-Driving Cars!), which isn't necessarily the same research that will most benefit disabled people. Which isn't to say that the two are mutually exclusive, of course, but it is a source of tension.

This tension exists in the Industrial Strategy itself: this strategy is all about "building a more productive economy". So, in terms of AT does that mean improving the productivity of the AT sector? Or does it mean AT to improve the productivity of disabled people? This was never really addressed - there was a lot of reference to helping disabled people "fulfill their potential", which basically seemed to mean working. But there were also references to the size of the AT sector in the UK economy, how well we perform there, selling to the rest of the world. The two need not be mutually exclusive - indeed, they can be mutually reinforcing, as highlighted by Nigel Harris' discussion of Designability's co-design approach.

Inevitably, the poster children of cutting edge technology (AI! Exoskeletons! Self-Driving Cars!) cropped up. Which I'm not against by any means - with my iDRO hat on, these are exciting new technologies that are going to help us to do all sorts of things and have huge potential for enabling people - with my CDS hat on, though, I'm more sceptical. And this is where the underlying tension rears its head again. If we want the UK to be world leaders in tech, we need to be doing R&D where the tech is "sexy" and the world at large will want to invest. But those aren't necessarily the same areas that will most benefit the lives of disabled people.

This ties in with the wider issue that the size of the market for any given piece of AT is relatively small. Nigel Harris highlighted the need for products that have wider appeal, so that they can be sold to the mainstream as well as the specialist sectors. This also raises the larger question of accessibility - that is, whether we need to develop specialist AT, and to what extent we need to ensure that mainstream technology is accessible so that everyone can enjoy the benefits - the selective enabling issue that I was musing on a year and a bit ago.

Particularly noticeable was the lack of any representation of Disabled People's Organisations on the panel (noted by Catherine Holloway) - we had academics and industrialists, but nothing from the end users. Which communicates to me that the focus of the symposium was on the AT sector as a business, rather than on the needs of the recipients of AT. Perhaps that's unfair: or perhaps it's just an indication that one way of resolving the tension between them is to treat the two aspects separately, and this symposium was really about the manufacturers. After all, it's unreasonable to judge the activities of APPGAT on the basis of a single symposium. Nevertheless, the symposium promised to look at "how the AT sector can further contribute to our economy and society". There was a lot of the former, but rather less of the latter, other than the need for AT to help people "fulfill their potential".

How this will work out in the Industrial Strategy remains to be seen. Maybe there is a need to address "AT as a business" and "AT as a service" separately? Going after the cutting edge is always chasing rainbows - things that were exciting and novel and exploratory become useful and hence commonplace and mundane, so the research and the attention moves on. It's good to have that cutting edge - but attention needs to be paid to the other part as well: how we get from that cutting edge to useful products and devices that actually benefit people's lives once the immediate research attention moves on. That's something that I'd like to see addressed, though I've no idea how you'd do it. 

Still, in reflecting on all this, a particular question keeps popping up in my mind: how do we enable disabled people to get involved in the AT industry? Not just as users and testers, but as designers, makers, direction-setters? How do we enable people to make their own AT, and customise their own devices, rather than just selling them specialist kit?

Anyway, those are my thoughts - you can follow APPGAT on Twitter at @AT_APPG and follow the discussion of the symposium at #ATIndustrialStrategy.