Friday, 25 May 2018

Month in Review: May 2018

It's not the end of the month yet, but as it's half term next week and I'm off work, this seemed like a good time to update rather than risk drifting into June.

It has (as always) been a busy old month. In many ways, May is exam month: vivas have been the main feature. I've marked portfolios, read and examined dissertations, and examined not one but two product design exhibitions! Everything else gets rather squeezed out. Still, I've managed to fit in a presentation at the Pint of Science Festival, which was good, and we've made some significant progress on the Apex project, so I'm awash with bits of 3D printed hands at the moment! I also managed a trip to Peterborough to visit Deafblind UK for the SUITCEYES project, which was very informative.

And last - but far from least - we welcomed aboard a new member of the SUITCEYES team: Adriana Atkinson, as Research Fellow in User Needs Analysis. She'll be looking after the interviews in the UK that will inform the SUITCEYES project. In fact, after four months of largely admin, recruitment and planning, with me doing a bit of technical development on Work Package 5 (Driver Control Units - the bits I took to Amsterdam last month), things have abruptly sprung into life. This is particularly true on Work Package 2, where we suddenly have a draft protocol (thanks in large part to Sarah Woodin), an application for ethical review for the protocol (thanks in large part to Bryan Matthews) and a good chunk of literature under review (thanks to Sarah, Bryan and Adriana).

I mention who's doing these things since, for the most part, I've ordered computers, booked rooms, organised meetings and run vivas - it feels almost unnerving to have so much happening without me being the one doing it! But it is also a huge relief to feel all the early work starting to pay off, and to feel like we're actually getting into research and not just lots of planning and project management.

Next month is shaping up to be an even more exciting one: Jamie Mawhinney will be resuming his Laidlaw Scholarship on developing FATKAT; we have a second Laidlaw Scholar (one Erik Millar) starting, who will be looking at tactile feedback and VR; we have another SUITCEYES Research Fellow starting, looking after the sensing and technical developments; and, of course, I will be down at the British Academy Summer Showcase with Stuart Murray and Eat Fish Design showing off our work on Engineering the Imagination. Also, there will be exam boards, so my teaching duties are not done yet.

Still, first, I'm off to see the Falkirk Wheel and the Kelpies at the back end of this month: I couldn't be more excited!

Talking through Touch: Pint of Science Festival

I was invited to participate in the Pint of Science festival this year - specifically at the "Harder, Better, Faster, Stronger" event on the 16th of May. As is my wont, I like to think out loud when writing a presentation, and the blog is a perfect place to do that, so here are my jottings - published retrospectively, in this case, largely because I've been so busy with examining duties that the blog has been a low, low priority!

This presentation is on "Talking through Touch", and it really relates to the work I'm doing on the Horizon 2020-funded SUITCEYES project. As always, I need to be careful because I am an engineer - not a neuroscientist, or a psychophysicist, or even a philosopher of the senses. I know how to make things, but I can't give chapter and verse on - say - sensory illusions or the practicalities of multisensory integration or the merits of different haptic sign languages. I can parrot what I've read elsewhere and heard from others, and I can give you a bit of an overview of these areas, but I'll never be anywhere near as good at them as those who specialise in them. But I can make stuff, so, y'know - swings and roundabouts.

Anyway, it does imply the need for my customary "Caveat Lector" warning: you're about to read the thoughts of an engineer, and they need to be read in that context!

The Sense of Touch
Perhaps a logical place to start is with the sense of touch. And where better to start than by pointing you to people who are far better versed in these things than I am? A good place to start would be the recent Sadler Seminar Series "Touch: Sensing, Feeling, Knowing", convened here at the University of Leeds by Amelia De Falco, Helen Steward and Donna Lloyd. Sadly, the slides from the series aren't available - I might need to chase those up to see whether slides or recordings will be made available, because they were very good talks. Particularly notable for my purposes - because they deal with crossing senses - were those from Charles Spence of the University of Oxford (noted for his work on multisensory integration - using sound and tactile stimuli to augment the sense of taste, for example) and Mark Paterson of the University of Pittsburgh, who deals with sensorimotor substitution and the problems thereof (which we will come back to later on).

A lot of my research is about prehension and grip - but hands are also used to explore the world around us (sensing hot and cold, rough and smooth, hard and soft, and so forth) and to communicate - through gestures or direct touch (India Morrison's presentation on Affective Touch at the aforementioned Sadler Seminar series was particularly illuminating in the latter regard). And of course, it is worth noting that touch is not a sense restricted to the hands, but present across the skin - albeit with different degrees of fidelity. Hence the classic "Cortical Homunculus" representations that you see:

Sensory homunculus illustrating the proportion of the somatosensory cortex linked to different parts of the body.
Cropped from image by Dr Joe Kiff taken from Wikipedia under creative commons licence CC BY-SA 3.0
This is the limit of my knowledge of the neurology of the somatic senses, so I'm going to leave it there. The key point for my talk is that we're interested in touch as a mode of communication, rather than, for example, as a way of exploring the properties of the world around us. Of course, there is a link here: in order to communicate through touch, we need to be able to perceive the signals that are being sent! So let's have a think about what those signals might be.

Communicating Through Touch
Tactile communication takes many forms. The one we're probably most familiar with is the eccentric rotating mass motor that provides vibrotactile feedback on our phones - the buzzing when you set it to "vibrate". But there are lots of other examples. Braille is well known, and likewise you can often get tactile images (see this link for a nice paper on this from LDQR, who are partners in the SUITCEYES project), such that information can be presented in a tactile form. Tactile sign languages exist, and these take a variety of forms, from fingerspelling alphabets (often signed on the hand) to more complex social haptic signals or tactile sign languages such as Protactile. This does highlight an interesting distinction - between signals (one-off, discrete messages) and language (assembling signals into complex messages - at least, to my engineering mind, language assembles signals; linguistics may take a different view!). You can see the fundamental difference between a simple buzz and - as an example - Protactile. Haptic sign languages have shape and movement, and involve proprioception. They aren't just Morse code that can be converted easily into vibrations.

Luckily, haptic feedback isn't restricted to vibrotactile feedback through eccentric rotating mass motors. One of the examples that I find really interesting is the Haptic Taco, which changes its shape as you get nearer to or further from a target point. And there are lots of examples of different modalities of haptic feedback - electrostatic, thermal, pressure, shape-changing, and so on - you can check out conferences such as Eurohaptics for the cutting edge in haptic feedback.

Sensory Substitution vs Haptic Signals vs Haptic Language
This brings us neatly onto the issue of what it is that we want to signal. After all, in the case of SUITCEYES, the aim is to "extend the sensosphere" by detecting information from the environment and then presenting it to the user in a tactile form. As far as I can see, this can take two forms: direct sensory substitution (transferring information from one modality to another - measuring distance with a distance sensor and then giving a related signal, as we did back in the WHISPER project) or signalling - that is, interpreting the sensor data and sending a corresponding signal to the user.

A simple example, based on comments from the WHISPER project, might help to illustrate this. One piece of feedback we received was that the device we developed would be helpful for identifying doors, since it could be used to locate a gap in a wall. This suggests two different approaches.

The first is the sensory substitution approach: you measure the distance to the wall, and feed this back to the user through vibrations that tell them the distance to whatever the sensor is pointing at. Close items, for example, might give a more intense vibration. The system doesn't know what these items are - just how far the signal can travel before being returned. In this scenario, the user sweeps the distance sensor along the wall until they find a sudden decrease in vibration that tells them they have found a hole. It would then be up to them to infer whether the hole was a door. Of course, this wouldn't work terribly well if the door was closed.
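To make that concrete, here's a minimal sketch (in Python) of the sensory substitution loop, assuming a distance sensor that reports centimetres and a vibration motor driven by a duty cycle between 0 and 1. The names read_distance_cm and set_motor_duty are hypothetical placeholders, not anything from WHISPER or SUITCEYES.

```python
# Minimal sketch of sensory substitution: map measured distance directly onto
# vibration intensity. The sensor and motor functions are passed in as
# placeholders - swap in whatever hardware interface you actually have.

MAX_RANGE_CM = 200.0  # beyond this range the motor stays off


def distance_to_duty(distance_cm: float) -> float:
    """Closer objects give stronger vibration (duty cycle between 0 and 1)."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    return 1.0 - (distance_cm / MAX_RANGE_CM)


def update(read_distance_cm, set_motor_duty) -> None:
    """One pass of the loop: read the sensor, drive the motor accordingly."""
    set_motor_duty(distance_to_duty(read_distance_cm()))


# Quick check without hardware: a wall at 50 cm gives a fairly strong buzz.
print(distance_to_duty(50.0))  # 0.75
```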

A second approach would be to use, for example, computer vision to interpret a camera feed and recognise doorways. Now, instead of sending a signal that is related to distance, the system would need to provide some sort of signal that indicates "Door". This might be in the form of an alert (if the system is just a door detector, it need only buzz when it sees a door!) or a more nuanced signal (it might spell out D-O-O-R in fingerspelling, Morse code or Braille, or it might use a bespoke haptic signal played through an array of vibrotactile motors).
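As a similarly rough sketch of the signalling approach, here the label produced by a recogniser is turned into a discrete vibrotactile pattern. I've used Morse code purely because it's a well-known encoding that maps neatly onto on/off pulses - not because it's necessarily the right choice for this job.

```python
# Sketch of the signalling approach: once something has been labelled "door",
# emit a discrete haptic signal - here, Morse code for "DOOR" rendered as
# (vibrate_seconds, pause_seconds) pulses for a single vibration motor.

MORSE = {"D": "-..", "O": "---", "R": ".-."}
UNIT = 0.1  # seconds per Morse "dit" - an assumed, not standardised, value


def pulses_for(word: str):
    """Yield (on, off) durations in seconds for a Morse-coded word."""
    for letter in word:
        for symbol in MORSE[letter]:
            on = UNIT if symbol == "." else 3 * UNIT
            yield (on, UNIT)       # short gap after each dit/dah
        yield (0.0, 3 * UNIT)      # longer gap after each letter


for on, off in pulses_for("DOOR"):
    print(f"buzz {on:.1f}s, pause {off:.1f}s")
```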

There is a third approach, which would be that of a haptic language - that is, combining multiple signals into a coherent message. "The door is on the left", for example, or "The door is on the left, but it's closed", or "The door is 3m to the left".
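A toy sketch of what that might look like in code: individual signals get assembled into a structured message, which some renderer could then play out as a sequence of haptic tokens. The fields and vocabulary here are entirely illustrative - nothing to do with any actual haptic sign language.

```python
# Illustrative sketch of a "haptic language" message: structured content that
# a renderer could turn into an ordered sequence of haptic signals.

from dataclasses import dataclass
from typing import Optional


@dataclass
class HapticMessage:
    subject: str                       # e.g. "door"
    direction: Optional[str] = None    # e.g. "left"
    distance_m: Optional[float] = None
    state: Optional[str] = None        # e.g. "closed"

    def tokens(self) -> list:
        """Flatten the message into an ordered list of signal tokens."""
        parts = [self.subject]
        if self.direction:
            parts.append(self.direction)
        if self.distance_m is not None:
            parts.append(f"{self.distance_m:g}m")
        if self.state:
            parts.append(self.state)
        return parts


msg = HapticMessage("door", direction="left", distance_m=3, state="closed")
print(msg.tokens())  # ['door', 'left', '3m', 'closed']
```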

There is one further issue to consider (kindly highlighted to me by Graham Nolan from Deafblind UK), which is that of nuance: when we speak, we don't just state a series of words. We modify them with tone, gesticulation and body language - something that often gets lost in written text alone. See Poe's Law, or any of the many misunderstandings on the internet and in email arising from a failure to recognise sarcasm or a joke - it is, after all, one of the reasons that emojis have caught on. I imagine. The same problem applies in haptic communication: less so with our door example, which is largely functional, but let's take a different example.

If you signal distance, then you know when something is in front of you. You might, using (let's say) our hypothetical computer vision system, give that thing a label. Is it a wall, a door, a person? Or your best friend? And what if it is your best friend giving a greeting, or your best friend waving a warning? Do they look happy or worried? Can we have empathetic communication and build relationships if our communication is purely functional?

I'm not the right person to answer that, but from a technical perspective, it does highlight the challenge. Do we need a stimulus that directly conveys a property (such as distance)? A signal that can be used to label something? A way of combining signals into messages? And some means of modulating all of this to capture nuance?

So, there are a few things we can look at here: modulation of one property to represent another, a set of signals to label different things, combining multiple signals to create messages, and finally the challenge of modulating those signals, or messages, to capture nuance. But what properties do we have to play with?

Things to Consider
There are several modalities of tactile stimuli:

Contact - a tap or press, bringing something into contact with the skin.
Vibration - the classic vibration setting on mobile phones.
Temperature - not one that is well used, as far as I'm aware, since it's tricky to get things to heat up and cool down quickly.

Another interesting example is raised by the Haptic Taco, mentioned earlier: a device that changes shape/size to indicate proximity to a target. So, we can add shape and size to our list. There are others, too (electrostatic displays being the most obvious).

Then, we can modulate each of these in three ways - duration, location and intensity - and play around with ordering.
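One rough way to picture that toolkit - purely illustrative, not a SUITCEYES design - is as elements that each pick a modality and are then modulated by duration, location and intensity, with signals built up as ordered sequences of those elements:

```python
# Illustrative "toolkit" view: each haptic element has a modality, modulated
# by duration, location and intensity; a signal is an ordered list of elements.

from dataclasses import dataclass

MODALITIES = {"contact", "vibration", "temperature", "shape", "size"}


@dataclass
class HapticElement:
    modality: str       # one of MODALITIES
    duration_s: float   # how long the stimulus lasts
    location: str       # where on the body, e.g. "left shoulder"
    intensity: float    # 0.0 (imperceptible) to 1.0 (maximum)


# A short made-up signal: two gentle taps followed by a strong buzz.
signal = [
    HapticElement("contact", 0.2, "left shoulder", 0.5),
    HapticElement("contact", 0.2, "left shoulder", 0.5),
    HapticElement("vibration", 0.5, "left shoulder", 1.0),
]
print(len(signal), "elements in the signal")
```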

So, we have a toolkit of modalities and modulations that we can apply to create signals or more complex communication. Of course, we then have questions of discrimination - the ability to differentiate these elements - in time, location and space.

There is, finally, the question of efficiency: how quickly and reliably a message can be interpreted. After all, Morse code can be delivered readily through vibrotactile feedback, but compared to direct speech, it is relatively slow.
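A back-of-envelope comparison makes the point, assuming standard Morse word timing (50 units per word, by the "PARIS" convention), a 0.1-second unit for a vibration motor, and a ballpark 140 words per minute for conversational speech - all of these figures are assumptions, not measurements:

```python
# Rough arithmetic only: how fast is vibrotactile Morse compared with speech?

UNIT_S = 0.1         # assumed duration of one Morse unit, in seconds
UNITS_PER_WORD = 50  # standard "PARIS" word length in units
SPEECH_WPM = 140     # ballpark figure for conversational speech

morse_wpm = 60 / (UNIT_S * UNITS_PER_WORD)  # 12 words per minute
print(f"Morse via vibration: ~{morse_wpm:.0f} wpm; "
      f"speech: ~{SPEECH_WPM} wpm ({SPEECH_WPM / morse_wpm:.0f}x faster)")
```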

And... that's pretty much where my presentation runs out. No clear conclusions... no findings, because we're still very much at the beginning of the project. This is more about laying out my thoughts on haptic communication. Let's hope that doesn't bother the audience too much.