Tuesday 25 July 2017

Robots, Humans, Puppets: What's the Difference?

I've agreed to give a talk for the Superposition on the 2nd of August on the subject of "Humans, Puppets and Robots: What's the Difference?". This is part of their ASMbly talks and events, which bring together an Artist, a Scientist and a Maker to discuss a particular topic (you can get tickets via Eventbrite, if you so wish!). In this case, the Artist is the excellent Anzir Boodoo (puppeteer of Samuel L. Foxton), the Scientist is Samit Chakrabarty (who specialises in motor control via the spine) and the Maker is... well, me. Unsurprisingly, Anzir will be talking Puppets, Samit will be talking Humans and it falls to me to talk Robots. As is my custom, I thought I'd use the blog as a handy place to work out my thoughts before putting the presentation together.

The main question that we're looking at is how Puppets, Humans and Robots are similar and how they are different: the clue's in the title of the talk. This is a really interesting question. I've often thought about the human vs robot link. It's something that crops up a lot in my line of work, especially when you're looking at how humans and robots interact and when the robot has to use force feedback to help guide human movement. Samit is particularly interesting in this area, because of his modelling of human motor control as an electrical circuit. The link between robots and puppets, though, has been especially interesting to reflect on, as it ties in with some of my recent thoughts about the Tracking People project, and algorithms as a set of pre-made decisions. I mean, what is a computer program but a kind of time-delayed puppetry? By that token, a robot is just a specialised type of puppet: at least until Strong AI actually turns up.

I thought I'd break the talk down into four sections:

1) What is a Robot?
2) How do Robots Act?
3) How do Robots Sense?
4) Algorithms: Robots are Stupid

Let's take them in turn.

What is a Robot?

For all that we hear a lot about them, we don't really have a good definition of what constitutes a robot. I work in robotics, so I see a lot of robots, and I'm not sure I have a good sense of what the average person thinks of as a robot. iCub (seen here learning to recognise a chicken during our Augmenting the Body visit to Sheffield Robotics last year) probably comes pretty close to what I imagine most people think of as a robot:


A sort of mechanical person, though the requirement to look like a human probably isn't there - I mean, most people would recognise R2-D2 (full disclosure - R2-D2 remains what I consider the ideal helper robot, which may say as much about my age as its design) or, more recently, BB-8 as a robot just as much as C-3PO. Perhaps the key feature is that it's a machine that can interact with its environment, and has a mind of its own? That's not a bad definition, really. The question is: how much interaction and how much autonomy are required for an item to become a robot? To all intents and purposes, the Star Wars droids are electromechanical people, with a personality and the ability to adapt to their environment.

The term Robot originates from the Czech playwright Karel Čapek's play Rossum's Universal Robots (apparently from the Czech word for forced labour, robota). In this play, robots are not electromechanical, but biomechanical - though still assembled. This draws an interesting link to Samit's view of the human body, of course. Perhaps one day we will have robots using biological components: for the time being, at least, robots are electromechanical machines. Yet, there are lots of machines that do things, and we don't consider them robots. A washing machine, for example. A computer. What sets a robot apart?

Well, for starters we have movement and the ability to act upon the environment. A computer, for example, is pretty complex, but its moving parts (fans and disc drives, mostly) are fairly limited, and don't do much externally. It doesn't act upon its environment, beyond the need to pull in electricity and vent heat. So, we might take movement as a key criterion for a robot. We might wish to specify complex movement - so a washing machine that just spins a drum wouldn't count. There needs to be some substantial interaction with the world: movement, or gripping.

We can also differentiate an automaton from a robot - something that provides complex movements, but only in a pre-specified order. It carries on doing the same thing regardless of what happens around it. A wind-up toy might provide complex movement, for example, but it wouldn't be a robot. We expect a robot to react and adapt to its environment in some way.

This brings up four handy conditions that we can talk through:

1) A robot is an artefact - it has been designed and constructed by humans;
2) A robot can act upon its environment - it possesses actuators of some form;
3) A robot can sense its environment - it has sensors of some form;
4) A robot can adapt its actions based upon what it senses from its environment - it has an algorithm of some form that allows it to adapt what its actuators do based upon what its sensors report.

The first of these doesn't require further discussion (except in so far as we might note that a puppet is also an artefact, whereas a human is not), but let's take a look at each of the others in turn.

Actuators  - how does a robot act upon its environment?

Actuators imply movement - a motor of some form. A robot could also have loudspeakers to produce sound, or LEDs to produce light, all of which can be intelligently controlled - but so can any computer or mobile phone. So I'll focus on actuators.

It's worth noting that an actuator has two requirements: a power source, which it converts into mechanical power in the form of movement; and a control signal, which tells it how much output to produce. Actuators can be linear (producing movement in a straight line) or rotary (spinning round in a circle). The power source is often electricity, but can be pneumatic (using compressed air) or hydraulic (using liquid).
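To make that concrete, here's a minimal sketch of the control-signal side, assuming a Raspberry Pi running the gpiozero library with a motor driver wired to GPIO pin 17 (both of those are my assumptions, not anything specific to a particular robot). The driver board supplies the electrical power; all the code provides is a control signal saying how much of that power to deliver:

```python
# Sketch: a PWM duty cycle as an actuator's control signal.
# Assumes a Raspberry Pi, the gpiozero library, and a motor
# driver on GPIO 17 - adjust for your own hardware.
from gpiozero import PWMOutputDevice
from time import sleep

motor = PWMOutputDevice(17)   # the control signal channel

for duty_cycle in (0.25, 0.5, 1.0, 0.0):
    motor.value = duty_cycle  # 0.0 = off, 1.0 = full output
    sleep(2)
```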

The mechanical output can then be adapted using all kinds of mechanisms - attached to wheels or propellers to provide propulsion; attached to a four-bar linkage to produce more complex motions, such as keeping grip surfaces parallel so that a robot can grasp objects; attached to cables to drive more complex kinematic chains (for example, the Open Bionics ADA Hand).

Apart from the complex mechanism design, this is fairly straightforward (thanks, it is worth saying, to all the efforts of those who put the hard work into developing those actuators). The challenge lies in getting the right control signal. That's what differentiates robots from automata. Automata have actuators, but the control signal is pre-determined. In a robot, that control signal adapts to its environment. For that, we need the other two elements: sensors and a decision-making algorithm to decide how the system should respond.
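As a toy illustration of that difference, here are two control-signal generators side by side - the helper function read_bump_sensor is a hypothetical stand-in for real hardware I/O:

```python
def automaton_step(t):
    """An automaton: the control signal is a fixed, pre-scripted sequence."""
    script = [1.0, 1.0, 0.0, -1.0]    # the same pattern, forever
    return script[t % len(script)]

def robot_step(read_bump_sensor):
    """A robot: the control signal depends on what the sensors report."""
    if read_bump_sensor():            # obstacle encountered?
        return -1.0                   # back away
    return 1.0                        # otherwise, carry on forwards
```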

Sensors - how does a robot sense its environment?

So, a robot has to have some way of detecting its environment. Sensors can take a huge variety of forms, but as electrical signals are communicated as voltages, anything that can produce a voltage or change its resistance (which, thanks to the magic of the potential divider, can be used to change a voltage) can be measured electronically, and a huge array of sensors is available for this purpose. A few of the most obvious:

Sense of Balance - Accelerometer/Gyroscope: An accelerometer gives a voltage proportional to linear acceleration along a given axis. Since gravity produces a downward acceleration, this can be used in a stationary object to detect orientation, though it will get confused if other accelerations are involved (for example, if the accelerometer is moved linearly). A gyroscope, on the other hand, measures angular velocity - the rate at which orientation is changing - which can be integrated to track orientation. With these two, the robot immediately has some sense of balance and inertia, I guess akin to the use of fluid in the inner ear.
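A common way of fusing the two is the complementary filter: let the gyroscope track fast changes in tilt, and let the accelerometer correct the long-term drift. A minimal sketch, assuming the raw readings come from some real IMU driver elsewhere:

```python
import math

ALPHA = 0.98   # how much to trust the integrated gyro over the accelerometer

def update_tilt(tilt, gyro_rate, accel_x, accel_z, dt):
    """Estimate tilt (radians) from a gyro rate (rad/s) and two accelerometer axes."""
    gyro_estimate = tilt + gyro_rate * dt          # integrate angular velocity
    accel_estimate = math.atan2(accel_x, accel_z)  # gravity direction, valid when still
    return ALPHA * gyro_estimate + (1 - ALPHA) * accel_estimate
```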

Proprioception - Potentiometers: Linear and rotary potentiometers change their resistance as a function of linear or angular position, allowing the robot to detect the actual position of any joints to which they are attached (as opposed to where they are supposed to be). In this way, the robot can know when something has perturbed its movement (for example, one of its joints has been knocked, or has bumped into a wall). Encoders are a more advanced version of this.
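A sketch of what that looks like in practice - the ADC range and joint travel here are assumptions about a hypothetical joint, not any particular robot:

```python
ADC_MAX = 1023          # a 10-bit analogue-to-digital converter
JOINT_RANGE = 270.0     # degrees of travel across the pot's full sweep

def joint_angle(adc_reading):
    """Convert a raw ADC reading into the joint's measured angle."""
    return (adc_reading / ADC_MAX) * JOINT_RANGE

def is_perturbed(commanded_angle, adc_reading, tolerance=5.0):
    """True if the joint has been knocked away from where it was told to be."""
    return abs(joint_angle(adc_reading) - commanded_angle) > tolerance
```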

Touch - Force Dependent Resistors: As their name suggests, Force Dependent Resistors change their resistance based on the amount of force or pressure they experience. This is useful for telling when an object or barrier has been encountered - but even a simple switch could do that. The benefit of a force dependent resistor is that it gives some indication of how hard the object is being touched. That's important for grip applications, where too little force means the object will slip from the grasp, and too much will damage the object.
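That suggests a very simple grip controller: squeeze until the reading sits inside a safe band. The thresholds and the I/O helpers (read_fsr, tighten, loosen) below are all hypothetical:

```python
SLIP_THRESHOLD = 200     # below this reading, the object may slip
CRUSH_THRESHOLD = 600    # above this reading, we risk damaging the object

def grip_step(read_fsr, tighten, loosen):
    force = read_fsr()
    if force < SLIP_THRESHOLD:
        tighten()
    elif force > CRUSH_THRESHOLD:
        loosen()
    # otherwise the force is in the safe band: hold position
```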

Temperature - Thermistors: A thermistor changes its resistance according to the temperature applied to it, providing a way of measuring temperature.
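Turning that resistance into an actual temperature usually goes via the simple beta-parameter model; the R0, T0 and BETA values below are typical datasheet figures for a hypothetical 10k NTC thermistor:

```python
import math

R0 = 10_000     # resistance (ohms) at the reference temperature
T0 = 298.15     # reference temperature (kelvin), i.e. 25 degrees C
BETA = 3950     # beta constant from the thermistor's datasheet

def temperature_celsius(r_thermistor):
    """Beta model: 1/T = 1/T0 + (1/BETA) * ln(R/R0)."""
    inv_t = 1.0 / T0 + math.log(r_thermistor / R0) / BETA
    return 1.0 / inv_t - 273.15
```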

Light - Light Dependent Resistors: A light dependent resistor will change its resistance according to how much light reaches it. In this way, a robot can know whether it is in darkness or light.

Distance - Ultrasonic/Infrared Rangefinders: Ultrasonic or infrared distance sensors return a voltage based on how long an ultrasonic or infrared signal takes to bounce off an object in front of them. The robot can thus be given a sense of how much space is around it, albeit not what is filling that space, and can be equipped with sensors that stop it from bumping into objects around it.
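The underlying sum is pleasingly simple - for the ultrasonic case, distance is half the round-trip time multiplied by the speed of sound (half, because the echo has to go out and come back):

```python
SPEED_OF_SOUND = 343.0   # metres per second, in air at room temperature

def distance_metres(echo_time_seconds):
    """Convert an ultrasonic pulse's round-trip time into a distance."""
    return SPEED_OF_SOUND * echo_time_seconds / 2.0

# e.g. a 5.8 ms round trip puts the obstacle roughly a metre away:
# distance_metres(0.0058) -> ~0.99 m
```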

Hearing - Microphones: Microphones are a bit more complex, but they produce a voltage based on soundwaves that arrive. This is the basis of telephones and recording mics, and can be used for simple applications (move towards or away from a noise, for example) or more complex applications (speech recognition is the latest big thing for Google, Apple and Amazon).
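For a flavour of the simple end of that spectrum, here's a sketch of "move towards a noise" using two microphones - the sample buffers are assumed to come from some real audio-capture layer elsewhere:

```python
import math

def rms(samples):
    """Root-mean-square amplitude: a crude measure of loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def steer_towards_sound(left_samples, right_samples, deadband=0.05):
    balance = rms(left_samples) - rms(right_samples)
    if balance > deadband:
        return "turn_left"    # louder on the left: turn that way
    if balance < -deadband:
        return "turn_right"
    return "straight_on"
```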

Vision - Cameras: Computer vision is a big area, and one that is currently developing at a rapid pace. Object recognition is tricky, but can be done - face recognition has become extremely well developed. In this way, a robot can recognise whether it is pointing towards a face, for example, or can be trained to keep a particular object in the centre of its vision.
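As a sketch of that last idea - keeping a face centred - here's roughly what it looks like with the OpenCV library and its bundled Haar-cascade face detector (what you do with the offset, steering motors and so on, is left as a hypothetical hook):

```python
import cv2

# OpenCV ships a pre-trained frontal-face detector with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_offset(frame):
    """Horizontal offset of the first detected face from the frame centre."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                          # no face: nothing to track
    x, y, w, h = faces[0]
    return (x + w / 2) - frame.shape[1] / 2  # negative: face is to the left
```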

There is a wealth of others (magnetometers to detect magnetic fields; GPS location tracking; EMG to trigger prosthetics from muscle signals), but these are the most common. Putting these together, a robot can gather quite complex information about its environment, and the position of its constituent parts within it. The challenge, of course, is in making sense of this information. All the sensor provides is a voltage that tells you something about a property near that sensor.

You can get some basic stimulus-response type behaviour with simple circuitry - a potential divider that turns on a light when it gets dark outside, for example. The real challenge is in how to integrate all this information, and respond to it in the form of an algorithm.
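In software, that dark-activated light becomes a one-line decision, with the potential divider doing the work of turning the light dependent resistor's resistance into a measurable voltage. A sketch, with made-up component values and hypothetical hardware hooks (read_ldr_resistance, set_lamp):

```python
V_IN = 3.3             # supply voltage across the divider (volts)
R_FIXED = 10_000       # fixed resistor between the midpoint and ground (ohms)
DARK_THRESHOLD = 0.5   # midpoint voltage below which we call it "dark"

def divider_voltage(r_ldr):
    """Potential divider with the LDR on top: Vout = Vin * Rf / (Rf + R_ldr).
    In the dark the LDR's resistance rises, so the midpoint voltage falls."""
    return V_IN * R_FIXED / (R_FIXED + r_ldr)

def lamp_step(read_ldr_resistance, set_lamp):
    # One sensor reading, one threshold, one actuator command:
    # the simplest possible stimulus-response algorithm.
    set_lamp(divider_voltage(read_ldr_resistance()) < DARK_THRESHOLD)
```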

Algorithms: Robots are Stupid

Although we hear a lot about artificial intelligence and the singularity, robots are actually stupid and literal-minded: they do exactly what you tell them, and nothing more. They don't improvise, they don't reinterpret, they don't imagine: they mechanically follow a flowchart of decisions that basically take the form "If the sensor says this, then the actuator should do that". I'm simplifying a lot there, but the basic principle stands. They can go through those flowcharts really quickly, if designed properly, and perform calculations in a fraction of the time it would take a human. They might even be able to tune the thresholds in the flowchart to "learn" which actuator responses best fit a given situation: but that "learning" process has to be built into the flowchart. By a human. The robot, itself, doesn't adapt.
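Made literal, that flowchart is just an endless loop of pre-written branches. Every rule in this sketch was decided by a human in advance; the sensor and actuator helpers are hypothetical:

```python
def control_loop(read_bump, read_distance, drive, reverse, stop):
    while True:
        if read_bump():                # contact! get away from it
            reverse()
        elif read_distance() < 0.2:    # obstacle within 20 cm: stop
            stop()
        else:                          # nothing in the way: carry on
            drive()
```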

Now, AI research is advancing all the time, and one day we may have genuinely strong Artificial General Intelligence that can adapt itself to anything, or at least a huge range of situations. Right now, even the best AI we have is specialised, and has to be designed. Machine Learning means that the designer may not know exactly what thresholds, or what weights in a neural network, are being used to, say, recognise a given object in a photograph. But they had to design the process by which that neural network was tuned. As we increasingly depend on black-box libraries rather than building from scratch, perhaps one day robots will be able to do some extremely impressive self-assembly of code. For now, robots learn and can change their behaviour - but only in the specific ways that they have been told to.

So, an algorithm is basically a set of pre-made decisions: the robot doesn't decide, the designer has already pre-made the decisions, and then told the robot what to do in each situation. Robots only do what you tell them to. Granted, there can be a language barrier: sometimes you aren't telling the robot what you thought you were, and that's when bugs arise and you get unexpected behaviour. But that's not the robot experimenting or learning: it's the robot following literally what you told it to do. This also means that as a designer or programmer of robots, you need to have foreseen every eventuality. You need to have identified all the decisions that the robot will need to make, and what to do in every eventuality - including what to do when it can't work out what to do.
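One way to picture that: every recognised situation maps to a response the designer chose in advance, with a deliberately designed-in safe behaviour for anything unforeseen. A sketch, with situation names invented purely for illustration:

```python
RESPONSES = {
    "path_clear":  "drive_forward",
    "obstacle":    "steer_around",
    "cliff_edge":  "reverse",
    "low_battery": "return_to_dock",
}

def decide(situation):
    # The robot isn't deciding anything here: it is looking up a decision
    # its designer already made - including the fallback for the unknown.
    return RESPONSES.get(situation, "stop_and_signal_for_help")
```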

Of course, robots can have different degrees of autonomy. For example, a quadcopter is not autonomous. It makes lots of local decisions - how to adjust its motors to keep itself level, so that the user can focus on where they want it to go, rather than how to keep it in the air - but left to its own devices, it does nothing. By contrast, a self-driving car is required to have a much greater level of autonomy, and therefore has to cover a much broader range of eventualities.

Thus, there is always a slightly Wizard-of-Oz type situation: a human behind the robot. In this sense, robots are like puppets - there is always a puppeteer. It's just that the puppeteer has decided all the responses in advance, rather than making those decisions in real-time. What's left to the robot is to read its sensors and determine which of its pre-selected responses it's been asked to give.

There is a side issue here. I mentioned that robots can do calculations much faster than humans - but a given robot still has a finite capacity, represented by its processor speed and memory. It can only run through the flowchart at a given speed. For a simple flowchart, that doesn't matter too much. As the flowchart gets more complex, and more sensors and actuators need to be managed, the rate at which the robot can work through it slows down. Just to complicate matters further, sensors and actuators don't necessarily respond at the same speed as the robot can process its flowchart. Even a robot with the fastest processor and masses of memory will be limited by the inertia of its actuators, or the speed at which its sensors can sample reliably.
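A common way of handling this is to run the flowchart at a fixed rate and keep an eye on whether the work fits in the time budget. A sketch, with a hypothetical do_one_pass_of_flowchart standing in for the real control code:

```python
import time

LOOP_HZ = 100          # target: work through the flowchart 100 times a second
BUDGET = 1.0 / LOOP_HZ

def run(do_one_pass_of_flowchart):
    while True:
        started = time.monotonic()
        do_one_pass_of_flowchart()
        elapsed = time.monotonic() - started
        if elapsed > BUDGET:
            print(f"overran the loop budget by {elapsed - BUDGET:.4f}s")
        else:
            time.sleep(BUDGET - elapsed)  # wait out the rest of the slot
```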

One response to this is more local control: devolving responsibility for the robot's sensors and actuators to local subsystems. A good example of this is the servomotor, where a sensor is attached to the motor so that it knows its own position and speed, and will try to maintain the position or speed specified by a central controller. This is handy because it saves the designer from implementing steps in their flowchart to provide this control, freeing up capacity for other decisions; it also means that if something happens to perturb the actuator, it responds immediately, rather than waiting for the robot to work through to the relevant part of its flowchart.
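The inner loop of such a servo can be as simple as a proportional controller: push back in proportion to the position error, regardless of what the main flowchart is busy doing. A sketch, with hypothetical read_position and set_motor hooks:

```python
GAIN = 2.0   # how aggressively the servo corrects its position error

def servo_step(target_position, read_position, set_motor):
    error = target_position - read_position()
    set_motor(GAIN * error)   # perturb the joint, and this pushes straight back
```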

Humans, Puppets and Robots: What's the Difference?

Let's return to the motivating question, then. How is a robot similar to or different from a human or puppet?

There are some obvious similarities to humans, even if the robot is not itself humanoid. It has bones (in the form of its rigid members). It has actuators, which are effectively equivalent to muscles. It has sensors which respond to stimuli (analogous to the variety of receptors in the human body). It has a computer (brain) which runs an algorithm to decide how to respond to given stimuli. Finally, it passes messages between sensors, actuators and computer as electrical signals. All of these have analogues in a human. The difference, I guess, is that a robot is an artefact, lacks the self-repairing capacities of a human body, and its brain lacks the flexibility of human thought: a human designer has to pre-make all of the robot's decisions, whereas a human makes their own decisions in real time.

There are some obvious similarities between a robot and a puppet as well. Both are artefacts, and in both cases the decisions about how to act are taken by a human (in advance in the case of the robot, in real-time in the case of the puppet). Both lack the self-organising/self-repairing/automatically learning nature of a human.


| | Human | Puppet | Robot |
| --- | --- | --- | --- |
| Occurs… | Naturally | Artificially | Artificially |
| Is… | Biological | Mechanical | (Electro)Mechanical |
| Decisions are made by… | Self* | External human | External human |
| Decisions are made in… | Real time* | Real time | Advance |
| Sensors? | Yes | No | Yes |
| Actuators? | Yes | No | Yes |
| Learning occurs… | Automatically | Not at all | As dictated by programmer |

* At least, conscious decisions are. There are lots of decisions that are, as I understand it, "pre-programmed", so we might argue that on this front these decisions are made in advance, and query whether the "self" is really making them.

Earlier, I said that robots can perform calculations much faster than humans, but actually, now that I think about it, I don't know if that's true. Maybe their flowcharts are just a lot simpler, meaning they can work through the steps faster? Simpler, but less flexible. Perhaps Samit will enlighten us.

Also, is a puppet a robot with a human brain? We can't really talk about a puppet without an operator, so is a puppet an extension of the operator? A sort of prosthetic?

I don't know - but I'm looking forward to exploring these issues on the night!
