Horizon (1964–…): Season 45, Episode 7 - Where's My Robot? - full transcript

I really want a robot.

Good work, Dr Harada. Thank you.

I want my robot to be able to see.

And I want my robot to be clever.

If we had an infinite
number of scientists

and an infinite amount of time,

we could just about make
a robot recognise an umbrella? Yes.

Walking is vital.

Are you familiar with Dr Who?
Yes, I love that show.

That's a Dalek.

I want a robot I can talk to.



ROBOT VOICE: I will go get
the stapler for you. Shut up.

We're still having
a human experience

even though I'm the only
human here. Exactly.

That is all I want from a robot.

Is that too much to ask?

HE MOUTHS

For as long as I can remember
there's been a dream.

A dream that just won't go away.

It's the dream of clever,
useful robots designed to help
and entertain us.

Intelligent androids created
by humans to happily do our bidding.

Until now the dream has remained
a dream, but the promise of
a useful robot

is still regularly made
by writers, by film-makers
and, crucially, by scientists.

Which is why I'm determined
to discover exactly when

we can expect the first proper robot
to walk off the production line.



When I set off on my journey
I had no idea it'd be so strange.

This is my first android,
this is my daughter's copy.

What was your daughter's reaction?
She almost started to cry.

Could you get up and
give me a cup of coffee, please?
I've got to do what?

'And I didn't realise that
the problems facing roboticists

'would turn out to be so
unbelievably complicated.'

There is no robot today
that could do all of that.
You're doing extraordinary things.

Aren't I?

'I discovered why designing a robot
that makes its own decisions

'is a bigger challenge than anybody
could have possibly imagined.'

Coding would be insane.

It would take forever just to do
the most simple thing, surely?

We're the people just charting and
mapping and trying to get a sense of
what's going on.

'And trying to make artificial
people has revealed how little
we know about ourselves.'

Understanding intelligence, you can
argue, is the deepest, most difficult
problem in natural science today.

'But I also discovered
that there really might be
robots just around the corner.'

We call it the Stradivarius of
robots. It is the most expressive
robot in the world today.

That's what I want.
You can teach it new things and
it can help you perform those things.

'My quest for the perfect
robot companion starts here,
just south of San Francisco,

'where there exists one of
the world's most advanced robots.

'A machine, by all accounts, that
represents the very sharpest bit of
the cutting edge of robot design.'

Welcome to the
Stanford University AI lab.

This is where it happens.

What you see here is the STAIR,
the Stanford AI Robot, that we've
been working on.

This...this is the robot?
Yes, it is. Right.

Some day I hope we'll have
robots in every person's house

doing useful things like
tidying up, cooking, cleaning.

Well, I'll be honest, Andrew.

It's not quite what I was
expecting. It's an arm on wheels.

What were you expecting? I was
expecting, well, look round here.

This is much more what I sort of
had in mind,

a kind of a robot humanoid
who can just kind of walk around.

Are you familiar with Dr Who?

Yes, I love that show.
That's a Dalek.

You've created a Dalek.

'According to Andrew, Dalek or not,

'STAIR really is the future
of robotics, and to prove it

'his PhD student Morgan
is going to show me

'just how useful STAIR can be.'

STAIR, please fetch
the stapler from the lab.

STAIR, please fetch
the stapler from the lab.

ROBOT VOICE:
I will go get the stapler for you.

So... You didn't tell me
it could talk! Yeah,

it turns out getting robots to talk
is not the most difficult thing.

It's blindingly obvious to you
and me where the stapler is,

but to a robot to figure this out
is actually surprisingly difficult.

Oh, he's missed it, ooh.

'STAIR knows the basic layout of
the building and has other sensors

'that mean it can avoid crashing
into unexpected obstacles.'

Come on, left, left.

'It knows what a stapler is and it
knows which room the stapler is in,

'but other than that
it's on its own.'

Oh look, he's going down, he's going
for it. When we pick up things there
are lots of ways to do it.

When you pick up a coffee cup or
when you pick up a bottle of water,

the motions you would make with
your hands are very different
from the motions you make

when you're picking up a stapler
and it has to choose for itself how
to do that. It's done it. Yes.

He's picked up the stapler. STAIR's
done it, it's picked up the stapler.

So...

That's great.

'And it is great, sort of.

'I mean, I can see why it's clever
and autonomous and all of that.

'It's just I really expected
something that looked like it
had been built AFTER 1974.

'It somehow seems to lack ambition.

'Can this really be what they meant

'when they promised us useful
robots all those years ago?

It seems to me that any robot worthy
of the name should be able to walk,

because surely, if only on
a practical level, a really
useful robot

will have to move around in our
world just like we do.

In short, it'll have to look a bit
like us, it'll have to be humanoid.

Which is why I've come to Japan,
where an entire nation of robot
fanatics share this humanoid dream.

And little wonder. The Japanese have
been exceptional makers of complex
automata for hundreds of years.

Check this out.

This is from a 300-year-old
Japanese design. It's clockwork.

Apparently it's extraordinarily
simple to put together.

Crucially for me it mixes
the twin worlds of robots and tea.

I assume that's the head.

It's supposed to look a bit
like that.

There's definitely a piece missing.

There should be like a whole body.

There you go, simple.

Now, what this does is great.

Say that you've got a cup of tea
and you want to get it to your
friend at the other end of the table

but you can't be bothered to walk,
you pop the tea on this little fella

and he does the rest.

He even, if you do it again, will
turn round and come back to you.

Come on.

Wasn't really made for trains.

The point is, it looks like a little
person, it even appears to walk.

It could very easily have just been
a cube on wheels that you
popped a cup of tea on, but no,

it's a little Samurai,
meaning that this essentially is a
17th-century humanoid robot.

Go on.

The Japanese have never lost
their passion for automata,
but these days,

instead of making curiosities that

appear to walk, their scientists are
at the forefront of producing
robots that actually do.

In a research lab just outside Tokyo
they've created a walking robot

that is apparently
everything I want...

and more.

Step forward HRP 3.

Good work, Dr Harada. Thank you.

When I was a kid that is exactly
what I hoped robots would look like.

So what is this?

This is a HRP 3 human robot.

And what does it do?

Currently the robot can just walk.

He can just walk right now?
Just walk and...

So can we see him walk?

Yes. Oh, he is. He's off.

Right.

Very nice, nice little walk there.
Thank you.

Oh, he's doing more.

Turn around. Turning round.

'HRP 3 is unique
in walking robots because it doesn't

'need to be pre-programmed
for every eventuality.

'It can autonomously cope
with rough, uneven surfaces
by itself as it walks over them.

'Which is just as well,
because HRP 3 is supposed to be
a robust manual worker robot

'destined to work in construction
and, when needed,

'disaster environments.'

Relax.

Currently the robot can walk just
small bump.

OK, so he can tackle a small bump
right now. Are we talking...

Maybe a few centimetres.

A few centimetres? Are we able to
see him do the small bump now or...?

Today? Mm. No, no. No? OK.

What happens if he falls over?

The robot can get up by himself.

Can he? Yeah.
Is that something we can see?

No, no. Oh, OK.

Is that because
he's very expensive? Yeah. OK.

So, you may have noticed that that
interview ended slightly abruptly.

That's because we
were politely asked

if we wouldn't mind
leaving, just because HRP 3,

the, um, super robot, fell over.

Oh, blimey. I think his knees went.

Needs a little more work.

HRP 3's unfortunate demise makes it
painfully clear that building

a robot that can reliably balance,
let alone walk, will be less
straightforward than I thought

and let's not forget talking,
and seeing, and feeling,

and even thinking.

These challenges would make most
scientists hastily choose another
discipline, but not Gerald Edelman.

After a Nobel-Prize-winning career
in immunology,

Professor Edelman founded
the Neurosciences Institute
in La Jolla, California,

and set about the small matter
of replicating human intelligence.

So if anyone can tell me what
the prospects are for a decent robot
it's Professor Edelman.

Hello. Come in.

How are you? Good. Sit down.

Very nice to meet you. Thank you.

Danny, could you get up and
give me a cup of coffee, please?

I've got to do what? Get you a...
Get me a cup of coffee.

I will happily get you... Yes. Where
do I, where is it? It's over there.

Your office, your rules.
I'll get you a coffee.

Do you take sugar? No,
straight, that cup there will do.

Milk? No. That's fine.

Right. Thanks so much, good.

There you go. Here we go.

OK. So.
What do you think was happening when
I asked you to get a cup of coffee,

what was going on in your brain,
what was going on in your body?

In mine? Yes.

Well, you just said you wanted
a coffee, so... and you told me
where it was and I went...

OK. I asked you to do me
a favour, right? Yeah.

I asked you in words, and
you made sense of the words,

but you put it into something
that says "get coffee".

A little more polite, I hope.

So now when you decided to do that
you exercised what we
call motor control.

There's a part of your
human brain which is called

the pre-frontal and the pre-motor
cortex, which connects up to
something called the motor cortex,

which connects to your spinal cord
and makes your legs move.

Right. Right. The cerebellum,
which controls fast movements,

the basal ganglia which controls
the programme of motor actions,

and something
called the hippocampus, which is
involved in long-term memory.

If you think about that
whole act and my request,

there is no robot today that
could do that in a general sense.

Cos, for me, all I did was stand up
and pour some coffee. Yes.

But I guess I had to know what
coffee is, what a cup looks like.
I'd never seen those cups before.

Right, and look, you're pointing
to them with your thumb,
you're doing extraordinary things.

Aren't I? Yeah, and if I were
an AI programmer I would have to
write every single one

of those conditions in and I think
eventually would hit the wall.

Mm. So taking the vast complexities
of the brain into account,
where do we start?

What is important for a robot?

OK, I believe the first thing that
you have to start with is what we
call perception, right? You have to

be aware that I'm sitting here, that
this is a room, that the coffee's
over there, et cetera, et cetera.

Mm-hm. So perception is all.

Perception is not all,
but it's fundamental.

It's the start. It's the start
and the beginning of the structure.
Robotics 101.

OK, if you want to put it that way.

If Professor Edelman is right,
any robot worth its diodes

is going to have to see and touch
and hear its way through our world,

a world which is often surprising
for us,

let alone robots.

Producing a robot that makes sense
of our world is a tough challenge,

and one this man, Gabriel Gomez,
is rising to with a robot
called Domo.

Hey. Hello.

Hello. I'm Danny.

Gabriel. Nice to meet you.
How are you? Fine, and you?

And I'm guessing this is Domo.

You are right. He looks brilliant.

He looks beautiful.

It does look beautiful,
like a proper, proper robot.

'Domo has been designed to bring
together all the latest advances
in robotics in one beautiful body.

'Domo can see, touch,
hear, speak...'

ROBOT VOICE: I did this.

'..and move around,
from the waist up anyway.'

Is this its brain here?
Well, it looks very small if it is.

No, no, no,
this is just a camera acquisition.

So Domo has two cameras.

We can recognise face detection,
colour detection, motion detection.

Those are general clues that
capture the attention of the robot.

So what was it, face... Face.

Colour. Colour. And motion.

And motion.

'Domo sounds great.

'After all, finding and recognising
human faces is a crucial skill

'for robots to possess
if they're to do our bidding.'

Basically this part is very easy
to recognise, so if you try to...

'The T-shaped bright spot across the
brow and down the nose means face,

'but today the light is behind us,
not above us,

'and the system is finding it hard
to identify any T shapes at all.'
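
The T-shape cue Gabriel describes can be sketched in a few lines. This is a toy illustration only, assuming overhead lighting makes the brow and nose brighter than the cheeks; Domo's actual detector is more sophisticated, and, as the scene shows, the cue fails when the light comes from behind.

```python
import numpy as np

def t_shape_score(patch: np.ndarray) -> float:
    """Score how face-like a grayscale patch is by comparing the
    brightness of a T-shaped region (brow bar + nose bar) against
    the cheek regions either side of the nose."""
    h, w = patch.shape
    brow = patch[h // 5 : h // 3, :]                             # horizontal bar of the T
    nose = patch[h // 3 : 3 * h // 4, 2 * w // 5 : 3 * w // 5]   # vertical bar of the T
    cheeks = np.concatenate([
        patch[h // 3 : 3 * h // 4, : w // 4].ravel(),            # left cheek
        patch[h // 3 : 3 * h // 4, 3 * w // 4 :].ravel(),        # right cheek
    ])
    t_mean = (brow.mean() + nose.mean()) / 2
    return float(t_mean - cheeks.mean())   # positive => brighter T => face-like

# A patch with a bright T scores higher than a uniformly lit one.
face_like = np.full((20, 20), 50.0)
face_like[4:6, :] = 200.0        # bright brow
face_like[6:15, 8:12] = 200.0    # bright nose
flat = np.full((20, 20), 50.0)
print(t_shape_score(face_like) > t_shape_score(flat))  # True
```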

It's like trying to
catch a girl's eye.

It doesn't like it. Fair enough.
It isn't interested in me.
I mean, what else can he do?

It can also...

Isn't that weird? I'm calling him
"he" and you're calling it "it". It!

It should maybe be
the other way round.

It's a robot. It's a robot to you.

It's a little person to me.

Yes. It's very friendly,

and people feel comfortable.
And now we can try a different task.

Domo, shelf.

On shelf.

Put an object on it.

Now he has to align the two arms.

Come on, Domo.

You can do this.

Now he's got to
find the shelf again.

Oh, good luck, good luck, Domo.

Oh! Well, you know, that's still...
it's on the shelf. He's done well.

So Domo recognises
where is the top of the shelf by
these two green markers.

Is the fact that that's green
important?

It can be any colour.

So the only thing he's looking at
is the ball, the one colour?

Disregarding everything else.
Exactly.

OK.

Hey, yeah, yeah.

So, that was Domo.

I like Domo, Domo's good.

He can see, sort of.

You know, he can make out faces
and he can identify single coloured
objects. It's very, very impressive.

Except that the world isn't just
made up of faces

and it isn't just
made up of single coloured objects.

For it to work,
Domo's world needs to be simplified.

The problem is that
the real world is complicated.

Domo needs a vision system
that allows him to make sense
of the world as it is,

not one
marked up with strips of green tape.

Across the road from Domo's
crooked house there stands another
striking building,

where humans,
not robots, are being investigated.

'This is the McGovern Institute
for Neuroscience, home to a man...'

Hello, Professor Poggio.

'..who claims that useful sight
for robots will only arrive

'once we've
fully understood the way we see.'

How can knowing how we see
help us with making a robot see?

Vision is...from a computational
point of view,

from the complexity of
the processes involved,

is more difficult than playing chess.

Are you sure about that? Because
I've played chess against some
computers and they're very good.

That's right, so we can reproduce
the ability of playing
chess in computers.

We cannot yet
reproduce the ability of seeing.

So watching a game of chess is
harder than playing a game of chess.

That's a very good point. Yes.
Really? That's right.

A lot of complex things are going on
in your brain when you see. Right.

For instance, Danny,
what do you see outside?

Er, well, it's raining, there's
a car, a fellow with an umbrella.

An umbrella, right.

How do you recognise
it's an umbrella?

Well, it's an umbrella,
it's umbrella-shaped.

It's raining, there's a man with it.

Yeah, I mean I've seen plenty of
people with umbrellas and I own

my own umbrella and so I just looked
at it and went, that's an umbrella.

But remember, you have seen umbrellas
before but you have never seen

that particular umbrella in that
particular position under this light.

So how does your brain
recognise it is an umbrella?

Remember your brain is looking at the
image but the image is just a large
array of activity in neurons.

Uh-huh. And you have to interpret it
- the brain has.

So, you know, of course you could
in principle give to the robot

a list of all possible images
of all possible umbrellas
under all possible conditions, yes.

So every umbrella in the world,
every possible colour,

every context, every background
from every possible angle?

Right, with rain and without rain
and with this kind of light...

If we had an infinite number of
scientists and an infinite amount
of time,

we could just about make
a robot recognise an umbrella?

Yes. But then we'd have to
get on with the other stuff. Right.

'Unsurprisingly, Professor Poggio
isn't interested

'in infinite databases of images.

'He wants to give robots the much
more useful human ability

'of being able to recognise
TYPES of things.'

My belief is that we have to look
how the brain is doing it,

and we have some insight now
on how this is possible.

'Professor Poggio's belief
has been confirmed...

'by a video game he's designed.

'It's a simple game
of man versus machine.

'The computer and I
have to say whether or not

'there's an animal in the images
flashed up on the screen.

'We're being tested on our ability
to recognise types of things,

'in this case, animals.'

No animal.

Oh, blimey! Oh, there was!

Ah, take that! Yes.

So the computer equalled me.

I don't know whether to
be pleased or annoyed.

I'm pleased. That this
computer, this programme does not...

was not developed
to do better than you are doing.

It was developed
to do the same you are doing.

Professor Poggio has discovered
from studying humans that we store

in our brains a set of visual
references like colour,

texture, shapes and types of curve,

that are consulted in a fraction
of a second when we see a new image.

And it's these relatively few
visual touchstones that enable us

to quickly identify something
like a car, a building,

a human, a tree or an animal.

What's astonishing
is that Professor Poggio has managed

to teach a computer programme
to learn and judge in a similar way.

This model may actually capture
what's going on in the brain.

It's a brain in a box. Yes. Or a
little piece of the brain in a box.
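
Professor Poggio's idea, that a handful of stored visual touchstones can stand in for an infinite database of images, can be sketched very crudely as nearest-prototype matching. The feature numbers and categories below are invented for illustration; his real model learns its features hierarchically from images rather than having them handed over.

```python
import numpy as np

# Each category is summarised by a few visual "touchstones"
# (say, texture, outline curvature, symmetry) rather than by
# every image ever seen of it.  Values here are made up.
prototypes = {
    "animal":   np.array([0.7, 0.8, 0.6]),
    "vehicle":  np.array([0.2, 0.1, 0.2]),
    "umbrella": np.array([0.3, 0.2, 0.9]),
}

def classify(features: np.ndarray) -> str:
    """Label a new image by whichever stored prototype its
    feature vector lies closest to."""
    return min(prototypes,
               key=lambda k: np.linalg.norm(features - prototypes[k]))

# A never-before-seen image whose features resemble the animal
# prototype is labelled "animal" without any exhaustive lookup.
print(classify(np.array([0.65, 0.75, 0.55])))  # animal
```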

'Professor Poggio's
breakthrough with sight

'provides a potential
quantum leap for robotics.

'If it's possible to
electronically replicate

'one type of human brain ability,
then why not others?

'Why not all?

'It seems that what science
needs to do now

'is to simply work out how
an entire brain works and copy it.'

You're scanning my brain
through here, wow! OK.

'Mine for instance.

'Which is why I've come to see
a man called Dr Patel,

'who's going to read my mind.'

I'll give you a blanket now. OK.

Do people come here just
to relax sometimes? Yes.

'What Dr Patel and scanning
technician Lacey are going to do

'is to play me simple eight-note
melodies in a sealed steel vault.'

Well, it's sort of touching.

'This, they assure me, is essential
to revealing how my brain works.'

HE MOUTHS

Ready? OK.

OK, it's going to start.

SIMPLE ELECTRONIC MELODY PLAYS

We're looking at the activity
of hundreds of thousands of neurons

as they react in real time to
the music that he's hearing,

and these little squiggles
are the brain waves -

sort of the voice of these neurons
talking as he listens to music.

MELODY CONTINUES

It's very easy
to come to the view, based on, say,

looking at pictures of brain scans
that we often see in newspapers,

that the brain is a collection
of specialised modules,

of regions that each
do different things.

This might be the area
that responds to faces

or this might be an area that becomes
more active when you eat chocolate.

The brain has many specialised
regions, a fundamental aspect of
neuroscience.

The brain is not a homogenous blob.
There are different neighbourhoods
that do different things.

What Dr Patel has discovered
is that when I try to make sense of
the music

there's activity in the part
of my brain concerned with hearing.

But there's also a complex
web of interconnection

with areas concerned with language,

sight, reason and many other
apparently unmusical capabilities.

These new technologies, with all
their magnificence, and they seem

so advanced, they've in a way
brought us back to the very basics.

How do we process something as simple
as an eight-note melody, and, boom,

we've got 15 different
brain areas lighting up,

each doing different things
in concert with each other.

How do we begin to unpack that?

It's taken science
in a whole new direction.

Ah, I was falling asleep in there.

Is that normal? Yes. Is it? Yes.

So obviously, the human brain...

we all know it's a complex thing,
but I didn't realise

just how vastly complex
it really is.

Are we even getting to a point
where we can be anywhere close to

replicating that in
an artificial system?

It doesn't look like it. If you look
at what artificial systems are able
to do at the current time,

compared to what even simple
organisms like wasps or bees can do,

we're still
a long way off, I think.

When I set out on my search
for our promised intelligent
autonomous robot,

I had no idea that making one
would be so difficult.

I didn't realise that
seeing would be tricky,

and I certainly didn't anticipate
that meaningful interaction

would require a robot to have
a replica human brain, at least.

It seems that the dream, if not
in tatters, at least needs some

re-examination, which is why I'm
going back to see Gerald Edelman,

who thinks that the only way we're
going to get a decent robot

is to allow one to evolve.

Now, on the face of it,
allowing a computer to evolve

sounds like the most
ridiculous thing imaginable.

But that's precisely what
Professor Edelman is doing,

with a series of robots called,
appropriately enough, Darwin.

So this is it. This is it,
this is Darwin VII. Darwin VII.

Darwin VII, a brain-based device
that learns and conditions.

It actually can connect
sight and shape with taste.

If it sees a striped block it knows
that that's going to be good

so it'll close its jaws on it,
the gripper, if you will.

'The Darwins taste
by checking conductivity.

'The striped blocks made of
conductive copper taste good,

'whereas the non-conductive
blobby blocks made of wood

'are not to Darwin's liking.'

Is it running along a programme?
Does it know where it's going?

No. I mean, we don't either.

In fact, if we ran this twice, even
with the same layout of blocks,

it wouldn't do the same thing.

It's very much like a biological
organism. It makes the choices.

So in that sense it's not
working like a computer.

'Darwin's neural networks
allow it to perform and compare

'several tasks simultaneously,
giving it a remarkable ability.

'Once Darwin has tasted and seen
a few examples of striped

'or blobby cubes, it can
differentiate using sight alone.

'It knows whether or not it
will want to taste a block

'simply by looking at it.'
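
The kind of conditioning Darwin VII performs can be sketched as a simple learned value for each visual pattern, updated by taste and then consulted by sight alone. This is only an analogy: Darwin's real mechanism is a simulated neural network, not the lookup table used here.

```python
from collections import defaultdict

value = defaultdict(float)   # learned value of each visual pattern
LEARNING_RATE = 0.5

def taste(pattern: str, conductive: bool) -> None:
    """A tasting trial: conductive copper tastes good, wood bad.
    Nudge the pattern's learned value towards the outcome."""
    reward = 1.0 if conductive else -1.0
    value[pattern] += LEARNING_RATE * (reward - value[pattern])

def wants_to_taste(pattern: str) -> bool:
    """Judge by sight alone: approach unless it learned 'bad'."""
    return value[pattern] >= 0.0

for _ in range(3):   # a few tasting trials
    taste("striped", conductive=True)
    taste("blobby", conductive=False)

print(wants_to_taste("striped"), wants_to_taste("blobby"))  # True False
```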

What it's doing now...
is it thinking?

No, no, no way. I think...although
some philosopher might argue
with me,

I don't think so.

That doesn't mean we
aren't aiming at doing that.

Well, exactly, that surely
must be... It's a great goal.

In other words, I think, personally,

I think it is the greatest goal
of science in the 21st century.

'I really am much encouraged by
Professor Edelman's optimism

'and, as it turns out, he's not
the only one who's anticipating

'the kind of robots
I think we all want.'

Now, here is something
quite exciting.

It's a quote, it's a quote from
one of MIT's leading artificial
intelligence experts,

and it provides us
with hope for robotics.

Here's the quote.
"In from three to eight years..."
- three to eight years -

"..we will have a machine with
the general intelligence
of an average human being.

"A machine that will be able to
read Shakespeare, grease a car..."

- I don't know what that means -
"..play office politics,
tell a joke.

"At that point the machine will
begin to educate itself
with fantastic speed.

"In a few months it'll be at genius
level and a few months after that

"its powers will be incalculable."

Three to eight years!
Incalculable power!

The problem is, the fellow who
said that, Marvin Minsky...

he said that in 1970.

Still hasn't happened.

When Marvin Minsky
made that prediction,

he was Professor of Computer Science
and Artificial Intelligence at MIT,

a post held today by Rodney Brooks,
who, as the inventor

of the world's most successful
robotic vacuum cleaner,

takes a more pragmatic view.

When I was a kid growing up
in Australia,

I had a book which said
that the human brain

is a telephone switching network.

This was from the '50s -
made out of relays,

mechanical switchers and stuff.

And not too many years ago
I was giving a talk and someone asked

the question I'd been waiting
for for a while,

"Isn't the brain just
like the worldwide web?"

The brain is always likened to the most
complicated machine we currently have,

and what's the most complicated
machine we currently have?
It's digital computation.

So it could be
that a digital computer

is not intrinsically powerful enough

to do whatever it is that's
happening in our heads.

And then there's
the additional question,
are we smart enough to do it?

Yeah, which right now
we don't seem to be.

Well, making progress. It took us
a long time to build airplanes.

It should take us a long time to
build humans. Why are we bothering?

I'm slightly...not disappointed,
but I'm slightly perturbed

at this point. Oh, it doesn't
mean I'm not trying,

I'm just trying to look, you know,
realistically at the big picture.

One of my colleagues here at MIT
used to start the first course

in artificial intelligence,
the first lecture,

he would talk about a pet raccoon
he had had, and raccoons
are very dextrous.

It's hard to keep them in cages
because they manage to undo the locks
and things

and he said
he's fascinated watching this raccoon
with its hands do all this stuff

and he said it never occurred
to him to think that that raccoon

might build a robotic raccoon
as smart as itself.

Are you saying, then,
that we may not be bright enough?
Maybe we're not bright enough, yeah.

You know, we like to think we can
do anything but maybe we're just not.

Taxi!

A...any of you?

You.

The huge problem of making a robot
that's intelligent in its own right

has inspired some roboticists
to take a very different approach -

and I've come to Osaka
to meet one of them.

Now, Professor Ishiguro -
this is him here,

looking a little bit stern -
isn't just a visionary
when it comes to robotics.

He was also recently voted the 26th
best living genius in the world.

Hello. I'm looking
for Professor Ishiguro?

Third floor. The third floor,
thank you very much.

Professor Ishiguro thinks that
the cleverest way of investigating

the possibilities of robotics

is to try and understand
human-robot interaction.

And he's going about it
in a very personal way.

Hey!

Professor Ishiguro? Hi.

I'm Danny. Don't get up.

No, no, sorry, I cannot do that.

How are you?
I'm fine, how are you?

Not so bad. Listen, thanks very much
indeed for sparing the time
to speak to me.

You're a very busy man? Oh, yeah.

I've heard you've even built
a kind of a clone of yourself,

a kind of android clone.
Is that true?

Yeah. Actually it's...you know,

you are talking with my copy.

I did suspect it, I'll be honest.

Something gave it away.

Have you always had
a fascination with robotics?

Well, you know, robotics is
quite interesting because, you know,

in my...what I want to do
is understand what is a human

by being a robot.

Let's have a experiment. Please
push my cheek with your finger.

I should what? Yeah, push my cheek.

Right, actually when you push my
cheek I can feel something mentally.

OK. I'm just watching two videos
but I'm not getting sensory feedback

but still I can indeed
feel something.

Is that right? Yeah, this is
a very strange feeling.

So you're saying that we're still
having a human experience

even though I'm the only human here?

Exactly. Exactly. So I recognise
this robot body as my body

and therefore, when you touch
the robot face, I can feel something.

Sure. Right.

I have a real tongue and tooth.
Let's have a look.

Oh, yeah, you do. I mean, I'd feel
awkward doing this to a human

but can I touch your tongue?

Sure.

Agh! You just bit me,
Professor Ishiguro.

That's inappropriate!

How do you feel? A little odd.

'Professor Ishiguro started
making people five years ago

'as an investigation into
what it is to be human.

'That knowledge, he claims,
will help us

'to build robots that we're
truly comfortable with.'

Robots everywhere.

Yeah, this is my first android.

Your first android?
Yeah, this is my daughter's copy.

Wow!

Hello!

Yeah, a little bit suspicious
of me, as children generally are.

So what was your
daughter's reaction?

Yeah, my daughter scared
very much. She was scared? Yeah.

I think I'd be a little bit scared
if I walked into a room

and you'd created a clone of me.

She almost started to cry. Really?

And is your daughter OK with it now?

Oh, with this one? Yeah.
No. No, still? Still.

It's something from
a horror movie almost. Right.

You know, in the beginning I didn't
expect this kind of negative effect.
Oh, really?

I just tried to make
a copy of my daughter.

But when the robot, they are very
close to the human, robot also

need to have human-like movement,
human-like perceptions, you know.

We need to have a good balance.

That's the thing.
When I see the other robots,
the more metallic ones,

the cliched ones that you might see
in films, I feel absolutely normal.

This one scares me a little bit.

I think because...it's like
you say, it's kind of human

but it's obviously not.

'Professor Ishiguro has discovered
that being a bit human

'is simply not good enough.

'If you're going to make an android,

'it had better be indistinguishable
from the real thing, or you might
as well not bother,

'which is why
he's constantly improving
his androids' capabilities.'

Hey!

She's great.

Thanks. I'm mildly attracted to her.

You know, this robot also
has human-like sensations.

See, here and here.

Oh, she likes that.

There you go! No, no, no,
she doesn't!

Oh, she doesn't like it.

Can I take this one?

If you want to, you know,
I can make another female for you.

OK, my wife may be watching
so I'll talk to you in private.

'I think it's fair to say that
Professor Ishiguro's androids,

'even the beguiling rubber lady,

'have some way to go before it'd be
possible to genuinely accept them

'as anything other than
curiosities.'

But the idea that psychology is
just as important as hardware
and programming

has inspired a roboticist in the US

to take a completely
different approach.

Which is as far away from making
fake people as it's possible to be.

Leonardo is, I would say,
the most sophisticated

social robot in the world today.

We call it
the Stradivarius of robots.

It is the most expressive
robot in the world today.

You're making some quite
bold claims there,

but am I going to be able to see
any of this? I'll tell you why.

I've met many scientists recently

and they all make these claims

and then when I say, "Can I see it?"
they've gone, "No."

So I'm going to ask you now,
can I see some of this happen?

Oh, it's your lucky day. Yeah? Yes!

Hello, Leo.

Leo, this is Elmo.

Can you find Elmo?

Hey, Leo, look at Elmo.

Wow, isn't he awesome?

Yeah.

Leo's never seen this before,
he doesn't know what it is.

When Matt introduces this novel thing
he gets really excited about it.

Hey, look at Elmo.

So he goes,
maybe I should get excited.

He's picking up the emotional
wavelength of what Matt is doing

with these novel entities in
the world and Leo adopts that.

Leo, this is Cookie Monster.

Leo, Cookie Monster is very bad.

He's very bad, Leo.

He's a scary monster.

He wants to eat all your cookies.

So the blue monster,

Matt isn't so keen on.

Exactly.

It turns out that for children,
when they're about a year old,

a lot of what they learn is through
social referencing.

He wants to steal your cookies.

If the mother acts like, "Don't do
that!" the children might not know
what it is they shouldn't be doing

but they pick up on that affect
and start associating,
"That's a negative thing,

I shouldn't do that," or, "This is
something that...you like it, then
I should probably like it."
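The social-referencing mechanism Cynthia describes can be sketched in a few lines: the robot borrows the demonstrator's emotional appraisal of a novel object, without needing to understand what the object is. The class and method names below are hypothetical, purely for illustration, and are not the Leonardo system's actual code.

```python
# Sketch of social referencing: the robot copies the demonstrator's
# emotional appraisal of a novel object, then reacts accordingly.

class SocialReferencingRobot:
    def __init__(self):
        # Learned valence per object: +1 liked, -1 disliked, 0 unknown.
        self.appraisals = {}

    def observe(self, obj, demonstrator_affect):
        """Adopt the demonstrator's affect ('excited' or 'fearful')
        as the robot's own appraisal of the object."""
        valence = {"excited": +1, "fearful": -1}.get(demonstrator_affect, 0)
        self.appraisals[obj] = valence

    def reaction(self, obj):
        """Approach liked objects, avoid disliked ones."""
        v = self.appraisals.get(obj, 0)
        return {+1: "approach", -1: "avoid", 0: "neutral"}[v]

leo = SocialReferencingRobot()
leo.observe("Elmo", "excited")            # Matt is enthusiastic
leo.observe("Cookie Monster", "fearful")  # Matt acts scared
print(leo.reaction("Elmo"))           # approach
print(leo.reaction("Cookie Monster")) # avoid
```

The point of the sketch is that the robot never needs a concept of "Elmo" or "Cookie Monster"; the emotional cue alone is enough to shape its behaviour, just as with a one-year-old child.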

OK, Leo. I'll take him away.

He's designed that way to get
people to want to interact
in a way that's more like

a sort of parent-to-child or
parent-to-puppy sort of relationship.

And we'll spend more time
being more patient with him.

You might be more patient with it,
you might enjoy it more, you know...

'Now, I suspect what Leo is actually
doing is simply

'linking high or low tone of voice
with one of two colours

'and then moving in one
of two pre-programmed ways.

'But it doesn't matter,
because I'm hooked emotionally.'

Ah! That's more like it.

Come on, Matt, give him it.

Have it.

You can have it.

Hey... Oh, now he doesn't want it.

'What Cynthia has
discovered is that robots
don't necessarily have to be clever

'for them to start being
really effective helpers,

'and even friends, like these ones
that offer weight loss support
on their little screens.

'In a recent six-week study,
they were tested against laptops
giving identical advice.'

The only thing that this robot did
that was different than the computer

was it has this embodiment and
it has eyes like in the camera here,

so it can find your face and
make eye contact. Right, let me
try and guess the result.

What do you think? It wasn't
so obvious at the time!
Oh, really? We hoped...

I would assume that this
would be a storming success,
cos were it a computer,

then I would look at
my computer and I would think, well,
that's where I do my e-mails...

There you go! You're on top of this!
High five. Absolutely.

What we found, to our surprise,
was that people stuck to
the programme almost twice as long

as the computer or the pen and paper.
But the only difference was
that it looks like this.

The humanoid robot is great,
one that could walk around
and do things just as I do.

It would be an amazing achievement,
absolutely. But I don't think
it's necessary and, in fact,

if you look to the future,
or possible future, we'll
have all kinds of robots.

You know, there could be the humanoid
but there's also, already, there's
the robot vacuum cleaner

and there's the robot toys.
You've got to broaden your
thinking to what roles could...

Technology that can interact
with you, communicate,
understand your goals,

understand something about
your feelings and so forth,

could really help you
live a better quality of life.

It seems unlikely that I'll be
getting the autonomous humanoid
robot that I wanted any time soon.

But maybe I've been wrong
about what it is I really want.

This invention is
better than the airplane.

But aren't you removing a piece of
the sort of...the human experience?

'Maybe I don't want a device that
actually has a mind of its own.'

He's really smart with his hands and
his typing and he thinks he's going
to build a robot just like him.

'Perhaps it doesn't matter
whether it's clever or not,
so long as it seems clever.'

Is it thinking?

'And so long as it can do
all the boring stuff that
I can't be bothered with.'

Fella with an umbrella.

'The really exciting thing
is that the more science discovers
about how we work,

'the better robots become.'

Is this its brain?
It looks very small if it is.

It looks to me that although
it might not walk for a while...

..a useful robot really is
just around the corner.

I will go get the stapler for you.

'Of all the robots I've seen,
STAIR is the most impressive.

'It does what it's told.

'It can move around by itself
without falling over.

'It can recognise the things
it needs to and work out how best
to deal with them on its own.

'And ironically, the very fact that
it isn't trying to be a fake person

'is where
its greatest strength lies.'

We were evolved, in a way,

to hunt animals and survive
on the savannahs and climb mountains,

and we don't need a robot
to do many of those things.

In some ways, this robot sees things
better than you and I can.

And using existing technologies,
not technologies based on how
a human sees or how a human walks,

but how a robot could
be most functional. Yeah.

Come to the lab and
I'll show you some of the sensors
we have in the robot.

It's called a Swiss Ranger.
Swiss Ranger? Yeah.

If you look on the monitor
on the left, it's actually
seeing a picture of us

and what the colours on
the monitor represent is actually not
light intensity but instead distance.

So if I walk back...

I should... Oh yeah, there I go.
There I am, I'm yellow.

And if you could go even further
you'd turn red.

And so we use this to let
the robot see things in 3D.

There you go, there we are.
There we are.

When I think about robotics today,
it feels a little bit like
the 1970s of the computing age,

where there are suddenly all
these new cameras, new sensors and
things are finally coming together.

So we're on the brink
of a revolution?

That is how many of us feel and
that's certainly what I hope is true.