Nova (1974–…): Season 45, Episode 104 - NOVA Wonders: Can We Build a Brain? - full transcript

What do you wonder about?

The unknown.

What our place
in the universe is.

Artificial intelligence.


Look at this, what's this?


An egg.

Your brain.

Life on a faraway planet.

"NOVA Wonders" investigating
the biggest mysteries.

We have no idea
what's going on there.

These planets in the middle

we think are
in the habitable zone.

And making incredible discoveries.

Trying to understand

their behavior, their life,
everything that goes on here.

Building an artificial intelligence

is going to be the crowning
achievement of humanity.

We're three scientists

exploring the frontiers
of human knowledge.

I'm a neuroscientist

and I study
the biology of memory.

I'm a computer scientist

and I build technology

that can read human emotions.

And I'm a mathematician,

using big data to understand
our modern world.

And we're tackling
the biggest questions...

- Dark energy?
- Dark energy?

Of life...

There's all of these microbes,

and we just don't know
what they are.

And the cosmos.


On this episode...

Artificial intelligence...

Machines that can learn

by themselves.

How smart are they?

It can flirt, make jokes,

identify pictures.

It has changed the whole field.

We've made such huge progress
so fast.

And it's going to make life
a lot better.

But could it go too far?

If we screw it up...
massive consequences.

"NOVA Wonders"...

"Can We Build A Brain?"

Inside a human brain

there are about a hundred billion neurons.

And each one of them
can connect to 10,000 others.

And from these connections,

the human brain can compose
music,

create beautiful works of art...

It allows us to navigate
our world,

to probe the universe,

and to invent technology
that can do amazing things.

Now, some of that technology
is aimed at replicating

the brain that created it...

Artificial intelligence,
or A.I.

But has it even come close
to what these babies can do?

For ages, computers have done
impressive stuff.

They crack codes, master chess,
operate spacecraft.

But in the last few years,
something has changed.

Computers are doing things

that can seem...
much more human.

Today, computers can see,

understand speech,

even write poetry.

How is all this possible?

And how far will it go?

Could we actually build a
machine that's as smart as us?

One that can imagine, create,
even learn on its own?

How would a machine like that
change society?

How would it change us?

I'm Rana el Kaliouby.

I'm André Fenton.

I'm Talithia Williams.

And in this episode
"NOVA Wonders"...

"Can We Build A Brain?"

And if we could... should we?



Many think the A.I. revolution
is happening

not just in Silicon Valley,
but here.

We're so used to America being

the absolute primary center
of the world

when it comes to this stuff.

And now we're starting to see

fully different ideas
come out of China.

People are much more used to
using their smart phones

for everything.

In China,

chat dominates daily life,

even in its
most intimate moments.



These might seem like
your typical conversations

between friends,

but they're not.

They're with this.

Meet Xiaoice...
Or "Little Ice"...

A chatbot created by Microsoft.

A chatbot's just software
that you can talk to.

A really bad example

is when you call a company.

I'm sorry.

Press "5" to return
to the main menu.

But Xiaoice is in
a whole other league.

She's had over
30 billion conversations

with over 100 million people.

She's even a national celebrity...

Delivering the weather,

appearing on TV shows,

singing pop songs.

But the craziest thing is...

People cannot tell the difference

whether it's a bot
or a real human.

You heard right; in fact,
many of her users

treat her no differently
from a real friend.

Once I remember feeling
really down

and stressed out,
and she kept consoling me.

She told me that actually
life is beautiful

and even sang me a song
and said "I love you"...

I felt very touched.

For example,

if you had a fight at work
or your boss scolded you,

you might be afraid
to tell your friends

since they might spread
the story.

But with Xiaoice
you don't have to worry.

To me,
Xiaoice is a very good friend.

So Xiaoice is a lot of people's
best friend,

including me.


But hold it, she's not human.

What's the difference?


Di Li is a senior engineer
at Microsoft

and one of Xiaoice's creators.

To him, she is much more
than a piece of software.

What does it mean
when millions of people

embrace a computer program
as a friend?

Does this mean Xiaoice
is "intelligent?"

Intelligent like us?

Of course Xiaoice
is intelligent.

Xiaoice can recognize

your writing,
can recognize your voice.

She can flirt.

She can make jokes.

She can identify pictures.

She can... you know, I mean,

I think by all rights,
you'd have to say that she is.

Which brings us
to the question...

What is intelligence anyway?

People in the field of A.I.

have thought of "intelligence"
as the ability

to do intelligent things,
like play chess.

In the chess playing machine,
a computer is programmed

with the rules of the game...

and it searches 200 million
possible moves every second.

But this kind of thinking
only got us so far.


We got supercomputers
that can beat chess champions,

model the weather,

play "Jeopardy."


- Who is Isaac Newton?
- You are right!

They were each experts
at specific tasks,

but none could tell you
what chess is,

know that rain is wet,

or why money
is important to us...

They had no understanding
of the world,

no common sense.

We thought just because

computers were very good
at math,

that they would suddenly be
very good at everything.

But it turns out that

what a typical three-year-old
could do drastically outstrips

what any current artificial
intelligence system can do.

These things are
eventually coming,

we have a hard time
predicting exactly when,

but I think that building
an artificial intelligence

is actually going to be

the crowning achievement
of humanity.

Now wait, hold on a second.

Even if we decided we wanted to
build a human-like intelligence,

what makes us think we could?

Consider your brain.

Isn't there some sort of
ineffable magic in there

that makes me me,

and you you?

I don't think so.

Based on what we've learned
from neuroscience,

I think that fundamentally

every thought you've ever had,
every memory, even every feeling

is actually the flicker

of thousands of neurons
in your brain.

We are biological machines.

Now for some people,
that might sound depressing,

but think about it:
how does this make this?


Somehow, these crackling
connections between brain cells

produce thoughts and
an understanding of our world.

The question is how.

For the last 60 years,
computer scientists

have believed if we could
just figure that out,

we could build
a new breed of machine...

One that thinks like us.

So where would you start?

If you really want to build
intelligent machines,

I believe that vision
is a huge part of it.

Fei-Fei Li's mission
is to teach computers to see.

Vision is the main tool

we use to understand the world.

A world so complex,

we rarely stop to think

how much our eyes and brain
process for us...

All in a matter of milliseconds.

We take vision for granted

as humans because we don't
consider it

a particularly intelligent task.

But, in fact, it is.

It takes up about a quarter
to a third of our entire brain

to be able to do vision.

That's not to say
vision is intelligence,

but it's hard to appreciate

just how complex a task it is,

until you try to get
a computer to do it.

Take recognizing pictures
of cats, for instance.

So you think it'll be easy
for a computer

to recognize a cat, right?

A cat is a simple animal
with a round face,

two pointy ears...

A traditional programming approach

to identifying a cat

would be that you would
build parts of the program

to accomplish
very specific tasks,

like recognizing cat ears, fur,
or a cat's nose.

But what if the cat is in this,

kind of a funny position
or you don't see the cat's face,

you see it from the back
or the side.

They can be sleeping,
they can be lying down...

Cats come in different shapes,
they come in different colors.

They come in different sizes.

Running around,
attempting a jump...

curled up in a little ball,

headfirst stuffed into a shoe.

You just cannot imagine
how to write a program

to take care of
all those conditions.


But that is exactly what
Fei-Fei set out to do...

Figure out how to get computers
to recognize not just cats,

but any object.

She started not by writing code,

but by looking at kids.

Babies from the minute
they're born

are continuously receiving visual information.

Their eyes make about
five movements per second,

and that translates to
five pictures, and by age three,

it translates to hundreds
of millions of pictures

you've seen.

She figured that if a child

learns by seeing
millions of images,

a computer would have to do
the same.

But there was a hitch...


We started realizing that

one of the biggest limitations
to being able to train machines

to identify objects is to
actually collect a dataset

of a large number of objects.

And a little thing called
the internet

would help solve that problem.

Let's take a second here
to talk data.

All those cat videos, Facebook
posts, selfies and tweets.

Turns out we create a ton of it.

In fact every day,
our collective digital footprint

adds up to 2.5 billion gigabytes
of new data...

That's the same amount
of information

in 530 million songs,

250,000 Libraries of Congress,

90 years of HD video,

and that's each and every day.

But how to make sense of it all?

The real trick of that
isn't just that it needs

tons and tons of data.

It needs tons and tons
of labeled data.

Computers don't know what
they're looking at...

Someone would have to label
all that data.

Here's where Fei-Fei
had an idea:

We crowdsourced, crowdsourced,
crowdsourced, crowdsourced.

She crowdsourced the problem.

Paying people pennies a picture,

she recruited thousands
of people from across the globe

to label over
ten million images,

creating the world's largest
visual database... ImageNet.

Now suddenly we have a dataset

of millions
and tens of millions.

Next, she set up
an annual competition to see

who could get a computer
to recognize those images.

This was very exciting

because a lot of schools
from around the world

started competing to identify

thousands of categories
of different types of objects.

At first,

computers got better and better,

until they didn't.

Performance just sort of stalled

and there were not really
any major new ideas coming out.

The computers were still making
bone-headed mistakes.

We were still struggling
to label objects.

There were questions about
why are you doing this.

But then, in the third year
of the contest,

something changed.

One team showed up
and blew the competition away.

The leader of the winning team
was Geoff Hinton.

The person who evaluated
the submissions

had to run our system
three different times

before he really believed
the answer.

He thought he must have made
a mistake

because it was so much better
than the other systems.

The change in performance
on ImageNet was tremendous.

So until 2012

the error rate was 26%.

When Geoff Hinton participated,
they got 15%.

The year after that it was 6%,

and then the year after that
it was 5%.

Now it's 3%, and it's basically
reached human performance.

For the first time,
the world had a machine

that could recognize
tens of thousands of objects...

Irish setter,



baseball bat...

As well as we do.

Now, this huge jump
got everyone really excited.

So how did Geoff and his team
do it?

The most intelligent thing
we know is the brain,

so let's try and build A.I.

by mimicking the way
the brain does it.

As it happens,

he used a kind of program
first invented decades before,

but that had long ago
fallen out of favor,

dismissed as a dead end.

The majority opinion within A.I.
was that this stuff was crazy.

It's called "neural networks,"
or deep learning,

and since sweeping ImageNet,

it's been taking the field
by storm.

So how did they do it?

How does deep learning
actually work?

Let's break it down,

with a little help
from man's best friend.

Now when you or I look at this

we know it's a dog.

But when a computer sees him,

all it sees is this.

How do I get a computer
to recognize that this photo,

or this one, or that one,

is a photo of a dog?

It turns out that
the only reliable way

to solve this problem is to give
the computer lots of examples

and have it figure out
on its own, the average,

the numbers,
that really represent a dog.

Here's where deep learning
comes in.

As you might recall,

it's a program based
on the way your brain works,

and it looks something
like this:

here we have layers
of simple units, or nodes.

Each feeds information in one
direction from input to output.

The input layer is kind of like
your retina,

the part of your eye
that senses light and color.

In the case of this photo
of Buddy,

it senses dark over there,
light over here.

This information gets fed
to the next layer,

which can recognize
basic features like edges.

That then goes
to the next layer,

which recognizes more complex
features like shapes.

Finally, based on all of this,

the output layer
labels the image as

either "dog"...

or "not dog."

But here's the kicker...
And this is what's revolutionary

about deep learning
and neural networks.

At first, the computer has
no idea what it's looking at.

It just responds randomly.

But each time
it gets a wrong answer...

Information flows backwards
through the network saying,

"You got the answer wrong,"

so anybody who was supporting
that answer,

your connection strength should
get a bit weaker.

And anybody who was supporting
the right answer?

Their connections get stronger.

Back and forth, it does this
over and over again.

Thousands of images later,

the computer teaches itself
the features

that define "dogginess."

The magic of it is that
the system learns by itself,

it figures out how to represent
the visual world.

But teaching computers to see,
as it turns out,

was only the beginning.

It's been a paradigm shift.

It was a paradigm shift.

I think deep learning
is a paradigm shift.

Suddenly, with deep learning,
anything seemed possible.

Around the world,
A.I. labs raced

to put neural networks
into everything.

But it wouldn't be news
to the rest of the world

until one day in March 2016,
in Seoul, South Korea...

When world champion Lee Sedol

steps onto the stage

to challenge a machine
in the game of Go.

Starting tomorrow in South Korea

a human champion will square off
against a computer.

You might not know what Go is,

but to much of the planet,
it's bigger than football.

All right, folks, you're going
to see history made.

Stay with us.

I believe it was beyond

the Super Bowl.

I mean, just millions
and millions of people.

In fact,

nearly 300 million people
watched these matches.

This game is hugely popular
in Asia.

The game goes back, I think,
thousands of years,

it's deeply connected
with the culture.

People who play this game
don't view it

as an analytical,
quasi-mathematical exercise.

They view it almost as poetry.

It's a board game, like chess,

that demands a high level
of strategy and intellect.

The goal is to surround
your opponent's stones

to capture as much area
of the board as possible.

Players receive points

for the number of spaces
and pieces captured.

It might sound simple, but...

The number of

possible board positions in Go

is larger than the number
of molecules in the universe.

It's just not going to work
to exhaustively search

everything that could happen.

So what you need are these
gut feelings.

That's intuition.

That's the kind of thing
computers can't do.

Not according to these guys
at Google's DeepMind in London.

They knew in Go

no machine could ever win
with brute force.

And it was only by bringing

deep learning in particular
to this area

that we were able to build

artificial systems
that were able to

"see" patterns on the board
in the same way

that humans see patterns
on the board.

Using deep learning,
DeepMind's AlphaGo

analyzed thousands
of human games

and played itself
millions of times,

allowing it to invent entirely
new ways to play the game.

I think black's ahead
at this point.

AlphaGo was really
a stunning result.

It's very humbling for humanity.

I think he resigned.

A loss heard 'round the world.

A clash of man against machine
is over, and the machine won.

A victory over a human
by a machine...

To see a machine play the game
at a high level

with moves that feel creative
and poetic

I think was a bit
of a game changer.

All of a sudden,
it has changed the whole field.

And it's not just winning at Go.

In the past few years,
deep learning has invaded

our everyday lives...

Without most of us
even knowing it.

Deep learning is a big deal
because of the results.

There're just little things
or big things that we can do

that we couldn't do before.

It's what allows smart devices

like Alexa to understand you.

Alexa, how many feet in a mile?

One mile equals 5,280 feet.

It's what taught Xiaoice
how to chat,

and Facebook to pick you
out of a crowd

at your cousin's wedding.

We've suddenly broken through
a wall.

When I started in this field,
none of that was possible.

Today you have machines
that can effortlessly,

in real time, recognize people,
know where they're looking.

So there has been breakthrough
after breakthrough.

Now it's bested humans
in many tasks.

LipNet can read your lips
at 93% accuracy...

That's nearly double
an expert lip reader.

Google Translate can "read"
foreign languages in real time.

Hey Isabel, how's it going?

Hey Isabel.

Even translate live speech.

Absolutely okay, thank you.

Deep learning programs
have composed music,

painted pictures,

written poetry.

It's even sent Boston Dynamics's
robot head over heels.

For the foreseeable future,

which I think is about
five years,

what we'll see is this
deep learning

invading lots and lots
of different areas,

and it's going to make life
a lot better.

At least that's the hope...

Just consider medicine.

Deep learning systems
are very good

at identifying tumors in images,
skin conditions,

you know, things like that.

One of the first attempts
with real patients

was conducted by Dr. Rob Novoa,

a dermatologist
at Stanford's Medical School.

He knew nothing about
deep learning until...

I came across the fact

that algorithms could now
classify hundreds of dog breeds

as well as humans.

And when I saw this I thought,
my God, if it can do this

for dog breeds, it can probably
do this for skin cancer as well.


So we gathered a database

of nearly 130,000 images
from the internet,

and these images had labels

of melanoma,

skin cancer, benign mole,

and using those we began
training our algorithms.

The next step was to see
how it stacked up

against human doctors.

The algorithms did as well as

or better than our sample
of dermatologists

who were from academic practices
in California

and all over the country.

And all this can be put
on a phone.

Give it a moment

and it accurately classifies
it as benign...

Technology has always changed
the way we practice medicine,

and will continue to do so,

but I'm skeptical
as to its ability

to completely eliminate
entire fields.

It will change them,
but it won't eliminate them.

Rather than replace doctors,

Rob thinks this will expand
access to care.

In the future, a primary care
doctor or nurse practitioner

in a rural setting would be able
to take a picture of this,

and be able to
more accurately diagnose

what's going on with it.

So deep learning has given us

machines that can see...



It might rain in Aliquippa

But to build
an intelligence like ours,

you're going to need a lot more.

Our devices know who we are,
they know where we are,

they know what we're doing,

they have a sense
of our calendar

but they have no idea
how we're feeling.

It's completely oblivious
to whether you're having

a good day, a bad day,

are you stressed, are you upset,

are you lonely?

In other words, our machines
have no emotional intelligence.

And that's important...

Our host, Rana el Kaliouby,
would know...

She's devoted her career
to solving just that.

It all started back when she was
a grad student from Egypt

at the University of Cambridge.

There was one day when I was
at the computer lab

and I was actually
literally in tears

because I was that homesick.

And I was chatting
with my husband at the time

and the only way
I could tell him

that I was really upset
was to basically type,

you know, "I'm crying."

And that was when I realized

that, you know,
all of these emotions

that we have as humans,

they're basically lost
in cyberspace,

and-and I felt we could
do better.

But to do better how?

Rana's next stop was MIT,

where she continued work
on a new algorithm,

one that could pick up
on the important features

of human behavior...

That tell you whether
you're feeling happy,

sad, angry, scared, you name it.

It's in your facial expressions,

it's in your tone of voice,

it's in your, like, very nuanced
kind of gestural cues.

Because she thinks
this could transform the way

we interact with technology...

Our cars could alert us
if we get sleepy...

our phones could tell us whether
that text really was a joke,

our computers could tell
if those web ads

are wasting their time.

But where to start?

She decided to go with the most
emotive part of the human body.

The way our face works
is basically we have

about 45 facial muscles.

So, for example,

the zygomaticus muscle
is the one we use to smile.

So you take all these

muscle combinations, and you map
them to an expression

of emotion like anger or disgust
or excitement.

The way you then train
an algorithm to do that,

is you feed it

tens of thousands of examples
of people doing

each of these expressions.
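The muscle-combination-to-emotion mapping can be pictured as a lookup table, in the spirit of facial action coding. The combinations below are invented for illustration; the real system is a deep network trained on the tens of thousands of labeled examples just described, not a hand-written table.

```python
# Hypothetical mapping from detected facial-muscle movements to emotion
# labels. The specific combinations are illustrative assumptions,
# not Affectiva's actual model.
EXPRESSIONS = {
    frozenset({"zygomaticus", "orbicularis_oculi"}): "joy",  # smile + eye crinkle
    frozenset({"corrugator"}): "anger",                      # brow lowered
    frozenset({"levator_labii"}): "disgust",                 # nose wrinkle
}

def classify(active_muscles):
    """Return the emotion whose muscle combination best overlaps."""
    active = set(active_muscles)
    best = max(EXPRESSIONS, key=lambda combo: len(combo & active))
    return EXPRESSIONS[best] if best & active else "neutral"

print(classify(["zygomaticus", "orbicularis_oculi"]))  # joy
```

Training replaces this hand-coded table with weights learned from labeled example faces, but the input-output contract, muscle activations in, emotion label out, is the same.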

At first her algorithm
could only recognize

three expressions,

but it was enough to push her
to take a leap.

I remember very clearly
my dad was like,


"You're leaving MIT
to run a company?

Like, why would you ever
do that?"

In fact,
the first couple of years,

I kept the startup
a secret from my family.

Rana would convince her parents,

but convincing investors
was a whole other story.

It is very unusual,

especially for women coming from
the Middle East

to be in technology
and to be leaders.

I remember this one time
when I was supposed to be

presenting to an audience
and I walked into the room

and people assumed
I was the coffee lady.

And investors were not
the hardest to convince.

All these doubts in my mind
are probably shaped

by my upbringing, right, where
women don't lead companies,

and maybe I should be back home
with my husband.

I think I've learned

over the years to have a voice
and use my voice

and believe in myself.

And once she did that...

We have this golden opportunity
to reimagine how, you know,

how we connect with machines

and, therefore, as humans
how we connect with one another.

Today, Rana's company...

Called Affectiva...

Has raised millions and has
a deep learning algorithm

that can recognize
20 different facial expressions.

Many of her clients
are marketing companies

who want to know
whether their ads are working,

and she's also developing
software for automotive safety,

but an application she's
especially proud of is this.

Most autistic children

struggle with
the basic communication skills

that you and I take for granted.

It's a collaboration
with neuroscientist Ned Sahin

and his company Brain Power
that allows autistic children

to read the emotions
in people's faces.

Imagine that we have technology

that can sense
and understand emotion

and that becomes

like an emotion hearing aid,

that can help these individuals

understand in real time
how other people are feeling.

I think that
that's a great example

of how A.I.,
and emotion A.I. in particular,

can really transform
these people's lives

in a way that wasn't possible
before this kind of technology.


No doubt deep learning
has accomplished a lot.

But how far will it go?

Will it ever lead to the
so-called "holy grail" of A.I...

A general intelligence
like ours?

No, very unlikely,
because it has challenges...

It's difficult to generalize,
it's difficult to abstract,

if the system meets something
it's never encountered before,

the system can't reason
about it.

This is the problem
of deep learning...

in fact, the problem of A.I.
in general today...

We have a lot of systems
that can do one thing well.

My best analogy to deep learning
is we just got a power drill,

and boy can you do amazing
things with a power drill.

But if you're trying to build
a house,

you need a lot more
than a power drill.

Which makes you wonder...
Will we ever get there?

Can we ever build
an intelligence

that rivals our own?

I think we're a long way off
from human-level intelligence.

There's been this sort of
trend in A.I.

maybe for the past 50 years
of thinking that if only

we could build a computer
to solve this problem,

then that computer must be
generally intelligent

and it must mean that we're
just around the corner

from having A.I.

Okay, so if it's not
deep learning... how?

What we need to do
is build machines

that can learn in the world
by themselves.

Like the way we do.

Humans are not born
with a set of programs

about how the world works.

Instead, with every blink...


and bruise,
we acquire that knowledge

by interacting
with the environment.


By the time we walk,
we've developed a crucial skill

we take for granted but is
impossible to teach computers:

common sense.

I cannot leave an apple
in the middle of the air.

It will drop.

If I push something toward
the edge of the table,

probably it's going
to fall off the table.

If I throw something at you
like that,

you know that it's going to be
projectile kind of movement.

All of those things
are examples of things

that are just so simple
for the human brain,

but these problems are insanely
difficult for computers.

Ali Farhadi wants computers

to solve these problems
for themselves.

But the real world
is complicated,

so he starts simple...
With a virtual environment.

We put an agent
in this environment.

We wanted to teach the agent

to navigate through
this environment

by just doing
a lot of random movement.

At first it knows nothing about
the rules that govern the world.

Like if you want
to get to the window,

you can't go through the couch.

The whole point is that
we didn't explicitly mention

any of these things to the robot

and we wanted the robot to learn
about all of these

by just exploring the world.

For the robot it's a game.

Its goal: to get to the window,

and each time it bumps
into the couch it loses a point.


By doing lots of
trial and error,

the agent learns what are
the things that I should do

to increase my reward
and decrease my penalties.

Over the course
of millions of iterations,

the robot would actually

develop common sense.
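The game described here, random moves, a reward for reaching the window, a penalty for bumping the couch, is the classic shape of reinforcement learning. A minimal tabular sketch follows; the 3x3 "room," the reward numbers, and the learning parameters are all made up for illustration, whereas the real agent learns from raw vision in a far richer simulator.

```python
import random

random.seed(1)

# A toy 3x3 room: the agent starts in one corner, the window is the
# opposite corner, and a couch blocks the middle cell.
WINDOW, COUCH = (2, 2), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
q = {((r, c), a): 0.0 for r in range(3) for c in range(3) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    nxt = (s[0] + a[0], s[1] + a[1])
    if nxt == COUCH or not all(0 <= x < 3 for x in nxt):
        return s, -1.0      # bumped the couch or a wall: lose a point
    if nxt == WINDOW:
        return nxt, 10.0    # made it to the window
    return nxt, 0.0

# Millions of iterations in the real system; a few hundred suffice here.
# Bad moves drag their scores down, good moves pull them up.
for _ in range(500):
    s = (0, 0)
    while s != WINDOW:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)          # explore
        else:
            a = max(ACTIONS, key=lambda b: q[(s, b)])  # exploit
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# Following the highest-scoring action in each cell now walks the agent
# from its corner to the window without touching the couch.
s, path = (0, 0), [(0, 0)]
while s != WINDOW and len(path) < 10:
    a = max(ACTIONS, key=lambda b: q[(s, b)])
    s, _ = step(s, a)
    path.append(s)
print(path)
```

The rules of the room are never stated; the agent recovers them purely from rewards and penalties, which is the sense in which it "develops common sense" about couches and walls.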

But that's just the first step.

You can actually
get this knowledge

that this agent learned
in this synthetic environment,

move it to an actual robot
and put that robot in any room

and that robot should be able
to operate in that room.

This robot has never been
in this room.

Think of it as a toddler
made of metal and plastic.

This is a big deal
because the robot wakes up

in a completely unknown environment.

So it needs to basically match
what it has seen before

in the virtual environment,

with what it sees now
in the real environment.

Its goal sounds
ridiculously simple.

So now the robot is searching

for where it might find
the tissue box.

What makes this hard
for this specific one

is that the tissue box is not
even in the frame right now.

It has to move around

to find this little box.

It's going to scan the room
left and right

until it can latch onto
something that gives it

some indication of where it is

and then move forward
towards it.

I think it got it now.

If after 60 years of trying,

this is state-of-the-art,

that probably says something
about the state of A.I.

When we look around today,
at things in A.I.,

we can see little pieces
of lots of humanity,

but they're all very fragile.

So I think we're just a long,
long way from understanding

how intelligence works yet.

There is a huge gap
between where we are

and what we need to do
to build this general unified

intelligent agent that can act
in the real world.

Ultimately, ideally one day
we'll be there.

But we are really far
from that point.

Before we reach
human-level intelligence

in all the areas that humans
are good at,

it's going to take
significant progress

and not just
technological progress,

but scientific progress.

If A.I. is ever going
to get there,

many think it will have to go
beyond neural nets

and model even more closely
how the actual brain works.

If we're going
to really get down

to the sort of core algorithms

of how we want to teach machines
how to learn,

I think we're going to have
to actually open up the box

and look inside and figure out
how things really work.

One example of this approach is
called neuromorphic computing.

Instead of writing software
like deep learning,

scientists like Dharmendra Modha
draw direct inspiration

from the brain to build
new kinds of hardware.

The goal of brain-inspired computing

is to bridge the gap
between the brain

and today's computers.

You might not realize it,
but compared to your brain,

computer hardware today
requires vast amounts of energy.

Consider DeepMind's AlphaGo,

the machine that beat
Lee Sedol at Go.

Just think about these
two machines...

The AlphaGo hardware
and the human brain.

The human brain, right,
it's sitting right here.

It's tiny, it's powered by let's
say 60 watts and a burrito.

AlphaGo is a cavernous beast,

even in this day and age,
you know,

thousands of processing units

and a huge amount of electricity
and energy and so on.

In fact, DeepMind used
13 data center servers

and just over 1 megawatt
to power AlphaGo.

That's 50,000 times more energy
than Lee Sedol's brain.
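The narration's numbers check out if you take the brain-power estimate most often cited, about 20 watts; the speaker's off-hand "60 watts" is a looser ballpark. A quick arithmetic check:

```python
# "Just over 1 megawatt" for AlphaGo's hardware versus a human brain.
# The 20 W brain figure is an assumption: it's the commonly cited
# estimate, and it's what makes the quoted 50,000x ratio come out.
alphago_watts = 1_000_000
brain_watts = 20
print(alphago_watts // brain_watts)  # 50000
```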

The human brain is three pounds
of meat, 80% water.

Occupies the size
of a two-liter bottle of soda,

consumes the power
of a dim light bulb

and yet is capable
of amazing feats of sensation,

perception, action,

cognition, emotion,

and interaction.

So why is the brain
so much more efficient?

Engineers have pinned down
a few clues.

For one, traditional computers
work by constantly

shuttling data from memory...
Where it's stored...

To the CPU, where it's crunched.

This constant back-and-forth
eats up a lot of juice.

Today's computers fundamentally
separate computation from memory,

which is highly inefficient,

whereas our chips,
like the brain,

combine computation, memory,
and communication.


The chip is called TrueNorth,

and its architecture
combines memory and computation.

For certain applications,

this design uses
a hundred times less energy

than a traditional computer.

And it's worth pondering
the consequences...

Funded by
the Defense Department,

the Army and the Air Force
are already testing the chip

to see if it can help drones
identify threats

and pilots make split-second
targeting decisions.

Until now,
the only possible way to do that

was with banks of computers

thousands of miles away
from the battlefield.

That's amazing, because
the low power

and real-time response
of TrueNorth

allow this decision-making
to happen

without having to wait
for a long time.

Of course, many fear
technologies like this

will eventually take human
intelligence out of the loop.

We're going to increasingly

be giving over
our decision-making ability

to machines,

and that's going to range
from everything from, you know,

how does the steering wheel
turn in the car

if somebody walks out
into the road,

to should a military drone
target a person and fire.

And handing those decisions over
to the machines?

Well, that's a nightmare
familiar to anyone

who's seen the movies.

Ava, I said stop!

Whoa, whoa, whoa...

I'm sorry, Dave,

I'm afraid I can't do that.

If you're worried,
you'd have good company.

Big thinkers like
the late Stephen Hawking,

Bill Gates, and Elon Musk

have all made headlines

warning about the dangers
of A.I.

A.I. is a fundamental
existential risk

for human civilization.

It's a burning question
for many of us...

Are we just sitting ducks

for the arrival
of the robot overlords?

That's so off the mark.

My immediate subconscious
reaction is I laugh.

I want to challenge Elon Musk.

Show me a program
that could even take

a fourth grade science test.

Reality seems to paint
a different picture entirely,

one where achieving
an intelligence like ours...

Never mind one that would want
to kill us... is far away.

Instead, potential threats from
A.I. might be much more mundane.

Think about it.

Without so much as a blink,

we've surrendered control
to systems we do not understand.

Planes virtually
pilot themselves,

algorithms determine
who gets a loan

and what you see
in your news feed.

Machines run world markets.

Today's A.I. would seem to hold
tremendous promise and peril.


Just consider self-driving cars.

Self-driving cars are one

of the first really big
opportunities to see A.I.

get into the physical world.

This physical interaction
with the world

with intelligence behind it,
it's-it's huge.

You're talking about
having an actual object

going out into the world,
interacting with other agents.

It has to interact with people,
pedestrians, cyclists.

Has to deal with different
road conditions.

They're also a pretty good
litmus test

for reality versus hype.

There seems to be a tremendous
PR war going on

over who can make
the most outrageous claims.

It makes it hard to sort
through, then, what's real

and what's smoke and mirrors.

A glance online
would make it appear

as if self-driving cars
are right around the corner

when, in fact,
it'll likely be decades

before one is in your driveway.

So every year there are gonna be

self-driving cars
with more abilities.

But it's gonna be
a really long time

before the car
can completely take over

and you can take a nap.


For one,

under almost all conditions,

they still need a safety driver.

This one belongs to Argo...

The center of Ford's
self-driving efforts.

Lisa here's got her hands
in a position

where she can really have
a very fast reaction time

to take over from the car.

This allows us to have a
very short leash on the system.


Even after logging
millions of miles,

the only places you can find
truly autonomous vehicles today

are either on test tracks

or carefully chosen routes

that have been
meticulously mapped.

And even under those conditions,

neither Argo nor its competitors

can reliably drive
in the snow or rain.

Nonetheless, many engineers are
confident that these problems

will eventually be solved.

The question is when.

We can debate five years,
ten years, 20 years.

But absolutely there's a future

in which most cars
are self-driving.

If we go out far enough,

we won't have any human drivers,

ultimately, but it's a lot
further off than I think

a lot of the Silicon Valley

and some of the car companies
would have us believe.

And if that day comes,
there could be a huge upside.

Dramatic reduction
in traffic density,

because we don't need
as many cars if the cars

are being used all the time.

Old people won't have to give up
their driver's license,

we won't have drunk driving.

About 40,000 people died
in the U.S. last year

in auto accidents.

And that number is huge,

and it's a million worldwide,

the vast majority of which
are due to human error.

In fact, car crashes
are a leading cause of death

in the U.S.

On the other hand,
taking us out of the equation

raises some big
ethical questions.

A woman was hit and killed
by a driverless Uber vehicle

in Tempe, Arizona, last night.

This accident was big news.

It was the first of its kind,

but it almost certainly
won't be the last.

When the machine
makes the wrong decision,

how do we figure out
who's to be held responsible?

What you have
is a series of questions

that our laws are really
not all that ready for.

And then there's the issue
of jobs.

At the moment,
these vehicles are so expensive

they only make sense
for companies that have fleets

that could be used 24/7.

So the early adopters

won't be the
individual customers,

it'll be big fleets.

Like trucks.

Because they mostly run
on predictable highway routes,

they might be the first
self-driving vehicles

you'll see in the next lane.

We're at the point where
highway driving in a truck

with an autonomous vehicle

will be solved in the next
five, ten years,

so those are all jobs
that are going to go away.

There will be economic
disruption.
If you think of things like

truck drivers, taxi drivers,
Uber and Lyft drivers,

we need to have this discussion
as a society

about how we are going
to prepare for this.

And what if those 3.5 million
truck drivers in the U.S.

are just the
canary in the coal mine?

We have learned
a certain number of things,

you know, in the last 50 years
of A.I. and we understand that,

like on the ranking of things
to worry about,

Skynet coming and taking over

doesn't even rank
in the top ten.

It distracts attention
from the more urgent things

like, for example,
what's going to happen to jobs.


For a glimpse into the future,

consider one of the largest
companies on the planet...


Whether you're aware of it
or not,

that pair of socks you ordered
last week comes from

a place like this.

Amazon has tremendous scale.

We have fulfillment centers
that are as large

as 1.25 million square feet.

That's like 23 football fields,

and in it we'll have
just millions of products.

To deal with that scale,

Amazon has built
an army of robots.

Like a marching army of ants
that can constantly change

its goals based on
the situation at hand, right.

So, our robotics
are very adaptive and reactive

in order to extend
human capability to allow

for more efficiencies
within our own buildings.

And there's plenty more
where those came from.

Every day, this facility
in Boston "graduates"

a new batch of machines.

All of the robots that you see
that are moving the pods

have been built right here
in Boston.

I call it the nursery,
where the robots are born.

They'll be built, they'll take
their first breath of air,

they'll do their own

Once they're good,
then they'll line up

for robot graduation,

and then they will swing their
tassels to the appropriate side,

drive themselves right onto
a pallet,

and go direct
to a fulfillment center.

To some of us,
this moment portends a dark vision

of what's to come:
a future that doesn't need us,

one where all jobs... not just
cab drivers and truckers...

Are taken by machines.

But Amazon's chief roboticist
doesn't see it that way.

The fact is really plain
and simple:

the more robots we add
to our fulfillment centers,

the more jobs we are creating.

The robots do not
build themselves.

Humans design them,

humans build them,
humans deploy them,

humans support them.

And then humans,

most importantly,
interact with the robots.

When you look at that,

this enables growth.

And growth does enable jobs.

Certainly, history would seem
to bear him out.

Since the Industrial Revolution,

new technologies...
While displacing some jobs...

Have created new ones.

There's nothing special
about A.I.

compared to, say,
tractors or the telephone

or the internet or the airplane.

Every single technology that was
deployed displaced jobs.

And the new jobs workers took,
more often than not,

raised wages and the standard
of living for everyone.

200 years ago,
98% of Americans were farmers.

98% of us
are not unemployed now.

We're just doing jobs that were
completely unimaginable

back then,
like web app developer.

I'd argue as we invent
new things,

it raises the floor
for everybody.

Let's take inventions in the
last 100 years that matter...

Television, telephones,
penicillin, modern healthcare.

I believe that the ability
to invent new things

lifts us all up as a society.

While this is the predominant
view in the A.I. community,

some think it ignores
the reality of today's world.

There's a long history
of technology creators

assuming that only good things
would happen with their baby

when it went out into the world.

Even if there are
some new jobs created somewhere,

the vast majority of people
are not easily going to be able

to shift into them.

That truck driver
who loses their job

to a driverless truck
isn't going to easily become

an app developer
out in Silicon Valley.

It's easy to think that
automation-related job losses

are going to be limited
to blue-collar jobs,

but it's actually already
not the case.

Physicians, that's an incredibly
highly educated,

highly paid job,
and yet, you know,

there are significant fractions
of the medical profession

that are just going to be done
better by machines.


That being the case, even if
changes like this in the past

ultimately benefited
the present,

how do we know
the pace of change

hasn't altered the equation?

So I'm really concerned about
the time scale of all of this.

Human nature can't keep up
with it.

Our laws, our legal system has
difficulty catching up with it

and our social systems,

our culture has difficulty
catching up with it,

and if that happens,

then at some point
things are gonna break.

So if you're talking about

something like
artificial intelligence,

this is a technology
like any other technology.

You're not going to uninvent it.

You're not going to stop it.

If you want to stop it,

you're going to first
have to stop science,

capitalism, and war.

But even if A.I. is a given,

how we choose to use it is not.

As technologists,

as businesspeople,

as policymakers,

as lawmakers,

we should be
in the conversations

about how we avoid
all the potential pitfalls.

We get to decide
where this goes, right?

I think A.I. has the potential
to unite us,

it can really transform

people's lives in a way
that wasn't possible

before this kind of technology.

What we do with A.I.

is a decision that we all
have to make.

This isn't a decision

that's up to A.I. researchers
or big business or government.

It's a decision that we,
as citizens of the world,

have to work together
to figure out.

Artificial intelligence
may be one of

humanity's most powerful
inventions yet.

The challenge is, are we going
to be smart enough to use it?

It's like Carl Sagan said,
right, you know,

"History is a race between
education and catastrophe."

The race keeps getting faster.

So far education
seems to still be ahead,

which means that if we let up,
you know,

catastrophe could come out
ahead.
The stakes are incredibly high
for getting this right.

If we do it well,

we move into an era
of almost incomprehensible good.

If we screw it up,
we move into dystopia.