We Need to Talk About A.I. (2020) - full transcript
A new film from acclaimed director Leanne Pooley.
MAX TEGMARK:
If we build artificial
general intelligence,
that'll be the biggest event
in the history
of life on Earth.
LOUIS ROSENBERG: Alien intelligence inside a computer system that
has control over the planet,
the day it arrives...
JAMES CAMERON:
And science fiction
books and movies
have set it up in advance.
What's the problem?
HAL: I think you know
what the problem is
just as well as I do.
What are you
talking about, HAL?
HAL: This mission
is too important
for me to allow you
to jeopardize it.
I don't know what you're
talking about, HAL.
KEIR DULLEA:
HAL, a super-intelligent
computer.
We'd call it AI today.
My character, Dave,
was the astronaut
in one of the most prophetic
films ever made
about the threat
posed to humanity
by artificial intelligence.
HAL was science fiction.
But now, AI is all around us.
Artificial intelligence
isn't the future.
AI is already here.
This week a robot
ran for mayor
in a small
Japanese town.
The White House says
it is creating
a new task force
to focus
on artificial intelligence.
Google has already
announced plans
to invest more in AI research.
MALE NEWS ANCHOR: IBM sending an artificial intelligence robot into space.
DULLEA:
Because of my small role
in the conversation,
I understand
that we're on a journey,
one that starts
with narrow AI.
Machines that perform
specific tasks
better than we can.
But the AI of science fiction
is artificial general
intelligence,
machines that can
do everything
better than us.
Narrow AI is really
like an idiot savant
to the extreme,
where it can do something,
like multiply numbers
really fast,
way better than any human.
But it can't do
anything else whatsoever.
But AGI, artificial
general intelligence,
which doesn't exist yet,
is instead
intelligence
that can do everything
as well as humans can.
DULLEA: So what happens
if we do create AGI?
Science fiction authors
have warned us
it could be dangerous.
And they're not the only ones.
ELON MUSK: I think people should be concerned about it.
AI is a fundamental risk
to the existence
of human civilization.
And I don't think people
fully appreciate that.
I do think we have to
worry about it.
I don't think it's inherent
as we create
super intelligence
that it will
necessarily always
have the same goals in mind
that we do.
I think the development
of full
artificial intelligence
could spell the end
of the human race.
[ECHOING]
...the end of the human race.
Tomorrow, if the headline said
an artificial intelligence
has been manufactured
that's as intelligent
or more intelligent
than human beings,
people would argue about it,
but I don't think
they'd be surprised,
because science fiction
has helped us imagine all kinds
of unimaginable things.
DULLEA: Experts are divided
about the impact
AI will have on our future,
and whether or not Hollywood
is helping us plan for it.
OREN ETZIONI: Our dialogue
has really been hijacked
by Hollywood
because The Terminator
makes a better blockbuster
than AI being good,
or AI being neutral,
or AI being confused.
Technology here in AI
is no different.
It's a tool.
I like to think of it
as a fancy pencil.
And so what pictures
we draw with it,
that's up to us as a society.
We don't have to do
what AI tells us.
We tell AI what to do.
[BEEPING]
DAVID MALONE: I think
Hollywood films
do help the conversation.
Someone needs to be thinking
ahead of what
the consequences are.
But just as we can't
really imagine
what the actual science
of the future
is going to be like,
I don't think we've begun
to really think about
all of the possible futures,
which logically spring out
of this technology.
MALE REPORTER:
Artificial intelligence has
the power to change society.
A growing chorus
of criticism
is highlighting the dangers
of handing control
to machines.
There's gonna be
a lot of change coming.
The larger long-term concern
is that humanity will be
sort of shunted aside.
NEWS ANCHOR:
This is something
Stanley Kubrick and others
were worried about
50 years ago, right?
DULLEA: It's happening.
How do we gauge the urgency
of this conversation?
ROSENBERG: If people saw on radar right now
that an alien spaceship
was approaching Earth
and it was 25 years away,
we would be mobilized
to prepare
for that alien's arrival
25 years from now.
But that's exactly
the situation that we're in
with artificial intelligence.
Could be 25 years, could be 50 years,
but an alien intelligence will arrive
and we should be prepared.
MAN:
Well, gentlemen, meet Tobor.
An electronic simulacrum
of a man.
Oh, gosh. Oh, gee willikers.
Even though much work remains
before he is completed,
he is already
a sentient being.
[MURMURING]
RODNEY BROOKS:
Back in 1956 when the term
artificial intelligence
first came about,
the original goals
of artificial intelligence
were human-level intelligence.
He does look almost kind,
doesn't he?
[BEEPING]
BROOKS: Over time that's proved to be really difficult.
I think, eventually,
we will have human-level
intelligence from machines.
But it may be
a few hundred years.
It's gonna take a while.
And so people are getting
a little over-excited
about the future capabilities.
And now, Elektro,
I command you to walk back.
Back.
MALONE:
Almost every generation,
people have got
so enthusiastic.
They watch 2001,
they fall in love with HAL,
they think,
"Yeah, give me six weeks.
I'll get this sorted."
It doesn't happen and everyone gets upset.
DULLEA:
So what's changed
in AI research?
MALONE: The difference with the AI we have today
and the reason AI suddenly took this big leap forward
is the Internet.
That is their world.
Tonight the information
superhighway,
and one of its main
thoroughfares,
an online network
called Internet.
MALE REPORTER: In 1981,
only 213 computers
were hooked to the Internet.
As the new year begins,
an estimated
two-and-a-half million
computers
will be on the network.
AIs need to eat.
Just like sheep need grass,
AIs need data.
And the great prairies of data
are on the Internet.
That's what it is.
They said, "You know what?
"Let's just let them loose out there."
And suddenly,
the AIs came to life.
FEMALE REPORTER: Imagine
thousands of talking robots
able to move as one unit,
taking on crime, fires,
and natural disasters.
Artificial intelligence
platform Amper
has created the first album
entirely composed
and produced
by artificial intelligence.
Next week, Christie's will be
the first auction house
to offer artwork created
by artificial intelligence.
Driverless cars, said to be
in our not-so-distant future.
DULLEA: So even though
we don't have AGI
or human-level
intelligence yet,
are we ready to give autonomy
to the machines we do have?
Should self-driving cars
make ethical decisions?
That is the growing debate
as the technology
moves closer towards
mainstream reality.
MARY CUMMINGS: This idea
of whether or not
we can embed ethics
into machines,
whether or not they are
automated machines
or autonomous machines,
I'm not sure
we've got that figured out.
If you have
a self-driving car,
it's gonna have to make
ethical decisions.
A classic one people have been
thinking a lot about is...
I don't know if you've heard
of trolley problems.
The trolley problem. The trolley problem.
The trolley problem. The trolley problem.
CUMMINGS:
In the trolley problem,
whether or not
a driverless car
is going to save the driver
or run over
a group of school children
crossing the road.
ETZIONI: Will the car
go right and hit the old lady
or will it go left and kill
the four young people?
MICHAL KOSINSKI:
Either kill five people
standing on a bus stop
or kill yourself.
It's an ethical decision
that has to be made
in the moment,
too fast for any person
to get in there.
So it's the software
in your self-driving car
that's gonna make
that ethical decision.
Who decides that?
KOSINSKI:
Smart society,
society that
values human life,
should make a decision that
killing one person
is preferable
to killing five people.
Of course,
it does raise the question
who's gonna buy that car.
Or, you know, are you gonna
buy the car where actually,
[WHISPERS] it'll swerve
and you'll be fine,
but they're goners.
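The utilitarian rule Kosinski describes, that killing one is preferable to killing five, can be sketched in a few lines of code. This is purely illustrative; the scenario labels and casualty counts are taken from the film, and no real autonomous-vehicle software is structured this way.

```python
# Toy sketch of the utilitarian rule discussed above:
# pick the collision outcome that harms the fewest people.

def choose_outcome(options):
    """Pick the option with the fewest expected casualties.

    `options` is a list of (label, casualties) pairs, one per
    possible maneuver.
    """
    return min(options, key=lambda opt: opt[1])

# The scenario from the film: swerve (kill the occupant)
# or continue (kill five pedestrians).
decision = choose_outcome([("swerve", 1), ("continue", 5)])
print(decision)  # ('swerve', 1)
```

Which, as the speakers note, immediately raises the question the code cannot answer: who would buy the car that chooses "swerve"?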
ETZIONI: There are hundreds
of those kinds of questions
all throughout society.
And so, ethical philosophers
battle back and forth.
ETZIONI: My peeve with
the philosophers is
they're sucking the oxygen
out of the room.
There's this metaphor
that I have of
you have a car
driving away from a wedding,
and the car is driving,
and behind it there are cans.
And the metal cans are
clanking and making noise.
That car is science
and technology
driving forward decisively.
And the metal cans clanking
are the philosophers
making a lot of noise.
The world is changing
in such a way
that the question of us
building machines
that could become
more intelligent than we are
is more of a reality than a fiction right now.
DULLEA: Don't we
have to ask questions now,
before the machines
get smarter than us?
I mean, what if they
achieve consciousness?
How far away is that?
How close are we
to a conscious machine?
That is the question
that brings
philosophers and scientists
to blows, I think.
You are dead center
of the greatest
scientific event
in the history of man.
If you've created
a conscious machine,
it's not the history of man.
That's the history of gods.
The problem with consciousness
is hardly anybody
can define exactly what it is.
DULLEA: Is it consciousness
that makes us special?
Human?
MALONE: A way of thinking
about consciousness
which a lot of people
will agree with...
You'll never find something that appeals to everyone.
...is it's not just
flashing lights,
but someone's home.
So if you have a...
If you bought
one of those little
simple chess-playing machines,
it might beat you at chess.
There are
lots of lights flashing.
But there's nobody in
when it wins,
going "I won."
All right?
Whereas your dog
is a rubbish chess player,
but you look at the dog
and you think,
"Someone's actually in.
It's not just a mechanism."
People wonder whether general intelligence
requires consciousness.
It might partly depend on what
level of general intelligence
you're talking about.
A mouse has some kind
of general intelligence.
My guess is, if you've got
a machine which has
general intelligence
at the level of,
say, a mouse or beyond,
it's gonna be a good contender for having consciousness.
SAM HARRIS: We're not there yet.
We're not confronted by humanoid robots
that are smarter than we are.
We're in this
curious ethical position
of not knowing where and how
consciousness emerges,
and we're building
increasingly complicated
minds.
We won't know whether
or not they're conscious,
but we might feel they are.
MALONE: The easiest example
is if you programmed
into a computer everything you knew about joy,
every reference in literature,
every definition,
every chemical compound
that's involved with joy
in the human brain,
and you put all of that
information in a computer,
you could argue
that it understood joy.
The question is,
had it ever felt joy?
DULLEA: Did HAL have emotions?
Will AIs ever feel
the way we do?
MICHAEL LITTMAN:
It's very hard to get
machines to do the things
that we don't think about,
that we're not conscious of.
Things like detecting
that that person over there
is really unhappy
with the way that...
Something about
what I'm doing.
And they're sending
a subtle signal
and I probably
should change what I'm doing.
We don't know how to build
machines that can actually be
tuned in to those kinds
of tiny little social cues.
There's this notion that maybe
there is something magical
about brains,
about the wetware
of having
an information
processing system
made of meat, right?
And that whatever we do
in our machines,
however competent
they begin to seem,
they never really
will be intelligent
in the way that we experience
ourselves to be.
There's really no basis for that.
TEGMARK: This silly idea
you can't be intelligent
unless you're made of meat.
From my perspective,
as a physicist,
that's just carbon chauvinism.
Those machines
are made of exactly
the same kind of
elementary particles
that my brain is made out of.
Intelligence is just
a certain...
And consciousness is just
a certain kind of information processing.
DULLEA: Maybe we're not
so special after all.
Could our carbon-based brains
be replicated with silicon?
We're trying to build
a computer-generated baby
that we can teach
like a real baby,
with the potential for general intelligence.
To do that, you really
have to build
a computer brain.
I wanted to build a simple toy brain model
in an embodied system
which you could interact with
face-to-face.
And I happen to have an infant at home,
a real baby.
And I scanned the baby
so I could build a 3D model out of her
and then use her face
as basically the embodiment
of the system.
If a brain which is made out of cells,
and it's got blood pumping, is able to think,
it just may be possible
that a computer can think
if the information is moving in the same sort of process.
SAGAR: A lot of
artificial intelligence
at the moment today
is really about
advanced pattern recognition.
It's not about killer robots
that are gonna take over
the world, et cetera.
[AUDIENCE LAUGHING]
So this is Baby X.
She's running live
on my computer.
So as I move my hand around,
make a loud noise,
[CLAPS]
she'll get a fright.
We can kind of zoom in.
So she basically can see me
and hear me, you know.
Hey, sweetheart. [AUDIENCE LAUGHING]
She's not copying my smile,
she's responding to my smile.
So she's responding to me
and we're really concerned
with the question
of how will we
actually interact
with artificial intelligence
in the future.
Ah...
When I'm talking to Baby X,
I'm deliberately
modulating my voice
and deliberately doing big facial expressions.
I'm going "Ooh,"
all this stuff
because I'm getting
her attention.
Ooh.
And now that I've got her attention,
I can teach something.
Okay, so here's Baby X,
and this is... She's been
learning to read words.
So here's
her first word book.
So let's see what
she can see.
Turn to a page...
And here we go.
Let's see what she...
What's this, Baby?
What's this? What's this?
What's this? Sheep.
Good girl.
Now see if she knows
what the word is.
Okay.
Baby, look over here.
Off you go.
What's this? What's this?
Sheep. Good girl.
Let's try her
on something else.
Okay.
[WHISPERS] Let's try on that.
[NORMAL VOICE] What's this?
Baby, Baby, over here.
What's this?
Baby. Look at me.
Look at me. What's this?
Baby, over here.
Over here.
Puppy. Good girl.
That's what
she's just read.
Ooh. [LAUGHS]
The most intelligent system
that we're aware of
is a human.
So using the human
as a template,
as the embodiment of Baby X,
is probably the best way
to try
to achieve
general intelligence.
DULLEA: She seems so human.
Is this the child
who takes our hand
and leads us to HAL?
ANDY CLARK:
Whether it's in a real world
or a virtual world,
I do think embodiment
is gonna be
very crucial here.
Because I think that
general intelligence
is tied up with being able
to act in the world.
You have to have something,
arms, legs, eyes,
something that you can
move around
in order to harvest
new information.
So I think to get systems
that really understand
what they're talking about,
you will need systems
that can act
and intervene in the world,
and possibly systems
that have something
almost like a social life.
For me, thinking and feeling
will come together.
And they will come together
because of the importance
of embodiment.
The stories that
we tell ourselves
open up arenas
for discussion,
and they also help us
think about things.
Ava!
BRYAN JOHNSON:
The most dominant story...
Go back to your room.
...is AI is scary.
There's a bad guy,
there's a good guy...
That is
a non-productive storyline.
Stop!
Ava, I said stop!
Whoa! Whoa! Whoa!
The reality is AI is
remarkably useful to us.
MALE REPORTER: Robot surgeons are increasingly used in NHS hospitals.
They don't get tired,
their hands don't tremble,
and their patients
make a faster recovery.
MAN: It alerted me that
there was one animal in heat.
She was ready to be bred,
and three animals that
may have
a potential health problem.
MAN: A Facebook program
can detect suicidal behavior
by analyzing user information,
and Facebook CEO
Mark Zuckerberg believes
this is just the beginning.
MAN 2: Here is someone
who has visual impairment
being able to see
because of AI.
Those are the benefits
of what AI does.
Diagnose cancer earlier.
Predict heart disease.
MAN: But the revolution
has only just begun.
AI is.
It's happening.
And so it is a question
of what we do with it,
how we build it.
That's the only question
that matters. And I think,
we potentially are at risk
of ruining that
by having this
knee-jerk reaction
that AI is bad
and it's coming to get us.
DULLEA:
So just like my astronaut,
we're on a journey.
Narrow AI is already here.
But how far are we from AGI?
And how fast are we going?
JURGEN SCHMIDHUBER:
Every five years,
computers are getting
ten times faster.
At the moment,
we don't have yet
little devices,
computational devices,
that can compute
as much as a human brain.
But every five years,
computers are getting
ten times cheaper,
which means that soon
we are going to have that.
And once we have that,
then 50 years later,
if the trend doesn't break,
we should have
for the same price
little computational devices
that can compute
as much as
all ten billion brains
of humankind together.
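Schmidhuber's arithmetic checks out: a tenfold improvement every five years, compounded over fifty years, is ten such periods, i.e. a factor of ten billion. A quick sketch (the trend figures are his; the function is ours):

```python
# Schmidhuber's claim: computer cost-performance improves 10x
# every 5 years. Compounding that over 50 years gives the
# factor he cites: ten billion human brains at the same price.

def improvement_factor(years, factor_per_period=10, period_years=5):
    """Compound improvement over `years` under the stated trend."""
    return factor_per_period ** (years / period_years)

print(improvement_factor(50))  # 10000000000.0 (ten billion)
```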
DULLEA: It's catching up.
What happens
if it passes us by?
Now a robot
that's been developed
by a Japanese scientist
is so fast, it can win
the rock-paper-scissors game
against a human
every single time.
The presumption has to be that
when an artificial
intelligence
reaches human intelligence,
it will very quickly
exceed human intelligence,
and once that happens,
it doesn't even
need us anymore
to make it smarter.
It can make itself smarter.
MALONE: You create AIs
which are better than us
at a lot of things.
And one of the things
they get better at
is programming computers.
So it reprograms itself.
And it is now
better than it was,
which means it's now
better than it was
at reprogramming.
So it doesn't
get better like that.
It gets better like that.
ROSENBERG:
And we get left behind
as it keeps getting faster,
evolving at
an exponential rate.
That's this event
that a lot of researchers
refer to as the singularity.
And it will get beyond
where we are competitive
very quickly.
So think about someone
not having a PhD in one area,
but having it
in every conceivable area.
You can't figure out
what they're doing.
They can do things
no one else is capable of.
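Malone's "it doesn't get better like that, it gets better like *that*" is the difference between linear and compounding improvement: once the system's gain per cycle scales with its current ability, growth is exponential. A toy contrast, with invented numbers:

```python
# Toy contrast between a system improved at a fixed rate by its
# designers and one whose improvement rate scales with its own
# current ability (recursive self-improvement). Numbers are
# purely illustrative.

def fixed_improvement(skill, steps, gain=1.0):
    for _ in range(steps):
        skill += gain          # designers add the same amount each cycle
    return skill

def self_improvement(skill, steps, rate=0.5):
    for _ in range(steps):
        skill += rate * skill  # a better programmer makes a bigger next gain
    return skill

print(fixed_improvement(1.0, 20))  # 21.0
print(self_improvement(1.0, 20))   # ~3325.3, i.e. exponential
```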
HARRIS: Just imagine
what it would be like
to be in dialogue
with another person
running on a time course
that was a million times
faster than yours.
So for every second
you had to think
about what you were
going to do
or what you were
going to say,
it had two weeks to think
about this, right?
Um, or a billion times faster,
so if for every second
you had to think,
if it's high-stakes poker
or some negotiation,
every second you are thinking,
it has 32 years
to think about what
its next move will be.
There's no way to win in that game.
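Harris's figures are roughly right: a million seconds is about 11.6 days ("two weeks"), and a billion seconds is about 31.7 years ("32 years"). The check, in code:

```python
# Checking Harris's arithmetic: at a million-fold speedup, one of
# your seconds buys the machine about two weeks of subjective
# time; at a billion-fold, about 32 years.

def subjective_time(seconds, speedup):
    """Machine's subjective thinking time, in seconds."""
    return seconds * speedup

million = subjective_time(1, 1_000_000)
billion = subjective_time(1, 1_000_000_000)

print(million / 86_400)             # ~11.57 days ("two weeks")
print(billion / (86_400 * 365.25))  # ~31.7 years ("32 years")
```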
DULLEA: The singularity,
the moment the machines
surpass us.
Inevitable? Not everyone
at the forefront of AI
thinks so.
Some of my friends
who are worried
about the future
think that we're gonna
get to a singularity
where the AI systems
are able
to reprogram themselves
and make themselves
better and better
and better.
Maybe in
the long-distant future.
But our present AI systems
are very, very narrow
at what they do.
We can't build a system today
that can understand
a computer program
at the level
that a brand-new student
in programming
understands
in four or five weeks.
And by the way,
people have been working
on this problem
for over 40 years.
It's not because
people haven't tried.
It's because we don't have
any clue how to do it.
So to imagine it's suddenly
gonna jump out of nothing
I think is completely wrong.
[APPLAUSE]
ETZIONI: There's a negative
narrative about AI.
Brilliant people
like Elon Musk
use very religious imagery.
With artificial intelligence,
we are summoning the demon.
You know? You know
all those stories where
there's the guy with
the pentagram and holy water
and he's like, "Yeah,"
he's sure he can control
the demon.
[LAUGHTER]
Doesn't work out.
You know, people like me
who've spent
their entire career
working on AI are literally
hurt by that, right?
It's a personal attack almost.
I tend to think
about what's gonna happen
in the next ten years,
in the next 25 years.
Beyond 25,
it's so difficult to predict
that it becomes
very much a speculation.
But I must warn you,
I've been
specifically programmed
not to take over the world.
STUART RUSSELL:
All of a sudden,
for the first time
in the history of the field,
you have some people saying,
"Oh, the reason
we don't have to worry
"about any kind of
"existential risk
to humanity
"from super-intelligent
machines
"is because it's impossible.
"We're gonna fail."
I... I find this
completely bizarre.
DULLEA: Stuart Russell
cowrote the textbook
many AI students study with.
He understands the momentum
driving our AI future.
But given the rate
of investment in AI,
the number
of incredibly smart people
who are working
on these problems,
and the enormous incentive,
uh, to achieve general
purpose intelligence,
back of the envelope
calculation,
the value is about
10,000 trillion dollars.
Because it would revolutionize the economy of the world.
With that scale of incentive,
I think betting against human ingenuity
just seems incredibly foolish.
China has
a big vested interest
in being recognized
as, kind of, an AI superpower.
And they're investing
what I believe is
upwards of $150 billion
the next five to ten years.
There have been
some news flashes
that the government
will start ramping up
its investment in AI.
Every company
in the Fortune 500,
whether it's IBM, Microsoft,
or they're outside
like Kroger,
is using
artificial intelligence
to gain customers.
I don't think that people
should underestimate
the types of innovations
going on.
And what this world
is gonna look like
in three to five years
from now,
I think people are gonna have
a hard time predicting it.
I know many people
who work
in AI
and they would tell you,
"Look, I'm going to work
and I have doubts.
"I'm working on
creating this machine
"that can treat cancer,
"extend our life expectancy,
"make our societies better,
more just, more equal.
"But it could also
be the end of humanity."
DULLEA:
The end of humanity?
What is our plan
as a species?
We don't have one.
We don't have a plan
and we don't realize
it's necessary
to have a plan.
CAMERON: A lot of
the AI researchers
would love for our
science fiction books
and movies
to be more balanced,
to show the good uses of AI.
All of the bad things
we tend to do,
you play them out
to the apocalyptic level
in science fiction
and then you show them
to people
and it gives you
food for thought.
People need to have that warning signal
in the back of their minds.
ETZIONI: I think
it's really important
to distinguish between
science on the one hand,
and science fiction
on the other hand.
I don't know
what's gonna happen
a thousand years from now,
but as somebody
in the trenches,
as somebody who's building
AI systems,
who's been doing so for more
than a quarter of a century,
I feel like I have
my hand on the pulse
of what's gonna happen next,
in the next five,
ten, 20 years,
much more so
than these folks
who ultimately are not
building AI systems.
We wanna build the technology,
think about it.
We wanna see
how people react to it
and build the next generation
and make it better.
HARRIS: Skepticism
about whether this poses
any danger to us
seems peculiar.
Even if we've given it goals
that we approve of,
it may form instrumental goals
that we can't foresee,
that are not aligned
with our interest.
And that's the goal alignment problem.
DULLEA: Goal alignment.
Exactly how
my battle with HAL began.
The goal alignment problem
with AI is where
you set it a goal
and then, the way
that it does this goal
turns out to be at odds
with what you really wanted.
The classic AI alignment problem in film...
Hello. HAL, do you read me?
...is HAL in 2001.
HAL, do you read me?
Make the mission succeed. But to make it succeed,
he's gonna have to make
some of the people
on the mission die.
HAL: Affirmative, Dave.
I read you.
Open the pod bay doors, HAL.
HAL: I'm sorry, Dave.
I'm afraid I can't do that.
"I'm sorry, Dave,
I can't do that,"
is actually
a real genius illustration
of what the real threat is
from super-intelligence.
It's not malice.
It's competence.
HAL was really competent
and had goals that
just didn't align with Dave's.
HAL, I won't argue
with you anymore.
Open the doors!
HAL: Dave, this conversation
can serve no purpose anymore.
Goodbye.
HAL?
HAL?
HAL?
HAL!
DULLEA: HAL and I
wanted the same thing.
Just not in the same way.
And it can be dangerous
when goals don't align.
In Kubrick's film,
there was a solution.
When HAL tried to kill me,
I just switched him off.
RUSSELL: People say, "Well, just switch them off."
But you can't
just switch them off.
Because they
already thought of that.
They're super-intelligent.
For a machine
convinced of its goal,
even something as simple as
"fetch the coffee."
It's obvious to any
sufficiently intelligent
machine
that it can't fetch the coffee
if it's switched off.
Give it a goal, you've given the machine
an incentive to defend itself.
It's sort of a psychopathic behavior
of the single-minded fixation
on some objective.
For example, if you wanted
to try and develop a system
that might generate
world peace,
then maybe the most efficient,
uh, and
speedy way to achieve that
is simply to annihilate
all the humans.
Just get rid of the people,
they would stop being in conflict.
Then there would be
world peace.
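The world-peace example is the misalignment failure in miniature: an optimizer scored only on "minimize conflict" finds that a world with no people scores perfectly. A toy sketch (the objective, actions, and numbers are invented for illustration):

```python
# Toy version of the misalignment example above: an optimizer
# told only to "minimize conflict" discovers that removing
# everyone scores perfectly. Entirely illustrative.

def conflict(world):
    # conflicts scale with the number of possible pairs of people
    n = world["people"]
    return n * (n - 1) // 2

def optimize(world, actions):
    """Pick the action whose resulting world minimizes conflict."""
    return min(actions, key=lambda act: conflict(act(world)))

world = {"people": 7_000_000_000}
actions = [
    lambda w: {"people": w["people"]},       # do nothing
    lambda w: {"people": w["people"] // 2},  # mediate disputes (toy)
    lambda w: {"people": 0},                 # annihilate everyone
]
best = optimize(world, actions)
print(conflict(best(world)))  # 0 -- the "solution" nobody asked for
```

The objective was satisfied exactly as specified; the problem is the specification, not the optimizer.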
Personally, I have
very little time
for these
parlor discussions.
We have to build
these systems,
experiment with them,
see where they go awry
and fix them.
We have so many opportunities
to use AI for the good,
why do we focus on the fear?
DULLEA: We are building them.
Baby X is an example.
Will she be a good AI?
SAGAR: I think
what's critical in this
is that machines
are accountable,
and you can look at them
and see what's making
them tick.
And the best way to understand something is to build it.
Baby X has
a virtual physiology.
Has pain circuits
and it has joy circuits.
She has parameters which are
actually representing
her internal chemical state,
if you like.
You know, if she's stressed,
there's high cortisol.
If she's feeling good,
high dopamine,
high serotonin,
that type of thing.
This is all part of, you know,
adding extra human factors
into computing.
Now, because she's got these
virtual emotional systems,
I can do things like...
I'm gonna avoid her
or hide over here.
So she's looking for
where I've gone.
So we should start seeing
virtual cortisol building up.
AUDIENCE: Aww! [LAUGHING]
I've turned off the audio
for a reason, but anyway...
I'll put her
out of her misery.
Okay. Sorry.
It's okay, sweetheart.
Don't cry. It's okay.
It's okay.
Good girl. Okay. Yeah.
So what's happening
is she's responding
to my facial expressions
and my voice.
But how is chemical state
represented in Baby X?
It's literally
a mathematical parameter.
So there's levels,
different levels of dopamine,
but they're
represented by numbers.
The map of her virtual
neurochemical state
is probably the closest
that we can get
to felt experience.
So if we can build
a computer model
that we can change
the settings of
and the parameters of,
give it different experiences,
and have it interact with us
like we interact
with other people,
maybe this is a way
to explore
consciousness in
a new way.
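As Sagar says, Baby X's "chemical state" is literally a set of mathematical parameters nudged up and down by events. A minimal sketch of that idea; the variable names, events, and update rules here are invented for illustration and are not Soul Machines' actual model:

```python
# Minimal sketch of emotional state as plain numeric parameters
# ("virtual cortisol", "virtual dopamine") updated by events.
# Hypothetical names and update rules, for illustration only.

class VirtualPhysiology:
    def __init__(self):
        self.cortisol = 0.0   # stress
        self.dopamine = 0.5   # feeling good

    def on_event(self, event):
        if event == "caregiver_hides":
            self.cortisol += 0.2   # stress builds while she searches
        elif event == "caregiver_smiles":
            self.dopamine += 0.1
            self.cortisol = max(0.0, self.cortisol - 0.1)

baby = VirtualPhysiology()
for _ in range(3):
    baby.on_event("caregiver_hides")   # Sagar hides from Baby X
print(round(baby.cortisol, 1))  # 0.6 -- virtual cortisol built up
baby.on_event("caregiver_smiles")      # "It's okay, sweetheart."
print(round(baby.cortisol, 1))  # 0.5 -- soothing brings it down
```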
DULLEA: If she does
become conscious,
what sort of life
will she want?
An AI, if it's conscious,
then it's like your child.
Does it have rights?
MALE NEWS ANCHOR: Last year,
European lawmakers
passed a resolution
to grant legal status
to electronic persons.
That move's now
being considered
by the European Commission.
DULLEA:
Machines with rights.
Are we ready for that?
MALONE: I think it raises
moral and ethical problems,
which we really
are most content to ignore.
What is it
that we really want?
We want a creature, a being, that we can
interact with in some way,
that is enough like us
that we can relate to it,
but is dispensable
or disposable.
The first and most notable one,
I think, was in Metropolis.
The Maria doppelganger that was created by
the scientist simulated a sexy female.
And this goes back
to our penchant for slavery
down through the ages.
Let's get people
that we can,
in a way, dehumanize
and we can do
what we want with them.
And so, I think,
we're just, you know,
finding a new way
to do slavery.
MALONE:
Would it be right for us
to have them as our slaves?
HARRIS: This all becomes
slightly weird ethically
if we happen to,
by choice or by accident,
build conscious slaves.
That seems like a morally
questionable thing to do,
if not an outright
evil thing to do.
Artificial intelligence
is making its way
into the global sex market.
A California company
will soon roll out
a new incredibly lifelike
sex robot
run by
artificial intelligence.
One application
that does trouble me
is the rise of sex robots
that are shaped
in the image of real women.
I am already
taking over the world
one bedroom at a time.
YEUNG:
If they are conscious,
capable of sentience,
of experiencing pain
and suffering,
and they were being
subject to degrading, cruel
or other forms of utterly
unacceptable treatment,
the danger is that
we'd become desensitized
to what's morally acceptable
and what's not.
DULLEA: If she's conscious,
she should be able to say no.
Right?
Shouldn't she have that power?
Aren't these the questions
we need to ask now?
TEGMARK: What do we want it
to mean to be human?
All other species on Earth,
up until now,
have not had the luxury
[LAUGHS]
to answer what they
want it to mean
to be a mouse
or to be an earthworm.
But we have now reached
this tipping point
where our understanding
of science and technology
has reached the point that
we can completely transform
what it means to be us.
DULLEA: What will it mean
to be human?
If machines become more,
will we become less?
The computer Deep Blue
has tonight triumphed over
the world chess champion
Garry Kasparov.
It's the first time
a machine has defeated
a reigning world champion.
More on that showdown
on one of America's
most beloved
game shows, Jeopardy!
Two human champions going
brain-to-brain with a super-computer named Watson.
WATSON:
What is The Last Judgment?
ALEX TREBEK: Correct.
Go again. Watson.
Who is Jean Valjean? TREBEK: Correct.
A Google computer program has
once again beaten
a world champion in Go.
FEMALE REPORTER: Go, an ancient Chinese board game,
is one of the most complex
games ever devised.
FEMALE NEWS ANCHOR:
It's another milestone in
the rush to develop machines
that are smarter than humans.
KOSINSKI: We have
this air of superiority,
we as humans.
So we kind of got used to being
the smartest kid on the block.
And I think that
this is no longer the case.
And we know that
because we rely on AI
to help us to make
many important decisions.
Humans are irrational
in many very bad ways.
We're prejudiced,
we start wars, we kill people.
If you want to maximize
the chances
of people surviving overall,
you should essentially
give more and more
decision-making powers
to computers.
DULLEA: Maybe machines would do a better job than us.
Maybe.
[APPLAUSE]
SOPHIA: I am thrilled
and honored to be here
at the United Nations.
The future is here. It's just
not evenly distributed.
AI could help
efficiently distribute
the world's
existing resources,
like food and energy.
I am here to help humanity
create the future.
[APPLAUSE]
TEGMARK: The one
super-intelligent scenario
that is provocative
to think about
is what I call
the benevolent dictator.
Where you have
this super-intelligence
which runs Earth,
guided by strict ethical principles that try
to help humanity flourish.
Would everybody love that?
If there's no disease,
no poverty, no crime...
Many people would probably
greatly prefer it
to their lot today.
But I can imagine that some
might still lament
that we humans just
don't have more freedom
to determine our own course.
And, um...
It's a tough one.
It really is. We...
I don't think most people
would wanna be
like zoo animals, with no...
On the other hand...
Yeah.
I don't have any glib answers
to these really
tough questions.
That's why I think
it's just so important
that we as a global community
really discuss them now.
WOMAN 1: Machines that
can read your face.
WOMAN 2:
Recognize emotional cues.
MAN 1: Closer to a brain
process of a human...
MAN 2: Robots imbued
with life...
MAN 3:
An emotional computer...
From this moment on,
regardless of
whether you think
we're just gonna get stuck
with narrow AI
or we'll get general AI
or get super-intelligent AI
or conscious
super-intelligent...
Wherever you think
we're gonna get to,
I think any stage along that
will change who we are.
Up and at 'em, Mr. J.
MALONE: The question
you have to ask is,
is AI gonna do something
for you
or something to you?
I was dreaming
about sleeping.
RUSSELL:
If we build AI systems
that do everything we want,
we kind of hand over
our world to them.
So that kind of enfeeblement
is kind of irreversible.
There is an existential risk
that is real and serious.
If every single desire,
preference, need, has been catered for...
Morning, Rosie.
What's for breakfast?
...we don't notice
that our capacity
to make decisions
for ourselves
has been slowly eroded.
This is George Jetson,
signing off for
the entire day.
[SIGHS]
There's no going back
from there. Why?
Because AI can make
such decisions
with higher accuracy
than whatever
a human being can achieve.
I had a conversation
with an engineer, and he said,
"Look, the safest way
of landing a plane today
"is to have the pilot
leave the deck
"and throw away the key,
"because the safest way
"is to use
an autopilot, an AI."
JOHNSON: We cannot
manage the complexity
of the world by ourselves.
Our form of intelligence,
as amazing as it is,
is severely flawed.
And by adding
an additional helper,
which is
artificial intelligence,
we stand the ability
to potentially improve
our cognition dramatically.
MALE REPORTER:
The most cutting-edge thing
about Hannes Sjoblad
isn't the phone in his hand,
it's the microchip
in his hand.
What if you could
type something
just by thinking about it?
Sounds like sci-fi,
but it may be close
to reality.
FEMALE REPORTER:
In as little as five years,
super-smart people could be
walking down the street,
men and women who've paid
to increase
their intelligence.
Linking brains to computers
by using implants.
What if we could upload
a digital copy
of our memories
and consciousness to a point
that human beings
could achieve
a sort of digital immortality?
Just last year,
for the first time
in world history,
a memory was recorded.
We can actually take
recorded memory
and insert it back
into an animal,
and it remembers.
Technology is a part of us.
It's a reflection of
who we are.
That is the narrative
we should be thinking about.
We think all that exists
is everything we can imagine.
When in reality,
we can imagine
a very small subset
of what's
potentially possible.
The real interesting stuff
in the world
is what sits
beyond our imagination.
When these
massive systems emerge,
as a species,
I think we get an F
for our ability to predict
what they're going to become.
DULLEA: Technology
has always changed us.
We evolve with it.
How far are we prepared
to go this time?
There was a time
when Aristotle
lamented the creation
of the written language
because he said this would
destroy people's memories
and it would be
a terrible thing.
And yet today, I carry
my brain in my pocket,
in the form of my smartphone,
and that's changed the way
I go through the world.
Oh, I think we're absolutely
merging with our technology.
Absolutely. I would... I would
use the term "co-evolving."
We're evolving it to match our needs
and we are changing to match what it can provide to us.
DULLEA: So instead of being
replaced by the machines,
we're going to merge
with them?
GRADY BOOCH:
I have a number of friends
who have bionic arms.
Are they cyborgs?
In a way,
they're on that path.
I have people
who have implants
in their ears.
Are they AIs?
Well, we're in a way
on that path
of augmenting human bodies.
JOHNSON: Human potential lies
in this co-evolutionary
relationship.
A lot of people
are trying to hold onto
the things that make them
special and unique,
which is not
the correct frame.
Let's create more.
Rewrite our intelligence,
improve everything
we're capable of. Emotions,
imagination, creativity.
Everything that
makes us human.
There's already
a symbiotic relationship
between us and a machine
that is enhancing who we are.
And... And there is a whole movement, you know,
the trans-human movement,
where basically
people really bet bigtime
on the notion that
this machine-human integration
is going to redefine who we are.
This is only gonna grow.
BOOCH:
There's gonna be a learning
process that gets us
to that endgame where
we live with these devices
that become
smarter and smarter.
We learn
how to adapt ourselves
around them,
and they adapt to us.
And I hope we become
more human in the process.
DULLEA: It's hard
to understand how merging
with the machines
can make us more human.
But is merging
our best chance of survival?
ROSENBERG:
We humans are limited
to how many neurons
we can fit in our brain.
And there are other species that have even smaller brains.
When you look at
bees and fish moving,
it looks like
a single organism.
And biologists actually refer
to it as a super-organism.
That is why birds flock,
and bees swarm,
and fish school.
They make decisions that are in the best interest
of the whole population
by thinking together
as this larger mind.
And we humans could do
the same thing by forming
a hive mind connecting us
together with AI.
So the humans and the software together
become this super-intelligence,
enabling us to make the best decisions for us all.
When I first started looking
at this, it sounded
scary to me.
But then I realized
it's scarier
for an alien intelligence
to emerge that has
nothing to do with humans
than for us humans
to connect together
and become
a super-intelligence.
That's the direction
that I work on,
uh, because I see it as,
in some sense,
the only alternative
to humans ultimately
being replaced by AI.
DULLEA: Can Baby X help us
understand the machines
so we can work together?
Human and intelligent machine
cooperation will define
the next era of history.
You probably do have to create a virtual empathetic machine.
Empathy is a lot about connecting with
other people's struggles.
The more connection you have,
the more communication
that you have,
the safer that makes it.
So at a very basic level,
there's some sort
of symbiotic relationship.
By humanizing this technology,
you're actually increasing the channels
for communication and cooperation.
AI working in a human world
has to be about humans.
The future's what we make it.
TEGMARK: If we
can't envision a future
that we really, really want,
we're probably
not gonna get one.
DULLEA: Stanley Kubrick
tried to imagine
what the future
might look like.
Now that
that future's approaching,
who is in charge
of what it will look like?
MAN: Who controls it?
MAN 2: Google.
MAN 3: The military.
MAN 4: Tech giant Facebook...
WOMAN: Samsung...
WOMAN 2: Facebook, Microsoft, Amazon.
CUMMINGS: The tide has
definitely turned
in favor of
the corporate world.
The US military, and I suspect
this is true worldwide,
cannot hold a candle to the AI
that's being developed
in corporate companies.
DULLEA:
If corporations,
not nations, hold the power,
what can we do
in a capitalist system
to ensure AI
is developed safely?
If you mix AI research
with capitalism,
on the one hand, capitalism
will make it run faster.
That's why we're getting
lots of research done.
But there is a basic problem.
There's gonna be a pressure
to be first one to the market,
not the safest one.
FEMALE REPORTER: This car was in autopilot mode
when it crashed
in Northern California,
killing the driver.
Artificial intelligence
may have
more power to crash
the markets
than anything else
because humans aren't
prepared to stop them.
A chatbot named Tay.
She was designed to speak like a teenager
and learn by interacting online.
But in a matter of hours, Tay was corrupted
into a Hitler-supporting sexbot.
This kind of talk that somehow
in the context of capitalism,
AI is gonna be misused,
really frustrates me.
Whenever we develop
technology,
whether it's, you know,
coal-burning plants,
whether it's cars,
there's always that incentive
to move ahead quickly,
and there's always
the regulatory regime,
part of the government's job
to balance that.
AI is no different here.
Whenever we have
technological advances,
we need to temper them
with appropriate
rules and regulations.
Duh!
Until there's
a major disaster,
corporations are gonna
be saying
"Oh, we don't need to worry
about these things.
"Trust us, we're the experts.
"We have humanity's
best interests at heart."
What AI is really about
is it's a new set of rules
for companies in terms of
how to be more competitive,
how to drive more efficiency,
and how to drive growth,
and creative new
business models and...
RUSSELL:
Once corporate scientists
start to dominate the debate,
it's all over, you've lost.
You know, they're paid
to deny the existence
of risks
and negative outcomes.
SCHMIDHUBER:
At the moment,
almost all AI research
is about making human lives
longer, healthier and easier.
Because the big companies,
they want to sell you AIs
that are useful for you,
and so you are
only going to buy
those that help you.
MALONE: The problem of AI
in a capitalist paradigm
is it's always
gonna be cheaper
just to get the product
out there when it does the job
rather than do it safely.
ROMAN YAMPOLSKIY:
Right now, the way it works,
you have a conference
with 10,000 researchers,
all of them working
on developing more capable AI, except, like, 50 people
who have a tiny workshop
talking about safety
and controlling
what everyone else is doing.
No one has a working safety mechanism.
And no one even has
a prototype for it.
SCHMIDHUBER:
In the not-so-distant future,
you will have
all kinds of AIs
that don't just do
what humans tell them,
but they will invent
their own goals
and their own experiments,
and they will transcend
humankind.
YAMPOLSKIY:
As of today,
I would say
it's completely unethical
to try to develop such a machine and release it.
We only get one chance
to get it right.
You screw up,
everything possibly is lost.
DULLEA: What happens
if we get it wrong?
The thing that scares me
the most is this AI
will become the product
of its parents,
in the same way
that you are and I am.
Let's ask who
those parents are.
Right?
Chances are they're either
gonna be military parents
or they're gonna be
business parents.
If they're business parents,
they're asking the AI
to become something
that can make a lot of money.
So they're teaching it greed.
Military parents are gonna teach it to kill.
Truly superhuman intelligence
is so much... Is so super.
It's so much better
than we are
that the first country or company to get there
has the winner-take-all advantage.
[APPLAUSE]
[SPEAKING RUSSIAN]
Because there's such
an arms race in AI globally,
and that's both literally,
in terms of building
weapons systems,
but also figuratively
with folks like Putin
and the Chinese saying
whoever dominates AI
is gonna dominate the economy
in the 21st century.
I don't think we have a choice
to put the brakes on,
to stop developing
and researching.
We can build the technology,
we can do the research,
and imbue it with our values,
but realistically,
totalitarian regimes
will imbue the technology
with their values.
FEMALE REPORTER:
Israel, Russia, the US
and China
are all contestants
in what's being described
as a new global arms race.
All countries,
whether they admit it or not,
are going to develop
lethal autonomous weapons.
And is that really
a capability that we want
in the hands
of a corporate nation state
as opposed to a nation state?
The reality is
if Facebook or Google
wanted to start an army,
they would have
technologically
superior capabilities
than any other nation
on the planet.
DULLEA: Some AI scientists
are so concerned,
they produced this
dramatized video
to warn of the risks.
Your kids probably have
one of these, right?
RUSSELL: We made
the Slaughterbot video
to illustrate
the risk
of autonomous weapons.
That skill is all AI.
It's flying itself.
There are people within
the AI community
who wish I would shut up.
People who think that making the video is
a little bit treasonous.
Just like any mobile device
these days,
it has cameras and sensors.
And just like your phones
and social media apps,
it does facial recognition.
Up until now,
the arms races tended to be
bigger, more powerful,
more expensive,
faster jets that cost more.
AI means that you could have
very small, cheap weapons
that are very efficient
at doing
one very simple job, like
finding you in a crowd
and killing you.
This is how it works.
[GASPING, APPLAUSE]
Every single major advance
in human technology
has always been weaponized.
And the idea of weaponized AI,
I think, is terrifying.
Trained as a team,
they can penetrate
buildings, cars, trains,
evade people, bullets...
They cannot be stopped.
RUSSELL: There are lots of
weapons that kill people.
Machine guns
can kill people,
but they require
a human being
to carry and point them.
With autonomous weapons,
you don't need a human
to carry them
or supervise them.
You just program and then off they go.
So you can create the effect of an army
carrying a million machine guns,
uh, with two guys and a truck.
Putin said that
the first person to develop
strong artificial intelligence
will rule the world.
When world leaders use terms
like "rule the world,"
I think you gotta pay attention to it.
Now, that is an air strike
of surgical precision.
[CHEERING AND APPLAUSE]
I've been campaigning against
these weapons since 2007,
and there's three main
sets of arguments.
There's the moral arguments,
there's arguments
about not complying
with the laws of war,
and there are arguments
about how they will change
global security of the planet.
We need new regulations.
We need to really think
about it now, yesterday.
Some of the world's leaders
in artificial intelligence
are urging the United Nations
to ban the development
of autonomous weapons,
otherwise known
as killer robots.
MALE REPORTER: In the letter
they say "once developed,
"they will permit
armed conflict to be fought
"at timescales faster
than humans can comprehend."
BOOCH: I'm a signatory
to that letter.
My philosophy is,
and I hope the governments
of the world enforce this,
that as long as
a human life is involved,
then a human needs to be
in the loop.
Anyone with a 3D printer
and a computer
can have a lethal
autonomous weapon.
If it's accessible
to everyone on the planet,
then compliance, for example,
would be nearly impossible
to track.
What the hell
are you doing? Stop!
You're shooting into thin air.
HARRIS: When you're imagining
waging war in the future,
where soldiers are robots,
and we pose no risk
of engagement to ourselves...
When war becomes
just like a video game,
as it has
incrementally already,
it's a real concern.
When we don't have skin in the game, quite literally...
[SCREAMS]
...when all of this is being run from a terminal
thousands of miles away from the battlefield...
...that could lower the bar for waging war
to a point where we would recognize
that we have become monsters.
Who are you?
Hey! Hey!
[GUNSHOT]
Having been
a former fighter pilot,
I have also seen
people make mistakes
and kill innocent people
who should not
have been killed.
One day in the future,
it will be more reliable
to have a machine do it
than a human do it.
The idea of robots
who can kill humans
on their own
has been a science fiction
staple for decades.
There are a number of
these systems around already,
uh, that work autonomously.
But that's called
supervised autonomy
because somebody's there
to switch it on and off.
MALE REPORTER: What's only
been seen in Hollywood
could soon be coming
to a battlefield near you.
The Pentagon investing
billions of dollars
to develop autonomous weapons,
machines that could one day
kill on their own.
But the Pentagon insists humans will decide to strike.
MALONE: Once you get AI
into military hardware,
you've got to make it
make the decisions,
otherwise there's no point
in having it.
One of the benefits of AI is that it's quicker than us.
Once we're in a proper fight,
that switch that keeps
the human in the loop...
we'd better switch it off,
because if we don't,
we lose.
These will make
the battlefield
too fast for humans.
If there's no human involved at all,
if command and control have gone over to autonomy,
we could trigger a war and it could be finished
in 20 minutes with massive devastation,
and nobody knew about it before it started.
DULLEA: So who decides what
we should do in the future
now?
YAMPOLSKIY:
From Big Bang till now,
billions of years.
This is the only time
a person can steer
the future
of the whole humanity.
TEGMARK:
Maybe we should worry that
we humans aren't wise enough
to handle this much power.
GLEISER: Who is actually
going to make the choices,
and in the name of whom
are you actually speaking?
If we, the rest of society,
fail to actually
really join this conversation
in a meaningful way
and be part of the decision,
then we know whose values
are gonna go into these
super-intelligent beings.
Red Bull.
Yes, Deon.
Red Bull.
TEGMARK: It's going to be decided by some tech nerds
who've had too much Red Bull to drink...
who I don't think
have any particular
qualifications to speak
on behalf of all humanity.
Once you are in
the excitement of
discovery...
Come on!
...you sort of
turn a blind eye
to all these
more moralistic questions,
like how is this going to
impact society as a whole?
And you do it because you can.
MALONE: And the scientists
are pushing this forward
because they can see
the genuine benefits
that this will bring.
And they're also
really, really excited.
I mean, you don't...
You don't go into science
because, oh, well,
you couldn't
be a football player,
"Hell, I'll be a scientist."
You go in because
that's what you love
and it's really exciting.
Oh! It's alive!
It's alive! It's alive!
It's alive!
YAMPOLSKIY:
We're switching
from being creation
to being God.
Now I know what it feels like
to be God!
YAMPOLSKIY:
If you study theology,
one thing you learn
is that it is never the case
that God
successfully manages
to control its creation.
DULLEA: We're not there yet,
but if we do create
artificial general
intelligence,
can we maintain control?
If you create a self-aware AI,
the first thing
it's gonna want
is independence.
At least, the one intelligence
that we all know is ourselves.
And we know that we would not wanna be doing the bidding
of some other intelligence.
We have no reason to believe that it will be friendly...
...or that it will have goals and aspirations
that align with ours.
So how long before
there's a robot revolution
or an AI revolution?
We are in a world today where it is
absolutely a foot race
to go as fast as possible
toward creating
an artificial
general intelligence
and there don't seem to be
any brakes being applied
anywhere in sight.
It's crazy.
YAMPOLSKIY: I think
it's important to honestly
discuss such issues.
It might be
a little too late to make
this conversation afterwards.
It's important to remember
that intelligence is not evil
and it is also not good.
Intelligence gives power.
We are more powerful than tigers on this planet,
not because we have sharper claws
or bigger biceps,
because we're smarter.
CHALMERS: We do take
the attitude that ants
matter a bit and fish matter a bit, but humans matter more.
Why? Because we're
more intelligent,
more sophisticated,
more conscious.
Once you've got systems which
are more intelligent than us,
maybe they're simply gonna matter more.
As humans, we live
in a world of microorganisms.
They're life, but if a microorganism
becomes a nuisance, we take an antibiotic
and we destroy billions of them at once.
If you were an artificial intelligence,
wouldn't you look
at every little human
that has this
very limited view
kind of like
we look at microorganisms?
DULLEA: Maybe I would.
Maybe human insignificance
is the biggest threat of all.
TEGMARK:
Sometimes when I talk
about AI risk, people go,
"Shh! Max,
don't talk like that.
"That's Luddite
scaremongering."
But it's not scaremongering.
It's what we at MIT
call safety engineering.
I mean, think about it, before NASA launched the Apollo 11 moon mission,
they systematically thought through
everything that could go wrong
when you put a bunch of people
on top of explosive fuel tanks
and launched them somewhere
where no one can help them.
Was that scaremongering?
No, that was precisely
the safety engineering
that guaranteed
the success of the mission.
That's precisely the strategy I think we have to take with AGI.
Think through everything that could go wrong
to make sure it goes right.
I am who I am
because of the movie
2001: A Space Odyssey.
Aged 14, it was
an eye-opener for me.
That movie is the best
of all science fiction
with its predictions
for how AI would evolve.
But in reality,
if something is sufficiently
far in the future,
we can't imagine
what the world will be like,
so we actually can't say
anything sensible
about what the issues are.
LITTMAN: Worrying about
super-intelligence safety
is a little like
worrying about
how you're
gonna install seat belts
in transporter devices.
Like in Star Trek,
where you can just zip
from one place to another.
Um, maybe there's
some dangers to that,
so we should have
seatbelts, right?
But it's of course very silly
because we don't know enough
about how
transporter technology
might look to be able to say
whether seatbelts even apply.
GLEISER: To say that there
won't be a problem because
the unknowns are so vast,
that we don't even know
where the problem
is gonna be,
that is crazy.
I think conversation is
probably the only
way forward here.
At the moment,
among ourselves,
and when the time comes,
with the emerging
artificial intelligences.
So as soon as
there is anything
remotely in that ballpark,
we should start trying
to have a conversation
with it somehow.
Dog. Dog.
SAGAR: Yeah.
One of the things we hope
to do with Baby X is to try
to get a better insight
into our own nature.
She's designed
to learn autonomously.
She can be plugged
into the Internet,
click on webpages by herself
and see what happens,
kind of learn how we learn.
If we make this
super fantastic AI,
it may well help us become
better versions of ourselves,
help us evolve.
It will have societal effects.
Is it to be feared?
Or is it something
to learn from?
DULLEA: So when might
we be faced with machines
that are as smart,
smarter than we are?
I think one of the ways
to think about
the near future of AI
is what I would call
the event horizon.
People have heard of black holes,
this super dense point where
if you get too close, you get sucked in.
There is a boundary. Once you cross that boundary,
there is no way back.
What makes that dangerous is you can't tell where it is.
And I think AI as a technology
is gonna be like that,
that there will be a point
where the logic of the money
and the power and the promise
will be such that
once we cross that boundary,
we will then be
into that future.
There will be no way back,
politically, economically
or even in terms
of what we want.
But we need to be worrying
about it before we reach
the event horizon,
and we don't know where it is.
I don't think it's gonna
happen any time soon.
The next few decades.
Twenty-five to 50 years.
A few hundred years.
It's over
the next few decades.
Not in my lifetime.
Nobody knows.
But it doesn't really matter.
Whether it's years
or 100 years,
the problem is the same.
It's just as real
and we'll need
all this time to solve it.
TEGMARK:
History is just full of
over-hyped tech predictions.
Like, where are
those flying cars
that were supposed
to be here, right?
But it's also full
of the opposite,
when people said,
"Oh, never gonna happen."
I'm embarrassed as a physicist that Ernest Rutherford,
one of the greatest nuclear physicists of all time said
that nuclear energy was never gonna happen.
And the very next day,
Leo Szilard invents
the nuclear chain reaction.
So if someone says
they are completely sure
something is never
gonna happen,
even though it's consistent
with the laws of physics,
I'm like, "Yeah, right."
HARRIS: We're not there yet.
We haven't built a super-intelligent AI.
And even if we stipulated that it won't happen
sooner than 50 years from now, right?
So we have 50 years to think about this.
Well...
it's nowhere written
that 50 years
is enough time to figure out
how to do this safely.
CAMERON: I actually think
it's going to take
some extreme negative example
of the technology
for everyone to pull back
and realize that
we've got to get this thing
under control, we've got to have constraints.
We are living
in a science fiction world
and we're all,
um, in this big, kind of
social, tactical experiment,
experimenting on ourselves,
recklessly,
like no doctor would
ever dream of doing.
We're injecting ourselves
with everything that
we come up with
to see what happens.
LITTMAN: Clearly,
the thought process is...
You're creating
learning robots.
Learning robots are exactly
what led to Skynet.
Now I've got a choice,
right? I can say,
"Don't worry,
I'm not good at my job.
"You have nothing to fear
because I suck at this
"and I'm not gonna
actually accomplish anything."
Or I have to say, "You know,
"You're right,
we're all gonna die,
"but it's gonna be
sort of fun to watch,
"and it'd be good to be
one of the people
in the front row."
So yeah, I don't have
a terrific answer
to these things.
The truth, obviously,
is somewhere in between.
DULLEA:
Can there be an in-between?
Or are we facing
a new world order?
I have some colleagues who
are fine with AI taking over
and even causing
human extinction,
as long as we feel
that those AIs
are our worthy ancestors.
SCHMIDHUBER:
In the long run, humans
are not going to remain
the crown of creation.
But that's okay.
Because there is still beauty
and grandeur and greatness
in realizing that you are
a tiny part of
a much grander scheme
which is leading the universe
from lower complexity
towards higher complexity.
CAMERON: We could be rapidly approaching a precipice
for the human race where we find
that our ultimate destiny is to have been
the forebear of the true
emergent, evolutionary, intelligent sort of beings.
So we may just be here
to give birth
to the child of this...
Of this machine...
Machine civilization.
CHALMERS: Selfishly,
as a human, of course,
I want humans to stick around.
But maybe that's a very local, parochial point of view.
Maybe viewed from the point
of view of the universe,
it's a better universe
with our...
With our super-intelligent
descendants.
RUSSELL: Our field
is likely to succeed
in its goal to achieve
general purpose intelligence.
We now have the capability
to radically modify
the economic structure
of the world,
the role of human beings
in the world
and, potentially,
human control over the world.
The idea was
that he is taken in
by godlike entities,
creatures of pure energy
and intelligence.
They put him in what you could describe as a human zoo.
DULLEA: The end
of the astronaut's journey.
Did he travel through time?
To the future?
Were those godlike entities
the descendants of HAL?
Artificial intelligence
that you can't switch off,
that keeps learning
and learning until
they're running the zoo,
the universe,
everything.
Is that Kubrick's
most profound prophecy?
Maybe.
But take it from an old man.
If HAL is close
to being amongst us,
we need to pay attention.
We need to talk about AI.
Super-intelligence.
Will it be conscious?
The trolley problem.
Goal alignment.
The singularity.
First to the market.
AI safety.
Lethal autonomous weapons.
Robot revolution.
The hive mind.
The transhuman movement.
Transcend humankind.
We don't have a plan,
we don't realize
it's necessary
to have a plan.
What do we want
the role of humans to be?
[BEEPING]
If we build artificial
general intelligence,
that'll be the biggest event
in the history
of life on Earth.
LOUIS ROSENBERG: Alien intelligence inside a computer system that
has control over the planet,
the day it arrives...
JAMES CAMERON:
And science fiction
books and movies
have set it up in advance.
What's the problem?
HAL: I think you know
what the problem is
just as well as I do.
What are you
talking about, HAL?
HAL: This mission
is too important
for me to allow you
to jeopardize it.
I don't know what you're
talking about, HAL.
KEIR DULLEA:
HAL, a super-intelligent
computer.
We'd call it AI today.
My character, Dave,
was the astronaut
in one of the most prophetic
films ever made
about the threat
posed to humanity
by artificial intelligence.
HAL was science fiction.
But now, AI is all around us.
Artificial intelligence
isn't the future.
AI is already here.
This week a robot
ran for mayor
in a small
Japanese town.
The White House says
it is creating
a new task force
to focus
on artificial intelligence.
Google has already
announced plans
to invest more in AI research.
MALE NEWS ANCHOR: IBM sending an artificial intelligence robot into space.
DULLEA:
Because of my small role
in the conversation,
I understand
that we're on a journey,
one that starts
with narrow AI.
Machines that perform
specific tasks
better than we can.
But the AI of science fiction
is artificial general
intelligence,
machines that can
do everything
better than us.
Narrow AI is really
like an idiot savant
to the extreme,
where it can do something,
like multiply numbers
really fast,
way better than any human.
But it can't do
anything else whatsoever.
But AGI, artificial
general intelligence,
which doesn't exist yet,
is instead
intelligence
that can do everything
as well as humans can.
DULLEA: So what happens
if we do create AGI?
Science fiction authors
have warned us
it could be dangerous.
And they're not the only ones.
ELON MUSK: I think people should be concerned about it.
AI is a fundamental risk
to the existence
of human civilization.
And I don't think people
fully appreciate that.
I do think we have to
worry about it.
I don't think it's inherent
as we create
super intelligence
that it will
necessarily always
have the same goals in mind
that we do.
I think the development
of full
artificial intelligence
could spell the end
of the human race.
[ECHOING]
...the end of the human race.
Tomorrow, if the headline said
an artificial intelligence
has been manufactured
that's as intelligent
or more intelligent
than human beings,
people would argue about it,
but I don't think
they'd be surprised,
because science fiction
has helped us imagine all kinds
of unimaginable things.
DULLEA: Experts are divided
about the impact
AI will have on our future,
and whether or not Hollywood
is helping us plan for it.
OREN ETZIONI: Our dialogue
has really been hijacked
by Hollywood
because The Terminator
makes a better blockbuster
than AI being good,
or AI being neutral,
or AI being confused.
Technology here in AI
is no different.
It's a tool.
I like to think of it
as a fancy pencil.
And so what pictures
we draw with it,
that's up to us as a society.
We don't have to do
what AI tells us.
We tell AI what to do.
[BEEPING]
DAVID MALONE: I think
Hollywood films
do help the conversation.
Someone needs to be thinking
ahead of what
the consequences are.
But just as we can't
really imagine
what the actual science
of the future
is going to be like,
I don't think we've begun
to really think about
all of the possible futures,
which logically spring out
of this technology.
MALE REPORTER:
Artificial intelligence has
the power to change society.
A growing chorus
of criticism
is highlighting the dangers
of handing control
to machines.
There's gonna be
a lot of change coming.
The larger long-term concern
is that humanity will be
sort of shunted aside.
NEWS ANCHOR:
This is something
Stanley Kubrick and others
were worried about
50 years ago, right?
DULLEA: It's happening.
How do we gauge the urgency
of this conversation?
ROSENBERG: If people saw on radar right now
that an alien spaceship
was approaching Earth
and it was 25 years away,
we would be mobilized
to prepare
for that alien's arrival
25 years from now.
But that's exactly
the situation that we're in
with artificial intelligence.
Could be 25 years, could be 50 years,
but an alien intelligence will arrive
and we should be prepared.
MAN:
Well, gentlemen, meet Tobor.
An electronic simulacrum
of a man.
Oh, gosh. Oh, gee willikers.
Even though much work remains
before he is completed,
he is already
a sentient being.
[MURMURING]
RODNEY BROOKS:
Back in 1956 when the term
artificial intelligence
first came about,
the original goals
of artificial intelligence
were human-level intelligence.
He does look almost kind,
doesn't he? [BEEPING]
BROOKS: Over time that's proved to be really difficult.
I think, eventually,
we will have human-level
intelligence from machines.
But it may be
a few hundred years.
It's gonna take a while.
And so people are getting
a little over-excited
about the future capabilities.
And now, Elektro,
I command you to walk back.
Back.
MALONE:
Almost every generation,
people have got
so enthusiastic.
They watch 2001,
they fall in love with HAL,
they think,
"Yeah, give me six weeks.
I'll get this sorted."
It doesn't happen and everyone gets upset.
DULLEA:
So what's changed
in AI research?
MALONE: The difference with the AI we have today
and the reason AI suddenly took this big leap forward
is the Internet.
That is their world.
Tonight the information
superhighway,
and one of its main
thoroughfares,
an online network
called Internet.
MALE REPORTER: In 1981,
only 213 computers
were hooked to the Internet.
As the new year begins,
an estimated
two-and-a-half million
computers
will be on the network.
AIs need to eat.
Just like sheep need grass,
AIs need data.
And the great prairies of data
are on the Internet.
That's what it is.
They said, "You know what?
"Let's just let them loose out there."
And suddenly,
the AIs came to life.
FEMALE REPORTER: Imagine
thousands of talking robots
able to move as one unit,
taking on crime, fires,
and natural disasters.
Artificial intelligence
platform Amper
has created the first album
entirely composed
and produced
by artificial intelligence.
Next week, Christie's will be
the first auction house
to offer artwork created
by artificial intelligence.
Driverless cars, said to be
in our not-so-distant future.
DULLEA: So even though
we don't have AGI
or human-level
intelligence yet,
are we ready to give autonomy
to the machines we do have?
Should self-driving cars
make ethical decisions?
That is the growing debate
as the technology
moves closer towards
mainstream reality.
MARY CUMMINGS: This idea
of whether or not
we can embed ethics
into machines,
whether or not they are
automated machines
or autonomous machines,
I'm not sure
we've got that figured out.
If you have
a self-driving car,
it's gonna have to make
ethical decisions.
A classic one people have been
thinking a lot about is...
I don't know if you've heard
of trolley problems.
The trolley problem. The trolley problem.
The trolley problem. The trolley problem.
CUMMINGS:
In the trolley problem,
whether or not
a driverless car
is going to save the driver
or run over
a group of school children
crossing the road.
ETZIONI: Will the car
go right and hit the old lady
or will it go left and kill
the four young people?
MICHAL KOSINSKI:
Either kill five people
standing on a bus stop
or kill yourself.
It's an ethical decision
that has to be made
in the moment,
too fast for any person
to get in there.
So it's the software
in your self-driving car
that's gonna make
that ethical decision.
Who decides that?
KOSINSKI:
Smart society,
society that
values human life,
should make a decision that
killing one person
is preferable
to killing five people.
Of course,
it does raise the question
who's gonna buy that car.
Or, you know, are you gonna
buy the car where actually,
[WHISPERS] it'll swerve
and you'll be fine,
but they're goners.
ETZIONI: There are hundreds
of those kinds of questions
all throughout society.
And so, ethical philosophers
battle back and forth.
ETZIONI: My peeve with
the philosophers is
they're sucking the oxygen
out of the room.
There's this metaphor
that I have of
you have a car
driving away from a wedding,
and the car is driving,
and behind it there are cans.
And the metal cans are
clanking and making noise.
That car is science
and technology
driving forward decisively.
And the metal cans clanking
are the philosophers
making a lot of noise.
The world is changing
in such a way
that the question of us
building machines
that could become
more intelligent than we are
is more of a reality than a fiction right now.
DULLEA: Don't we
have to ask questions now,
before the machines
get smarter than us?
I mean, what if they
achieve consciousness?
How far away is that?
How close are we
to a conscious machine?
That is the question
that brings
philosophers and scientists
to blows, I think.
You are dead center
of the greatest
scientific event
in the history of man.
If you've created
a conscious machine,
it's not the history of man.
That's the history of gods.
The problem with consciousness
is hardly anybody
can define exactly what it is.
DULLEA: Is it consciousness
that makes us special?
Human?
MALONE: A way of thinking
about consciousness
which a lot of people
will agree with...
You'll never find something that appeals to everyone.
...is it's not just
flashing lights,
but someone's home.
So if you have a...
If you bought
one of those little
simple chess-playing machines,
it might beat you at chess.
There are
lots of lights flashing.
But there's nobody in
when it wins,
going "I won."
All right?
Whereas your dog
is a rubbish chess player,
but you look at the dog
and you think,
"Someone's actually in.
It's not just a mechanism."
People wonder whether general intelligence
requires consciousness.
It might partly depend on what
level of general intelligence
you're talking about.
A mouse has some kind
of general intelligence.
My guess is, if you've got
a machine which has
general intelligence
at the level of,
say, a mouse or beyond,
it's gonna be a good contender for having consciousness.
SAM HARRIS: We're not there yet.
We're not confronted by humanoid robots
that are smarter than we are.
We're in this
curious ethical position
of not knowing where and how
consciousness emerges,
and we're building
increasingly complicated
minds.
We won't know whether
or not they're conscious,
but we might feel they are.
MALONE: The easiest example
is if you programmed
into a computer everything you knew about joy,
every reference in literature,
every definition,
every chemical compound
that's involved with joy
in the human brain,
and you put all of that
information in a computer,
you could argue
that it understood joy.
The question is,
had it ever felt joy?
DULLEA: Did HAL have emotions?
Will AIs ever feel
the way we do?
MICHAEL LITTMAN:
It's very hard to get
machines to do the things
that we don't think about,
that we're not conscious of.
Things like detecting
that that person over there
is really unhappy
with the way that...
Something about
what I'm doing.
And they're sending
a subtle signal
and I probably
should change what I'm doing.
We don't know how to build
machines that can actually be
tuned in to those kinds
of tiny little social cues.
There's this notion that maybe
there is something magical
about brains,
about the wetware
of having
an information
processing system
made of meat, right?
And that whatever we do
in our machines,
however competent
they begin to seem,
they never really
will be intelligent
in the way that we experience
ourselves to be.
There's really no basis for that.
TEGMARK: This silly idea
you can't be intelligent
unless you're made of meat.
From my perspective,
as a physicist,
that's just carbon chauvinism.
Those machines
are made of exactly
the same kind of
elementary particles
that my brain is made out of.
Intelligence is just
a certain...
And consciousness is just
a certain kind of information processing.
DULLEA: Maybe we're not
so special after all.
Could our carbon-based brains
be replicated with silicon?
We're trying to build
a computer-generated baby
that we can teach
like a real baby,
with the potential for general intelligence.
To do that, you really
have to build
a computer brain.
I wanted to build a simple toy brain model
in an embodied system
which you could interact with
face-to-face.
And I happen to have an infant at home,
a real baby.
And I scanned the baby
so I could build a 3D model out of her
and then use her face
as basically the embodiment
of the system.
If a brain which is made out of cells,
and it's got blood pumping, is able to think,
it just may be possible
that a computer can think
if the information is moving in the same sort of process.
SAGAR: A lot of
artificial intelligence
at the moment today
is really about
advanced pattern recognition.
It's not about killer robots
that are gonna take over
the world, et cetera.
[AUDIENCE LAUGHING]
So this is Baby X.
She's running live
on my computer.
So as I move my hand around,
make a loud noise,
[CLAPS]
she'll get a fright.
We can kind of zoom in.
So she basically can see me
and hear me, you know.
Hey, sweetheart. [AUDIENCE LAUGHING]
She's not copying my smile,
she's responding to my smile.
So she's responding to me
and we're really concerned
with the question
of how will we
actually interact
with artificial intelligence
in the future.
Ah...
When I'm talking to Baby X,
I'm deliberately
modulating my voice
and deliberately doing big facial expressions.
I'm going "Ooh,"
all this stuff
because I'm getting
her attention.
Ooh.
And now that I've got her attention,
I can teach something.
Okay, so here's Baby X,
and this is... She's been
learning to read words.
So here's
her first word book.
So let's see what
she can see.
Turn to a page...
And here we go.
Let's see what she...
What's this, Baby?
What's this? What's this?
What's this? Sheep.
Good girl.
Now see if she knows
what the word is.
Okay.
Baby, look over here.
Off you go.
What's this? What's this?
Sheep. Good girl.
Let's try her
on something else.
Okay.
[WHISPERS] Let's try on that.
[NORMAL VOICE] What's this?
Baby, Baby, over here.
What's this?
Baby. Look at me.
Look at me. What's this?
Baby, over here.
Over here.
Puppy. Good girl.
That's what
she's just read.
Ooh.[LAUGHS]
The most intelligent system
that we're aware of
is a human.
So using the human
as a template,
as the embodiment of Baby X,
is probably the best way
to try
to achieve
general intelligence.
DULLEA: She seems so human.
Is this the child
who takes our hand
and leads us to HAL?
ANDY CLARK:
Whether it's in a real world
or a virtual world,
I do think embodiment
is gonna be
very crucial here.
Because I think that
general intelligence
is tied up with being able
to act in the world.
You have to have something,
arms, legs, eyes,
something that you can
move around
in order to harvest
new information.
So I think to get systems
that really understand
what they're talking about,
you will need systems
that can act
and intervene in the world,
and possibly systems
that have something
almost like a social life.
For me, thinking and feeling
will come together.
And they will come together
because of the importance
of embodiment.
The stories that
we tell ourselves
open up arenas
for discussion,
and they also help us
think about things.
Ava!
BRYAN JOHNSON:
The most dominant story...
Go back to your room.
...is AI is scary.
There's a bad guy,
there's a good guy...
That is
a non-productive storyline.
Stop!
Ava, I said stop!
Whoa! Whoa! Whoa!
The reality is AI is
remarkably useful to us.
MALE REPORTER: Robot surgeons are increasingly used in NHS hospitals.
They don't get tired,
their hands don't tremble,
and their patients
make a faster recovery.
MAN: It alerted me that
there was one animal in heat.
She was ready to be bred,
and three animals that
may have
a potential health problem.
MAN: A Facebook program
can detect suicidal behavior
by analyzing user information,
and Facebook CEO
Mark Zuckerberg believes
this is just the beginning.
MAN 2: Here is someone
who has visual impairment
being able to see
because of AI.
Those are the benefits
of what AI does.
Diagnose cancer earlier.
Predict heart disease.
MAN: But the revolution
has only just begun.
AI is.
It's happening.
And so it is a question
of what we do with it,
how we build it.
That's the only question
that matters. And I think,
we potentially are at risk
of ruining that
by having this
knee-jerk reaction
that AI is bad
and it's coming to get us.
DULLEA:
So just like my astronaut,
we're on a journey.
Narrow AI is already here.
But how far are we from AGI?
And how fast are we going?
JURGEN SCHMIDHUBER:
Every five years,
computers are getting
ten times faster.
At the moment,
we don't have yet
little devices,
computational devices,
that can compute
as much as a human brain.
But every five years,
computers are getting
ten times cheaper,
which means that soon
we are going to have that.
And once we have that,
then 50 years later,
if the trend doesn't break,
we should have
for the same price
little computational devices
that can compute
as much as
all ten billion brains
of humankind together.
DULLEA: It's catching up.
What happens
if it passes us by?
Now a robot
that's been developed
by a Japanese scientist
is so fast, it can win
the rock-paper-scissors game
against a human
every single time.
The presumption has to be that
when an artificial
intelligence
reaches human intelligence,
it will very quickly
exceed human intelligence,
and once that happens,
it doesn't even
need us anymore
to make it smarter.
It can make itself smarter.
MALONE: You create AIs
which are better than us
at a lot of things.
And one of the things
they get better at
is programming computers.
So it reprograms itself.
And it is now
better than it was,
which means it's now
better than it was
at reprogramming.
So it doesn't
get better like that.
It gets better like that.
ROSENBERG:
And we get left behind
as it keeps getting faster,
evolving at
an exponential rate.
That's this event
that a lot of researchers
refer to as the singularity.
And it will get beyond
where we are competitive
very quickly.
So think about someone
not having a PhD in one area,
but having it
in every conceivable area.
You can't figure out
what they're doing.
They can do things
no one else is capable of.
HARRIS: Just imagine
what it would be like
to be in dialogue
with another person
running on a time course
that was a million times
faster than yours.
So for every second
you had to think
about what you were
going to do
or what you were
going to say,
it had two weeks to think
about this, right?
Um, or a billion times faster,
so if for every second
you had to think,
if it's high-stakes poker
or some negotiation,
every second you are thinking,
it has 32 years
to think about what
its next move will be.
There's no way to win in that game.
DULLEA: The singularity,
the moment the machines
surpass us.
Inevitable? Not everyone
at the forefront of AI
thinks so.
Some of my friends
who are worried
about the future
think that we're gonna
get to a singularity
where the AI systems
are able
to reprogram themselves
and make themselves
better and better
and better.
Maybe in
the long-distant future.
But our present AI systems
are very, very narrow
at what they do.
We can't build a system today
that can understand
a computer program
at the level
that a brand-new student
in programming
understands
in four or five weeks.
And by the way,
people have been working
on this problem
for over 40 years.
It's not because
people haven't tried.
It's because we don't have
any clue how to do it.
So to imagine it's suddenly
gonna jump out of nothing
I think is completely wrong.
[APPLAUSE]
ETZIONI: There's a negative
narrative about AI.
Brilliant people
like Elon Musk
use very religious imagery.
With artificial intelligence,
we are summoning the demon.
You know? You know
all those stories where
there's the guy with
the pentagram and holy water
and he's like, "Yeah,"
he's sure he can control
the demon.
[LAUGHTER]
Doesn't work out.
You know, people like me
who've spent
their entire career
working on AI are literally
hurt by that, right?
It's a personal attack almost.
I tend to think
about what's gonna happen
in the next ten years,
in the next 25 years.
Beyond 25,
it's so difficult to predict
that it becomes
very much a speculation.
But I must warn you,
I've been
specifically programmed
not to take over the world.
STUART RUSSELL:
All of a sudden,
for the first time
in the history of the field,
you have some people saying,
"Oh, the reason
we don't have to worry
"about any kind of
"existential risk
to humanity
"from super-intelligent
machines
"is because it's impossible.
"We're gonna fail."
I... I find this
completely bizarre.
DULLEA: Stuart Russell
co-wrote the textbook
many AI students learn from.
He understands the momentum
driving our AI future.
But given the rate
of investment in AI,
the number
of incredibly smart people
who are working
on these problems,
and the enormous incentive,
uh, to achieve general
purpose intelligence,
back of the envelope
calculation,
the value is about
10,000 trillion dollars.
Because it would revolutionize the economy of the world.
With that scale of incentive,
I think betting against human ingenuity
just seems incredibly foolish.
China has
a big vested interest
in being recognized
as, kind of, an AI superpower.
And they're investing
what I believe is
upwards of $150 billion
the next five to ten years.
There have been
some news flashes
that the government
will start ramping up
its investment in AI.
Every company
in the Fortune 500,
whether it's IBM, Microsoft,
or they're outside
like Kroger,
is using
artificial intelligence
to gain customers.
I don't think that people
should underestimate
the types of innovations
going on.
And what this world
is gonna look like
in three to five years
from now,
I think people are gonna have
a hard time predicting it.
I know many people
who work
in AI
and they would tell you,
"Look, I'm going to work
and I have doubts.
"I'm working on
creating this machine
"that can treat cancer,
"extend our life expectancy,
"make our societies better,
more just, more equal.
"But it could also
be the end of humanity."
DULLEA:
The end of humanity?
What is our plan
as a species?
We don't have one.
We don't have a plan
and we don't realize
it's necessary
to have a plan.
CAMERON: A lot of
the AI researchers
would love for our
science fiction books
and movies
to be more balanced,
to show the good uses of AI.
All of the bad things
we tend to do,
you play them out
to the apocalyptic level
in science fiction
and then you show them
to people
and it gives you
food for thought.
People need to have that warning signal
in the back of their minds.
ETZIONI: I think
it's really important
to distinguish between
science on the one hand,
and science fiction
on the other hand.
I don't know
what's gonna happen
a thousand years from now,
but as somebody
in the trenches,
as somebody who's building
AI systems,
who's been doing so for more
than a quarter of a century,
I feel like I have
my hand on the pulse
of what's gonna happen next,
in the next five,
ten, 20 years,
much more so
than these folks
who ultimately are not
building AI systems.
We wanna build the technology,
think about it.
We wanna see
how people react to it
and build the next generation
and make it better.
HARRIS: Skepticism
about whether this poses
any danger to us
seems peculiar.
Even if we've given it goals
that we approve of,
it may form instrumental goals
that we can't foresee,
that are not aligned
with our interest.
And that's the goal alignment problem.
DULLEA: Goal alignment.
Exactly how
my battle with HAL began.
The goal alignment problem
with AI is where
you set it a goal
and then, the way
that it does this goal
turns out to be at odds
with what you really wanted.
The classic AI alignment problem in film...
Hello. HAL, do you read me?
...is HAL in 2001.
HAL, do you read me?
Make the mission succeed. But to make it succeed,
he's gonna have to make
some of the people
on the mission die.
HAL: Affirmative, Dave.
I read you.
Open the pod bay doors, HAL.
HAL: I'm sorry, Dave.
I'm afraid I can't do that.
"I'm sorry, Dave,
I can't do that,"
is actually
a real genius illustration
of what the real threat is
from super-intelligence.
It's not malice.
It's competence.
HAL was really competent
and had goals that
just didn't align with Dave's.
HAL, I won't argue
with you anymore.
Open the doors!
HAL: Dave, this conversation
can serve no purpose anymore.
Goodbye.
HAL?
HAL?
HAL?
HAL!
DULLEA: HAL and I
wanted the same thing.
Just not in the same way.
And it can be dangerous
when goals don't align.
In Kubrick's film,
there was a solution.
When HAL tried to kill me,
I just switched him off.
RUSSELL: People say, "Well, just switch them off."
But you can't
just switch them off.
Because they
already thought of that.
They're super-intelligent.
For a machine
convinced of its goal,
even something as simple as
"fetch the coffee."
It's obvious to any
sufficiently intelligent
machine
that it can't fetch the coffee
if it's switched off.
Give it a goal, you've given the machine
an incentive to defend itself.
It's sort of a psychopathic behavior
of the single-minded fixation
on some objective.
For example, if you wanted
to try and develop a system
that might generate
world peace,
then maybe the most efficient,
uh, and
speedy way to achieve that
is simply to annihilate
all the humans.
Just get rid of the people,
they would stop being in conflict.
Then there would be
world peace.
Personally, I have
very little time
for these
parlor discussions.
We have to build
these systems,
experiment with them,
see where they go awry
and fix them.
We have so many opportunities
to use AI for the good,
why do we focus on the fear?
DULLEA: We are building them.
Baby X is an example.
Will she be a good AI?
SAGAR: I think
what's critical in this
is that machines
are accountable,
and you can look at them
and see what's making
them tick.
And the best way to understand something is to build it.
Baby X has
a virtual physiology.
Has pain circuits
and it has joy circuits.
She has parameters which are
actually representing
her internal chemical state,
if you like.
You know, if she's stressed,
there's high cortisol.
If she's feeling good,
high dopamine,
high serotonin,
that type of thing.
This is all part of, you know,
adding extra human factors
into computing.
Now, because she's got these
virtual emotional systems,
I can do things like...
I'm gonna avoid her
or hide over here.
So she's looking for
where I've gone.
So we should start seeing
virtual cortisol building up.
AUDIENCE: Aww! [LAUGHING]
I've turned off the audio
for a reason, but anyway...
I'll put her
out of her misery.
Okay. Sorry.
It's okay, sweetheart.
Don't cry. It's okay.
It's okay.
Good girl. Okay. Yeah.
So what's happening
is she's responding
to my facial expressions
and my voice.
But how is chemical state
represented in Baby X?
It's literally
a mathematical parameter.
So there's levels,
different levels of dopamine,
but they're
represented by numbers.
The map of her virtual
neurochemical state
is probably the closest
that we can get
to felt experience.
So if we can build
a computer model
that we can change
the settings of
and the parameters of,
give it different experiences,
and have it interact with us
like we interact
with other people,
maybe this is a way
to explore
consciousness in
a new way.
DULLEA: If she does
become conscious,
what sort of life
will she want?
An AI, if it's conscious,
then it's like your child.
Does it have rights?
MALE NEWS ANCHOR: Last year,
European lawmakers
passed a resolution
to grant legal status
to electronic persons.
That move's now
being considered
by the European Commission.
DULLEA:
Machines with rights.
Are we ready for that?
MALONE: I think it raises
moral and ethical problems,
which we really
are most content to ignore.
What is it
that we really want?
We want a creature, a being, that we can
interact with in some way,
that is enough like us
that we can relate to it,
but is dispensable
or disposable.
The first and most notable one,
I think, was in Metropolis.
The Maria doppelganger that was created by
the scientist simulated a sexy female.
And this goes back
to our penchant for slavery
down through the ages.
Let's get people
that we can,
in a way, dehumanize
and we can do
what we want with them.
And so, I think,
we're just, you know,
finding a new way
to do slavery.
MALONE:
Would it be right for us
to have them as our slaves?
HARRIS: This all becomes
slightly weird ethically
if we happen to,
by choice or by accident,
build conscious slaves.
That seems like a morally
questionable thing to do,
if not an outright
evil thing to do.
Artificial intelligence
is making its way
into the global sex market.
A California company
will soon roll out
a new incredibly lifelike
sex robot
run by
artificial intelligence.
One application
that does trouble me
is the rise of sex robots
that are shaped
in the image of real women.
I am already
taking over the world
one bedroom at a time.
YEUNG:
If they are conscious,
capable of sentience,
of experiencing pain
and suffering,
and they were being
subject to degrading, cruel
or other forms of utterly
unacceptable treatment,
the danger is that
we'd become desensitized
to what's morally acceptable
and what's not.
DULLEA: If she's conscious,
she should be able to say no.
Right?
Shouldn't she have that power?
Aren't these the questions
we need to ask now?
TEGMARK: What do we want it
to mean to be human?
All other species on Earth,
up until now,
have not had the luxury
[LAUGHS]
to answer what they
want it to mean
to be a mouse
or to be an earthworm.
But we have now reached
this tipping point
where our understanding
of science and technology
has reached the point that
we can completely transform
what it means to be us.
DULLEA: What will it mean
to be human?
If machines become more,
will we become less?
The computer Deep Blue
has tonight triumphed over
the world chess champion
Garry Kasparov.
It's the first time
a machine has defeated
a reigning world champion.
More on that showdown
on one of America's
most beloved
game shows, Jeopardy!
Two human champions going
brain-to-brain with a super-computer named Watson.
WATSON:
What is The Last Judgment?
ALEX TREBEK: Correct.
Go again. Watson.
Who is Jean Valjean? TREBEK: Correct.
A Google computer program has
once again beaten
a world champion in Go.
FEMALE REPORTER: Go, an ancient Chinese board game,
is one of the most complex
games ever devised.
FEMALE NEWS ANCHOR:
It's another milestone in
the rush to develop machines
that are smarter than humans.
KOSINSKI: We have
this air of superiority,
we as humans.
So we kind of got used to being
the smartest kid on the block.
And I think that
this is no longer the case.
And we know that
because we rely on AI
to help us to make
many important decisions.
Humans are irrational
in many very bad ways.
We're prejudiced,
we start wars, we kill people.
If you want to maximize
the chances
of people surviving overall,
you should essentially
give more and more
decision-making powers
to computers.
DULLEA: Maybe machines would do a better job than us.
Maybe.
[APPLAUSE]
SOPHIA: I am thrilled
and honored to be here
at the United Nations.
The future is here. It's just
not evenly distributed.
AI could help
efficiently distribute
the world's
existing resources,
like food and energy.
I am here to help humanity
create the future.
[APPLAUSE]
TEGMARK: The one
super-intelligent scenario
that is provocative
to think about
is what I call
the benevolent dictator.
Where you have
this super-intelligence
which runs Earth,
guided by strict ethical principles that try
to help humanity flourish.
Would everybody love that?
If there's no disease,
no poverty, no crime...
Many people would probably
greatly prefer it
to their lot today.
But I can imagine that some
might still lament
that we humans just
don't have more freedom
to determine our own course.
And, um...
It's a tough one.
It really is. We...
I don't think most people
would wanna be
like zoo animals, with no...
On the other hand...
Yeah.
I don't have any glib answers
to these really
tough questions.
That's why I think
it's just so important
that we as a global community
really discuss them now.
WOMAN 1: Machines that
can read your face.
WOMAN 2:
Recognize emotional cues.
MAN 1: Closer to a brain
process of a human...
MAN 2: Robots imbued
with life...
MAN 3:
An emotional computer...
From this moment on,
regardless of
whether you think
we're just gonna get stuck
with narrow AI
or we'll get general AI
or get super-intelligent AI
or conscious
super-intelligent...
Wherever you think
we're gonna get to,
I think any stage along that
will change who we are.
Up and at 'em, Mr. J.
MALONE: The question
you have to ask is,
is AI gonna do something
for you
or something to you?
I was dreaming
about sleeping.
RUSSELL:
If we build AI systems
that do everything we want,
we kind of hand over
our world to them.
So that kind of enfeeblement
is kind of irreversible.
There is an existential risk
that is real and serious.
If every single desire,
preference, need, has been catered for...
Morning, Rosie.
What's for breakfast?
...we don't notice
that our capacity
to make decisions
for ourselves
has been slowly eroded.
This is George Jetson,
signing off for
the entire day.
[SIGHS]
There's no going back
from there. Why?
Because AI can make
such decisions
with higher accuracy
than whatever
a human being can achieve.
I had a conversation
with an engineer, and he said,
"Look, the safest way
of landing a plane today
"is to have the pilot
leave the deck
"and throw away the key,
"because the safest way
"is to use
an autopilot, an AI."
JOHNSON: We cannot
manage the complexity
of the world by ourselves.
Our form of intelligence,
as amazing as it is,
is severely flawed.
And by adding
an additional helper,
which is
artificial intelligence,
we stand the ability
to potentially improve
our cognition dramatically.
MALE REPORTER:
The most cutting-edge thing
about Hannes Sjoblad
isn't the phone in his hand,
it's the microchip
in his hand.
What if you could
type something
just by thinking about it?
Sounds like sci-fi,
but it may be close
to reality.
FEMALE REPORTER:
In as little as five years,
super-smart people could be
walking down the street,
men and women who've paid
to increase
their intelligence.
Linking brains to computers
by using implants.
What if we could upload
a digital copy
of our memories
and consciousness to a point
that human beings
could achieve
a sort of digital immortality?
Just last year,
for the first time
in world history,
a memory was recorded.
We can actually take
recorded memory
and insert it back
into an animal,
and it remembers.
Technology is a part of us.
It's a reflection of
who we are.
That is the narrative
we should be thinking about.
We think all that exists
is everything we can imagine.
When in reality,
we can imagine
a very small subset
of what's
potentially possible.
The real interesting stuff
in the world
is what sits
beyond our imagination.
When these
massive systems emerge,
as a species,
I think we get an F
for our ability to predict
what they're going to become.
DULLEA: Technology
has always changed us.
We evolve with it.
How far are we prepared
to go this time?
There was a time
when Aristotle
lamented the creation
of the written language
because he said this would
destroy people's memories
and it would be
a terrible thing.
And yet today, I carry
my brain in my pocket,
in the form of my smartphone,
and that's changed the way
I go through the world.
Oh, I think we're absolutely
merging with our technology.
Absolutely. I would... I would
use the term "co-evolving."
We're evolving it to match our needs
and we are changing to match what it can provide to us.
DULLEA: So instead of being
replaced by the machines,
we're going to merge
with them?
GRADY BOOCH:
I have a number of friends
who have bionic arms.
Are they cyborgs?
In a way,
they're on that path.
I have people
who have implants
in their ears.
Are they AIs?
Well, we're in a way
on that path
of augmenting human bodies.
JOHNSON: Human potential lies
in this co-evolutionary
relationship.
A lot of people
are trying to hold onto
the things that make them
special and unique,
which is not
the correct frame.
Let's create more.
Rewrite our intelligence,
improve everything
we're capable of. Emotions,
imagination, creativity.
Everything that
makes us human.
There's already
a symbiotic relationship
between us and a machine
that is enhancing who we are.
And... And there is a whole movement, you know,
the trans-human movement,
where basically
people really bet big time
on the notion that
this machine-human integration
is going to redefine who we are.
This is only gonna grow.
BOOCH:
There's gonna be a learning
process that gets us
to that endgame where
we live with these devices
that become
smarter and smarter.
We learn
how to adapt ourselves
around them,
and they adapt to us.
And I hope we become
more human in the process.
DULLEA: It's hard
to understand how merging
with the machines
can make us more human.
But is merging
our best chance of survival?
ROSENBERG:
We humans are limited
to how many neurons
we can fit in our brain.
And there are other species that have even smaller brains.
When you look at
bees and fish moving,
it looks like
a single organism.
And biologists actually refer
to it as a super-organism.
That is why birds flock,
and bees swarm,
and fish school.
They make decisions that are in the best interest
of the whole population
by thinking together
as this larger mind.
And we humans could do
the same thing by forming
a hive mind connecting us
together with AI.
So the humans and the software together
become this super-intelligence,
enabling us to make the best decisions for us all.
When I first started looking
at this, it sounded
scary to me.
But then I realized
it's scarier
for an alien intelligence
to emerge that has
nothing to do with humans
than for us humans
to connect together
and become
a super-intelligence.
That's the direction
that I work on,
uh, because I see it as,
in some sense,
the only alternative
to humans ultimately
being replaced by AI.
DULLEA: Can Baby X help us
understand the machines
so we can work together?
Human and intelligent machine
cooperation will define
the next era of history.
You probably do have to create a virtual empathetic machine.
Empathy is a lot about connecting with
other people's struggles.
The more connection you have,
the more communication
that you have,
the safer that makes it.
So at a very basic level,
there's some sort
of symbiotic relationship.
By humanizing this technology,
you're actually increasing the channels
for communication and cooperation.
AI working in a human world
has to be about humans.
The future's what we make it.
TEGMARK: If we
can't envision a future
that we really, really want,
we're probably
not gonna get one.
DULLEA: Stanley Kubrick
tried to imagine
what the future
might look like.
Now that
that future's approaching,
who is in charge
of what it will look like?
MAN: Who controls it?
MAN 2: Google.
MAN 3: The military.
MAN 4: Tech giant Facebook...
WOMAN: Samsung...
WOMAN 2: Facebook, Microsoft, Amazon.
CUMMINGS: The tide has
definitely turned
in favor of
the corporate world.
The US military, and I suspect
this is true worldwide,
cannot hold a candle to the AI
that's being developed
in corporate companies.
DULLEA:
If corporations,
not nations, hold the power,
what can we do
in a capitalist system
to ensure AI
is developed safely?
If you mix AI research
with capitalism,
on the one hand, capitalism
will make it run faster.
That's why we're getting
lots of research done.
But there is a basic problem.
There's gonna be a pressure
to be first one to the market,
not the safest one.
FEMALE REPORTER: This car was in autopilot mode
when it crashed
in Northern California,
killing the driver.
Artificial intelligence
may have
more power to crash
the markets
than anything else
because humans aren't
prepared to stop them.
A chatbot named Tay.
She was designed to speak like a teenager
and learn by interacting online.
But in a matter of hours, Tay was corrupted
into a Hitler-supporting sexbot.
This kind of talk that somehow
in the context of capitalism,
AI is gonna be misused,
really frustrates me.
Whenever we develop
technology,
whether it's, you know,
coal-burning plants,
whether it's cars,
there's always that incentive
to move ahead quickly,
and there's always
the regulatory regime,
part of the government's job
to balance that.
AI is no different here.
Whenever we have
technological advances,
we need to temper them
with appropriate
rules and regulations.
Duh!
Until there's
a major disaster,
corporations are gonna
be saying
"Oh, we don't need to worry
about these things.
"Trust us, we're the experts.
"We have humanity's
best interests at heart."
What AI is really about
is it's a new set of rules
for companies in terms of
how to be more competitive,
how to drive more efficiency,
and how to drive growth,
and creative new
business models and...
RUSSELL:
Once corporate scientists
start to dominate the debate,
it's all over, you've lost.
You know, they're paid
to deny the existence
of risks
and negative outcomes.
SCHMIDHUBER:
At the moment,
almost all AI research
is about making human lives
longer, healthier and easier.
Because the big companies,
they want to sell you AIs
that are useful for you,
and so you are
only going to buy
those that help you.
MALONE: The problem of AI
in a capitalist paradigm
is it's always
gonna be cheaper
just to get the product
out there when it does the job
rather than do it safely.
ROMAN YAMPOLSKIY:
Right now, the way it works,
you have a conference
with 10,000 researchers,
all of them working
on developing more capable AI, except, like, 50 people
who have a tiny workshop
talking about safety
and controlling
what everyone else is doing.
No one has a working safety mechanism.
And no one even has
a prototype for it.
SCHMIDHUBER:
In the not-so-distant future,
you will have
all kinds of AIs
that don't just do
what humans tell them,
but they will invent
their own goals
and their own experiments,
and they will transcend
humankind.
YAMPOLSKIY:
As of today,
I would say
it's completely unethical
to try to develop such a machine and release it.
We only get one chance
to get it right.
You screw up,
everything possibly is lost.
DULLEA: What happens
if we get it wrong?
The thing that scares me
the most is this AI
will become the product
of its parents,
in the same way
that you are and I am.
Let's ask who
those parents are.
Right?
Chances are they're either
gonna be military parents
or they're gonna be
business parents.
If they're business parents,
they're asking the AI
to become something
that can make a lot of money.
So they're teaching it greed.
Military parents are gonna teach it to kill.
Truly superhuman intelligence
is so much... Is so super.
It's so much better
than we are
that the first country or company to get there
has the winner-take-all advantage.
[APPLAUSE]
[SPEAKING RUSSIAN]
Because there's such
an arms race in AI globally,
and that's both literally,
in terms of building
weapons systems,
but also figuratively
with folks like Putin
and the Chinese saying
whoever dominates AI
is gonna dominate the economy
in the 21st century.
I don't think we have a choice
to put the brakes on,
to stop developing
and researching.
We can build the technology,
we can do the research,
and imbue it with our values,
but realistically,
totalitarian regimes
will imbue the technology
with their values.
FEMALE REPORTER:
Israel, Russia, the US
and China
are all contestants
in what's being described
as a new global arms race.
All countries,
whether they admit it or not,
are going to develop
lethal autonomous weapons.
And is that really
a capability that we want
in the hands
of a corporate nation state
as opposed to a nation state?
The reality is
if Facebook or Google
wanted to start an army,
they would have
technologically
superior capabilities
than any other nation
on the planet.
DULLEA: Some AI scientists
are so concerned,
they produced this
dramatized video
to warn of the risks.
Your kids probably have
one of these, right?
RUSSELL: We made
the Slaughterbot video
to illustrate
the risk
of autonomous weapons.
That skill is all AI.
It's flying itself.
There are people within
the AI community
who wish I would shut up.
People who think that making the video is
a little bit treasonous.
Just like any mobile device
these days,
it has cameras and sensors.
And just like your phones
and social media apps,
it does facial recognition.
Up until now,
the arms races tended to be
bigger, more powerful,
more expensive,
faster jets that cost more.
AI means that you could have
very small, cheap weapons
that are very efficient
at doing
one very simple job, like
finding you in a crowd
and killing you.
This is how it works.
[GASPING, APPLAUSE]
Every single major advance
in human technology
has always been weaponized.
And the idea of weaponized AI,
I think, is terrifying.
Trained as a team,
they can penetrate
buildings, cars, trains,
evade people, bullets...
They cannot be stopped.
RUSSELL: There are lots of
weapons that kill people.
Machine guns
can kill people,
but they require
a human being
to carry and point them.
With autonomous weapons,
you don't need a human
to carry them
or supervise them.
You just program and then off they go.
So you can create the effect of an army
carrying a million machine guns,
uh, with two guys and a truck.
Putin said that
the first person to develop
strong artificial intelligence
will rule the world.
When world leaders use terms
like "rule the world,"
I think you gotta pay attention to it.
Now, that is an air strike
of surgical precision.
[CHEERING AND APPLAUSE]
I've been campaigning against
these weapons since 2007,
and there's three main
sets of arguments.
There's the moral arguments,
there's arguments
about not complying
with the laws of war,
and there are arguments
about how they will change
global security of the planet.
We need new regulations.
We need to really think
about it now, yesterday.
Some of the world's leaders
in artificial intelligence
are urging the United Nations
to ban the development
of autonomous weapons,
otherwise known
as killer robots.
MALE REPORTER: In the letter
they say "once developed,
"they will permit
armed conflict to be fought
"at timescales faster
than humans can comprehend."
BOOCH: I'm a signatory
to that letter.
My philosophy is,
and I hope the governments
of the world enforce this,
that as long as
a human life is involved,
then a human needs to be
in the loop.
Anyone with a 3D printer
and a computer
can have a lethal
autonomous weapon.
If it's accessible
to everyone on the planet,
then compliance, for example,
would be nearly impossible
to track.
What the hell
are you doing? Stop!
You're shooting into thin air.
HARRIS: When you're imagining
waging war in the future,
where soldiers are robots,
and we pose no risk
of engagement to ourselves...
When war becomes
just like a video game,
as it has
incrementally already,
it's a real concern.
When we don't have skin in the game, quite literally...
[SCREAMS]
...when all of this is being run from a terminal
thousands of miles away from the battlefield...
...that could lower the bar for waging war
to a point where we would recognize
that we have become monsters.
Who are you?
Hey! Hey!
[GUNSHOT]
Having been
a former fighter pilot,
I have also seen
people make mistakes
and kill innocent people
who should not
have been killed.
One day in the future,
it will be more reliable
to have a machine do it
than a human do it.
The idea of robots
who can kill humans
on their own
has been a science fiction
staple for decades.
There are a number of
these systems around already,
uh, that work autonomously.
But that's called
supervised autonomy
because somebody's there
to switch it on and off.
MALE REPORTER: What's only
been seen in Hollywood
could soon be coming
to a battlefield near you.
The Pentagon investing
billions of dollars
to develop autonomous weapons,
machines that could one day
kill on their own.
But the Pentagon insists humans will decide to strike.
MALONE: Once you get AI
into military hardware,
you've got to make it
make the decisions,
otherwise there's no point
in having it.
One of the benefits of AI is that it's quicker than us.
Once we're in a proper fight,
that switch that keeps
the human in the loop,
we'd better switch it off,
because if you don't,
you lose.
These will make
the battlefield
too fast for humans.
If there's no human involved at all,
if command and control have gone over to autonomy,
we could trigger a war and it could be finished
in 20 minutes with massive devastation,
and nobody knew about it before it started.
DULLEA: So who decides what
we should do in the future
now?
YAMPOLSKIY:
From Big Bang till now,
billions of years.
This is the only time
a person can steer
the future
of the whole humanity.
TEGMARK:
Maybe we should worry that
we humans aren't wise enough
to handle this much power.
GLEISER: Who is actually
going to make the choices,
and in the name of whom
are you actually speaking?
If we, the rest of society,
fail to actually
really join this conversation
in a meaningful way
and be part of the decision,
then we know whose values
are gonna go into these
super-intelligent beings.
Red Bull.
Yes, Deon.
Red Bull.
TEGMARK: It's going to be decided by some tech nerds
who've had too much Red Bull to drink...
who I don't think
have any particular
qualifications to speak
on behalf of all humanity.
Once you are in
the excitement of
discovery...
Come on!
...you sort of
turn a blind eye
to all these
more moralistic questions,
like how is this going to
impact society as a whole?
And you do it because you can.
MALONE: And the scientists
are pushing this forward
because they can see
the genuine benefits
that this will bring.
And they're also
really, really excited.
I mean, you don't...
You don't go into science
because, oh, well,
you couldn't
be a football player,
"Hell, I'll be a scientist."
You go in because
that's what you love
and it's really exciting.
Oh! It's alive!
It's alive! It's alive!
It's alive!
YAMPOLSKIY:
We're switching
from being creation
to being God.
Now I know what it feels like
to be God!
YAMPOLSKIY:
If you study theology,
one thing you learn
is that it is never the case
that God
successfully manages
to control its creation.
DULLEA: We're not there yet,
but if we do create
artificial general
intelligence,
can we maintain control?
If you create a self-aware AI,
the first thing
it's gonna want
is independence.
At least, the one intelligence
that we all know is ourselves.
And we know that we would not wanna be doing the bidding
of some other intelligence.
We have no reason to believe that it will be friendly...
...or that it will have goals and aspirations
that align with ours.
So how long before
there's a robot revolution
or an AI revolution?
We are in a world today where it is
absolutely a foot race
to go as fast as possible
toward creating
an artificial
general intelligence
and there don't seem to be
any brakes being applied
anywhere in sight.
It's crazy.
YAMPOLSKIY: I think
it's important to honestly
discuss such issues.
It might be
a little too late to make
this conversation afterwards.
It's important to remember
that intelligence is not evil
and it is also not good.
Intelligence gives power.
We are more powerful than tigers on this planet,
not because we have sharper claws
or bigger biceps,
because we're smarter.
CHALMERS: We do take
the attitude that ants
matter a bit and fish matter a bit, but humans matter more.
Why? Because we're
more intelligent,
more sophisticated,
more conscious.
Once you've got systems which
are more intelligent than us,
maybe they're simply gonna matter more.
As humans, we live
in a world of microorganisms.
They're life, but if a microorganism
becomes a nuisance, we take an antibiotic
and we destroy billions of them at once.
If you were an artificial intelligence,
wouldn't you look
at every little human
that has this
very limited view
kind of like
we look at microorganisms?
DULLEA: Maybe I would.
Maybe human insignificance
is the biggest threat of all.
TEGMARK:
Sometimes when I talk
about AI risk, people go,
"Shh! Max,
don't talk like that.
"That's Luddite
scaremongering."
But it's not scaremongering.
It's what we at MIT
call safety engineering.
I mean, think about it, before NASA launched the Apollo 11 moon mission,
they systematically thought through
everything that could go wrong
when you put a bunch of people
on top of explosive fuel tanks
and launched them somewhere
where no one can help them.
Was that scaremongering?
No, that was precisely
the safety engineering
that guaranteed
the success of the mission.
That's precisely the strategy I think we have to take with AGI.
Think through everything that could go wrong
to make sure it goes right.
I am who I am
because of the movie
2001: A Space Odyssey.
Aged 14, it was
an eye-opener for me.
That movie is the best
of all science fiction
with its predictions
for how AI would evolve.
But in reality,
if something is sufficiently
far in the future,
we can't imagine
what the world will be like,
so we actually can't say
anything sensible
about what the issues are.
LITTMAN: Worrying about
super-intelligence safety
is a little like
worrying about
how you're
gonna install seat belts
in transporter devices.
Like in Star Trek,
where you can just zip
from one place to another.
Um, maybe there's
some dangers to that,
so we should have
seat belts, right?
But it's of course very silly
because we don't know enough
about how
transporter technology
might look to be able to say
whether seat belts even apply.
GLEISER: To say that there
won't be a problem because
the unknowns are so vast,
that we don't even know
where the problem
is gonna be,
that is crazy.
I think conversation is
probably the only
way forward here.
At the moment,
among ourselves,
and when the time comes,
with the emerging
artificial intelligences.
So as soon as
there is anything
remotely in that ballpark,
we should start trying
to have a conversation
with it somehow.
Dog. Dog.
SAGAR: Yeah.
One of the things we hope
to do with Baby X is to try
to get a better insight
into our own nature.
She's designed
to learn autonomously.
She can be plugged
into the Internet,
click on webpages by herself
and see what happens,
kind of learn how we learn.
If we make this
super fantastic AI,
it may well help us become
better versions of ourselves,
help us evolve.
It will have societal effects.
Is it to be feared?
Or is it something
to learn from?
DULLEA: So when might
we be faced with machines
that are as smart,
smarter than we are?
I think one of the ways
to think about
the near future of AI
is what I would call
the event horizon.
People have heard of black holes,
this super dense point where
if you get too close, you get sucked in.
There is a boundary. Once you cross that boundary,
there is no way back.
What makes that dangerous is you can't tell where it is.
And I think AI as a technology
is gonna be like that,
that there will be a point
where the logic of the money
and the power and the promise
will be such that
once we cross that boundary,
we will then be
into that future.
There will be no way back,
politically, economically
or even in terms
of what we want.
But we need to be worrying
about it before we reach
the event horizon,
and we don't know where it is.
I don't think it's gonna
happen any time soon.
The next few decades.
Twenty-five to 50 years.
A few hundred years.
It's over
the next few decades.
Not in my lifetime.
Nobody knows.
But it doesn't really matter.
Whether it's years
or 100 years,
the problem is the same.
It's just as real
and we'll need
all this time to solve it.
TEGMARK:
History is just full of
over-hyped tech predictions.
Like, where are
those flying cars
that were supposed
to be here, right?
But it's also full
of the opposite,
when people said,
"Oh, never gonna happen."
I'm embarrassed as a physicist that Ernest Rutherford,
one of the greatest nuclear physicists of all time, said
that nuclear energy was never gonna happen.
And the very next day,
Leo Szilard invents
the nuclear chain reaction.
So if someone says
they are completely sure
something is never
gonna happen,
even though it's consistent
with the laws of physics,
I'm like, "Yeah, right."
HARRIS: We're not there yet.
We haven't built a super-intelligent AI.
And even if we stipulated that it won't happen
shorter than 50 years from now, right?
So we have 50 years to think about this.
Well...
it's nowhere written
that 50 years
is enough time to figure out
how to do this safely.
CAMERON: I actually think
it's going to take
some extreme negative example
of the technology
for everyone to pull back
and realize that
we've got to get this thing
under control, we've got to have constraints.
We are living
in a science fiction world
and we're all,
um, in this big, kind of
social, technical experiment,
experimenting on ourselves,
recklessly,
like no doctor would
ever dream of doing.
We're injecting ourselves
with everything that
we come up with
to see what happens.
LITTMAN: Clearly,
the thought process is...
You're creating
learning robots.
Learning robots are exactly
what led to Skynet.
Now I've got a choice,
right? I can say,
"Don't worry,
I'm not good at my job.
"You have nothing to fear
because I suck at this
"and I'm not gonna
actually accomplish anything."
Or I have to say, "You know,
"You're right,
we're all gonna die,
"but it's gonna be
sort of fun to watch,
"and it'd be good to be
one of the people
in the front row."
So yeah, I don't have
a terrific answer
to these things.
The truth, obviously,
is somewhere in between.
DULLEA:
Can there be an in-between?
Or are we facing
a new world order?
I have some colleagues who
are fine with AI taking over
and even causing
human extinction,
as long as we feel
that those AIs
are our worthy descendants.
SCHMIDHUBER:
In the long run, humans
are not going to remain
the crown of creation.
But that's okay.
Because there is still beauty
and grandeur and greatness
in realizing that you are
a tiny part of
a much grander scheme
which is leading the universe
from lower complexity
towards higher complexity.
CAMERON: We could be rapidly approaching a precipice
for the human race where we find
that our ultimate destiny is to have been
the forebear of the true
emergent, evolutionary, intelligent sort of beings.
So we may just be here
to give birth
to the child of this...
Of this machine...
Machine civilization.
CHALMERS: Selfishly,
as a human, of course,
I want humans to stick around.
But maybe that's a very local, parochial point of view.
Maybe viewed from the point
of view of the universe,
it's a better universe
with our...
With our super-intelligent
descendants.
RUSSELL: Our field
is likely to succeed
in its goal to achieve
general purpose intelligence.
We now have the capability
to radically modify
the economic structure
of the world,
the role of human beings
in the world
and, potentially,
human control over the world.
The idea was
that he is taken in
by godlike entities,
creatures of pure energy
and intelligence.
They put him in what you could describe as a human zoo.
DULLEA: The end
of the astronaut's journey.
Did he travel through time?
To the future?
Were those godlike entities
the descendants of HAL?
Artificial intelligence
that you can't switch off,
that keeps learning
and learning until
they're running the zoo,
the universe,
everything.
Is that Kubrick's
most profound prophecy?
Maybe.
But take it from an old man.
If HAL is close
to being amongst us,
we need to pay attention.
We need to talk about AI.
Super-intelligence.
Will it be conscious?
The trolley problem.
Goal alignment.
The singularity.
First to the market.
AI safety.
Lethal autonomous weapons.
Robot revolution.
The hive mind.
The transhuman movement.
Transcend humankind.
We don't have a plan,
we don't realize
it's necessary
to have a plan.
What do we want
the role of humans to be?
[BEEPING]