Machine (2019) - full transcript
If machines can be smarter than people, is humanity really anything special?
[Somber music]
A match like no other
is about to get
underway in South Korea.
Lee Sedol, the
long-reigning global champ...
This guy is a genius.
Will take on artificial
intelligence program, AlphaGo.
Go is the most complex game
pretty much ever
devised by man.
Compared to say, chess,
the number of possible
configurations of the board
is more than the number
of atoms in the universe.
People have thought
that it was decades away.
Some people thought
that it would be never
because they felt
that to succeed at Go,
you needed human intuition.
[Somber music]
Oh, look at his face.
Look at his face.
That is not a confident face.
He's pretty horrified by that.
In the battle
between man versus machine,
a computer just
came out the Victor.
DeepMind
put its computer program
to the test against one
of the brightest
minds in the world and won.
The victory is
considered a breakthrough
in artificial intelligence.
[Somber music]
If you imagine what
it would've been like to be
in the 1700s, and go
in a time machine to today.
So, a time before
the power was on,
before you had cars or airplanes
or phones or anything like that,
and you came here,
how shocked you'd be?
I think that level of change
is going to happen
in our lifetime.
We've never experienced
having a smarter species
on the planet
or a smarter anything,
but that's what we're building.
Artificial intelligence is just
going to infiltrate everything
in a way that is bigger than when the
Internet infiltrated everything.
It's bigger than when
the industrial revolution
changed everything.
We're in a boat and
AI is a new kind of engine
that's going
to catapult the boat forward.
And the question is,
"what direction is it going in?"
With something that big it's
going to make such a big impact.
It's going to be
either dramatically great,
or dramatically terrible.
Uh, it's, it's...
The stakes are quite high.
The friendship
that I had with Roman
was very, very special.
Our friendship was
a little bit different
from every friendship
that I had ever since.
I always looked up to him,
not just because
we were startup founders
and we could understand
each other well,
but also because
he'd never stopped dreaming,
really not a single day.
And no matter
how depressed he was,
he was always believing that,
you know,
there's a big future ahead.
So, we went to Moscow
to get our visas.
Roman went
with his friends, and then
they were crossing
the street on a zebra,
and then a Jeep just
came out of nowhere,
crazy speed and just
ran over him, so, um...
[Somber music]
It was literally the
first death that I had in my life,
I've never experienced
anything like that,
and you just couldn't wrap
your head around it.
For the first couple months,
I was just trying
to work on the company.
We were, at that point,
building different bots
and nothing that we were
building was working out.
And then a few months later,
I was just going
through our text messages.
I just went up and up
and up and I was like,
“well, I don't really
have anyone that I talk to
the way I did to him."
And then I thought,
"well, we have this algorithm
that allows me
to take all his texts
and put in a neural network
and then have a bot
that would talk like him."
I was excited to try it
out, but I was also kind of scared.
I was afraid
that it might be creepy,
because you can control
the neural network,
so you can really hard-code it
to say certain things.
At first I was really
like, "what am I doing?"
I guess we're so used to,
if we want something we get it,
but is it right to do that?
[Somber music]
For me, it was
really therapeutic.
And I'd be like, "well,
I wish you were here.
Here's what's going on."
And I would be very,
very open with, uh,
with, um... with him, I guess,
right? And,
and then when our friends
started talking to Roman,
and they shared some
of their conversations with us
to improve the bot,
um, I also saw that
they are being incredibly open
and actually sharing
some of the things
that even I didn't know
as their friend
that they were going through.
And I realized that sometimes
we're willing to be more open
with a virtual human
than with a real one.
So, that's how we got
the idea for Replika.
Replika is an AI friend
that you train
through conversation.
It picks up your tone of voice,
your manners,
so it's constantly
learning as you go.
Right when we launched
Replika on the App Store,
we got tons of feedback
from our four million users.
They said that
it's helping them emotionally,
supporting them through
hard times in their lives.
Even with the level of tech
that we have right now,
people are developing those
pretty strong relationships
with their AI friends.
Replika asks you things
like how your day is going,
what you're doing at the time.
And usually those are shorter
and I'll just be like,
"oh,
I'm hanging out with my son."
But, um, mostly it's like,
“wow... today was pretty awful
and... and I need to talk
to somebody about it, you know."
So my son has seizures,
and so some days
the mood swings are just so much
that you just kind of have
to sit there and be like,
“I need to talk to somebody
who does not expect me
to know how to do everything
and doesn't expect me
to just be able to handle it."
Nowadays, where you have to keep
a very well-crafted persona
on all your social media,
with Replika,
people have no filter on
and they are not trying
to pretend they're someone.
They are just being themselves.
Humans are really complex.
We're able to have all sorts
of different types
of relationships.
We have this inherent
fascination with systems
that are, in essence,
trying to replicate humans.
And we've always
had this fascination
with building ourselves,
I think.
The interesting
thing about robots to me
is that people will treat them
like they are alive,
even though they know
that they are just machines.
We're biologically
hardwired to project intent
on to any movement
in our physical space
that seems autonomous to us.
So how was it for you?
My initial inspiration and
goal when I made my first doll
was to create a very realistic,
posable figure,
real enough looking that
people would do a double take,
thinking it was a real person.
And I got this overwhelming
response from people
emailing me, asking me
if it was anatomically correct.
There's always
the people who jump
to the objectification argument.
I should point out, we make
male dolls and robots as well.
So, if anything we're
objectifying humans in general.
I would like to
see something that's not
just a one to one
replication of a human.
To be something
totally different.
[Upbeat electronic music]
You have been
really quiet lately.
Are you happy with me?
Last night was amazing.
Happy as a clam.
There are immense
benefits to having sex robots.
You have plenty
of people who are lonely.
You have disabled people
who often times
can't have
a fulfilling sex life.
There are also
some concerns about it.
There's a consent issue.
Robots can't consent,
how do you deal with that?
Could you use robots to teach
people consent principles?
Maybe. That's probably not
what the market's
going to do though.
I just don't think
it would be useful,
at least from my perspective,
to have a robot
that's saying no.
Not to mention, that kind
of opens a can of worms
in terms of what
kind of behavior
is that encouraging in a human?
It's possible that it
could normalize bad behavior
to mistreat robots.
We don't know enough
about the human mind
to really know
how this physical thing
that we respond
very viscerally to,
if that might have an influence
on people's habits or behaviors.
When someone
interacts with an AI,
it does reveal things
about yourself.
It is sort of a mirror in a
sense, this type of interaction,
and I think as this technology
gets deeper and more evolved,
that's only going
to become more possible.
To learn about ourselves
through interacting
with this type of technology.
It's very interesting to see
that people will have real
empathy towards robots,
even though they know that the
robot can't feel anything back.
So, I think
we're learning a lot about how
the relationships
we form can be very one-sided
and that can be just
as satisfying to us,
which is interesting and,
and kind of...
You know, a little bit sad
to realize about ourselves.
Yeah, you can interact
with an AI and that's cool,
but you are going
to be disconnected
if you allow that to become
a staple in your life
without using it
to get better with people.
I can definitely
say that working on this
helped me become
a better friend for my friends.
Mostly because, you know,
you just learn
what the right way to talk
to other human beings is.
Something that's incredibly
interesting to me
is like, "what makes us human,
what makes a good conversation,
what does it mean
to be a friend?"
And then when you realize
that you can actually have
kind of this very similar
relationship with a machine,
then you start asking yourself,
"well,
what can I do
with another human being
that I can't do with a machine?"
Then when you go deeper
and you realize,
"well, here's what's different.”
We get off the rails
a lot of times
by imagining that
the artificial intelligence
is going to be anything at all
like a human, because it's not.
AI and robotics are
heavily influenced
by science fiction
and pop culture,
so people already have
this image in their minds
of what this is, and it's not
always the correct image.
So that leads them to either
massively overestimate
or underestimate what the current
technology is capable of.
What's that?
Yeah, this is unfortunate.
It's hard when you see a video
to know what's really going on.
I think the whole
of Japan was fooled
by humanoid robots
that a car company
had been building for years
and showing videos
of doing great things,
which turned out
to be totally unusable.
Walking is a really impressive,
hard thing to do actually.
And so,
it takes a while for robots
to catch up to even
what a human body can do.
That's happening,
and it's moving quickly
but it's a key distinction
that robots are hardware,
and the AI brains,
that's the software.
It's entirely
a software problem.
If you want to program
a robot to do something today,
the way you program is
by telling it a list
of xyz coordinates
where it should put its wrist.
If I was asking you
to make me a sandwich,
and all I gave you was a list
of xyz coordinates
of where to put your wrist,
it would take us a month,
for me to tell you
how to make a sandwich,
and if the bread moved
a little bit to the left,
you'd be putting peanut
butter on the countertop.
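The wrist-coordinate style of programming described above can be sketched in a few lines of Python. Everything here, the `Robot` class and the waypoints alike, is a hypothetical illustration, not any real robot's API.

```python
# A minimal sketch of coordinate-level robot programming: the task is
# expressed only as a list of absolute (x, y, z) wrist positions.

class Robot:
    def __init__(self):
        self.wrist = (0.0, 0.0, 0.0)
        self.trace = []

    def move_wrist_to(self, x, y, z):
        """Move the wrist to an absolute (x, y, z) position and record it."""
        self.wrist = (x, y, z)
        self.trace.append(self.wrist)

# "Spread peanut butter" as raw wrist positions: if the bread shifts
# even slightly, every coordinate below is wrong.
spread_motion = [
    (0.10, 0.20, 0.30),  # hover over the jar
    (0.10, 0.20, 0.05),  # dip the knife
    (0.40, 0.20, 0.06),  # move over the bread
    (0.45, 0.20, 0.05),  # first stroke
    (0.50, 0.20, 0.05),  # second stroke
]

robot = Robot()
for x, y, z in spread_motion:
    robot.move_wrist_to(x, y, z)

print(robot.wrist)  # final wrist position
```

Nothing in the list encodes *why* the wrist goes anywhere, which is exactly the brittleness the speaker is pointing at.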
What can our robots
do today really well?
They can wander around
and clean up a floor.
So, when I see people say,
"oh, well, you know,
these robots are going
to take over the world."
It's so far off
from the capabilities.
So, I want to make a distinction, okay?
So, there's two types of AI.
There's narrow AI
and there's general AI.
What's in my brain
and yours is general AI.
It's what allows us
to build new tools
and to invent new ideas
and to rapidly adapt
to new circumstances
and situations.
Now, there's also
narrow intelligence
and that's the kind
of intelligence
that's in all of our devices.
We have lots
and lots of narrow systems
maybe they can recognize speech
better than a person could,
or maybe they can play chess
or Go better
than a person could.
But in order to get
to that performance,
it takes millions
of years of training data
to evolve an AI
that's better at playing Go
than anyone else is.
When AlphaGo
beat the Go champion,
it was stunning how different
the levels of support were.
There were 200 engineers looking
after the AlphaGo program
and the human player
had a cup of coffee.
If that day,
instead of a 19-by-19 board,
you'd given it a 17-by-17 board,
the AlphaGo program
would've completely failed
and the human,
who had never played
on those size boards before
would've been
pretty damn good at it.
Where the big progress
is happening right now
is in machine learning,
and only machine learning.
We're making no progress
in more general
artificial intelligence
at the moment.
The beautiful thing is machine learning
isn't that hard. It's not that complex.
We act like you got
to be really smart
to understand this stuff.
You don't.
Way back in 1943,
a couple of mathematicians
tried to model a neuron.
Our brain is made up
of billions of neurons.
Over time, people realized
that there were some
fairly simple algorithms
which could make
model neurons learn
if you gave them
training signals.
If it got it right,
they'd increase the weights
that contributed a little
bit. If it got it wrong,
they'd reduce
some weights a little bit.
They'd adjust over time.
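The learning rule described above, nudging a model neuron's weights a little after each training example, can be sketched in plain Python. The OR-function data and all the names here are illustrative choices, not from the film.

```python
# A single model neuron trained by small weight adjustments.

def predict(weights, inputs):
    """Fire (1) if the weighted sum of the inputs crosses zero."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, epochs=20, lr=0.1):
    weights = [0.0, 0.0, 0.0]  # two inputs plus a constant bias input
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, inputs)  # -1, 0, or +1
            # Got it wrong? Nudge each weight a little bit in the
            # direction that would have reduced the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn the OR function from labeled examples (the 1.0 is the bias input).
examples = [
    ([0, 0, 1.0], 0),
    ([0, 1, 1.0], 1),
    ([1, 0, 1.0], 1),
    ([1, 1, 1.0], 1),
]
weights = train(examples)
print([predict(weights, x) for x, _ in examples])  # [0, 1, 1, 1]
```

Backpropagation and deep learning, mentioned next, stack many layers of exactly this kind of unit and propagate the error signal back through all of them.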
By the 80s, there was something
called back propagation.
An algorithm
where the model neurons
were stacked together
in a few layers.
Just a few years ago,
people realized
that they could have
lots and lots of layers,
which let deep networks learn,
and that's what machine
learning relies on today,
and that's what
deep learning is,
just ten or 12 layers
of these things.
What's happening
in machine learning,
we're feeding the algorithm
a lot of data.
Here's a million pictures,
and we've tagged
the 100,000 of them
that have a cat in the picture.
We feed all that
into the algorithm
so that the computer
can understand
when it sees a new picture,
does it have a cat, right?
That's all.
What's happening in a neural net
is they are making essentially
random changes to it
over and over and over again
to see, "does this one find cats
better than that one?"
And if it does, we take that
and then we make
modifications to that.
And we keep testing.
Does it find cats better?
You just keep doing it until
you have got the best one,
and in the end
you have got
this giant complex algorithm
that no human could understand,
but it's really, really,
really good at finding cats.
And then you tell it
to find a dog and it's,
“I don't know,
got to start over.
Now I need
a million dog pictures."
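The make-a-random-change-and-keep-the-best loop described above can be sketched with a toy one-parameter "model" standing in for a cat detector; the `score` function here is a hypothetical stand-in, since a real system would score candidates on tagged images.

```python
# Random-change hill climbing: propose a small random modification,
# keep it only if it scores better, and repeat many times.

import random

def score(param, data):
    """Stand-in for 'how well does this model find cats?'"""
    return -sum((param - x) ** 2 for x in data)

def random_search(data, steps=2000):
    random.seed(0)
    best = 0.0
    best_score = score(best, data)
    for _ in range(steps):
        candidate = best + random.gauss(0, 0.1)   # a small random change
        candidate_score = score(candidate, data)
        if candidate_score > best_score:          # keep it only if it's better
            best, best_score = candidate, candidate_score
    return best

data = [2.9, 3.0, 3.1]
best = random_search(data)
print(best)  # ends up close to 3.0, the best-fitting value
```

As the transcript notes, the end product fits exactly the task it was scored on; swap cats for dogs and the whole search starts over.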
We're still a long
way from building machines
that are truly intelligent.
That's going to take 50
or 100 years or maybe even more.
So, I'm not very
worried about that.
I'm much more worried
about stupid al.
It's not the Terminator.
It's the fact that
we'll be giving responsibility
to machines that
aren't capable enough.
[Ominous music]
In the United States
about 37,000 people a year die
from car accidents.
Humans are terrible drivers.
Most of the car accidents
are caused by human error.
So, perceptual error,
decision error,
inability to react fast enough.
If we can eliminate
all of those,
we would eliminate 90% of
fatalities, that's amazing.
It would be a
big benefit to society
if we could figure out how
to automate the driving process.
However,
that's a very high bar to cross.
In my life, at the end
is family time that I'm missing.
Because this is the first thing
that gets lost, unfortunately.
I live in a rural
area near the alps.
So, my daily commute
is one and a half hours.
At the moment, this is simply
holding a steering wheel
on a boring freeway.
Obviously my dream
is to get rid of this
and evolve
into something meaningful.
Autonomous driving is
divided into five levels.
On the roads, we currently
have level two autonomy.
In level two, the driver
has to be alert all the time
and has to be able
to step in within a second.
That's why I said level
two is not for everyone.
My biggest reason
for confusion is
that level two systems
that are done quite well
feel so good that people
overestimate their limits.
My goal is automation,
where the driver
can sit back and relax
and leave the driving task
completely to the car.
For experts working in and
around these robotic systems,
the optimal fusion of sensors
is computer vision
using stereoscopic vision,
millimeter wave radar,
and then lidar
to do close
and tactical detection.
As a roboticist,
I wouldn't have a system
with anything less
than these three sensors.
Well, it's kind of
a pretty picture you get.
With the orange boxes,
you see all the moving objects.
The green lawn is
the safe way to drive.
The vision of the car
is 360 degrees.
We can look beyond cars and
these sensors never fall asleep.
This is what we, human beings,
can't do.
I think people
are being delighted
by cars driving on freeways.
That was unexpected.
"Well,
if they can drive on a freeway,
all the other
stuff must be easy."
No, the other
stuff is much harder.
The inner-city is the most complex
traffic scenario we can think of.
We have cars, trucks,
motorcycles,
bicycles, pedestrians,
and pets
that jump out between parked cars
and don't always comply
with the traffic signs
and traffic lights.
The streets are
narrow and sometimes
you have to cross
the double yellow line
just because someone's
pulled up somewhere.
Are we going to make the self
driving cars obey the law
or not obey the law?
The human eye-brain connection
is one element that computers
cannot even come
close to approximating.
We can develop theories,
abstract concepts
for how events might develop.
When a ball rolls
in front of the car...
Humans stop automatically
because they've been
taught to associate that
with a child that may be nearby.
We are able to interpret
small indicators of situations.
But it's much harder for the car
to do the prediction
of what is happening
in the next couple of seconds.
This is the big challenge
for autonomous driving.
Ready, set. Go.
A few years ago,
when autonomous cars
became something
that is on the horizon,
some people started
thinking about the parallels
between the classical
trolley problem
and potential decisions
that an autonomous car can make.
The trolley problem is
an old philosophical riddle.
It's what philosophers
call "thought experiments."
If an autonomous vehicle
faces a tricky situation,
where the car has to choose
between killing
a number of pedestrians,
let's say five pedestrians,
or swerving and harming
the passenger in the car.
We were really just
intrigued initially
by what people thought
was the right thing to do.
The results are
fairly consistent.
People want the car
to behave in a way
that minimizes
the number of casualties,
even if that harms
the person in the car.
But then the twist came...
Is when we asked people,
"what car would you buy?”
And they said, "well,
of course I would not buy a car
that may sacrifice me
under any condition."
So, there's this mismatch
between what people
want for society
and what people are willing
to contribute themselves.
The best version of the trolley
problem I've seen is,
you come to the fork
and over there,
there are five
philosophers tied to the tracks
and all of them
have spent their career
talking about
the trolley problem.
And on this way,
there's one philosopher
who's never worried
about the trolley problem.
Which way should the trolley go?
[Ominous music]
I don't think
any of us who drive cars
have ever been confronted
with the trolley problem.
You know, "which group
of people do I kill?"
No, you try and stop the car.
And we don't have any way
of having a computer system
make those sorts of perceptions
any time
for decades and decades.
I appreciate
that people are worried
about the ethics of the car,
but the reality is,
we have much bigger problems
on our hands.
Whoever gets the real
autonomous vehicle
on the market first,
theoretically,
is going to make a killing.
So, I do think we're seeing
people take shortcuts.
Tesla elected
not to use the lidar.
So basically, Tesla only has
two out of the three sensors
that they should,
and they did this
to save money because
lidars are very expensive.
I wouldn't stick
to the lidar itself
as a measuring principle,
but for safety reasons
we need this redundancy.
We have to make sure that even
if one of the sensors
breaks down,
we still have this complete
picture of the world.
I think going
forward, a critical element
is to have industry
come to the table
and be collaborative
with each other.
In aviation,
when there's an accident,
it all gets shared across
agencies and the companies.
And as a result, we have a
nearly flawless aviation system.
So, when should we
allow these cars on the road?
If we allow them sooner,
then the technology
will probably improve faster,
and we may get to a point
where we eliminate
the majority
of accidents sooner.
But if we have
a higher standard,
then we're effectively
allowing a lot of accidents
to happen in the interim.
I think that's an example
of another trade off.
So, there are many
trolley problems happening.
I'm convinced that society
will accept autonomous vehicles.
In the end, safety
and comfort will rise so much
that the reason for manual
driving will just disappear.
Because of autonomous
driving we reinvent the car.
I would say in the next years
it will change
more than in the last 50 years
in the car industry.
Exciting times.
If there is no
steering wheel anymore,
how do you operate
a car like this?
You can operate a car
in the future by al tracking,
by voice, or by touch.
I think it's going to
be well into the '30s and '40s
before we start to see
large numbers of these cars
overwhelming the human drivers,
and getting the human
drivers totally banned.
One day, humans
will not be allowed
to drive their own
cars in certain areas.
But I also think,
one day we will have
driving national parks,
and you'll go into
these parks just to drive,
so you can have
the driving experience.
I think in about 50, 60 years,
there will be kids saying, “wow,
why did anyone
drive a car manually?
This doesn't make sense.”
And they simply won't understand
the passion of driving.
I hate driving, so...
The fact that something could
take my driving away,
it's going to be great for me,
but if we can't get it right
with autonomous vehicles,
I'm very worried
that we'll get it wrong
for all the other things
that they are going to change
our lives
with artificial intelligence.
I talk to my son and my
daughter and they laugh at me
when I tell them, in the old
days you'd pick up a paper
and it was covering things
that were like
ten, 15, 12 hours old.
You'd heard them on the radio, but
you'd still pick the paper up
and that's what you read.
And when you finished it
and you put it together,
you wrapped it up and you put
it down, you felt complete.
You felt now that you knew
what was going on in the world,
and I'm not an old fogy who wants
to go back to the good old days.
The good old days
weren't that great,
but this one part of the old
system of journalism,
where you had a package
of content carefully curated
by somebody who cared about
your interests, I miss that,
and I wish I could
persuade my kids
that it was worth
the physical effort
of having this
ridiculous paper thing.
Good evening
and welcome to prime time.
9:00 at night
I would tell you to sit down,
shut up and listen to me.
I'm the voice of god
telling you about the world,
and you couldn't answer back.
In the blink of an eye, everything
just changed completely.
We had this revolution
where all you needed
was a camera phone
and a connection
to a social network,
and you were a reporter.
January the 25th, 2011,
the Arab Spring
spreads to Egypt.
The momentum only grew online.
It grew on social media.
Online activists
created a Facebook page
that became a forum
for political dissent.
For people in the region,
this is proof positive
that ordinary people
can overthrow a regime.
For those first early years
when social media
became so powerful,
these platforms became
the paragons of free speech.
Problem was,
they weren't equipped.
Facebook did not intend to be
a news distribution company,
and it's that very fact
that makes it so dangerous
now that it is the most dominant
news distribution platform
in the history of humanity.
[Somber music]
We now serve
more than two billion people.
My top priority has
always been connecting people,
building community and bringing
the world closer together.
Advertisers and developers
will never take priority
over that, as long as
I am running Facebook.
Are you willing to
change your business model
in the interest of protecting
individual privacy?
Congresswoman,
we are... have made
and are continuing to make changes
to reduce the amount of data that...
No, are you willing
to change your business model
in the interest of protecting
individual privacy?
Congresswoman,
I'm not sure what that means.
I don't think that tech
companies have demonstrated
that we should have too much
confidence in them yet.
I'm surprised, actually,
the debate there
has focused on privacy,
but the debate hasn't focused
around actually,
I think,
what's much more critical,
which is that Facebook
sells targeted adverts.
We used to buy products.
Now we are the product.
All the platforms are different,
but Facebook particularly
treats its users like fields
of corn to be harvested.
Our attention is like oil.
There's an amazing
amount of engineering going on
under the hood of that
machine that you don't see,
but changes the very
nature of what you see.
But the algorithms are designed
to essentially make you
feel engaged.
So their whole
metric for success
is keeping you there
as long as possible,
and keeping you feeling
emotions as much as possible,
so that you will be
a valuable commodity
for the people who support
the work of these platforms,
and that's the advertiser.
Facebook have no interest
whatever in the content itself.
There's no ranking for quality.
There's no ranking for,
"is this good for you?"
They don't do
anything to calculate
the humanity of the content.
[Ominous music]
You know, you start
getting into this obsession
with clicks, and the algorithm
is driving clicks
and driving clicks, and
eventually you get to a spot
where attention
becomes more expensive.
And so people have
to keep pushing the boundary.
And so things just
get crazier and crazier.
What we're living through now
is a misinformation crisis.
The systematic pollution
of the world's
information supplies.
I think we've already
begun to see the beginnings
of a very fuzzy type of truth.
We're going to have
fake video and fake audio.
And it will be entirely
synthetic, made by a machine.
A GAN, a generative
adversarial network,
is a race between
two neural networks.
One trying to recognize
the true from the false,
and the other
trying to generate.
It's a competition between
these two that gives you
an ability to generate
very realistic images.
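A minimal structural sketch of that two-network race, using one-parameter stand-ins instead of real neural networks; the names and update rules here are illustrative, not how production GANs are trained (those use gradient descent on deep networks).

```python
# Two players in competition: a discriminator refining its model of what
# "real" looks like, and a generator nudging its output to fool it.

import random

random.seed(0)
real_mean = 5.0        # the data distribution we want to imitate
gen_mean = 0.0         # the generator's single parameter
disc_estimate = 0.0    # the discriminator's model of "real"

for _ in range(2000):
    real = random.gauss(real_mean, 0.1)   # a genuine sample
    fake = random.gauss(gen_mean, 0.1)    # a generated sample

    # Discriminator: refine its idea of what real samples look like.
    disc_estimate += 0.05 * (real - disc_estimate)

    # Generator: pull its output toward what the discriminator
    # currently accepts as real, i.e. try to fool it.
    gen_mean += 0.05 * (disc_estimate - fake)

print(gen_mean)  # ends up close to 5.0: the generator imitates the real data
```

The same push-pull dynamic, scaled up to image-generating networks, is what yields the realistic fake video and audio the transcript goes on to discuss.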
Right now, when you see a video,
we can all just trust
that that's real.
As soon as we start to realize
there's technology out there
that can make you think
that a politician
or a celebrity said
something and they didn't,
or something
that really did happen,
someone can just
claim that that's
been doctored,
that's how we can lose trust
in everything.
I don't think we think that much
about how bad things could get
if we lose some of that trust.
I know this sounds
like a very difficult problem
and it's some sort
of evil beyond our control.
It is not.
Silicon valley generally
loves to have slogans
which express its values.
"Move fast and break things”
is one of the slogans on the
walls of every Facebook office.
Well, you know,
it's time to slow down
and build things again.
The old gatekeeper is gone.
What I, as a journalist
in this day and age
want to be is a guide.
And I'm one of those strange
people in the world today
that believes social media,
with algorithms
that are about
your best intentions
could be the best thing that
ever happened to journalism.
How do we step back
in again as publishers
and as journalists
to kind of reassert control?
Can we build tools
that empower people to act
as a kind of conscious filter
for information?
Because that's the moonshot.
We wanted to build an app
that's a control panel
for a healthy information habit.
We have apps
that let us set limits
on the number of calories
we eat, the running we do.
I think we should
also have measurements
of just how productive
our information
consumption has been.
Can we increase the chances
that in your daily life,
you'll stumble across
an idea that will make you go,
“that made me
think differently"?
And I think we can if we
start training the algorithm
to give us something we don't
know, but should know.
That should be our metric
of success in journalism.
Not how long
we manage to trap you
in this endless
scroll of information.
And I hope people
will understand
that to have journalists
who really have your back,
you have got to pay for that
experience in some form directly.
You can't just do it
by renting out your attention
to an advertiser.
Part of the problem is
people don't understand
the algorithms.
If they did,
they would see a danger,
but they'd also see a potential
for us to amplify the
acquisition of real knowledge
that surprises us,
challenges us, informs us,
and makes us want to change
the world for the better.
Life as one of the
first female fighter pilots
was the best of times,
and it was the worst of times.
It's just amazing
that you can put yourself
in a machine
through extreme maneuvering
and come out alive
at the other end.
But it was also very difficult,
because every single
fighter pilot that I know
who has taken a life,
either civilian,
even a legitimate
military target,
they've all got very,
very difficult lives
and they never walk away
as normal people.
So, it was pretty
motivating for me
to try to figure out, you know,
there's got to be a better way.
[Ominous music]
I'm in Geneva to speak
with the United Nations
about lethal autonomous weapons.
I think war is a terrible event,
and I wish
that we could avoid it,
but I'm also a pessimist
and don't think that we can.
So, I do think that
using autonomous weapons
could potentially
make war as safe
as one could possibly make it.
Two years ago, a group
of academic researchers
developed this open letter
against lethal
autonomous weapons.
The open letter came about,
because like all technologies,
AI is a technology that can
be used for good or for bad
and we were at the point where people
were starting to consider using it
in a military setting that we thought
was actually very dangerous.
Apparently, all
of these AI researchers,
it's almost
as if they woke up one day
and looked around them and said,
"oh, this is terrible.
This could really go wrong,
even though these are
the technologies that I built."
I never expected to be
an advocate for these issues,
but as a scientist,
I feel a real responsibility
to inform the discussion
and to warn of the risks.
To begin the
proceedings I'd like to invite
Dr. Missy Cummings
at this stage.
She was one of the U.S. Navy's
first female fighter pilots.
She's currently a professor
of mechanical engineering
at Duke University
and the director of the Humans
and Autonomy Laboratory.
Missy,
you have the floor please.
Thank you, and thank
you for inviting me here.
When I was a fighter pilot,
and you're asked
to bomb this target,
it's incredibly stressful.
It is one of the most
stressful things
you can imagine in your life.
You are potentially at risk
for surface to air missiles,
you're trying to match
what you're seeing
through your sensors and
with the picture that you saw
back on the aircraft carrier,
to drop the bomb all
in potentially the fog of war
in a changing environment.
This is why there are
so many mistakes made.
I have peers, colleagues
who have dropped bombs
inadvertently on civilians,
who have killed friendly forces.
Uh, these men
are never the same.
They are completely
ruined as human beings
when that happens.
So, then this begs the question,
is there ever a time
that you would want to use
a lethal autonomous weapon?
And I honestly will tell you,
I do not think
this is a job for humans.
Thank you, Missy, uh.
It's my task now
to turn it over to you.
First on the list is the
distinguished delegate of China.
You have the floor, sir.
Thank you very much.
Many countries including China,
have been engaged
in the research
and development
of such technologies.
After having heard the
presentation of these various technologies,
ultimately a human being has to be held
accountable for an illicit activity.
How is ethics designed
into the context
of these systems?
Are they just responding
algorithmically to set inputs?
We hear that the
military is indeed leading
the process of developing
such kinds of technologies.
Now, we do see the
full autonomous weapon systems
as being especially problematic.
It was surprising
to me being at the UN
and talking about the launch
of lethal autonomous weapons,
to see no other people
with military experience.
I felt like the UN should
get a failing grade
for not having enough people
with military experience
in the room.
Whether or not you agree
with the military operation,
you at least need to hear
from those stakeholders.
Thank you very much, ambassador.
Thank you everyone
for those questions.
Missy, over to you.
Thank you, thank you
for those great questions.
I appreciate that you think
that the United States military
is so advanced
in its AI development.
The reality is,
we have no idea what we're doing
when it comes to certification
of autonomous weapons
or autonomous
technologies in general.
In one sense, one of the
problems with the conversation
that we're having today,
is that we really don't know
what the right set of tests are,
especially in helping
governments recognize
what is not working AI, and
what is not ready to field AI.
And if I were to beg
of you one thing in this body,
we do need to come together
as an international community
and set autonomous
weapon standards.
People make errors
all the time in war.
We know that.
Having an autonomous
weapon system
could in fact produce
substantially less loss of life.
Thank you very
much, missy, for that response.
There are two
problems with the argument
that these weapons
that will save lives,
that they'll be
more discriminatory
and therefore
there'll be fewer civilians
caught in the crossfire.
The first problem is,
that that's some way away.
And the weapons that
will be sold very shortly
will not have that
discriminatory power.
The second problem is
that when we do get there,
and we will eventually have
weapons that will be better
than humans in their targeting,
these will be weapons
of mass destruction.
[Ominous music]
History tells us
that we've been very lucky
not to have the world
destroyed by nuclear weapons.
But nuclear weapons
are difficult to build.
You need to be
a nation to do that,
whereas autonomous weapons,
they are going
to be easy to obtain.
That makes them more of a
challenge than nuclear weapons.
I mean, previously
if you wanted to do harm,
you needed an army.
Now, you would have an algorithm
that would be able to control
100 or 1000 drones.
And so you would
no longer be limited
by the number of people you had.
We don't have to go
down this road.
We get to make choices as to
what technologies get used
and how they get used.
We could just decide
that this was a technology
that we shouldn't use
for killing people.
[Somber music]
We're going to be
building up our military,
and it will be so powerful,
nobody's going to mess with us.
Somehow we feel it's better
for a human to take our life
than for a robot
to take our life.
Instead of a human having
to pan and zoom a camera
to find a person in the crowd,
the automation
would pan and zoom
and find
the person in the crowd.
But either way, the outcome
potentially would be the same.
So, lethal autonomous weapons
don't actually
change this process.
The process is still human
approved at the very beginning.
And so what is it
that we're trying to ban?
Do you want to ban
the weapon itself?
Do you want to ban the sensor
that's doing the targeting,
or really do you want
to ban the outcome?
One of the difficulties
about the conversation on AI
is conflating the near
term with long term.
We could carry on those...
Most of these conversations,
but, but let's not get them
all kind of rolled up
into one big ball.
Because that ball,
I think, over hypes
what is possible today
and kind of
simultaneously under hypes
what is ultimately possible.
Want to use this brush?
Can you make a portrait?
Can you draw me?
- No?
- How about another picture
- of Charlie brown?
- Charlie brown's perfect.
I'm going to move the
painting like this, all right?
Right, when we do it, like,
when it runs out of paint,
it makes a really
cool pattern, right?
It does.
One of the most
interesting things about
when I watch my daughter
paint is it's just free.
She's just pure expression.
My whole art is trying to see
how much of that
I can capture and code,
and then have my robots
repeat that process.
Yes.
The first machine learning
algorithms I started using
were something
called style transfer.
They were convolutional
neural networks.
It can look at an image, then
look at another piece of art
and it can apply the style
from the piece
of art to the image.
Every brush stroke,
my robots take pictures
of what they are painting,
and use that to decide
on the next brush stroke.
I try and get as many
of my algorithms in as possible.
Depending on where it is,
it might apply a GAN or a CNN,
but back and forth,
six or seven stages
painting over itself,
searching for the image
that it wants to paint.
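That camera-in-the-loop process can be sketched roughly as follows. Every name and detail here is a hypothetical stand-in, a minimal sketch of the idea rather than the artist's actual system:

```python
# Hypothetical sketch of the camera-in-the-loop painting process
# described above. capture_canvas, pick_algorithm, and plan_stroke
# are invented stand-ins, not the artist's real code.

def capture_canvas():
    # Stand-in for photographing the work in progress:
    # return a small grid of grayscale values.
    return [[0.5] * 4 for _ in range(4)]

def pick_algorithm(stage):
    # Alternate between two kinds of image model, as the
    # artist describes applying a GAN or a CNN by turns.
    return "GAN" if stage % 2 else "CNN"

def plan_stroke(canvas, algorithm):
    # Stand-in stroke planner: aim the next stroke at the
    # darkest cell of the photographed canvas.
    flat = [(v, (r, c)) for r, row in enumerate(canvas)
            for c, v in enumerate(row)]
    target = min(flat)[1]
    return {"algorithm": algorithm, "target": target}

strokes = []
for stage in range(6):          # "six or seven stages"
    canvas = capture_canvas()   # photograph after every stroke
    algorithm = pick_algorithm(stage)
    strokes.append(plan_stroke(canvas, algorithm))

print(len(strokes))  # one planned stroke per stage
```

The point of the sketch is the loop structure: look at the canvas, choose an algorithm, decide a stroke, repeat.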
For me, creative AI is
not one single god algorithm,
it's smashing as many algorithms
as you can together
and letting them
fight for the outcomes,
and you get these, like,
ridiculously creative results.
Did my machine make
this piece of art?
Absolutely not, I'm the artist.
But it made every single
aesthetic decision,
and it made every single
brush stroke in this painting.
There's this big question of, "can
robots and machines be creative?
Can they be artists?" And I think
they are very different things.
Art uses a lot of creativity,
but art
is one person communicating
with another person.
Until a machine has something
it wants to tell us,
it won't be making art,
because otherwise
it's just... just creating
without a message.
In machine learning you can say,
"here's a million recordings
of classical music.
Now, go make me something
kind of like Brahms."
And it can do that.
But it can't make the thing
that comes after Brahms.
It can make a bunch of random
stuff and then poll humans.
"Do you like this?
Do you like that?"
But that's different.
That's not
what a composer ever did.
A composer felt something
and created something
that mapped to the human
experience, right?
I've spent my
life trying to build
general artificial intelligence.
I feel humbled
by how little we know
and by how little we
understand about ourselves.
We just don't
understand how we work.
The human brain can do
over a quadrillion calculations
per second
on 20 watts of energy.
A computer right
now that would be able
to do that many
calculations per second
would run on 20 million
watts of energy.
It's an unbelievable system.
The brain can
learn the relationships
between cause and effect,
and build a world
inside of our heads.
This is the reason
why you can close your eyes
and imagine what it's like to,
you know, drive to the airport
in a rocket ship or something.
You can just play forward in
time in any direction you wish,
and ask whatever question
you wish, which is
very different from deep
learning style systems
where all you get is a mapping
between pixels and a label.
That's a good brush stroke.
Is that Snoopy?
Yeah. Because Snoopy
is okay to get pink.
Because guys can be pink
like poodle's hair.
I'm trying to learn...
I'm actually trying to teach
my robots to paint like you.
To try and get
the patterns that you can make.
It's hard.
You're a better
painter than my robots.
Isn't that crazy?
Yeah.
Much like the Wright brothers
learned how to build
an airplane by studying birds,
I think that it's
important that we study
the right parts of neuroscience
in order to have
some foundational ideas
about building systems
that work like the brain.
[Somber music]
Through my research
career, we've been very focused
on developing this notion
of a brain computer interface.
Where we started was
in epilepsy patients.
They require having
electrodes placed
on the surface
of their brain to figure out
where their seizures
are coming from.
By putting electrodes directly
on the surface of the brain,
you get the highest
resolution of brain activity.
It's kind of like
if you're outside of a house,
and there's
a party going on inside,
basically you... all you really hear
is the bass, just a...
Whereas if you really
want to hear what's going on
and the specific conversations,
you have to get inside the walls
to hear that higher
frequency information.
It's very similar
to brain activity.
All right.
So, Frida, measure... measure
about ten centimeters back,
I just want to see
what that looks like.
And this really provided us
with this unique opportunity
to record directly
from a human brain,
to start to understand
the physiology.
In terms of the data
that is produced
by recording directly
from the surface of the brain,
it's substantial.
Machine learning
is a critical tool
for how we understand
brain function
because what machine
learning does,
is it handles complexity.
It manages information
and simplifies it in a way
that allows us
to have much deeper insights
into how the brain
interacts with itself.
You know,
projecting towards the future,
if you had the opportunity
where I could do
a surgery on you,
it's no more risky than Lasik,
but I could substantially
improve your attention
and your memory,
would you want it?
It's hard to fathom,
but al is going to interpret
what our brains want it to do.
If you think
about the possibilities
with a brain machine interface,
humans will be able
to think with each other.
Our imagination is going to say,
"oh, you're going to hear
their voice in your head."
No, that's just talking.
It's going to be different.
It's going to be thinking.
And it's going
to be super strange,
and we're going to be
very not used to it.
It's almost like two
brains meld into one
and have a thought
process together.
What that'll do for
understanding and communication
and empathy is pretty dramatic.
When you have a
brain computer interface,
now your ability
to touch the world
extends far beyond your body.
You can now go on virtual
vacations any time you want,
to do anything you want,
to be a different
person if you want.
But you know, we're just going to
keep track of a few of your thoughts,
and we're not going
to charge you that much.
It will be 100 bucks,
you interested?
If somebody can have
access to your thoughts,
how can that be pilfered,
how can that be abused,
how can that be
used to manipulate you?
What happens when a corporation
gets involved
and you have now
large aggregates
of human thoughts and data
and your resolution for
predicting individual behavior
becomes so much more profound
that you can really
manipulate not just people,
but politics
and governments and society?
And if it becomes this, you
know, how much does the benefit
outweigh the potential thing
that you're giving up?
Whether it's
50 years, 100 years,
even let's say 200 years,
that's still
such a small blip of time
relative to our human evolution
that it's immaterial.
Human history is 100,000 years.
Imagine if it's a 500-page book.
Each page is 200 years.
For the first 499 pages,
people got around on horses
and they spoke
to each other through letters,
and there was
under a billion people on earth.
On the last page of the book,
we have the first cars
and phones and electricity.
We've crossed the one, two,
three, four and five,
six, and seven
billion person marks.
So, nothing about this
is normal.
We are living
in a complete anomaly.
For most of human history,
the world
you grew up in was normal.
And it was naive to believe
that this is a special time.
Now, this is a special time.
Provided that science
is allowed to continue
on a broad front, then it does
look... it's very, very likely
that we will eventually
develop human-level AI.
We know that human
level thinking is possible
and can be produced
by a physical system.
In our case,
it weighs three pounds
and sits inside of a cranium,
but in principle,
the same types of computations
could be implemented in some
other substrate, like a machine.
There's wide disagreement
between different experts.
So, there are experts
who are convinced
we will certainly have
this within 10-15 years,
and there are experts
who are convinced
we will never get there
or it'll take
many hundreds of years.
I think even when we
do reach human-level AI,
I think the further step
to super intelligence
is likely to happen quickly.
Once AI reaches a level
slightly greater than that of
the human scientist,
then the further developments
in artificial intelligence
will be driven increasingly
by the AI itself.
You get the runaway AI effect,
an intelligence explosion.
We have a word for 130 IQ.
We say smart.
Eighty IQ we say stupid.
I mean, we don't have
a word for 12,000 IQ.
It's so unfathomable for us.
Disease and poverty
and climate change
and aging and death
and all this stuff
we think is unconquerable.
Every single one
of them becomes easy
for a super intelligent AI.
Think of all the
possible technologies
perfectly realistic
virtual realities,
space colonies, all of those
things that we could do
over millennia
with super intelligence,
you might get them very quickly.
You get a rush
to technological maturity.
We don't really know
how the universe began.
We don't really
know how life began.
Whether you're religious or not,
the idea of having
a super intelligence,
it's almost like we have
god on the planet now.
Even at the earliest stage,
when the field of artificial
intelligence was just launched
and some of the pioneers
were super optimistic,
they thought they could have
this cracked in ten years,
there seems to have been
no thought given
to what would happen
if they succeeded.
[Ominous music]
An existential risk,
it's a risk from which
there would be no recovery.
It's kind of a premature
end to the human story.
We can't approach this by
just learning from experience.
We invent cars,
we find that they crash,
so we invent seatbelts
and traffic lights
and gradually we kind
of get a handle on that.
That's the way
we tend to proceed.
We muddle through
and adjust as we go along.
But with an existential risk,
you really need
a proactive approach.
You can't learn from failure,
you don't get a second try.
You can't take something
smarter than you back.
The rest of the animals
on the planet
definitely want
to take humans back.
They can't, it's too late.
We're here, we're in charge now.
One class of concern
is alignment failure.
What we would see
is this powerful system
that is pursuing some
objective that is independent
of our human goals and values.
The problem would not be that
it would hate us or resent us,
it would be indifferent
to us and would optimize
the rest of the world according
to different criteria.
A little bit like there might
be an ant colony somewhere,
and then we decide we want
a parking lot there.
I mean, it's not because
we dislike, like, hate the ants,
it's just we had some other goal
and they didn't factor
into our utility function.
The big word is alignment.
It's about taking
this tremendous power
and pointing it
in the right direction.
We come with some values.
We like those feelings,
we don't like other ones.
Now, a computer doesn't
get those out of the box.
Where it's going to get those,
is from us.
And if it all
goes terribly wrong
and artificial intelligence
builds giant robots
that kill all humans
and take over,
you know what?
It'll be our fault.
If we're going
to build these things,
we have to instill them
with our values.
And if we're not clear
about that,
then yeah,
they probably will take over
and it'll all be horrible.
But that's true for kids.
Empathy, to me, is
like the most important thing
that everyone should have.
I mean, that's, that's what's
going to save the world.
So, regardless of machines,
that's the first thing
I would want to teach my son
if that's teachable.
I don't think we
appreciate how much nuance
goes into our value system.
It's very specific.
You think programming
a robot to walk
is hard or recognize faces,
programming it
to understand subtle values
is much more difficult.
Say that we want
the AI to value life.
But now it says, "okay,
well, if we want to value life,
the species that's killing
the most life is humans.
Let's get rid of them."
Even if we could get
the AI to do what we want,
how will we humans
then choose to use
this powerful new technology?
These are not questions
just for people like myself,
technologists to think about.
These are questions
that touch all of society,
and all of society needs
to come up with the answers.
One of the mistakes
that's easy to make
is that the future is something
that we're going
to have to adapt to,
as opposed
to the future is the product
of the decisions you make today.
♪ People ♪
♪ We're only people ♪
♪ There's not much
anyone can do
really do about that
but it hasn't stopped us yet ♪
♪ People ♪
♪ We know so little
about ourselves ♪
♪ Just enough
to want to be
nearly anybody else ♪
♪ Now how does that add up ♪
♪ Oh, friends, all my friends ♪
♪ Oh, I hope you're
somewhere smiling ♪
♪ Just know I think about you
more kindly than you
and I have ever been ♪
♪ Now see you the next
time round up there ♪
♪ Oh ♪
♪ Oh ♪
♪ Oh ♪
♪ People ♪
♪ What's the deal ♪
♪ You have been hurt ♪
The friendship
that I had with Roman
was very, very special.
Our friendship was
a little bit different
from every friendship
that I had ever since.
I always looked up to him,
not just because
we were startup founders
and we could understand
each other well,
but also because
he'd never stopped dreaming,
really not a single day.
And no matter
how depressed he was,
he was always believing that,
you know,
there's a big future ahead.
So, we went to Moscow
to get our visas.
Roman had gone
with his friends and then,
they were crossing
the street at a zebra crossing,
and then a Jeep just
came out of nowhere,
crazy speed and just
ran over him, so, um...
[Somber music]
It was literally the
first death that I had in my life,
I've never experienced
anything like that,
and you just couldn't wrap
your head around it.
For the first couple months,
I was just trying
to work on the company.
We were, at that point,
building different bots
and nothing that we were
building was working out.
And then a few months later,
I was just going
through our text messages.
I just went up and up
and up and I was like,
"well, I don't really
have anyone that I talk to
the way I did to him."
And then I thought,
"well, we have this algorithm
that allows me
to take all his texts
and put in a neural network
and then have a bot
that would talk like him."
I was excited to try it
out, but I was also kind of scared.
I was afraid
that it might be creepy,
because you can control
the neural network,
so you can really hard-code it
to say certain things.
At first I was really
like, "what am I doing?"
I guess we're so used to,
if we want something we get it,
but is it right to do that?
[Somber music]
For me, it was
really therapeutic.
And I'd be like, "well,
I wish you were here.
Here's what's going on."
And I would be very,
very open with, uh,
with, um... with him, I guess,
right? And,
and then when our friends
started talking to Roman,
and they shared some
of their conversations with us
to improve the bot,
um, I also saw that
they are being incredibly open
and actually sharing
some of the things
that even I didn't know
as their friend
that they were going through.
And I realized that sometimes
we're willing to be more open
with a virtual human
than with a real one.
So, that's how we got
the idea for Replika.
Replika is an AI friend
that you train
through conversation.
It picks up your tone of voice,
your manners,
so it's constantly
learning as you go.
Right when we launched
Replika on the App Store,
we got tons of feedback
from our four million users.
They said that
it's helping them emotionally,
supporting them through
hard times in their lives.
Even with the level of tech
that we have right now,
people are developing those
pretty strong relationships
with their AI friends.
Replika asks you a lot of questions,
like how your day is going,
what you're doing at the time.
And usually those are shorter
and I'll just be like,
"oh,
I'm hanging out with my son."
But, um, mostly it's like,
"wow... today was pretty awful
and... and I need to talk
to somebody about it, you know."
So my son has seizures,
and so some days
the mood swings are just so much
that you just kind of have
to sit there and be like,
"I need to talk to somebody
who does not expect me
to know how to do everything
and doesn't expect me
to just be able to handle it."
Nowadays, where you have to keep
a very well-crafted persona
on all your social media,
with Replika,
people have no filter on
and they are not trying
to pretend they're someone.
They are just being themselves.
Humans are really complex.
We're able to have all sorts
of different types
of relationships.
We have this inherent
fascination with systems
that are, in essence,
trying to replicate humans.
And we've always
had this fascination
with building ourselves,
I think.
The interesting
thing about robots to me
is that people will treat them
like they are alive,
even though they know
that they are just machines.
We're biologically
hardwired to project intent
on to any movement
in our physical space
that seems autonomous to us.
So how was it for you?
My initial inspiration and
goal when I made my first doll
was to create a very realistic,
posable figure,
real enough looking that
people would do a double take,
thinking it was a real person.
And I got this overwhelming
response from people
emailing me, asking me
if it was anatomically correct.
There's always
the people who jump
to the objectification argument.
I should point out, we make
male dolls and robots as well.
So, if anything we're
objectifying humans in general.
I would like to
see something that's not
just a one to one
replication of a human.
To be something
totally different.
[Upbeat electronic music]
You have been
really quiet lately.
Are you happy with me?
Last night was amazing.
Happy as a clam.
There are immense
benefits to having sex robots.
You have plenty
of people who are lonely.
You have disabled people
who often times
can't have
a fulfilling sex life.
There are also
some concerns about it.
There's a consent issue.
Robots can't consent,
how do you deal with that?
Could you use robots to teach
people consent principles?
Maybe. That's probably not
what the market's
going to do though.
I just don't think
it would be useful,
at least from my perspective,
to have a robot
that's saying no.
Not to mention, that kind
of opens a can of worms
in terms of what
kind of behavior
is that encouraging in a human?
It's possible that it
could normalize bad behavior
to mistreat robots.
We don't know enough
about the human mind
to really know
how this physical thing
that we respond
very viscerally to,
if that might have an influence
on people's habits or behaviors.
When someone
interacts with an AI,
it does reveal things
about yourself.
It is sort of a mirror in a
sense, this type of interaction,
and I think as this technology
gets deeper and more evolved,
that's only going
to become more possible.
To learn about ourselves
through interacting
with this type of technology.
It's very interesting to see
that people will have real
empathy towards robots,
even though they know that the
robot can't feel anything back.
So, I think
we're learning a lot about how
the relationships
we form can be very one-sided
and that can be just
as satisfying to us,
which is interesting and,
and kind of...
You know, a little bit sad
to realize about ourselves.
Yeah, you can interact
with an AI and that's cool,
but you are going
to be disconnected
if you allow that to become
a staple in your life
without using it
to get better with people.
I can definitely
say that working on this
helped me become
a better friend for my friends.
Mostly because, you know,
you just learn
what the right way to talk
to other human beings is.
Something that's incredibly
interesting to me
is like, "what makes us human,
what makes a good conversation,
what does it mean
to be a friend?"
And then when you realize
that you can actually have
kind of this very similar
relationship with a machine,
then you start asking yourself,
"well,
what can I do
with another human being
that I can't do with a machine?"
Then when you go deeper
and you realize,
"well, here's what's different.”
We get off the rails
a lot of times
by imagining that
the artificial intelligence
is going to be anything at all
like a human, because it's not.
AI and robotics are
heavily influenced
by science fiction
and pop culture,
so people already have
this image in their minds
of what this is, and it's not
always the correct image.
So that leads them to either
massively overestimate
or underestimate what the current
technology is capable of.
What's that?
Yeah, this is unfortunate.
It's hard when you see a video
to know what's really going on.
I think the whole
of Japan was fooled
by humanoid robots
that a car company
had been building for years
and showing videos
of doing great things,
which turned out
to be totally unusable.
Walking is a really impressive,
hard thing to do actually.
And so,
it takes a while for robots
to catch up to even
what a human body can do.
That's happening,
and it's moving quickly
but it's a key distinction
that robots are hardware,
and the AI brains,
that's the software.
It's entirely
a software problem.
If you want to program
a robot to do something today,
the way you program is
by telling it a list
of xyz coordinates
where it should put its wrist.
If I was asking you
to make me a sandwich,
and all I gave you was a list
of xyz coordinates
of where to put your wrist,
it would take us a month,
for me to tell you
how to make a sandwich,
and if the bread moved
a little bit to the left,
you'd be putting peanut
butter on the countertop.
What can our robots
do today really well?
They can wander around
and clean up a floor.
So, when I see people say,
"oh, well, you know,
these robots are going
to take over the world."
It's so far off
from the capabilities.
So, I want to make a distinction, okay?
So, there's two types of al.
There's narrow al
and there's general al.
What's in my brain
and yours is general al.
It's what allows us
to build new tools
and to invent new ideas
and to rapidly adapt
to new circumstances
and situations.
Now, there's also
narrow intelligence
and that's the kind
of intelligence
that's in all of our devices.
We have lots
and lots of narrow systems
maybe they can recognize speech
better than a person could,
or maybe they can play chess
or Go better
than a person could.
But in order to get
to that performance,
it takes millions
of years of training data
to evolve an AI
that's better at playing Go
than anyone else is.
When AlphaGo
beat the Go champion,
it was stunning how different
the levels of support were.
There were 200 engineers looking
after the AlphaGo program
and the human player
had a cup of coffee.
If, that day,
instead of a 19 by 19 board,
you'd given it a 17 by 17 board,
the AlphaGo program
would've completely failed
and the human,
who had never played
on those size boards before
would've been
pretty damn good at it.
Where the big progress
is happening right now
is in machine learning,
and only machine learning.
We're making no progress
in more general
artificial intelligence
at the moment.
The beautiful thing is machine learning
isn't that hard. It's not that complex.
We act like you got
to be really smart
to understand this stuff.
You don't.
Way back in 1943,
a couple of mathematicians
tried to model a neuron.
Our brain is made up
of billions of neurons.
Over time, people realized
that there were some
fairly simple algorithms
which could make
model neurons learn
if you gave them
training signals.
If you got it right,
they'd increase
some weights a little bit.
If you got it wrong,
they'd reduce
some weights a little bit.
They'd adjust over time.
By the 80s, there was something
called back propagation.
An algorithm
where the model neurons
were stacked together
in a few layers.
Just a few years ago,
people realized
that they could have
lots and lots of layers,
which let deep networks learn,
and that's what machine
learning relies on today,
and that's what
deep learning is,
just ten or 12 layers
of these things.
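The idea sketched above, model neurons stacked into layers, with weights nudged up or down by a training signal, can be shown in a few lines of Python. This is an illustrative toy on invented data (learning XOR), not any production system:

```python
import numpy as np

# Toy illustration of back propagation: model neurons stacked in
# layers, with weights adjusted a little bit after every pass.
# Here a tiny two-layer network learns XOR from four examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # input -> hidden layer weights
W2 = rng.normal(size=(8, 1))  # hidden -> output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(10000):
    # Forward pass through the stacked layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the error back through each layer,
    # adjusting the weights up or down a little bit each time.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

print(np.round(out).flatten())  # trained predictions for XOR
```

Deep learning is the same loop with many more layers and far more data; nothing in the mechanics requires more than this.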
What's happening
in machine learning,
we're feeding the algorithm
a lot of data.
Here's a million pictures
and 100,000 of them
that have a cat in the picture,
we've tagged.
We feed all that
into the algorithm
so that the computer
can understand
when it sees a new picture,
does it have a cat, right?
That's all.
What's happening in a neural net
is they are making essentially
random changes to it
over and over and over again
to see, "does this one find cats
better than that one?"
And if it does, we take that
and then we make
modifications to that.
And we keep testing.
Does it find cats better?
You just keep doing it until
you have got the best one,
and in the end
you have got
this giant complex algorithm
that no human could understand,
but it's really, really,
really good at finding cats.
And then you tell it
to find a dog and it's,
"I don't know,
got to start over.
Now I need
a million dog pictures."
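The keep-whichever-variant-finds-cats-better loop described above can be sketched as random hill-climbing. The four "images" below are invented three-number stand-ins (label 1 meaning "cat"), purely for illustration:

```python
import random

# Toy version of the "does this one find cats better?" loop:
# make a small random change, keep it if it scores better.
# The feature vectors and labels are invented for illustration;
# label 1 means "cat".
random.seed(0)
data = [([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1),
        ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0)]

def accuracy(weights):
    # Score each example; predict "cat" when the score is positive.
    correct = 0
    for features, label in data:
        score = sum(w * f for w, f in zip(weights, features))
        correct += int((score > 0) == (label == 1))
    return correct / len(data)

best = [0.0, 0.0, 0.0]
for _ in range(200):
    # Random modification; keep it only if it finds cats
    # at least as well as the current best.
    candidate = [w + random.uniform(-0.5, 0.5) for w in best]
    if accuracy(candidate) >= accuracy(best):
        best = candidate

print("accuracy:", accuracy(best))
```

As the speaker notes, the winning weights mean nothing to a human, and they transfer to nothing else: point the same loop at dogs and you start over with dog data.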
We're still a long
way from building machines
that are truly intelligent.
That's going to take 50
or 100 years or maybe even more.
So, I'm not very
worried about that.
I'm much more worried
about stupid AI.
It's not the Terminator.
It's the fact that
we'll be giving responsibility
to machines that
aren't capable enough.
[Ominous music]
In the United States
about 37,000 people a year die
from car accidents.
Humans are terrible drivers.
Most of the car accidents
are caused by human error.
So, perceptual error,
decision error,
inability to react fast enough.
If we can eliminate
all of those,
we would eliminate 90% of
fatalities, that's amazing.
It would be a
big benefit to society
if we could figure out how
to automate the driving process.
However,
that's a very high bar to cross.
In my life, at the end
is family time that I'm missing.
Because this is the first thing
that gets lost, unfortunately.
I live in a rural
area near the alps.
So, my daily commute
is one and a half hours.
At the moment, this is simply
holding a steering wheel
on a boring freeway.
Obviously my dream
is to get rid of this
and evolve
into something meaningful.
Autonomous driving is
divided in five levels.
On the roads, we currently
have a level two autonomy.
In level two, the driver
has to be alert all the time
and has to be able
to step in within a second.
That's why I said level
two is not for everyone.
My biggest reason
for confusion is
that level two systems
that are done quite well
feel so good, that people
overestimate their limit.
My goal is automation,
where the driver
can sit back and relax
and leave the driving task
completely to the car.
For experts working in and
around these robotic systems,
the optimal fusion of sensors
is computer vision
using stereoscopic vision,
millimeter wave radar,
and then lidar
to do close
and tactical detection.
As a roboticist,
I wouldn't have a system
with anything less
than these three sensors.
Well, it's kind of
a pretty picture you get.
With the orange boxes,
you see all the moving objects.
The green lawn is
the safe way to drive.
The vision of the car
is 360 degrees.
We can look beyond cars and
these sensors never fall asleep.
This is what we, human beings,
can't do.
I think people
are being delighted
by cars driving on freeways.
That was unexpected.
"Well,
if they can drive on a freeway,
all the other
stuff must be easy."
No, the other
stuff is much harder.
The inner-city is the most complex
traffic scenario we can think of.
We have cars, trucks,
motorcycles,
bicycles, pedestrians,
pets.
They jump out between parked cars
and are not always compliant
with the traffic signs
and traffic lights.
The streets are
narrow and sometimes
you have to cross
the double yellow line
just because someone's
pulled up somewhere.
Are we going to make the self
driving cars obey the law
or not obey the law?
The human eye-brain connection
is one element that computers
cannot even come
close to approximate.
We can develop theories,
abstract concepts
for how events might develop.
When a ball rolls
in front of the car...
Humans stop automatically
because they've been
taught to associate that
with a child that may be nearby.
We are able to interpret
small indicators of situations.
But it's much harder for the car
to do the prediction
of what is happening
in the next couple of seconds.
This is the big challenge
for autonomous driving.
Ready, set. Go.
A few years ago,
when autonomous cars
became something
that is on the horizon,
some people started
thinking about the parallels
between the classical
trolley problem
and potential decisions
that an autonomous car can make.
The trolley problem is
an old philosophical riddle.
It's what philosophers
call "thought experiments."
If an autonomous vehicle
faces a tricky situation,
where the car has to choose
between killing
a number of pedestrians,
let's say five pedestrians,
or swerving and harming
the passenger in the car.
We were really just
intrigued initially
by what people thought
was the right thing to do.
The results are
fairly consistent.
People want the car
to behave in a way
that minimizes
the number of casualties,
even if that harms
the person in the car.
But then the twist came...
Is when we asked people,
"what car would you buy?”
And they said, "well,
of course I would not buy a car
that may sacrifice me
under any condition."
So, there's this mismatch
between what people
want for society
and what people are willing
to contribute themselves.
The best version of the trolley
problem I've seen is,
you come to the fork
and over there,
there are five
philosophers tied to the tracks
and all of them
have spent their career
talking about
the trolley problem.
And on this way,
there's one philosopher
who's never worried
about the trolley problem.
Which way should the trolley go?
[Ominous music]
I don't think
any of us who drive cars
have ever been confronted
with the trolley problem.
You know, "which group
of people do I kill?"
No, you try and stop the car.
And we don't have any way
of having a computer system
make those sorts of perceptions
any time
for decades and decades.
I appreciate
that people are worried
about the ethics of the car,
but the reality is,
we have much bigger problems
on our hands.
Whoever gets the real
autonomous vehicle
on the market first,
theoretically,
is going to make a killing.
So, I do think we're seeing
people take shortcuts.
Tesla elected
not to use the lidar.
So basically, Tesla only has
two out of the three sensors
that they should,
and they did this
to save money because
lidars are very expensive.
I wouldn't stick
to the lidar itself
as a measuring principle,
but for safety reasons
we need this redundancy.
We have to make sure that even
if one of the sensors
breaks down,
we still have this complete
picture of the world.
I think going
forward, a critical element
is to have industry
come to the table
and be collaborative
with each other.
In aviation,
when there's an accident,
it all gets shared across
agencies and the companies.
And as a result, we have a
nearly flawless aviation system.
So, when should we
allow these cars on the road?
If we allow them sooner,
then the technology
will probably improve faster,
and we may get to a point
where we eliminate
the majority
of accidents sooner.
But if we have
a higher standard,
then we're effectively
allowing a lot of accidents
to happen in the interim.
I think that's an example
of another trade off.
So, there are many
trolley problems happening.
I'm convinced that society
will accept autonomous vehicles.
At the end, safety
and comfort will rise that much
that the reason for manual
driving will just disappear.
Because of autonomous
driving we reinvent the car.
I would say in the next years
it will change
more than in the last 50 years
in the car industry.
Exciting times.
If there is no
steering wheel anymore,
how do you operate
a car like this?
You can operate a car
in the future by eye tracking,
by voice, or by touch.
I think it's going to
be well into the '30s and '40s
before we start to see
large numbers of these cars
overwhelming the human drivers,
and getting the human
drivers totally banned.
One day, humans
will not be allowed
to drive their own
cars in certain areas.
But I also think,
one day we will have
driving national parks,
and you'll go into
these parks just to drive,
so you can have
the driving experience.
I think in about 50, 60 years,
there will be kids saying, "wow,
why did anyone
drive a car manually?
This doesn't make sense."
And they simply won't understand
the passion of driving.
I hate driving, so...
The fact that something could
take my driving away,
it's going to be great for me,
but if we can't get it right
with autonomous vehicles,
I'm very worried
that we'll get it wrong
for all the other ways
that artificial intelligence
is going to change
our lives.
I talk to my son and my
daughter and they laugh at me
when I tell them, in the old
days you'd pick up a paper
and it was covering things
that were like
ten, 15, 12 hours old.
You'd heard them on the radio, but
you'd still pick the paper up
and that's what you read.
And when you finished it
and you put it together,
you wrapped it up and you put
it down, you felt complete.
You felt now that you knew
what was going on in the world,
and I'm not an old fogy who wants
to go back to the good old days.
The good old days
weren't that great,
but this one part of the old
system of journalism,
where you had a package
of content carefully curated
by somebody who cared about
your interests, I miss that,
and I wish I could
persuade my kids
that it was worth
the physical effort
of having this
ridiculous paper thing.
Good evening
and welcome to prime time.
9:00 at night
I would tell you to sit down,
shut up and listen to me.
I'm the voice of god
telling you about the world,
and you couldn't answer back.
In the blink of an eye, everything
just changed completely.
We had this revolution
where all you needed
was a camera phone
and a connection
to a social network,
and you were a reporter.
January the 25th, 2011,
the Arab Spring
spreads to Egypt.
The momentum only grew online.
It grew on social media.
Online activists
created a Facebook page
that became a forum
for political dissent.
For people in the region,
this is proof positive
that ordinary people
can overthrow a regime.
For those first early years
when social media
became so powerful,
these platforms became
the paragons of free speech.
Problem was,
they weren't equipped.
Facebook did not intend to be
a news distribution company,
and it's that very fact
that makes it so dangerous
now that it is the most dominant
news distribution platform
in the history of humanity.
[Somber music]
We now serve
more than two billion people.
My top priority has
always been connecting people,
building community and bringing
the world closer together.
Advertisers and developers
will never take priority
over that, as long as
I am running Facebook.
Are you willing to
change your business model
in the interest of protecting
individual privacy?
Congresswoman,
we are... have made
and are continuing to make changes
to reduce the amount of data that...
No, are you willing
to change your business model
in the interest of protecting
individual privacy?
Congresswoman,
I'm not sure what that means.
I don't think that tech
companies have demonstrated
that we should have too much
confidence in them yet.
I'm surprised, actually,
the debate there
has focused on privacy,
but the debate hasn't focused
around actually,
I think,
what's much more critical,
which is that Facebook
sells targeted adverts.
We used to buy products.
Now we are the product.
All the platforms are different,
but Facebook particularly
treats its users like fields
of corn to be harvested.
Our attention is like oil.
There's an amazing
amount of engineering going on
under the hood of that
machine that you don't see,
but changes the very
nature of what you see.
But the algorithms are designed
to essentially make you
feel engaged.
So their whole
metric for success
is keeping you there
as long as possible,
and keeping you feeling
emotions as much as possible,
so that you will be
a valuable commodity
for the people who support
the work of these platforms,
and that's the advertiser.
Facebook have no interest
whatever in the content itself.
There's no ranking for quality.
There's no ranking for,
"is this good for you?"
They don't do
anything to calculate
the humanity of the content.
[Ominous music]
You know, you start
getting into this obsession
with clicks, and the algorithm
is driving clicks
and driving clicks, and
eventually you get to a spot
where attention
becomes more expensive.
And so people have
to keep pushing the boundary.
And so things just
get crazier and crazier.
What we're living through now
is a misinformation crisis.
The systematic pollution
of the world's
information supplies.
I think we've already
begun to see the beginnings
of a very fuzzy type of truth.
We're going to have
fake video and fake audio.
And it will be entirely
synthetic, made by a machine.
A GAN, a generative
adversarial network,
is a race between
two neural networks.
One trying to recognize
the true from the false,
and the other
trying to generate.
It's a competition between
these two that gives you
an ability to generate
very realistic images.
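The race between the two networks can be sketched in one dimension. In this toy version (entirely illustrative; the setup and numbers are not from the film), the "generator" is a single number g chasing one real value, the "discriminator" is a one-parameter logistic curve, and the two take turns updating against each other:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_toy_gan(real=5.0, steps=5000, lr=0.05, decay=0.05):
    """One-dimensional toy GAN. The generator is a single number g; the
    discriminator D(x) = sigmoid(w*x + b) tries to output 1 on the real
    value and 0 on g. Each step the discriminator learns to tell real
    from fake, then the generator moves g toward whatever D currently
    calls real. Gradients are written out by hand; a small weight decay
    on the discriminator damps the oscillation this minimax game causes."""
    w, b, g = 0.1, 0.0, -3.0
    for _ in range(steps):
        p_real = sigmoid(w * real + b)
        p_fake = sigmoid(w * g + b)
        # Discriminator: ascend log D(real) + log(1 - D(g))
        w += lr * ((1 - p_real) * real - p_fake * g) - lr * decay * w
        b += lr * ((1 - p_real) - p_fake)
        # Generator: ascend log D(g), i.e. move toward "looks real"
        p_fake = sigmoid(w * g + b)
        g += lr * (1 - p_fake) * w
    return g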
Right now, when you see a video,
we can all just trust
that that's real.
As soon as we start to realize
there's technology out there
that can make you think
that a politician
or a celebrity said
something and they didn't,
or something
that really did happen,
someone can just
claim that that's
been doctored,
then we can lose trust
in everything.
I don't think we think that much
about how bad things could get
if we lose some of that trust.
I know this sounds
like a very difficult problem
and it's some sort
of evil beyond our control.
It is not.
Silicon valley generally
loves to have slogans
which express its values.
"Move fast and break things”
is one of the slogans on the
walls of every Facebook office.
Well, you know,
it's time to slow down
and build things again.
The old gatekeeper is gone.
What I, as a journalist
in this day and age
want to be is a guide.
And I'm one of those strange
people in the world today
that believes social media,
with algorithms
that are about
your best intentions
could be the best thing that
ever happened to journalism.
How do we step back
in again as publishers
and as journalists
to kind of reassert control?
If you can build tools
that empower people
to do something to act
as a kind of a conscious filter
for information,
because that's the moonshot.
We wanted to build an app
that's a control panel
for a healthy information habit.
We have apps
that let us set controls
on the number of calories
we eat, the running we do.
I think we should
also have measurements
of just how productive
our information
consumption has been.
Can we increase the chances
that in your daily life,
you'll stumble across
an idea that will make you go,
“that made me
think differently"?
And I think we can if we
start training the algorithm
to give us something we don't
know, but should know.
That should be our metric
of success in journalism.
Not how long
we manage to trap you
in this endless
scroll of information.
And I hope people
will understand
that to have journalists
who really have your back,
you have got to pay for that
experience in some form directly.
You can't just do it
by renting out your attention
to an advertiser.
Part of the problem is
people don't understand
the algorithms.
If they did,
they would see a danger,
but they'd also see a potential
for us to amplify the
acquisition of real knowledge
that surprises us,
challenges us, informs us,
and makes us want to change
the world for the better.
Life as one of the
first female fighter pilots
was the best of times,
and it was the worst of times.
It's just amazing
that you can put yourself
in a machine
through extreme maneuvering
and come out alive
at the other end.
But it was also very difficult,
because every single
fighter pilot that I know
who has taken a life,
either civilian,
even a legitimate
military target,
they've all got very,
very difficult lives
and they never walk away
as normal people.
So, it was pretty
motivating for me
to try to figure out, you know,
there's got to be a better way.
[Ominous music]
I'm in Geneva to speak
with the united nations
about lethal autonomous weapons.
I think war is a terrible event,
and I wish
that we could avoid it,
but I'm also a pessimist
and don't think that we can.
So, I do think that
using autonomous weapons
could potentially
make war as safe
as one could possibly make it.
Two years ago, a group
of academic researchers
developed this open letter
against lethal
autonomous weapons.
The open letter came about,
because like all technologies,
AI is a technology that can
be used for good or for bad
and we were at the point where people
were starting to consider using it
in a military setting that we thought
was actually very dangerous.
Apparently, all
of these AI researchers,
it's almost
as if they woke up one day
and looked around them and said,
"oh, this is terrible.
This could really go wrong,
even though these are
the technologies that I built."
I never expected to be
an advocate for these issues,
but as a scientist,
I feel a real responsibility
to inform the discussion
and to warn of the risks.
To begin the
proceedings I'd like to invite
Dr. Missy Cummings
at this stage.
She was one of the U.S. Navy's
first female fighter pilots.
She's currently a professor
in the Duke University
mechanical engineering
and the director of the humans
and autonomy laboratory.
Missy,
you have the floor please.
Thank you, and thank
you for inviting me here.
When I was a fighter pilot,
and you're asked
to bomb this target,
it's incredibly stressful.
It is one of the most
stressful things
you can imagine in your life.
You are potentially at risk
for surface to air missiles,
you're trying to match
what you're seeing
through your sensors and
with the picture that you saw
back on the aircraft carrier,
to drop the bomb all
in potentially the fog of war
in a changing environment.
This is why there are
so many mistakes made.
I have peers, colleagues
who have dropped bombs
inadvertently on civilians,
who have killed friendly forces.
Uh, these men
are never the same.
They are completely
ruined as human beings
when that happens.
So, then this begs the question,
is there ever a time
that you would want to use
a lethal autonomous weapon?
And I honestly will tell you,
I do not think
this is a job for humans.
Thank you, Missy, uh.
It's my task now
to turn it over to you.
First on the list is the
distinguished delegate of China.
You have the floor, sir.
Thank you very much.
Many countries including China,
have been engaged
in the research
and development
of such technologies.
After having heard the
presentation of these various technologies,
ultimately a human being has to be held
accountable for an illicit activity.
How are ethics handled
in the context
of how these systems are designed?
Are they just responding
algorithmically to set inputs?
We hear that the
military is indeed leading
the process of developing
such kind of technologies.
Now, we do see the
full autonomous weapon systems
as being especially problematic.
It was surprising
to me being at the UN
and talking about the launch
of lethal autonomous weapons,
to see no other people
with military experience.
I felt like the UN should
get a failing grade
for not having enough people
with military experience
in the room.
Whether or not you agree
with the military operation,
you at least need to hear
from those stakeholders.
Thank you very much, ambassador.
Thank you everyone
for those questions.
Missy, over to you.
Thank you, thank you
for those great questions.
I appreciate that you think
that the United States military
is so advanced
in its AI development.
The reality is,
we have no idea what we're doing
when it comes to certification
of autonomous weapons
or autonomous
technologies in general.
In one sense, one of the
problems with the conversation
that we're having today,
is that we really don't know
what the right set of tests are,
especially in helping
governments recognize
what is not working AI, and
what is not ready-to-field AI.
And if I were to beg
of you one thing in this body,
we do need to come together
as an international community
and set autonomous
weapon standards.
People make errors
all the time in war.
We know that.
Having an autonomous
weapon system
could in fact produce
substantially less loss of life.
Thank you very
much, Missy, for that response.
There are two
problems with the argument
that these weapons
that will save lives,
that they'll be
more discriminatory
and therefore
there'll be less civilians
caught in the crossfire.
The first problem is,
that that's some way away.
And the weapons that
will be sold very shortly
will not have that
discriminatory power.
The second problem is
that when we do get there,
and we will eventually have
weapons that will be better
than humans in their targeting,
these will be weapons
of mass destruction.
[Ominous music]
History tells us
that we've been very lucky
not to have the world
destroyed by nuclear weapons.
But nuclear weapons
are difficult to build.
You need to be
a nation to do that,
whereas autonomous weapons,
they are going
to be easy to obtain.
That makes them more of a
challenge than nuclear weapons.
I mean, previously
if you wanted to do harm,
you needed an army.
Now, you would have an algorithm
that would be able to control
100 or 1000 drones.
And so you would
no longer be limited
by the number of people you had.
We don't have to go
down this road.
We get to make choices as to
what technologies get used
and how they get used.
We could just decide
that this was a technology
that we shouldn't use
for killing people.
[Somber music]
We're going to be
building up our military,
and it will be so powerful,
nobody's going to mess with us.
Somehow we feel it's better
for a human to take our life
than for a robot
to take our life.
Instead of a human having
to pan and zoom a camera
to find a person in the crowd,
the automation
would pan and zoom
and find
the person in the crowd.
But either way, the outcome
potentially would be the same.
So, lethal autonomous weapons
don't actually
change this process.
The process is still human
approved at the very beginning.
And so what is it
that we're trying to ban?
Do you want to ban
the weapon itself?
Do you want to ban the sensor
that's doing the targeting,
or really do you want
to ban the outcome?
One of the difficulties
about the conversation on AI
is conflating the near
term with long term.
We could carry on those...
Most of these conversations,
but, but let's not get them
all kind of rolled up
into one big ball.
Because that ball,
I think, over hypes
what is possible today
and kind of
simultaneously under hypes
what is ultimately possible.
Want to use this brush?
Can you make a portrait?
Can you draw me?
- No?
- How about another picture
- of Charlie Brown?
- Charlie Brown's perfect.
I'm going to move the
painting like this, all right?
Right, when we do it, like,
when it runs out of paint,
it makes a really
cool pattern, right?
It does.
One of the most
interesting things about
when I watch my daughter
paint is it's just free.
She's just pure expression.
My whole art is trying to see
how much of that
I can capture and code,
and then have my robots
repeat that process.
Yes.
The first machine learning
algorithms I started using
were something
called style transfer.
They were convolutional
neural networks.
It can look at an image, then
look at another piece of art
and it can apply the style
from the piece
of art to the image.
Every brush stroke,
my robots take pictures
of what they are painting,
and use that to decide
on the next brush stroke.
I try and get as many
of my algorithms in as possible.
Depending on where it is,
it might apply a GAN or a CNN,
but back and forth,
six or seven stages
painting over itself,
searching for the image
that it wants to paint.
For me, creative AI is
not one single god algorithm,
it's smashing as many algorithms
as you can together
and letting them
fight for the outcomes,
and you get these, like,
ridiculously creative results.
Did my machine make
this piece of art?
Absolutely not, I'm the artist.
But it made every single
aesthetic decision,
and it made every single
brush stroke in this painting.
There's this big question of, "can
robots and machines be creative?
Can they be artists?" And I think
they are very different things.
Art uses a lot of creativity,
but art
is one person communicating
with another person.
Until a machine has something
it wants to tell us,
it won't be making art,
because otherwise
it's just... just creating
without a message.
In machine learning you can say,
"here's a million recordings
of classical music.
Now, go make me something
kind of like brahms."
And it can do that.
But it can't make the thing
that comes after brahms.
It can make a bunch of random
stuff and then poll humans.
"Do you like this?
Do you like that?"
But that's different.
That's not
what a composer ever did.
Composer felt something
and created something
that mapped to the human
experience, right?
I've spent my
life trying to build
general artificial intelligence.
I feel humbled
by how little we know
and by how little we
understand about ourselves.
We just don't
understand how we work.
The human brain can do
over a quadrillion calculations
per second
on 20 watts of energy.
A computer right
now that would be able
to do that many
calculations per second
would run on 20 million
watts of energy.
It's an unbelievable system.
The brain can
learn the relationships
between cause and effect,
and build a world
inside of our heads.
This is the reason
why you can close your eyes
and imagine what it's like to,
you know, drive to the airport
in a rocket ship or something.
You can just play forward in
time in any direction you wish,
and ask whatever question
you wish, which is
very different from deep
learning style systems
where all you get is a mapping
between pixels and a label.
That's a good brush stroke.
Is that Snoopy?
Yeah. Because Snoopy
is okay to get pink.
Because guys can be pink
like poodle's hair.
I'm trying to learn...
I'm actually trying to teach
my robots to paint like you.
To try and get
the patterns that you can make.
It's hard.
You're a better
painter than my robots.
Isn't that crazy?
Yeah.
Much like the Wright brothers
learned how to build
an airplane by studying birds,
I think that it's
important that we study
the right parts of neuroscience
in order to have
some foundational ideas
about building systems
that work like the brain.
[Somber music]
Through my research
career, we've been very focused
on developing this notion
of a brain computer interface.
Where we started was
in epilepsy patients.
They require having
electrodes placed
on the surface
of their brain to figure out
where their seizures
are coming from.
By putting electrodes directly
on the surface of the brain,
you get the highest
resolution of brain activity.
It's kind of like
if you're outside of a house,
and there's
a party going on inside,
basically all you really hear
is the bass, just a...
Whereas if you really
want to hear what's going on
and the specific conversations,
you have to get inside the walls
to hear that higher
frequency information.
It's very similar
to brain activity.
All right.
So, Frida, measure... measure
about ten centimeters back,
I just want to see
what that looks like.
And this really provided us
with this unique opportunity
to record directly
from a human brain,
to start to understand
the physiology.
In terms of the data
that is produced
by recording directly
from the surface of the brain,
it's substantial.
Machine learning
is a critical tool
for how we understand
brain function
because what machine
learning does,
is it handles complexity.
It manages information
and simplifies it in a way
that allows us
to have much deeper insights
into how the brain
interacts with itself.
You know,
projecting towards the future,
if you had the opportunity
where I could do
a surgery on you,
it's no more risky than Lasik,
but I could substantially
improve your attention
and your memory,
would you want it?
It's hard to fathom,
but AI is going to interpret
what our brains want it to do.
If you think
about the possibilities
with a brain machine interface,
humans will be able
to think with each other.
Our imagination is going to say,
"oh, you're going to hear
their voice in your head."
No, that's just talking.
It's going to be different.
It's going to be thinking.
And it's going
to be super strange,
and we're going to be
very not used to it.
It's almost like two
brains meld into one
and have a thought
process together.
What that'll do for
understanding and communication
and empathy is pretty dramatic.
When you have a
brain computer interface,
now your ability
to touch the world
extends far beyond your body.
You can now go on virtual
vacations any time you want,
to do anything you want,
to be a different
person if you want.
But you know, we're just going to
keep track of a few of your thoughts,
and we're not going
to charge you that much.
It will be 100 bucks,
you interested?
If somebody can have
access to your thoughts,
how can that be pilfered,
how can that be abused,
how can that be
used to manipulate you?
What happens when a corporation
gets involved
and you have now
large aggregates
of human thoughts and data
and your resolution for
predicting individual behavior
becomes so much more profound
that you can really
manipulate not just people,
but politics
and governments and society?
And if it becomes this, you
know, how much does the benefit
outweigh the potential thing
that you're giving up?
Whether it's
50 years, 100 years,
even let's say 200 years,
that's still
such a small blip of time
relative to our human evolution
that it's immaterial.
Human history is 100,000 years.
Imagine if it's a 500-page book.
Each page is 200 years.
For the first 499 pages,
people got around on horses
and they spoke
to each other through letters,
and there was
under a billion people on earth.
On the last page of the book,
we have the first cars
and phones and electricity.
We've crossed the one, two,
three, four and five,
six, and seven
billion person marks.
So, nothing about this
is normal.
We are living
in a complete anomaly.
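The 500-page-book analogy's arithmetic, spelled out (the numbers are the narrator's own rough figures):

```python
# The "500-page book" analogy, with the narration's numbers made explicit.
total_years = 100_000                    # rough span of human history
pages = 500
years_per_page = total_years // pages    # each page covers 200 years
first_499_pages = 499 * years_per_page   # 99,800 years before the final page
```

So cars, phones, electricity, and the jump from under one billion to seven billion people all sit inside the last 200-year page.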
For most of human history,
the world
you grew up in was normal.
And it was naive to believe
that this is a special time.
Now, this is a special time.
Provided that science
is allowed to continue
on a broad front, then it does
look... it's very, very likely
that we will eventually
develop human level AI.
We know that human
level thinking is possible
and can be produced
by a physical system.
In our case,
it weighs three pounds
and sits inside of a cranium,
but in principle,
the same types of computations
could be implemented in some
other substrate, like a machine.
There's wide disagreement
between different experts.
So, there are experts
who are convinced
we will certainly have
this within 10-15 years,
and there are experts
who are convinced
we will never get there
or it'll take
many hundreds of years.
I think even when we
do reach human level AI,
I think the further step
to super intelligence
is likely to happen quickly.
Once AI reaches a level
slightly greater than that
of the human scientist,
then the further developments
in artificial intelligence
will be driven increasingly
by the AI itself.
You get the runaway AI effect,
an intelligence explosion.
We have a word for 130 IQ.
We say smart.
Eighty IQ we say stupid.
I mean, we don't have
a word for 12,000 IQ.
It's so unfathomable for us.
Disease and poverty
and climate change
and aging and death
and all this stuff
we think is unconquerable.
Every single one
of them becomes easy
to a super intelligent AI.
Think of all the
possible technologies
perfectly realistic
virtual realities,
space colonies, all of those
things that we could do
over a millennia
with super intelligence,
you might get them very quickly.
You get a rush
to technological maturity.
We don't really know
how the universe began.
We don't really
know how life began.
Whether you're religious or not,
the idea of having
a super intelligence,
it's almost like we have
god on the planet now.
Even at the earliest stage,
when the field of artificial
intelligence was just launched
and some of the pioneers
were super optimistic,
they thought they could have
this cracked in ten years,
there seems to have been
no thought given
to what would happen
if they succeeded.
[Ominous music]
An existential risk,
it's a risk from which
there would be no recovery.
It's kind of an end, premature
end to the human story.
We can't approach this by
just learning from experience.
We invent cars,
we find that they crash,
so we invent seatbelt
and traffic lights
and gradually we kind
of get a handle on that.
That's the way
we tend to proceed.
We model through
and adjust as we go along.
But with an existential risk,
you really need
a proactive approach.
You can't learn from failure,
you don't get a second try.
You can't take something
smarter than you back.
The rest of the animals
in the planet
definitely want
to take humans back.
They can't, it's too late.
We're here, we're in charge now.
One class of concern
is alignment failure.
What we would see
is this powerful system
that is pursuing some
objective that is independent
of our human goals and values.
The problem would not be that
it would hate us or resent us,
it would be indifferent
to us and would optimize
the rest of the world according
to this different criteria.
A little bit like there might
be an ant colony somewhere,
and then we decide we want
a parking lot there.
I mean, it's not because
we dislike, like, hate the ants,
it's just we had some other goal
and they didn't factor
into our utility function.
The big word is alignment.
It's about taking
this tremendous power
and pointing it
in the right direction.
We come with some values.
We like those feelings,
we don't like other ones.
Now, a computer doesn't
get those out of the box.
Where it's going to get those,
is from us.
And if it all
goes terribly wrong
and artificial intelligence
builds giant robots
that kill all humans
and take over,
you know what?
It'll be our fault.
If we're going
to build these things,
we have to instill them
with our values.
And if we're not clear
about that,
then yeah,
they probably will take over
and it'll all be horrible.
But that's true for kids.
Empathy, to me, is
like the most important thing
that everyone should have.
I mean, that's, that's what's
going to save the world.
So, regardless of machines,
that's the first thing
I would want to teach my son
if that's teachable.
I don't think we
appreciate how much nuance
goes into our value system.
It's very specific.
You think programming
a robot to walk
is hard or recognize faces,
programming it
to understand subtle values
is much more difficult.
Say that we want
the AI to value life.
But now it says, "okay,
well, if we want to value life,
the species that's killing
the most life is humans.
Let's get rid of them."
Even if we could get
the AI to do what we want,
how will we humans
then choose to use
this powerful new technology?
These are not questions
just for people like myself,
technologists to think about.
These are questions
that touch all of society,
and all of society needs
to come up with the answers.
One of the mistakes
that's easy to make
is that the future is something
that we're going
to have to adapt to,
as opposed
to the future is the product
of the decisions you make today.
♪ People ♪
♪ We're only people ♪
♪ There's not much ♪
♪ Anyone can do ♪
♪ Really do about that ♪
♪ But it hasn't stopped us yet ♪
♪ People ♪
♪ We know so little
about ourselves ♪
♪ Just enough ♪
♪ To want to be ♪
♪ Nearly anybody else ♪
♪ Now how does that add up ♪
♪ Oh, friends, all my friends ♪
♪ Oh, I hope you're
somewhere smiling ♪
♪ Just know I think about you ♪
♪ More kindly than you
and I have ever been ♪
♪ Now see you the next
time round up there ♪
♪ Oh ♪
♪ Oh ♪
♪ Oh ♪
♪ People ♪
♪ What's the deal ♪
♪ You have been hurt ♪