Nova (1974–…): Season 46, Episode 18 - Look Who's Driving - full transcript
Exploring the possibility of self-driving cars, including whether they could really be safer than human drivers.
A lethal technology
we can't live without.
But what if cars got smarter?
Automated vehicles offer
the promise
of dramatically reducing
collisions and fatalities
on our roads and highways.
The big dream is to make a car
that will never be responsible
for a collision.
The potential payoff is huge.
In the mid-2030s,
the market could be worth
a few trillion U.S. dollars,
with a T in there.
Pursuing that pot of gold,
companies are already testing
their cars on our streets.
This is a bold
and ambitious mission.
Is this the beginning
of a mobility revolution?
We're venturing into domains
that ten years ago
would be deemed science fiction.
Or are we simply moving
too fast?
The technologies
are deeply flawed.
It's just simply not ready
for public consumption.
Will we ever be safe
with a robot behind the wheel?
"Look Who's Driving,"
right now, on "NOVA."
♪
Major funding for "NOVA"
is provided by the following:
♪
911, what is your emergency?
Um, yes, I, um...
I hit a bicyclist.
Do you need paramedics?
I don't, but they do.
♪
Every day in the United States,
there are about 100
fatal car crashes.
But on March 18, 2018,
one attracts
particular attention,
because the woman behind
the wheel, Rafaela Vasquez,
is not driving... a computer is.
♪
The crash puts a spotlight on
a controversial new technology:
the self-driving car.
So what exactly happened?
Well, the car was in auto-drive.
Uh-huh.
And all of a sudden, I...
The car didn't see it,
I didn't see it.
And all of a sudden,
she's just there.
But she shot out in front.
And then I think I...
I know I hit 'em.
The victim is 49-year-old
Elaine Herzberg.
The car that kills her
is being tested by Uber,
the ride-hailing company.
It's a modified Volvo equipped
with Uber's self-driving
technology.
Test drives are permitted
in Arizona,
as long as there's
a safety driver to take over
in case of trouble.
♪
But on this night,
Uber's experiment badly fails.
Elaine Herzberg is
the first person in history
killed by a self-driving car.
♪
A woman was pushing a bike
across a road.
This is the sweet spot,
in theory,
where autonomy would be
at its best,
particularly compared to humans,
who have terrible vision
at night.
And I think that's a, sadly, um,
it's a very good example
of just how far away from safe
these cars really are.
Now, the question I have
for you guys is,
do you guys have remote access
to the cameras and stuff
like that that are in there?
We technically should, yeah.
Herzberg's death quickly becomes
an international story.
Killed on the street
by a self-driving Uber car...
Uber is now suspending all
of its self-driving testing...
Self-driving cars
under intense scrutiny after...
Uber's C.E.O. tweeting,
"Incredibly sad news.
We're thinking of
the victim's family..."
So the question here is,
"What went wrong?"
♪
The Tempe crash stokes
public fears
about self-driving cars.
Nearly three out of four
Americans
say they'd be afraid
to ride in one.
So why is anyone even trying
to make a car
that drives itself?
Self-driving vehicles
don't get distracted.
They don't get fatigued,
uh, they don't fall asleep.
Uh, and, you know,
they don't drive drunk.
In other words,
they promise safety.
Each year in the U.S.,
some 35,000 people die
in car crashes,
nearly all caused
by human error.
They're because of a choice
or an error that we make,
a choice to pick up the phone,
drive drunk, drive drugged,
drive distracted, drive drowsy.
You're looking one direction
when you should have been
watching in the other direction.
94% of crashes.
The hope is that computers
will be able to do a better job.
I really believe
that with self-driving cars,
we will eliminate
road accidents.
If we get the technology right,
those cars will know everything
about the road situation,
the road condition,
well before a human would.
So that if there is somebody
running around the corner
about to jump in front
of the car,
the car will know that,
and there will be no accidents.
This is the dream.
And that dream is
about much more than safety.
Proponents say self-driving cars
could bring about
the biggest changes
in transportation
since horses gave way
to the automobile.
Instead of having to own a car,
people could simply summon one
from a circulating fleet
of robotaxis,
reducing the number of cars
on the road,
cutting pollution,
and eliminating the need
to have so many private cars
sitting idle all day long.
For people who can't drive,
self-driving cars could also
provide greater mobility.
You can put your kid
on an autonomous car,
and it will take him to school,
and problem solved.
Some see a future
where cars talk to each other,
reducing traffic jams.
Others are less sure.
It could also
lead to the nightmare scenario,
as well,
where inexpensive mobility
leads to dramatic consumption
of mobility,
congested freeways,
unsustainable use of energy,
and an acceleration
of climate change
and other issues
that we're facing now.
This is a disruption.
I would call this
"Mobility 2.0."
If, if cars today is 1.0,
and, uh, horse carriages
was 0.0,
this is a new era of mobility.
An era which in some places
seems very close at hand.
Like here, on the streets
of San Francisco.
All right, I'm going to go ahead
and slide to go.
So we're off
on our autonomous way.
Jesse Levinson demonstrates
how his company's
self-driving car
navigates the city streets.
On the screen here,
you can see what the vehicle
is planning to do.
That's the green corridor.
The display shows how
the route is constantly adjusted
in response to data from cameras
and scanners that use radar
and lasers.
The system is designed to handle
anything that might happen.
But on test drives, the company
always has a safety driver.
Here's a really interesting
situation
we're about to encounter,
is a six-way
unprotected intersection.
This intersection is
so complicated,
I'm not sure
I know how to drive it.
But what we're doing here is,
we're going to make a left turn.
We're checking
all the oncoming traffic.
The car scans its surroundings
to determine
if its path is clear.
We're also yielding
for all these pedestrians
in the crosswalk.
The computer won't allow the car
to proceed
until it's safe.
And literally tracking
hundreds of dynamic agents
at the same time.
Now, we've just made
our way back.
That was a 100% autonomous drive
with absolutely
no manual interventions.
Pretty cool, huh?
Test drives like this
are impressive.
But some warn
it will be a long time
before computers can
consistently drive more safely
than humans do.
Because driving,
though it may seem easy,
is actually
a very difficult task.
Let me make the following
bold assertion.
Driving is the most complex
activity
that most adults on the planet
engage in
on a regular basis.
When we drive,
the vehicle is moving,
the environment is changing
on a continuing basis,
all these pieces of information
actually coming
towards our senses,
goes to the brain.
The brain basically applies
the rules of the road.
And for the most part,
we drive safely.
Each of the 35,000 annual
crash deaths in the U.S.
is tragic.
But they're statistically rare.
On average, there's only one
for every 100 million miles
of driving.
That translates into
3.4 million hours of driving.
3.4 million hours is 390 years
of continuous, 24-hours-a-day,
seven-days-a-week driving
in between fatal crashes.
That's a very high bar
for technology to clear.
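The arithmetic behind those figures can be checked with a quick back-of-the-envelope calculation. The average driving speed is not stated in the film; it is implied by the 3.4-million-hour figure:

```python
# Back-of-the-envelope check of the fatal-crash statistics quoted above.
# Assumption (ours): the average speed is whatever makes the film's
# 100-million-mile and 3.4-million-hour figures consistent.

MILES_PER_FATAL_CRASH = 100_000_000
HOURS_QUOTED = 3_400_000

avg_speed_mph = MILES_PER_FATAL_CRASH / HOURS_QUOTED   # implied average speed
years_between_crashes = HOURS_QUOTED / (24 * 365)      # continuous driving

print(f"Implied average speed: {avg_speed_mph:.1f} mph")        # ~29.4 mph
print(f"Continuous driving between fatal crashes: {years_between_crashes:.0f} years")
# ~388 years, which the narration rounds to 390
```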
Think about our modern
electronic devices
that are powered by software
that we use every day.
And try to imagine
that those devices could run
without ever giving you
the spinning blue doughnut
that said it's not ready to give
you the answer you wanted.
Because if that computer
was driving your vehicle,
you crashed.
So getting to the point
where we have
a software-intensive device
that can operate without a fault
is a huge, huge challenge.
This is a bold and ambitious
mission.
A mission that really
isn't going
to be accomplished overnight.
Michael Fleming speaks
from hard experience.
He's been working
on self-driving cars
for more than a decade.
And like many in the field
today,
he got his start thanks to
a push from a surprising place:
the Pentagon.
In 2000, hoping to reduce
battlefield casualties,
Congress orders the military
to develop combat vehicles
that can drive themselves.
Two years later,
the Pentagon's research agency, DARPA,
announces what it calls
the Grand Challenge:
a driverless car race,
142 miles through
the California desert.
Whoever finishes first
will win $1 million.
March 13, 2004.
13 vehicles... rigged with
sensors to detect what's ahead
and software to control speed
and steering...
set out.
And we and a lot of other teams
went out to compete in the
DARPA, you know, Grand Challenge
with high hopes of bringing home
a million-dollar prize.
Right away, the vehicles run
into trouble.
The course is littered
with rocks, cliffs,
cattle grates, and river beds...
obstacles that the sensors
sometimes miss.
We got 100 yards
out of the gate,
and the vehicle
just stopped working.
And we failed miserably
with everyone else.
Of the full 142 miles,
no vehicle goes farther
than eight.
But a year later,
DARPA provides a second chance:
a new challenge
that doubles the payoff
to $2 million.
Among the 2005 competitors,
a team from Stanford,
confident that their car,
named Stanley,
can make it all the way
to the finish line.
Their ace in the hole:
cutting-edge software that uses
A.I... artificial intelligence.
We built a computer system
using artificial intelligence
that's able to actually find
the road very, very reliably.
What you see here is
a map of the environment
where Stanley drives.
The red stuff is stuff that
it doesn't want to drive over,
it's dangerous.
The white stuff is the road
as found by Stanley,
and the gray stuff
that you see here
is stuff it just doesn't know
anything about,
so it's not going to drive
there.
Stanford's strategy pays off.
Stanley is the first to cross
the finish line.
We have the green flag.
Two years later...
Launch the Bots!
...the Pentagon stages
a final contest,
adding a new complexity:
traffic.
DARPA calls it
the Urban Challenge.
In the 2007 Urban Challenge,
they let us drive
with other cars,
both autonomous cars
as well as human-driven cars.
And so that was really exciting,
because all of a sudden, now you
have to track dynamic objects
and predict what they're going
to do in the future,
and that's
a much harder problem.
To help solve it,
some half a dozen teams bank
on a detection technology
called lidar,
which dramatically improves
a car's ability to see.
They rig devices
that spin 360 degrees.
Lidar works using laser beams,
pulses of invisible light
that bounce off everything
in their path.
A sensor collects
the reflections,
which provide a precise picture
of the environment.
And as those lasers are sweeping
around in a circle,
you get what's called
a point cloud.
So you'll get thousands,
even millions of points
coming back to the sensor.
And you can build up
a point cloud
that looks similar to this,
and it gives you
very accurate geometric detail.
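The geometry behind a point cloud can be sketched in a few lines: each laser pulse returns an angle pair and a range, which convert to a 3-D point. This is a minimal illustration of the principle; real lidar drivers and coordinate conventions differ.

```python
import math

def lidar_to_points(returns):
    """Convert spinning-lidar returns into 3-D Cartesian points.

    `returns` is a list of (azimuth_deg, elevation_deg, range_m) tuples,
    one per laser pulse; the output is the "point cloud" described above.
    """
    cloud = []
    for az_deg, el_deg, r in returns:
        az, el = math.radians(az_deg), math.radians(el_deg)
        x = r * math.cos(el) * math.cos(az)   # forward
        y = r * math.cos(el) * math.sin(az)   # left
        z = r * math.sin(el)                  # up
        cloud.append((x, y, z))
    return cloud

# One full 360-degree sweep at a single elevation, 10 m from every object:
sweep = [(az, 0.0, 10.0) for az in range(0, 360, 1)]
cloud = lidar_to_points(sweep)
print(len(cloud), "points")  # 360 points
```

A real sensor fires dozens of lasers at different elevations many times per second, which is how the "thousands, even millions of points" accumulate.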
Using all that data
to pilot a car
demands new kinds of software.
I vividly remember
the first time
when some of my software that
I had just written hours ago
ran on the car.
That was pretty incredible.
There was nobody
behind the wheel,
and it was just doing everything
on its own.
But not quite perfectly.
Okay, folks, we have got our
first autonomous traffic jam.
Another historic event
right here.
This time,
six of 11 cars complete
the course successfully.
It's a major turning point,
but there's also
a growing appreciation
for the immense challenge ahead.
The reality was,
the Urban Challenge
was a very small step
compared to what had to be done
to actually get
commercial vehicles on the road.
It was only a six-hour race.
So basically, if your car
could last for six hours
without hitting something,
you were, like,
"Yep, we did it," right?
Now, getting into
a commercial service
with thousands of vehicles
and setting a safety bar
that's significantly higher
than human-level performance,
that's very difficult.
Still, the potential
of this new technology
proves irresistible to engineers
and to business.
♪
The DARPA challenges
may have seemed
like just a geeky
science project,
but they trigger a race
to build a whole new industry.
One of the first
out of the gate?
Tech giant Google.
Should we do
a simple test first?
In university robotics labs...
Yes!
...and the start-ups
that spin out of them,
a growing army of engineers
keeps improving
both sensors and software.
The big car manufacturers
take notice,
and they too enter the fray.
Well, hello,
ladies and gentlemen,
and welcome to C.E.S.
They're betting
that driverless cars
are about to become
a huge global business.
The autonomous vehicle market
is supposed to be an
enormous market in the future.
Estimates say
that in the mid-2030s,
the market could be worth
a few trillion U.S. dollars,
with a T in there.
That potential payday
brings Uber to Arizona,
whose flat landscape
and sunny weather are ideal
for testing self-driving cars.
Uber sees eliminating
its paid drivers
as a key
to future profitability.
But the March 2018 crash
in Tempe
casts a dark shadow on the
future of self-driving cars.
Uber suspends all testing
on public roads.
The National Transportation
Safety Board
launches an investigation:
Why did neither
the computer system
nor the safety driver
stop the car?
The Uber crash in Tempe
wasn't really about the maturity
of the technology.
We all knew the
self-driving car technology
wasn't ready to deploy.
That's why they were doing
testing.
The significance
of the Uber crash
was that the human safety driver
failed to prevent the crash.
In the months after the crash,
the NTSB and Tempe police
release new details
about what happened that night.
The interior camera reveals
that Rafaela Vasquez
is not watching the road
for nearly seven of the
22 minutes before the crash,
including about five
of the last six seconds.
♪ Chain, chain, chain ♪
The police discover
that the whole time
the car is moving,
she is streaming an episode
of a singing competition
on her phone.
♪ Chain of fools
Vasquez denies watching
the video
or even looking at her phone.
She says she was just checking
the control panel.
But she does not step
on the brake
until after the car
strikes Herzberg.
The NTSB findings
also point to flaws
in the self-driving system.
The sensors actually detect
Herzberg
six seconds before impact,
but the system doesn't alert
the safety driver.
It's not designed to.
So even though their sensors
had detected this target...
there was a potentially
hazardous situation...
they made the decision
to not give any of that
information to the test driver.
I can't imagine why.
And the computer doesn't decide
emergency braking is needed
until 1.3 seconds before impact.
But even then,
the car doesn't brake,
because Uber has disabled
the Volvo's factory-installed
emergency braking system.
Uber wants to avoid jerky rides
caused by unnecessary braking
for harmless objects.
So the net result of that is,
they took
a safe production vehicle
and turned it
into an unsafe prototype
that caused the pedestrian
to be killed.
Uber tells the NTSB that
it's the safety driver's job
to correct any mistakes made
by the self-driving system.
Now, while many people look
at the safety driver
in that vehicle
as, as perhaps the villain,
the real issue is the system
that that safety driver
was put into.
They were put into a system
ripe for failure,
where the expectations
of failure
were far lower than
the actual probabilities.
Perhaps because the computer
scientists and the engineers
building these systems
don't all appreciate
the complexities
that human behavior brings
to the system.
Uber declined to participate
in this film.
But even before Tempe,
there were signs
that drivers and automation
don't always work together
very well.
Automotive engineers
classify automation
into levels, from zero to five.
Fully automated cars,
the ones driven by computers,
are levels three, four,
and five.
Uber was testing its car
as a level four prototype.
At the lower levels,
humans drive.
In level zero cars,
they handle everything.
Level one and two cars
assist drivers
by regulating speed
or keeping the car in its lane,
or even both.
They're partially automated.
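The level taxonomy the narration describes can be condensed into a simple lookup table. The wording here is our paraphrase of the levels as the film presents them, not the official standard text:

```python
# The automation levels as described in the narration, condensed into
# a lookup table (wording is our paraphrase, not the SAE standard text).
AUTOMATION_LEVELS = {
    0: "No automation: the human handles everything",
    1: "Driver assistance: speed OR steering assist",
    2: "Partial automation: speed AND steering assist; human supervises",
    3: "Conditional automation: computer drives, human takes over on request",
    4: "High automation: no human needed, but only on some roads, at certain times",
    5: "Full automation: no human needed, anytime, anywhere",
}

def who_is_driving(level):
    """Return 'human' or 'computer' per the film's framing of the levels."""
    return "human" if level <= 2 else "computer"

print(who_is_driving(2))  # human
print(who_is_driving(4))  # computer
```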
Partial automation is the
foundation for full automation.
And how we respond and adapt
to automation
at the lower levels
gives us our window
into the future.
Millions of level one and two
cars are on the road today.
Insurance data suggest that
they are less likely to crash
because most of them
automatically brake the car
to avoid collisions.
And indeed,
it appears these systems
are making people
better drivers, because...
Think of it
as an extra set of eyes and ears
beyond your own eyes and ears.
So you get that additional
vigilance
from the sensor systems
that may detect things
that you missed.
But the added confidence
provided by partial automation
also poses an unforeseen risk.
In the Volvo,
she's clearly using
pilot assist a lot.
Which is great, keeping
her hands on the wheel,
paying attention to
what's going on in front of her.
Mm-hmm.
A research team at M.I.T.
is gathering data on how people
use partial automation.
Seem to use Autopilot a lot?
Yeah, he's been using it
a little bit more
than everyone else, actually.
The cars we are studying today
can automatically steer,
accelerate, and brake
in ways that were not available
in production vehicles
20 years ago.
So I think the ultimate question
is, you know,
"How does attention change
over time
as we use these cars
over months and years?"
And that's the very heart
of the question
we're hoping to understand more.
Cathy Urquhart was given
a Volvo S90 to test-drive.
It's equipped with
several automated features,
including one for steering
that keeps the car in its lane.
The first day I was nervous,
of course.
It's new technology,
and I think it's the same
when you're driving any new car.
You have to get used to it.
But I liked the lane monitor to
keep you in the, in the lane.
Uh, it wasn't too obtrusive
with the, uh, car
just kind of pulling you over,
the wheel pulling you
to one side.
It was good, I liked that.
When the Volvo's steering
assistance is on,
Cathy could briefly
take her hands off the wheel...
but she doesn't.
I'm not somebody
that takes my hands
off the steering wheel...
I like to have control.
I like to hold on
to the steering wheel.
But there are others
in the M.I.T. study,
like Taylor Ogan,
who are much more trusting
of automation.
He owns a Tesla equipped
with software called Autopilot.
Actually traded my first car in
to get Autopilot,
so that the car
could drive itself.
It helps you stay in your lane
and follow the car,
the distance of the car
in front of you.
And you really don't have
to touch anything.
It's awesome, I love it.
I swear by it.
All right, here we go.
All right.
All right, oh!
Tesla's founder and C.E.O.,
Elon Musk,
has long touted Autopilot
as a step along the road
to his goal of full autonomy.
No hands, no feet, nothing.
It's a combination of radar,
camera with image recognition,
and ultrasonic sensors
that's integrated with maps
and real-time traffic.
But for now, the Tesla system
is still level two,
which means it requires
just as much driver attention
as a regular car.
So the company has been
criticized
for choosing the name Autopilot.
It's a nice, catchy-sounding
name,
but "Autopilot" certainly
has a misleading connotation.
It suggests
that the pilot of the car
is in the program,
in the computer,
and not in the driver
of the vehicle.
But Musk claims that
the name is not misleading.
Autopilot is what
they have in airplanes.
It's where there's still an
expectation there'll be a pilot.
So the, so if, if,
the, the onus is on the pilot
to make sure
that the autopilot is doing
the right thing.
We're not asserting that the car
is capable of driving, um,
in the absence
of driver oversight.
Yet when he uses Autopilot,
Taylor Ogan feels free
to look away from the road
from time to time.
I'm not a distracted driver,
but I definitely do things
that I think older people
who don't trust technology
wouldn't do.
I'm definitely on my phone
a lot more,
texting, reading articles,
when the car is driving itself.
I'm actually going to test
this Autopilot thing.
Oh, I can eat food!
This is the life, look at this.
♪
Some early Autopilot users push
things a lot further
and show off their exploits
on YouTube.
Oh, my!
That's a sharp turn.
Oh!
I'm alive.
Okay, good.
What is technology?
What! What!
Drivers like these
may seem hopelessly reckless.
But they're just
extreme examples
of people placing too much trust
in partially automated cars,
a problem that in 2016
leads to tragedy.
Ah, jeez, car's doing it
all itself.
Don't know what I'm going to do
with my hands down here.
39-year-old Joshua Brown is
a particularly enthusiastic
early adopter
of Tesla's Autopilot.
He loves to make YouTube videos
showing what Autopilot can do.
So the camera up here is
what's always watching the road,
and it's looking
at your lane markings
and it is looking
at speed limit signs,
and right now we're actually
about...
Tesla says that Autopilot
should only be used on highways
with entrance and exit ramps.
Highways that don't have
cross traffic, like this one.
But it's only a recommendation,
and Tesla doesn't block drivers
from using it
on other roads, too.
There's a car in front of me,
so I'm not going to have
to do anything.
It starts to turn and...
Brown stresses how important
it is
for the driver to stay
constantly alert.
So just like that,
I have my hands on the wheel,
but I can still get
my foot down to a brake,
and just keep it like this,
because you can react so quickly
that if anything goes wrong,
you're going to want
to be able to take control
very, very quickly
while we're driving like this.
On May 7, 2016, near
the town of Williston, Florida,
Joshua Brown is cruising east
on Route 27A,
a four-lane divided highway
that has numerous intersections.
He is driving
at 74 miles per hour
with Autopilot switched on.
As Brown nears an intersection,
a truck driver prepares to make
a left turn across his path.
The truck is supposed to yield.
But it doesn't.
It begins its turn
with Brown's Tesla
about 1,000 feet away.
Brown now has about ten seconds
to avoid a collision.
Records show
that he is not using his phone.
But he does nothing.
And neither does Autopilot.
The Tesla slams
into the trailer,
killing Brown.
In the immediate aftermath,
a critically important question:
who... or what... is to blame?
Autopilot is an obvious suspect.
But the NTSB clears it
because Autopilot isn't
designed to detect obstacles,
like that truck,
that are crossing
the car's path.
It only looks for objects
moving in the same direction
as the car.
So, the Safety Board
splits the blame
between the truck driver
for failing to yield
and Brown
for not paying attention.
Had he noticed the truck
and stepped on the brake,
he could have easily stopped
with plenty of room to spare.
And Brown is not the only person
killed while using Autopilot.
In 2016 in China,
a driver named Gao Yaning
crashes into a road sweeper.
A deadly Tesla crash
in California.
In 2018, in California,
Walter Huang dies when his
Tesla crashes into a barrier.
...set on Autopilot...
This deadly Tesla crash raising
new questions tonight.
The roof of the car
was ripped off
as it passed
under the trailer...
And in 2019, Jeremy Beren Banner
is killed in Florida
when his Tesla hits
a truck crossing its path.
The circumstances
of which are similar
to a crash in May of 2016
near Gainesville.
Despite these fatalities,
Tesla says that it continually
improves Autopilot.
And it asserts that drivers
who use Autopilot
have a lower crash rate
than U.S. drivers as a whole.
Tesla declined to participate
in this film.
Some scientists say
that Autopilot
and other systems like it
remain inherently risky.
When cars are fully manual and
you must do everything yourself,
you bring
all your cognitive resources
to bear to do that task,
but if the automation
is doing a good enough job,
people will check out
in their heads very quickly.
Your brain is wired
to stop paying attention
when the automation
starts doing well enough.
It's very easy to be lulled
into a false sense of security
if you're supervising
an automated vehicle
that you have never seen
have any sort of issue.
That false sense of security
makes it difficult
for people to react quickly
in emergencies.
So, many engineers are skeptical
of what's called
level three autonomy,
where a car can fully
drive itself,
except in an emergency,
when the system alerts
the driver to take over.
If the vehicle
all of a sudden can say,
"Oh, wait, I don't actually
know what I'm doing,
you better take over,"
I'm not at all convinced
that that can be done
in a way that's actually safe.
That's one of the reasons
why we're making the leap
all the way
to level four and five driving,
which is to say
that as a passenger
in our vehicle,
you have no legal
or other responsibility
for driving
or keeping the vehicle safe.
Level four and five cars,
which are fully autonomous,
are designed to safely handle
any driving situation,
even emergencies,
with no human intervention
at all.
Level fours could operate only
on some roads at certain times;
level fives, anytime, anywhere.
You can go to sleep
or sit in the back seat,
because there is
no take-over request.
The car can manage
all the situations.
And in case something
goes wrong,
it still has enough redundancies
to safely stop.
So it doesn't need
the driver to engage.
But achieving that goal,
of fully driverless cars,
will require
the people developing them
to overcome
all kinds of obstacles.
We've never done this before.
We've done some things like it.
We've increased safety
in aviation tremendously.
We've automated different kinds
of transportation modes
and activities beautifully.
But we've never done this.
♪
Trying to develop
an automated vehicle
that can do everything
that a human driver can do
is a huge problem.
And it requires an awful lot
of interlocking pieces.
To even match human drivers,
a self-driving car needs
to learn to handle not just one,
but three distinct tasks
nearly flawlessly.
First, seeing everything
that's around the car.
Second, understanding
what it's seeing.
And third, planning
the car's path
and controlling
its acceleration, braking,
and steering along the way.
Seeing.
Most self-driving cars rely
on cameras, radar, and lidar.
Each has weaknesses.
Cameras work poorly at night.
Radar doesn't
distinguish very well
between different types
of objects.
And lidar can fail completely
in rain, snow, or fog.
But even when sensors
are working nearly perfectly,
there's still
the second big challenge:
understanding
what they're seeing.
For us, taking in
sensory information
and making best guesses
about the real world
are second nature.
For a computer,
making meaning
out of visual data
is fiendishly difficult.
In each case,
it's the task called perception.
The human brain is an expert
in perception.
But for machines,
it's not natural.
So this is one big challenge,
right?
If you want
to drive autonomously,
you need to perceive the world
just like humans do.
Until recently,
no computer could come close.
But since 2010,
a big leap forward in a branch
of A.I. called machine learning
has gone a long way
toward closing the gap.
Machine learning,
it's about giving machines
the ability to look at data,
identify patterns,
and make predictions.
Some common examples
of machine learning:
an app that can recognize
human speech,
or an airport security system
that can recognize faces.
But before a computer
can do these things,
it has to be trained...
shown countless examples
of the things
we want it to recognize.
At the core of machine learning
is the idea
of using training data
so that one can train the system
to do something.
Usually, the training stage
consists of
millions of examples.
In this case,
a set of images of cats.
The computer will find
the similarities among them
that will help it generalize
and be able to recognize
any cat, like we do.
It also requires
a great breadth of examples,
because the performance
of the network
will be only as good
as the data used to train it.
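The train-then-generalize pattern described above can be shown in miniature. Real systems use deep neural networks on millions of labeled images; in this toy sketch each "image" is just a two-number feature vector and the model is a nearest-centroid classifier:

```python
# A toy illustration of training and generalization. Each "image" is a
# 2-number feature vector; the model averages the training examples for
# each label and classifies new examples by the nearest average.

def train(examples):
    """examples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for feats, label in examples:
        s = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, feats):
    """Label whose centroid is closest to the new example."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

training_data = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"), ([1.0, 0.7], "cat"),
    ([0.1, 0.2], "dog"), ([0.2, 0.1], "dog"), ([0.0, 0.3], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # "cat": an example it has never seen
```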
In Jerusalem,
Mobileye, a specialist
in autonomous technology,
trains its perception software
with a huge volume of data.
A person has to carefully
label each image
so that the computer can learn
what different things look like.
She is making sure that all the
vehicles are correctly annotated.
We are looking
for cars, pedestrians,
traffic signs, traffic lights.
Once the software masters
the set of training images,
it's ready to tackle the data
from a real drive,
a stream of pixels coming in
from the car's cameras.
And it'll take each image
from the video,
say, one frame out of the video,
and say,
"Okay, I have a bunch of pixels,
a bunch of colored dots,"
and it tries to find out
what's in there.
And at a high level,
what it's doing is,
it's going around,
sniffing through the image,
looking for things like,
"Oh, there are
some vertical edges,
"and there are
some horizontal edges,
and there's something round."
And so it pulls out
a bunch of features.
And so it's pulling
these video features out
and associating them
with whether or not
it's a person or a car.
Today's software can interpret
millions of pixels every second.
And thanks to the recent
breakthroughs
in machine learning,
its accuracy has shot up
substantially,
to as high as 98%.
But is that good enough?
Whenever I hear a good
high number like 98% accuracy
in the context of
self-driving cars,
my reaction is,
"That's not near good enough."
One of the problems
with a really high accuracy is,
that's only about
how you do on the training data.
If the real world
is even slightly different
than the training data...
which it always is...
that accuracy might not
really turn out to be the case.
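The effect Koopman describes can be simulated: a classifier that scores well on data like its training set loses accuracy when the real world shifts even slightly. The distributions and threshold here are invented for illustration:

```python
# Simulation of train/real-world mismatch. The classifier's decision
# boundary was "learned" from unshifted data; the real world then shifts.
import random

random.seed(0)

def make_data(n, shift=0.0):
    """Two classes of 1-D 'images': class A near 0, class B near 2.
    `shift` moves the whole world, mimicking distribution drift."""
    data = []
    for _ in range(n):
        data.append((random.gauss(0 + shift, 0.5), "A"))
        data.append((random.gauss(2 + shift, 0.5), "B"))
    return data

def accuracy(data, boundary=1.0):
    """Fixed classifier: everything below the boundary is class A."""
    correct = sum((x < boundary) == (lab == "A") for x, lab in data)
    return correct / len(data)

print(f"On data like the training set: {accuracy(make_data(5000)):.1%}")
print(f"On a slightly shifted real world: {accuracy(make_data(5000, shift=0.6)):.1%}")
# Accuracy drops noticeably even though the classifier itself never changed.
```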
In Pittsburgh,
Phil Koopman's company
helps clients
by pushing perception software
to its limits,
hoping to reveal
the unexpected ways it can fail.
He drops out.
It picks him up
for just a little bit,
and then he goes away.
The driverless cars have gotten
really good
at ordinary situations.
Going down the highway
on a sunny day
should be no problem.
Even navigating city streets,
if nothing crazy's going on,
should be okay.
They have a lot of trouble
with the things
they haven't seen before.
So we call them edge cases.
Just something
you've never seen before.
And you'll notice
it was raining a little bit
that night,
so we have a lot of adults
carrying umbrellas.
If they don't have a lot of
umbrellas in the training set,
maybe it's not going to see
the people with the umbrellas.
It's not going to see
the people.
So when it sees something
that it's never seen before...
it's never seen an example...
it doesn't know what to do.
Today, Koopman and his colleague
Jen Gallingane
are driving around Pittsburgh
in his car,
looking for edge cases.
We have the crossing guard
directing traffic
in addition to a traffic light.
They use what they learn
to improve the training
of the software.
The red boxes you see
are called bounding boxes,
and what those are,
they're just a rectangle
around where the person is
in the image.
And so the idea is,
this is where the computer
thinks a pedestrian is.
When you see
those bounding boxes disappear,
it means that the computer
has lost track
of where the person is.
In other words,
if there's no red box,
it doesn't see the person.
So I don't think
it saw him hardly at all,
until we were almost
on top of him.
And there are many
common variations
in the ways
that people and things look
that can prevent a computer
from making the correct I.D.
A delivery man wearing a turban
is holding a food tray
next to his head...
forming shapes that differ
from the typical training images
of a person's head.
Got the food tray right here.
Those unusual shapes make
the delivery man an edge case.
So when the red box is there,
it sees him,
and there he is
looking right at me.
He's right in front of my car,
it doesn't see him.
You can look at the picture,
and you say,
"There's a person there."
And the perception system says,
"Nope, there's no person there."
And that's a problem.
But many engineers say
that a botched identification
can be overcome...
as long as
some of the car's sensors
detect that something is there.
So we use sensors
like radar and lidar
to directly measure
where everything is around us.
And that's really important,
because even if
a machine learning system
can't classify exactly what type
of object something is,
we still know
there's something there,
we know how fast it's moving,
and so we can make sure
that we don't hit it.
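The fallback logic just described can be sketched in a few lines: even when the classifier can't name an object, range sensors still report where it is and how fast it's closing, and that alone can justify braking. A hypothetical illustration with assumed thresholds, not any vehicle's real control code:

```python
# Minimal sketch of the sensor-fusion fallback: brake for anything in
# our path that we'll reach too soon, classified or not. Hypothetical
# thresholds; real systems are far more elaborate.

def should_brake(tracks, lane_half_width=1.5, ttc_threshold=3.0):
    """Brake if any object in our lane will be reached too soon.

    Each track is (lateral_offset_m, range_m, closing_speed_mps, label);
    `label` may be None when classification failed.
    """
    for lateral, rng, closing, label in tracks:
        in_path = abs(lateral) <= lane_half_width
        if in_path and closing > 0:
            time_to_collision = rng / closing
            if time_to_collision < ttc_threshold:
                return True   # unknown object or not, it's in the way
    return False

# An unclassified object 20 m ahead, closing at 10 m/s: 2 s to impact.
print(should_brake([(0.4, 20.0, 10.0, None)]))  # True
```

The key design choice is that the decision never depends on the label, so a failed classification cannot by itself defeat the safety check.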
Even if the car can see and
make sense of its surroundings
with unmatched perfection,
it could still be
a lethal hazard
if it fails to handle
its third crucial task:
planning.
Yet another daunting challenge.
Because the planning software
has to anticipate
what's likely to happen,
then plot the car's pathway
and speed,
ready to change both
in a split second
if the sensors spot trouble.
Just like the car's
perception software,
its planning software
also needs to be trained.
To do that
without putting lives at risk,
most companies do
a lot of their training
in environments
they can fully control.
One of them
is computer simulation.
We are driving in San Francisco,
so we have to create
a world in simulation that
is just as complex and varied
as San Francisco.
And this is not a small feat.
When we run tests in simulation,
we take a model of the real car,
along with all the sensors
that are on the real car,
including the A.I.
that runs on the real car.
And this is what we place
in our simulation.
Here, the A.I. software
can practice new skills
without putting anyone
in danger.
Suppose that we drive
in the real world,
and there is a
double-parked car situation
we don't know
how to deal with, right?
So what we do is, we ensure
that the vehicle can deal
with these situations
in simulation first,
so that once we actually
see that situation in real life,
we already know that
we'll be able to deal with it.
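The loop just described, where a confusing real-world situation is drilled in simulation before the car returns to the street, works like a growing regression suite. A schematic sketch with invented names; a real simulator models sensors, physics, and the full A.I. stack:

```python
# Schematic of the simulate-before-deploy loop: every logged edge case
# becomes a repeatable test the planner must keep passing. Invented
# scenario fields and planner, purely for illustration.

def run_scenario(planner, scenario):
    """Replay one logged situation; report whether the planner coped."""
    action = planner(scenario)
    return action in scenario["acceptable_actions"]

def regression_pass(planner, library):
    """A software change ships only if every past edge case still passes."""
    return all(run_scenario(planner, s) for s in library)

# A toy library seeded with the double-parked-car situation.
library = [
    {"name": "double_parked_car",
     "blocked_lane": True,
     "oncoming_gap_s": 6.0,
     "acceptable_actions": {"borrow_oncoming_lane", "wait"}},
]

def planner(s):
    # Borrow the oncoming lane only when the gap in traffic is large.
    if s["blocked_lane"]:
        return "borrow_oncoming_lane" if s["oncoming_gap_s"] > 4.0 else "wait"
    return "keep_lane"

print(regression_pass(planner, library))  # True
```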
Another kind
of safe training environment
is a private test facility,
like this one
in northern California,
known as Castle.
It's run by Waymo,
the company that Google spun off
to develop self-driving cars.
And so, because the site has so
many different types of roads,
from residential to expressways
to arterial roads
going between the two,
cul-de-sacs
and things like that,
we're able to stage
basically an infinite number
of scenarios
that you would encounter
on those types of roads
in the real world.
Great, let's go for a run.
Three, two, one.
Today's test:
on a street
where the view is blocked,
a car suddenly backs out
into the Waymo car's path.
Great job, everyone.
We can go a little spicier.
We can change the speed
at which the auxiliary vehicle
exits the driveway.
Does it rip out of its driveway
like a bat out of hell, really
coming out of the driveway
ahead of the Waymo vehicle?
Or is it slowly kind of
meandering down the driveway
and taking its time?
Great job, guys.
Let's restage and stand by.
Next,
the Waymo car has to handle
what's called a pinch point:
a two-way street made narrow
by parked cars.
Waymo rolling.
You encounter this really often
on public roads:
Two oncoming vehicles
have to negotiate
who will assume the right of way
and who will have to yield.
The other car arrives
at the pinch point first,
so the software should tell
the autonomous car to yield.
Yielding, yielding right here.
Like butter, going around.
Butter.
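The right-of-way rule in that pinch-point test, whoever reaches the narrow section first proceeds and the other yields, reduces to a one-line comparison of estimated arrival times. A toy illustration with assumed estimates, not Waymo's planning code:

```python
# Toy version of the pinch-point rule: first car to the narrow
# section takes the right of way; the other yields.

def who_yields(our_eta_s, their_eta_s):
    """Yield if the oncoming car reaches the pinch point first."""
    return "we yield" if their_eta_s < our_eta_s else "they yield"

# The other car is 2.8 s from the pinch point; we're 4.2 s away.
print(who_yields(our_eta_s=4.2, their_eta_s=2.8))  # we yield
```

The hard engineering, of course, is in estimating those arrival times reliably, not in the comparison itself.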
These are just two
of the thousands of scenarios
the company uses
to train the software,
and they introduce new ones
all the time.
But ultimately,
the only way to know for sure
how a car will perform
on public roads
is to test it
in the real world...
where the stakes
for wrong decisions
are much higher.
We don't just say,
"Let's go for a test drive
just for the fun of it."

Because a test drive by itself
may be dangerous, okay?
But there are things that are
very difficult to check
without actually doing
a test drive.
And these are things
that involve negotiation
with other drivers.
Because in order to check
if you are negotiating
normally and properly
with other drivers,
you need other drivers.
Today, Mobileye engineers
are testing a part
of their planning software
that handles merging in traffic.
So, we merged fine,
but at the last point,
we're a bit slow.
Yes, yes.
Merging into traffic,
this multi-agent game
that we are playing,
requires
sophisticated negotiation.
You are negotiating.
Your motion signals
to other road users your intent.
So you are negotiating.
And this negotiation
requires skills.
And those skills don't come
naturally.
Those skills need to be trained.
You need to now change two lanes
because the exit is a
few hundred meters, uh, from us.
The software is trained to be
assertive when it has to be.
Here, the car signals
its intention to exit
by speeding up so it can merge.
And you saw
that we changed two lanes
without obstructing
the flow of traffic
and, and there are
many vehicles here.
You need to provide agility
that is as good as humans
if you want to be safe.
The skill gap
between autonomous systems
and human drivers
is narrowing.
And to teach self-driving cars
to be even better than humans,
some engineers
are exploiting the things
that computers can already
do extremely well.
Okay, you may begin.
Like control machinery
with extraordinary precision.
Autonomous mode
in three, two, one.
This car is drifting,
a kind of controlled skid.
And what makes
this feat possible
is a computer programmed
to exploit the laws of physics.
Here at Thunderhill Raceway
in California,
a team from Stanford University
is developing
self-driving software
that can take evasive action
to escape danger.
In an emergency situation,
you want to be able to use
all of the capabilities
of the tires
to do anything that's required
to avoid that collision.
Our automated vehicles
are able to put the vehicle
into a very heavy swerve
when that is the best choice
for how to avoid the collision.
The Stanford team applies
what they've learned
from the undisputed masters
of pushing car performance
to the limit:
race car drivers.
Race car drivers are always
pushing up to the limits.
But they're trying to avoid
accidents when they do that.
So, for instance, race car
drivers are trying to use
all the friction
between the tire and the road
to be fast.
We want to use all the friction
between the tire and the road
to be safe.
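The "use all the friction" idea has a simple physical core: tire grip caps total acceleration at roughly mu times g, and that budget can be spent on braking, swerving, or a mix of the two (the "friction circle"). A back-of-envelope sketch with an assumed friction coefficient of 0.9, typical of dry asphalt:

```python
# Back-of-envelope friction-limit physics behind the drifting and
# swerving demos: total grip is capped near mu * g, and braking and
# turning share that budget. Assumed mu = 0.9 (dry asphalt).
import math

MU, G = 0.9, 9.81                 # friction coefficient, gravity (m/s^2)
a_max = MU * G                    # ~8.8 m/s^2 total grip budget

def stopping_distance(speed_mps):
    """Shortest stop using the whole friction budget for braking."""
    return speed_mps ** 2 / (2 * a_max)

def max_braking_while_swerving(lateral_accel):
    """Braking left over when some grip is spent turning."""
    return math.sqrt(max(a_max ** 2 - lateral_accel ** 2, 0.0))

v = 30.0                          # 30 m/s, about 67 mph
print(round(stopping_distance(v), 1))            # ~51 m
print(round(max_braking_while_swerving(6.0), 1)) # ~6.5 m/s^2 left
```

Software that plans right up to that circle's edge is what lets the car put itself "into a very heavy swerve" when that is the best escape.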
While it might take
a race car driver years
to perfect skills like these,
once the software masters them,
a download could pass them on
to a whole fleet of cars
in just minutes.
We feel this is
a fundamental building block
of any type
of automated vehicle
that you would want to develop.
The vehicle should be able
to use all of its capabilities
to move out of harm's way.
You guys rock.
So, despite all the obstacles,
many engineers think
they're closing in on the prize:
self-driving cars safe enough
to trust.
Cars that could proliferate
very rapidly.
If and when that happens,
they will share the road
with human drivers.
How will that work?
Will we all get along?
In Michael Fleming's
test drives around the country,
he sees a clash brewing.
His company's autonomous car
is named Asimov,
after Isaac Asimov,
famed for his
science fiction books on robots.
Asimov has a clear-cut rule book
and is very consistent.
But oftentimes,
when we drive safely
and follow the letter
of the law,
we get honked at.
And do you know
who we get honked at by?
The aggressive drivers,
the rule breakers.
Fleming and his team
analyze the data
from the car's
strange encounters
and use it to improve
their software.
Whether it's a careless
pedestrian...
This lady just steps out
in the middle of the road.
And if you notice,
she doesn't even turn her head.
Or a wrong-way car.
We were driving down a one-way
street with multiple lanes.
And we see this white truck
driving the wrong way
down a one-way road.
So clearly we have
a rule breaker here,
doing something
that they shouldn't do.
You know, the development
of self-driving technology
would be pretty simple
if everyone just followed
the rules of the road.
If everyone came to a stop
at a stop sign.
If everyone used crosswalks.
But the reality is, they don't.
They break rules all the time.
Conflicts between
rule-bound automated cars
and impatient humans
are just one potential problem.
If robotaxi services
become very cheap,
might traffic actually increase,
and pollution grow worse?
How many people
who now earn their living
from driving
might lose their jobs?
If millions of cars
are electronically connected,
what risks might that pose
to our privacy and our security?
Finally, an ethical question:
Are we willing to accept
self-driving cars
that kill some people,
as long as they kill
fewer people
than human drivers do?
Despite these looming questions,
proponents say self-driving cars
could make transportation
both easier and safer.
I think new technology offers
the biggest safety tool
that we've had in 100 years.
That's the cultural
transformation that's coming.
But it is far from certain
that driverless cars will
ever deliver on that promise.
I think it is hubris to believe
that driving
is such a simple task
that, since there's so much more
automation in the world,
how hard could this be?
I'm a big fan of where we're
going with this technology,
but I also work on a day-to-day
basis with this technology,
and it's just simply not ready
for public consumption
to any verifiable degree
of safety.
I think that we,
as developers in the industry,
need to earn the public's trust,
and not the other way around.
I think we need to be able
to demonstrate
why our system is, in fact,
safer than human drivers.
If self-driving cars eventually
do win public trust,
their adoption
may be less of a revolution
than a slow evolution.
In my opinion,
the safe solutions today
work at low speeds
in low-complexity environments.
So this includes driving
on private roads,
on campuses, retirement
communities, airports.
But we do not have solutions
that work in general
at high speeds,
in congestion,
and in really difficult
road conditions.
Having a fully
autonomous vehicle
being able to take you
anywhere, anytime
is very, very far in the future.
In fact, I don't have
even a guess
as to how far in the future
that will be.
It's not like a mobile phone
app,
where, you know, if the
mobile phone app doesn't work
ten percent of the
time, big deal.
This has got to work
all the time.
There's a pot of gold out there
at the end of the rainbow
for those who can actually
get this to work.
Now, the challenge is
how to get it to work safely.
♪
Major funding for "NOVA"
is provided by the following:
To order this "NOVA" program
on DVD,
visit ShopPBS
or call 1-800-PLAY-PBS.
This program is also available
on Amazon Prime Video.
♪
we can't live without.
But what if cars got smarter?
Automated vehicles offer
the promise
of dramatically reducing
collisions and fatalities
on our roads and highways.
The big dream is to make a car
that will never be responsible
for a collision.
The potential payoff is huge.
In the mid-2030s,
the market could be worth
a few trillion U.S. dollars,
with a T in there.
Pursuing that pot of gold,
companies are already testing
their cars on our streets.
This is a bold
and ambitious mission.
Is this the beginning
of a mobility revolution?
We're venturing into domains
that ten years ago
would be deemed science fiction.
Or are we simply moving
too fast?
The technologies
are deeply flawed.
It's just simply not ready
for public consumption.
Will we ever be safe
with a robot behind the wheel?
"Look Who's Driving,"
right now, on "NOVA."
♪
Major funding for "NOVA"
is provided by the following:
♪
911, what is your emergency?
Um, yes, I, um...
I hit a bicyclist.
Do you need paramedics?
I don't, but they do.
♪
Every day in the United States,
there are about 100
fatal car crashes.
But on March 18, 2018,
one attracts
particular attention,
because the woman behind
the wheel, Rafaela Vasquez,
is not driving... a computer is.
♪
The crash puts a spotlight on
a controversial new technology:
the self-driving car.
So what exactly happened?
Well, the car was in auto-drive.
Uh-huh.
And all of a sudden, I...
The car didn't see it,
I didn't see it.
And all of a sudden,
she's just there.
But she shot out in front.
And then I think I...
I know I hit 'em.
The victim is 49-year-old
Elaine Herzberg.
The car that kills her
is being tested by Uber,
the ride-hailing company.
It's a modified Volvo equipped
with Uber's self-driving
technology.
Test drives are permitted
in Arizona,
as long as there's
a safety driver to take over
in case of trouble.
♪
But on this night,
Uber's experiment badly fails.
Elaine Herzberg is
the first person in history
killed by a self-driving car.
♪
A woman was pushing a bike
across a road.
This is the sweet spot,
in theory,
where autonomy would be
at its best,
particularly compared to humans,
who have terrible vision
at night.
And I think that's a, sadly, um,
it's a very good example
of just how far away from safe
these cars really are.
Now, the question I have
for you guys is,
do you guys have remote access
to the cameras and stuff
like that that are in there?
We technically should, yeah.
Herzberg's death quickly becomes
an international story.
Killed on the street
by a self-driving Uber car...
Uber is now suspending all
of its self-driving testing...
Self-driving cars
under intense scrutiny after...
Uber's C.E.O. tweeting,
"Incredibly sad news.
We're thinking of
the victim's family..."
So the question here is,
"What went wrong?"
♪
The Tempe crash stokes
public fears
about self-driving cars.
Nearly three out of four
Americans
say they'd be afraid
to ride in one.
So why is anyone even trying
to make a car
that drives itself?
Self-driving vehicles
don't get distracted.
They don't get fatigued,
uh, they don't fall asleep.
Uh, and, you know,
they don't drive drunk.
In other words,
they promise safety.
Each year in the U.S.,
some 35,000 people die
in car crashes,
nearly all caused
by human error.
They're because of a choice
or an error that we make,
a choice to pick up the phone,
drive drunk, drive drugged,
drive distracted, drive drowsy.
You're looking one direction
when you should have been
watching in the other direction.
94% of crashes.
The hope is that computers
will be able to do a better job.
I really believe
that with self-driving cars,
we will eliminate
road accidents.
If we get the technology right,
those cars will know everything
about the road situation,
the road condition,
well before a human would.
So that if there is somebody
running around the corner
about to jump in front
of the car,
the car will know that,
and there will be no accidents.
This is the dream.
And that dream is
about much more than safety.
Proponents say self-driving cars
could bring about
the biggest changes
in transportation
since horses gave way
to the automobile.
Instead of having to own a car,
people could simply summon one
from a circulating fleet
of robotaxis,
reducing the number of cars
on the road,
cutting pollution,
and eliminating the need
to have so many private cars
sitting idle all day long.
For people who can't drive,
self-driving cars could also
provide greater mobility.
You can put your kid
on an autonomous car,
and it will take him to school,
and problem solved.
Some see a future
where cars talk to each other,
reducing traffic jams.
Others are less sure.
It could also
lead to the nightmare scenario,
as well,
where inexpensive mobility
leads to dramatic consumption
of mobility,
congested freeways,
unsustainable use of energy,
and an acceleration
of climate change
and other issues
that we're facing now.
This is a disruption.
I would call this
"Mobility 2.0."
If, if cars today is 1.0,
and, uh, horse carriages
was 0.0,
this is a new era of mobility.
An era which in some places
seems very close at hand.
Like here, on the streets
of San Francisco.
All right, I'm going to go ahead
and slide to go.
So we're off
on our autonomous way.
Jesse Levinson demonstrates
how his company's
self-driving car
navigates the city streets.
On the screen here,
you can see what the vehicle
is planning to do.
That's the green corridor.
The display shows how
the route is constantly adjusted
in response to data from cameras
and scanners that use radar
and lasers.
The system is designed to handle
anything that might happen.
But on test drives, the company
always has a safety driver.
Here's a really interesting
situation
we're about to encounter,
is a six-way
unprotected intersection.
This intersection is
so complicated,
I'm not sure
I know how to drive it.
But what we're doing here is,
we're going to make a left turn.
We're checking
all the oncoming traffic.
The car scans its surroundings
to determine
if its path is clear.
We're also yielding
for all these pedestrians
in the crosswalk.
The computer won't allow the car
to proceed
until it's safe.
And literally tracking
hundreds of dynamic agents
at the same time.
Now, we've just made
our way back.
That was a 100% autonomous drive
with absolutely
no manual interventions.
Pretty cool, huh?
Test drives like this
are impressive.
But some warn
it will be a long time
before computers can
consistently drive more safely
than humans do.
Because driving,
though it may seem easy,
is actually
a very difficult task.
Let me make the following
bold assertion.
Driving is the most complex
activity
that most adults on the planet
engage in
on a regular basis.
When we drive,
the vehicle is moving,
the environment is changing
on a continuing basis,
all these pieces of information
actually coming
towards our senses,
goes to the brain,
The brain basically applies
the rules of the road.
And for the most part,
we drive safely.
Each of the 35,000 annual
crash deaths in the U.S.
is tragic.
But they're statistically rare.
On average, there's only one
for every 100 million miles
of driving.
That translates into
3.4 million hours of driving.
3.4 million hours is 390 years
of continuous, 24-hours-a-day,
seven-days-a-week driving
in between fatal crashes.
That's a very high bar
for technology to clear.
Think about our modern
electronic devices
that are powered by software
that we use every day.
And try to imagine
that those devices could run
without ever giving you
the spinning blue doughnut
that said it's not ready to give
you the answer you wanted.
Because if that computer
was driving your vehicle,
you crashed.
So getting to the point
where we have
a software-intensive device
that can operate without a fault
is a huge, huge challenge.
This is a bold and ambitious
mission.
A mission that really
isn't going
to be accomplished overnight.
Michael Fleming speaks
from hard experience.
He's been working
on self-driving cars
for more than a decade.
And like many in the field
today,
he got his start thanks to
a push from a surprising place:
the Pentagon.
In 2000, hoping to reduce
battlefield casualties,
Congress orders the military
to develop combat vehicles
that can drive themselves.
Two years later,
the Pentagon's research agency,
announces what it calls
the Grand Challenge:
a driverless car race,
142 miles through
the California desert.
Whoever finishes first
will win $1 million.
March 13, 2004.
13 vehicles... rigged with
sensors to detect what's ahead
and software to control speed
and steering...
set out.
And we and a lot of other teams
went out to compete in the
DARPA, you know, Grand Challenge
with high hopes of bringing home
a million-dollar prize.
Right away, the vehicles run
into trouble.
The course is littered
with rocks, cliffs,
cattle grates, and river beds...
obstacles that the sensors
sometimes miss.
We got 100 yards
out of the gate,
and the vehicle
just stopped working.
And we failed miserably
with everyone else.
Of the full 142 miles,
no vehicle goes farther
than eight.
But a year later,
DARPA provides a second chance:
a new challenge
that doubles the payoff
to $2 million.
Among the 2005 competitors,
a team from Stanford,
confident that their car,
named Stanley,
can make it all the way
to the finish line.
Their ace in the hole:
cutting-edge software that uses
A.I... artificial intelligence.
We built a computer system
using artificial intelligence
that's able to actually find
the road very, very reliably.
What you see here is
a map of the environment,
that's being where
the Stanley drives.
The red stuff is stuff that
it doesn't want to drive over,
it's dangerous.
The white stuff is the road
as found by Stanley,
and the gray stuff
that you see here
is stuff it just doesn't know
anything about,
so it's not going to drive
there.
Stanford's strategy pays off.
Stanley is the first to cross
the finish line.
We have the green flag.
Two years later...
Launch the Bots!
...the Pentagon stages
a final contest,
adding a new complexity:
traffic.
DARPA calls it
the Urban Challenge.
In the 2007 Urban Challenge,
they let us drive
with other cars,
both autonomous cars
as well as human-driven cars.
And so that was really exciting,
because all of a sudden, now you
have to track dynamic objects
and predict what they're going
to do in the future,
and that's
a much harder problem.
To help solve it,
some half a dozen teams bank
on a detection technology
called lidar,
which dramatically improves
a car's ability to see.
They rig devices
that spin 360 degrees.
Lidar works using laser beams,
pulses of invisible light
that bounce off everything
in their path.
A sensor collects
the reflections,
which provide a precise picture
of the environment.
And as those lasers are sweeping
around in a circle,
you get what's called
a point cloud.
So you'll get thousands,
even millions of points
coming back to the sensor.
And you can build up
a point cloud
that looks similar to this,
and it gives you
very accurate geometric detail.
Using all that data
to pilot a car
demands new kinds of software.
I vividly remember
the first time
when some of my software that
I had just written hours ago
ran on the car.
That was pretty incredible.
There was nobody
behind the wheel,
and it was just doing everything
on its own.
But not quite perfectly.
Okay, folks, we have got our
first autonomous traffic jam.
Another historic event
right here.
This time,
six of 11 cars complete
the course successfully.
It's a major turning point,
but there's also
a growing appreciation
for the immense challenge ahead.
The reality was,
the Urban Challenge
was a very small step
compared to what had to be done
to actually get
commercial vehicles on the road.
It was only a six-hour race.
So basically, if your car
could last for six hours
without hitting something,
you were, like,
"Yep, we did it," right?
Now, getting into
a commercial service
with thousands of vehicles
and setting a safety bar
that's significantly higher
than human-level performance,
that's very difficult.
Still, the potential
of this new technology
proves irresistible to engineers
and to business.
♪
The DARPA challenges
may have seemed
like just a geeky
science project,
but they trigger a race
to build a whole new industry.
One of the first
out of the gate?
Tech giant Google.
Should we do
a simple test first?
In university robotics labs...
Yes!
...and the start-ups
that spin out of them,
a growing army of engineers
keeps improving
both sensors and software.
The big car manufacturers
take notice,
and they too enter the fray.
Well, hello,
ladies and gentlemen,
and welcome to C.E.S.
They're betting
that driverless cars
are about to become
a huge global business.
The autonomous vehicle market
is supposed to be an
enormous market in the future.
Estimates say
that in the mid-2030s,
the market could be worth
a few trillion U.S. dollars,
with a T in there.
That potential payday
brings Uber to Arizona,
whose flat landscape
and sunny weather are ideal
for testing self-driving cars.
Uber sees eliminating
its paid drivers
as a key
to future profitability.
But the March 2018 crash
in Tempe
casts a dark shadow on the
future of self-driving cars.
Uber suspends all testing
on public roads.
The National Transportation
Safety Board
launches an investigation:
why did neither
the computer system
nor the safety
driver stop the car?
The Uber crash in Tempe
wasn't really about the maturity
of the technology.
We all knew the
self-driving car technology
wasn't ready to deploy.
That's why they were doing
testing.
The significance
of the Uber crash
was that the human safety driver
failed to prevent the crash.
In the months after the crash,
the NTSB and Tempe police
release new details
about what happened that night.
The interior camera reveals
that Rafaela Vasquez
is not watching the road
for nearly seven of the
22 minutes before the crash,
including about five
of the last six seconds.
♪ Chain, chain,
chain The police discover
that the whole time
the car is moving,
she is streaming an episode
of a singing competition
on her phone.
♪ Chain of fools
Vasquez denies watching
the video
or even looking at her phone.
She says she was just checking
the control panel.
But she does not step
on the brake
until after the car
strikes Herzberg.
The NTSB findings
also point to flaws
in the self-driving system.
The sensors actually detect
Herzberg
six seconds before impact,
but the system doesn't alert
the safety driver.
It's not designed to.
So even though their sensors
had detected this target...
there was a potentially
hazardous situation...
they made the decision
to not give any of that
information to the test driver.
I can't imagine why.
And the computer doesn't decide
emergency braking is needed
until 1.3 seconds before impact.
But even then,
the car doesn't brake,
because Uber has disabled
the Volvo's factory-installed
emergency braking system.
Uber wants to avoid jerky rides
caused by unnecessary braking
for harmless objects.
So the net result of that is,
they took
a safe production vehicle
and turned it
into an unsafe prototype
that caused the pedestrian
to be killed.
Uber tells the NTSB that
it's the safety driver's job
to correct any mistakes made
by the self-driving system.
Now, while many people look
at the safety driver
in that vehicle
as, as perhaps the villain,
the real issue is the system
that that safety driver
was put into.
They were put into a system
ripe for failure,
where the expectations
of failure
were far lower than
the actual probabilities.
Perhaps because the computer
scientists and the engineers
building these systems
don't all appreciate
the complexities
of, that human behavior brings
to the system.
Uber declined to participate
in this film.
But even before Tempe,
there were signs
that drivers and automation
don't always work together
very well.
Automotive engineers
classify automation
into levels, from zero to five.
Fully automated cars,
the ones driven by computers,
are levels three, four,
and five.
Uber was testing its car
as a level four prototype.
At the lower levels,
humans drive.
In level zero cars,
they handle everything.
Level one and two cars
assist drivers
by regulating speed
or keeping the car in its lane,
or even both.
They're partially automated.
Partial automation is the
foundation for full automation.
And how we respond and adapt
to automation
at the lower levels
gives us our window
into the future.
Millions of level one and two
cars are on the road today.
Insurance data suggest that
they are less likely to crash
because most of them
automatically brake the car
to avoid collisions.
And indeed,
it appears these systems
are making people
better drivers, because...
Think of it
as an extra set of eyes and ears
beyond your own eyes and ears.
So you get that additional
vigilance
from the sensor systems
that may detect things
that you missed.
But the added confidence
provided by partial automation
also poses an unforeseen risk.
In the Volvo,
she's clearly using
pilot assist a lot.
Which is great, keeping
her hands on the wheel,
paying attention to
what's going on in front of her.
Mm-hmm.
A research team at M.I.T.
is gathering data on how people
use partial automation.
Seem to use Autopilot a lot?
Yeah, he's been using it
a little bit more
than everyone else, actually.
The cars we are studying today
can automatically steer,
accelerate, and brake
in ways that were not available
in production vehicles
20 years ago.
So I think the ultimate question
is, you know,
"How does attention change
over time
as we use these cars
over months and years?"
And that's the very heart
of the question
we're hoping to understand more.
Cathy Urquhart was given
a Volvo S90 to test-drive.
It's equipped with
several automated features,
including one for steering
that keeps the car in its lane.
The first day I was nervous,
of course.
It's new technology,
and I think it's the same
when you're driving any new car.
You have to get used to it.
But I liked the lane monitor to
keep you in the, in the lane.
Uh, it wasn't too obtrusive
with the, uh, car
just kind of pulling you over,
the wheel pulling you
to one side.
It was good, I liked that.
When the Volvo's steering
assistance is on,
Cathy could briefly
take her hands off the wheel...
but she doesn't.
I'm not somebody
that takes my hands
off the steering wheel...
I like to have control.
I like to hold on
to the steering wheel.
But there are others
in the M.I.T. study,
like Taylor Ogan,
who are much more trusting
of automation.
He owns a Tesla equipped
with software called Autopilot.
Actually traded my first car in
to get Autopilot,
so that the car
could drive itself.
It helps you stay in your lane
and follow the car,
the distance of the car
in front of you.
And you really don't have
to touch anything.
It's awesome, I love it.
I swear by it.
All right, here we go.
All right.
All right, oh!
Tesla's founder and C.E.O.,
Elon Musk,
has long touted Autopilot
as a step along the road
to his goal of full autonomy.
No hands, no feet, nothing.
It's a combination of radar,
camera with image recognition,
and ultrasonic sensors
that's integrated with maps
and real-time traffic.
But for now, the Tesla system
is still level two,
which means it requires
just as much driver attention
as a regular car.
So the company has been
criticized
for choosing the name Autopilot.
It's a nice, catchy-sounding
name,
but "Autopilot" certainly
has a misleading connotation.
It suggests
that the pilot of the car
is in the program,
in the computer,
and not in the driver
of the vehicle.
But Musk claims that
the name is not misleading.
Autopilot is what
they have in airplanes.
It's where there's still an
expectation there'll be a pilot.
So the, so if, if,
the, the onus is on the pilot
to make sure
that the autopilot is doing
the right thing.
We're not asserting that the car
is capable of driving, um,
in the absence
of driver oversight.
Yet when he uses Autopilot,
Taylor Ogan feels free
to look away from the road
from time to time.
I'm not a distracted driver,
but I definitely do things
that I think older people
who don't trust technology
wouldn't do.
I'm definitely on my phone
a lot more,
texting, reading articles,
when the car is driving itself.
I'm actually going to test
this Autopilot thing.
Oh, I can eat food!
This is the life, look at this.
♪
Some early Autopilot users push
things a lot further
and show off their exploits
on YouTube.
Oh, my!
That's a sharp turn.
Oh!
I'm alive.
Okay, good.
What is technology?
What! What!
Drivers like these
may seem hopelessly reckless.
But they're just
extreme examples
of people placing too much trust
in partially automated cars,
a problem that in 2016
leads to tragedy.
Ah, jeez, car's doing it
all itself.
Don't know what I'm going to do
with my hands down here.
39-year-old Joshua Brown is
a particularly enthusiastic
early adopter
of Tesla's Autopilot.
He loves to make YouTube videos
showing what Autopilot can do.
So the camera up here is
what's always watching the road,
and it's looking
at your lane markings
and it is looking
at speed limit signs,
and right now we're actually
about...
Tesla says that Autopilot
should only be used on highways
with entrance and exit ramps.
Highways that don't have
cross traffic, like this one.
But it's only a recommendation,
and Tesla doesn't block drivers
from using it
on other roads, too.
There's a car in front of me,
so I'm not going to have
to do anything.
It starts to turn and...
Brown stresses how important
it is
for the driver to stay
constantly alert.
So just like that,
I have my hands on the wheel,
but I can still get
my foot down to a brake,
and just keep it like this,
because you can react so quickly
that if anything goes wrong,
you're going to want
to be able to take control
very, very quickly
while we're driving like this.
On May 7, 2016, near
the town of Williston, Florida,
Joshua Brown is cruising east
on Route 27A,
a four-lane divided highway
that has numerous intersections.
He is driving
at 74 miles per hour
with Autopilot switched on.
As Brown nears an intersection,
a truck driver prepares to make
a left turn across his path.
The truck is supposed to yield.
But it doesn't.
It begins its turn
with Brown's Tesla
about 1,000 feet away.
Brown now has about ten seconds
to avoid a collision.
Records show
that he is not using his phone.
But he does nothing.
And neither does Autopilot.
The Tesla slams
into the trailer,
killing Brown.
In the immediate aftermath,
a critically important question:
who... or what... is to blame?
Autopilot is an obvious suspect.
But the NTSB clears it
because Autopilot isn't
designed to detect obstacles,
like that truck,
that are crossing
the car's path.
It only looks for objects
moving in the same direction
as the car.
So, the Safety Board
splits the blame
between the truck driver
for failing to yield
and Brown
for not paying attention.
Had he noticed the truck
and stepped on the brake,
he could have easily stopped
with plenty of room to spare.
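The numbers in this account can be checked with simple kinematics. A back-of-envelope sketch (the 0.7 g braking deceleration is an assumed, typical dry-road value, not a figure from the investigation):

```python
# Illustrative check of the Williston crash timeline.
MPH_TO_FPS = 5280 / 3600          # miles per hour -> feet per second

speed_fps = 74 * MPH_TO_FPS       # ~108.5 ft/s
distance_ft = 1000                # truck begins its turn ~1,000 ft away

# Time available at constant speed before reaching the truck.
time_to_collision = distance_ft / speed_fps      # ~9.2 s ("about ten seconds")

# Braking distance from speed v to a stop at deceleration a: d = v^2 / (2a).
decel = 0.7 * 32.2                # ft/s^2, assumed 0.7 g
stopping_distance = speed_fps**2 / (2 * decel)   # ~260 ft

print(f"time to collision: {time_to_collision:.1f} s")
print(f"stopping distance: {stopping_distance:.0f} ft of {distance_ft} ft available")
```

The arithmetic bears out the narration: roughly nine seconds of warning, and a stopping distance well under a third of the available road.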
And Brown is not the only person
killed while using Autopilot.
In 2016 in China,
a driver named Gao Yaning
crashes into a road sweeper.
A deadly Tesla crash
in California.
In 2018, in California,
Walter Huang dies when his
Tesla crashes into a barrier.
...set on Autopilot...
This deadly Tesla crash raising
new questions tonight.
The roof of the car
was ripped off
as it passed
under the trailer...
And in 2019, Jeremy Beren Banner
is killed in Florida
when his Tesla hits
a truck crossing its path.
The circumstances
of which are similar
to a crash in May of 2016
near Gainesville.
Despite these fatalities,
Tesla says that it continually
improves Autopilot.
And it asserts that drivers
who use Autopilot
have a lower crash rate
than U.S. drivers as a whole.
Tesla declined to participate
in this film.
Some scientists say
that Autopilot
and other systems like it
remain inherently risky.
When cars are fully manual and
you must do everything yourself,
you bring
all your cognitive resources
to bear to do that task,
but if the automation
is doing a good enough job,
people will check out
in their heads very quickly.
Your brain is wired
to stop paying attention
when the automation
starts doing well enough.
It's very easy to be lulled
into a false sense of security
if you're supervising
an automated vehicle
that you have never seen
have any sort of issue.
That false sense of security
makes it difficult
for people to react quickly
in emergencies.
So, many engineers are skeptical
of what's called
level three autonomy,
where a car can fully
drive itself,
except in an emergency,
when the system alerts
the driver to take over.
If the vehicle
all of a sudden can say,
"Oh, wait, I don't actually
know what I'm doing,
you better take over,"
I'm not at all convinced
that that can be done
in a way that's actually safe.
That's one of the reasons
why we're making the leap
all the way
to level four and five driving,
which is to say
that as a passenger
in our vehicle,
you have no legal
or other responsibility
for driving
or keeping the vehicle safe.
Level four and five cars,
which are fully autonomous,
are designed to safely handle
any driving situation,
even emergencies,
with no human intervention
at all.
Level fours could operate only
on some roads at certain times;
level fives, anytime, anywhere.
You can go to sleep
or sit in the back seat,
because there is
no take-over request.
The car can manage
all the situations.
And in case something
goes wrong,
it still has enough redundancies
to safely stop.
So it doesn't need
the driver to engage.
But achieving that goal,
of fully driverless cars,
will require
the people developing them
to overcome
all kinds of obstacles.
We've never done this before.
We've done some things like it.
We've increased safety
in aviation tremendously.
We've automated different kinds
of transportation modes
and activities beautifully.
But we've never done this.
♪
Trying to develop
an automated vehicle
that can do everything
that a human driver can do
is a huge problem.
And it requires an awful lot
of interlocking pieces.
To even match human drivers,
a self-driving car needs
to learn to handle not just one,
but three distinct tasks
nearly flawlessly.
First, seeing everything
that's around the car.
Second, understanding
what it's seeing.
And third, planning
the car's path
and controlling
its acceleration, braking,
and steering along the way.
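Those three tasks are often structured as a software pipeline, each stage feeding the next. A deliberately tiny sketch (every name and threshold here is invented for illustration, not taken from any real system):

```python
# Illustrative see -> understand -> plan pipeline for one control cycle.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                  # e.g. "car", "pedestrian", "unknown"
    distance_m: float
    closing_speed_mps: float   # positive means the gap is shrinking

def see(sensor_frame):
    """Stage 1: turn raw sensor data into detections (stubbed here)."""
    return [Detection(**d) for d in sensor_frame]

def understand(detections):
    """Stage 2: keep only the objects that matter for our path."""
    return [d for d in detections if d.distance_m < 100]

def plan(relevant):
    """Stage 3: pick a longitudinal command from the nearest threat."""
    for d in sorted(relevant, key=lambda d: d.distance_m):
        if d.closing_speed_mps > 0 and d.distance_m < 30:
            return "brake"
    return "cruise"

frame = [{"kind": "pedestrian", "distance_m": 25, "closing_speed_mps": 1.5},
         {"kind": "car", "distance_m": 120, "closing_speed_mps": 0.0}]
command = plan(understand(see(frame)))
print(command)  # "brake"
```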
Seeing.
Most self-driving cars rely
on cameras, radar, and lidar.
Each has weaknesses.
Cameras work poorly at night.
Radar doesn't
distinguish very well
between different types
of objects.
And lidar can fail completely
in rain, snow, or fog.
But even when sensors
are working nearly perfectly,
there's still
the second big challenge:
understanding
what they're seeing.
For us, taking in
sensory information
and making best guesses
about the real world
are second nature.
For a computer,
making meaning
out of visual data
is fiendishly difficult.
In each case,
it's the task called perception.
The human brain is an expert
in perception.
But for machines,
it's not natural.
So this is one big challenge,
right?
If you want
to drive autonomously,
you need to perceive the world
just like humans do.
Until recently,
no computer could come close.
But since 2010,
advances in a branch
of A.I. called machine learning
have gone a long way
toward closing the gap.
Machine learning,
it's about giving machines
the ability to look at data,
identify patterns,
and make predictions.
Some common examples
of machine learning:
an app that can recognize
human speech,
or an airport security system
that can recognize faces.
But before a computer
can do these things,
it has to be trained...
shown countless examples
of the things
we want it to recognize.
At the core of machine learning
is the idea
of using training data
so that one can train the system
to do something.
Usually, the training stage
consists of
millions of examples.
In this case,
a set of images of cats.
The computer will find
the similarities among them
that will help it generalize
and be able to recognize
any cat, like we do.
It also requires
a great breadth of examples,
because the performance
of the network
will be only as good
as the data used to train it.
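The learn-from-labeled-examples idea can be illustrated with a deliberately tiny classifier: a nearest-centroid model on made-up two-number "image features." Real perception networks are vastly larger, but the train-then-generalize loop is the same:

```python
# Minimal supervised learning: average the labeled training examples per
# class, then label a new input by the nearest class average.
# The feature values and labels are invented for illustration.
import math

training_data = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"), ((1.0, 0.1), "cat"),
    ((0.1, 0.9), "dog"), ((0.2, 0.8), "dog"), ((0.0, 1.0), "dog"),
]

def train(examples):
    """Compute one centroid (average feature vector) per class label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Generalize: label an unseen input by its closest centroid."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], point))

model = train(training_data)
print(predict(model, (0.85, 0.25)))  # a cat-like input it has never seen
```

The point of the narration holds even at this toy scale: the model can only generalize to inputs that resemble its training data.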
In Jerusalem,
Mobileye, a specialist
in autonomous technology,
trains its perception software
with a huge volume of data.
A person has to carefully
label each image
so that the computer can learn
what different things look like.
She is making sure that all the
vehicles are correctly annotated.
We are looking
for cars, pedestrians,
traffic signs, traffic lights.
Once the software masters
the set of training images,
it's ready to tackle the data
from a real drive,
a stream of pixels coming in
from the car's cameras.
And it'll take each image
from the video,
say, one frame out of the video,
and say,
"Okay, I have a bunch of pixels,
a bunch of colored dots,"
and it tries to find out
what's in there.
And at a high level,
what it's doing is,
it's going around,
sniffing through the image,
looking for things like,
"Oh, there are
some vertical edges,
"and there are
some horizontal edges,
and there's something round."
And so it pulls out
a bunch of features.
And so it's pulling
these video features out
and associating them
with whether or not
it's a person or a car.
Today's software can interpret
millions of pixels every second.
And thanks to the recent
breakthroughs
in machine learning,
its accuracy has shot up
substantially,
to as high as 98%.
But is that good enough?
Whenever I hear a good
high number like 98% accuracy
in the context of
self-driving cars,
my reaction is,
"That's not near good enough."
One of the problems
with a really high accuracy is,
that's only about
how you do on the training data.
If the real world
is even slightly different
than the training data...
which it always is...
that accuracy might not
really turn out to be the case.
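A rough calculation shows why 98% sounds much worse at driving scale (the camera frame rate is an assumed typical value, used only for illustration):

```python
# If a perception system misreads 2% of frames, how often is that?
frames_per_second = 30        # assumed typical camera rate
accuracy = 0.98
error_rate = 1 - accuracy

errors_per_minute = error_rate * frames_per_second * 60
print(f"~{errors_per_minute:.0f} misread frames per minute")  # ~36
```

In practice, many single-frame errors are caught by tracking objects across consecutive frames; the dangerous case is the sustained miss, like the examples that follow.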
In Pittsburgh,
Phil Koopman's company
helps clients
by pushing perception software
to its limits,
hoping to reveal
the unexpected ways it can fail.
He drops out.
It picks him up
for just a little bit,
and then he goes away.
The driverless cars have gotten
really good
at ordinary situations.
Going down the highway
on a sunny day
should be no problem.
Even navigating city streets,
if nothing crazy's going on,
should be okay.
They have a lot of trouble
with the things
they haven't seen before.
So we call them edge cases.
Just something
you've never seen before.
And you'll notice
it was raining a little bit
that night,
so we have a lot of adults
carrying umbrellas.
If they don't have a lot of
umbrellas in the training set,
maybe it's not going to see
the people with the umbrellas.
It's not going to see
the people.
So when it sees something
that it's never seen before...
it's never seen an example...
it doesn't know what to do.
Today, Koopman and his colleague
Jen Gallingane
are driving around Pittsburgh
in his car,
looking for edge cases.
We have the crossing guard
directing traffic
in addition to a traffic light.
They use what they learn
to improve the training
of the software.
The red boxes you see
are called bounding boxes,
and what those are,
they're just a rectangle
around where the person is
in the image.
And so the idea is,
this is where the computer
thinks a pedestrian is.
When you see
those bounding boxes disappear,
it means that the computer
has lost track
of where the person is.
In other words,
if there's no red box,
it doesn't see the person.
So I don't think
it saw him at all,
until we were almost
on top of him.
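A bounding box is just four numbers, and "losing" a pedestrian means no box in the new frame overlaps the last known one. A minimal sketch, using intersection-over-union (IoU), a standard detection-matching metric (the coordinates and threshold are invented):

```python
# Axis-aligned bounding boxes as (x1, y1, x2, y2), in pixels.
def iou(a, b):
    """Intersection-over-union: 1.0 for identical boxes, 0.0 for disjoint."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def still_tracked(last_box, new_detections, threshold=0.3):
    """True if some detection in the new frame matches the old box."""
    return any(iou(last_box, d) >= threshold for d in new_detections)

last = (100, 50, 140, 150)                         # pedestrian, last frame
print(still_tracked(last, [(105, 55, 145, 155)]))  # True: box persists
print(still_tracked(last, []))                     # False: track lost
```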
And there are many
common variations
in the ways
that people and things look
that can prevent a computer
from making the correct I.D.
A delivery man wearing a turban
is holding a food tray
next to his head...
forming shapes that differ
from the typical training images
of a person's head.
Got the food tray right here.
Those unusual shapes make
the delivery man an edge case.
So when the red box is there,
it sees him,
and there he is
looking right at me.
He's right in front of my car,
it doesn't see him.
You can look at the picture,
and you say,
"There's a person there."
And the perception system says,
"Nope, there's no person there."
And that's a problem.
But many engineers say
that a botched identification
can be overcome...
as long as
some of the car's sensors
detect that something is there.
So we use sensors
like radar and lidar
to directly measure
where everything is around us.
And that's really important,
because even if
a machine learning system
can't classify exactly what type
of object something is,
we still know
there's something there,
we know how fast it's moving,
and so we can make sure
that we don't hit it.
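That fallback can be sketched as a rule: react to anything the range sensors report in the path, whether or not the classifier can name it (the time-gap threshold here is an invented illustration, not any company's parameter):

```python
# Even an "unknown" radar/lidar return in our lane should trigger caution.
def should_brake(obj_in_path, distance_m, closing_speed_mps, min_gap_s=2.0):
    """Brake if the object would be reached in under min_gap_s seconds."""
    if not obj_in_path:
        return False
    if closing_speed_mps <= 0:          # object holding pace or pulling away
        return False
    return distance_m / closing_speed_mps < min_gap_s

# The classifier failed ("unknown"), but lidar says something is 15 m
# ahead and we are closing at 10 m/s -> 1.5 s to impact -> brake.
print(should_brake(True, 15.0, 10.0))   # True
```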
Even if the car can see and
make sense of its surroundings
with unmatched perfection,
it could still be
a lethal hazard
if it fails to handle
its third crucial task:
planning.
Yet another daunting challenge.
Because the planning software
has to anticipate
what's likely to happen,
then plot the car's pathway
and speed,
ready to change both
in a split second
if the sensors spot trouble.
Just like the car's
perception software,
its planning software
also needs to be trained.
To do that
without putting lives at risk,
most companies do
a lot of their training
in environments
they can fully control.
One of them
is computer simulation.
We are driving in San Francisco,
so we have to create
a world in simulation that
is just as complex and varied
as San Francisco.
And this is not a small feat.
When we run tests in simulation,
we take a model of the real car,
along with all the sensors
that are on the real car,
including the A.I.
that runs on the real car.
And this is what we place
in our simulation.
Here, the A.I. software
can practice new skills
without putting anyone
in danger.
Suppose that we drive
in the real world,
and there is a
double-parked car situation
we don't know
how to deal with, right?
So what we do is, we ensure
that the vehicle can deal
with these situations
in simulation first,
so that once we actually
see that situation in real life,
we already know that
we'll be able to deal with it.
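In practice, each troublesome real-world scene becomes a repeatable simulated test. A toy version of such a scenario check (all function names, thresholds, and scenarios are invented for illustration):

```python
# A scenario is replayed in simulation and the planner's choice is
# checked before the software goes back on the road.
def plan_around_double_parked(ego_lane_blocked, oncoming_gap_s):
    """Toy planner: pass a double-parked car only given a safe gap."""
    if not ego_lane_blocked:
        return "continue"
    return "pass_on_left" if oncoming_gap_s > 6.0 else "wait"

def run_scenario(blocked, gap_s, expected):
    return plan_around_double_parked(blocked, gap_s) == expected

# Regression suite: the situation seen in the real world, plus variants.
results = [
    run_scenario(True, 10.0, "pass_on_left"),
    run_scenario(True, 2.0, "wait"),
    run_scenario(False, 0.0, "continue"),
]
print(all(results))  # True
```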
Another kind
of safe training environment
is a private test facility,
like this one
in northern California,
known as Castle.
It's run by Waymo,
the company that Google spun off
to develop self-driving cars.
And so, because the site has so
many different types of roads,
from residential to expressways
to arterial roads
going between the two,
cul-de-sacs
and things like that,
we're able to stage
basically an infinite number
of scenarios
that you would encounter
on those types of roads
in the real world.
Great, let's go for a run.
Three, two, one.
Today's test:
on a street
where the view is blocked,
a car suddenly backs out
into the Waymo car's path.
Great job, everyone.
We can go a little spicier.
We can change the speed
at which the auxiliary vehicle
exits the driveway.
Does it rip out of its driveway
like a bat out of hell, really
coming out of the driveway
ahead of the Waymo vehicle?
Or is it slowly kind of
meandering down the driveway
and taking its time?
Great job, guys.
Let's restage and stand by.
Next,
the Waymo car has to handle
what's called a pinch point:
a two-way street made narrow
by parked cars.
Waymo rolling.
You encounter this really often
on public roads:
Two oncoming vehicles
have to negotiate
who will assume the right of way
and who will have to yield.
The other car arrives
at the pinch point first,
so the software should tell
the autonomous car to yield.
Yielding, yielding right here.
Like butter, going around.
Butter.
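The yielding rule that test exercises can be stated in a few lines. A simplification, of course; a real planner weighs many more signals, and the one-second margin here is an invented value:

```python
# Pinch point: whoever reaches the narrowing first takes the right of way.
def pinch_point_action(our_eta_s, their_eta_s, margin_s=1.0):
    """Yield unless we arrive comfortably ahead of the oncoming car."""
    if our_eta_s + margin_s < their_eta_s:
        return "proceed"
    return "yield"

print(pinch_point_action(our_eta_s=4.0, their_eta_s=2.5))  # "yield"
print(pinch_point_action(our_eta_s=1.0, their_eta_s=5.0))  # "proceed"
```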
These are just two
of the thousands of scenarios
the company uses
to train the software,
and they introduce new ones
all the time.
But ultimately,
the only way to know for sure
how a car will perform
on public roads
is to test it
in the real world...
where the stakes
for wrong decisions
are much higher.
We don't just say,
"Let's go for a test drive
just for the fun of it."
Because a test drive by itself
may be dangerous, okay?
But there are things that are
very difficult to check
without actually doing
a test drive.
And these are things
that involve negotiation
with other drivers.
Because in order to check
if you are negotiating
normally and properly
with other drivers,
you need other drivers.
Today, Mobileye engineers
are testing a part
of their planning software
that handles merging in traffic.
So, we merged fine,
but at the last point,
we're a bit slow.
Yes, yes.
Merging into traffic,
this multi-agent game
that we are playing,
requires
sophisticated negotiation.
You are negotiating.
Your motion signals
to other road users your intent.
So you are negotiating.
And this negotiation
requires skills.
And those skills don't come
naturally.
Those skills need to be trained.
You need to now change two lanes
because the exit is a
few hundred meters, uh, from us.
The software is trained to be
assertive when it has to be.
Here, the car signals
its intention to exit
by speeding up so it can merge.
And you saw
that we changed two lanes
without obstructing
the flow of traffic
and, and there are
many vehicles here.
You need to provide agility
that is as good as humans
if you want to be safe.
The skill gap
between autonomous systems
and human drivers
is narrowing.
And to teach self-driving cars
to be even better than humans,
some engineers
are exploiting the things
that computers can already
do extremely well.
Okay, you may begin.
Like control machinery
with extraordinary precision.
Autonomous mode
in three, two, one.
This car is drifting,
a kind of controlled skid.
And what makes
this feat possible
is a computer programmed
to exploit the laws of physics.
Here at Thunderhill Raceway
in California,
a team from Stanford University
is developing
self-driving software
that can take evasive action
to escape danger.
In an emergency situation,
you want to be able to use
all of the capabilities
of the tires
to do anything that's required
to avoid that collision.
Our automated vehicles
are able to put the vehicle
into a very heavy swerve
when that is the best choice
for how to avoid the collision.
The Stanford team applies
what they've learned
from the undisputed masters
of pushing car performance
to the limit:
race car drivers.
Race car drivers are always
pushing up to the limits.
But they're trying to avoid
accidents when they do that.
So, for instance, race car
drivers are trying to use
all the friction
between the tire and the road
to be fast.
We want to use all the friction
between the tire and the road
to be safe.
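"Using all the friction" has a concrete physical meaning: tire grip caps total acceleration at roughly μg, so the maximum speed through a curve follows directly from the friction coefficient and the curve radius (the μ value below is an assumed dry-asphalt figure):

```python
# Friction-limited cornering: lateral acceleration v^2 / r must stay
# below mu * g, so the maximum speed is v = sqrt(mu * g * r).
import math

def max_corner_speed(mu, radius_m, g=9.81):
    return math.sqrt(mu * g * radius_m)

# Dry asphalt (mu ~ 0.9, assumed) through a 50 m radius curve:
v = max_corner_speed(0.9, 50.0)
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")
```

A drifting controller and a race driver are both operating right at this boundary; the safety case is simply pointing the same grip budget at collision avoidance instead of lap times.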
While it might take
a race car driver years
to perfect skills like these,
once the software masters them,
a download could pass them on
to a whole fleet of cars
in just minutes.
We feel this is
a fundamental building block
of any type
of automated vehicle
that you would want to develop.
The vehicle should be able
to use all of its capabilities
to move out of harm's way.
You guys rock.
So, despite all the obstacles,
many engineers think
they're closing in on the prize:
self-driving cars safe enough
to trust.
Cars that could proliferate
very rapidly.
If and when that happens,
they will share the road
with human drivers.
How will that work?
Will we all get along?
In Michael Fleming's
test drives around the country,
he sees a clash brewing.
His company's autonomous car
is named Asimov,
after Isaac Asimov,
famed for his
science fiction books on robots.
Asimov has a clear-cut rule book
and is very consistent.
But oftentimes,
when we drive safely
and follow the letter
of the law,
we get honked at.
And do you know
who we get honked at by?
The aggressive drivers,
the rule breakers.
Fleming and his team
analyze the data
from the car's
strange encounters
and use it to improve
their software.
Whether it's a careless
pedestrian...
This lady just steps out
in the middle of the road.
And if you notice,
she doesn't even turn her head.
Or a wrong-way car.
We were driving down a one-way
street with multiple lanes.
And we see this white truck
driving the wrong way
down a one-way road.
So clearly we have
a rule breaker here,
doing something
that they shouldn't do.
You know, the development
of self-driving technology
would be pretty simple
if everyone just followed
the rules of the road.
If everyone came to a stop
at a stop sign.
If everyone used crosswalks.
But the reality is, they don't.
They break rules all the time.
Conflicts between
rule-bound automated cars
and impatient humans
are just one potential problem.
If robotaxi services
become very cheap,
might traffic actually increase,
and pollution grow worse?
How many people
who now earn their living
from driving
might lose their jobs?
If millions of cars
are electronically connected,
what risks might that pose
to our privacy and our security?
Finally, an ethical question:
Are we willing to accept
self-driving cars
that kill some people,
as long as they kill
fewer people
than human drivers do?
Despite these looming questions,
proponents say self-driving cars
could make transportation
both easier and safer.
I think new technology offers
the biggest safety tool
that we've had in 100 years.
That's the cultural
transformation that's coming.
But it is far from certain
that driverless cars will
ever deliver on that promise.
I think it is hubris to believe
that driving
is such a simple task
that, since there's so much more
automation in the world,
how hard could this be?
I'm a big fan of where we're
going with this technology,
but I also work on a day-to-day
basis with this technology,
and it's just simply not ready
for public consumption
to any verifiable degree
of safety.
I think that we,
as developers in the industry,
need to earn the public's trust,
and not the other way around.
I think we need to be able
to demonstrate
why our system is, in fact,
safer than human drivers.
If self-driving cars eventually
do win public trust,
their adoption
may be less of a revolution
than a slow evolution.
In my opinion,
the safe solutions today
work at low speeds
in low-complexity environments.
So this includes driving
on private roads,
on campuses, retirement
communities, airports.
But we do not have solutions
that work in general
at high speeds,
in congestion,
and in really difficult
road conditions.
Having a fully
autonomous vehicle
being able to take you
anywhere, anytime
is very, very far in the future.
In fact, I don't have
even a guess
as to how far in the future
that will be.
It's not like a mobile phone
app,
where, you know, if the
mobile phone app doesn't work
ten percent of the
time, big deal.
This has got to work
all the time.
There's a pot of gold out there
at the end of the rainbow
for those who can actually
get this to work.
Now, the challenge is
how to get it to work safely.
♪