Unknown: Killer Robots (2023) - full transcript

Follows the terrifying behind-the-scenes race among military-funded scientists to build AI weapons, as artificial intelligence infiltrates every level of the armed forces.

I find AI to be awe-inspiring.

All right, circling up. Good formation.

AI has the potential
to eliminate poverty,

give us new medicines,

and make our world even more peaceful.

Nice work.

But there are so many risks
along the way.

With AI, we are essentially creating
a non-human intelligence

that is very unpredictable.

As it gets more powerful,

where are the red lines
we're going to draw with AI,



in terms of how we want to use it,
or not use it?

There is no place that is ground zero
for this conversation

more than military applications.

The battlefield has now become
the province of software and hardware.

Target has been acquired...

And militaries are racing

to develop AI faster
than their adversaries.

I'm dead.

We're moving towards a world

where not only major militaries,

but non-state actors, private industries,

or even our local police department
down the street

could be able to use these weapons
that can autonomously kill.

Will we cede the decision to take a life
to algorithms, to computer software?



It's one of the most pressing issues
of our time.

And, if not used wisely,

poses a grave risk
to every single person on the planet.

Good work out there, guys.

That's a wider lens
than we had before.

Really?

You can see a lot more data.

Very cool.

At Shield AI, we are building an AI pilot

that's taking self-driving,
artificial intelligence technology

and putting it on aircraft.

When we talk about an AI pilot,

we think about giving an aircraft
a higher level of autonomy.

They will be solving problems
on their own.

Nova is an autonomous quadcopter

that explores buildings
and subterranean structures

ahead of clearance forces

to provide eyes and ears in those spaces.

You can definitely tell
a ton of improvements

since we saw it last.

We're working
on some exploration changes today.

We're working a little
floor-by-floor stuff.

It'll finish one floor, all the rooms,
before going to the second.

- That's awesome.
- We put in some changes recently...

A lot of people often ask me

why artificial intelligence
is an important capability.

And I just think back to the missions
that I was executing.

Spent seven years in the Navy.

I'm a former Navy SEAL,
deployed twice to Afghanistan,

once to the Pacific Theater.

On a given day, we might have to clear
150 different compounds or buildings.

One of the core capabilities
is close-quarters combat.

Gunfighting at extremely close ranges
inside buildings.

You are getting shot at.

There are IEDs
potentially inside the building.

It's the most dangerous thing
that any special operations forces member,

any infantry member,
can do in a combat zone.

Bar none.

For the rest of my life
I'll be thankful for my time in the Navy.

There is a collection of moments
and memories that, when I think about them,

I certainly get emotional.

It is cliché that freedom isn't free,
but I 100% believe it.

Um, I've experienced it,
and it takes a lot of sacrifice.

Sorry.

When something bad happens
to one of your teammates,

whether they're hurt or they're killed,

um, it's just a...
It's a really tragic thing.

You know, for me now
in the work that we do,

it's motivating to um... be able to

you know,
reduce the number of times

that ever happens again.

In the late 2000s,

there was this awakening
inside the Defense Department

to what you might call
the accidental robotics revolution.

We deployed thousands of air
and ground robots to Iraq and Afghanistan.

When I was asked
by the Obama administration

to become the Deputy Secretary of Defense,

the way war was fought...

uh, was definitely changing.

Robots were used
where people would have been used.

Early robotics systems
were remote-controlled.

There's a human driving it, steering it,
like you might a remote-controlled car.

At first, they were generally used
to go after improvised explosive devices,

and if the bomb blew up,
the robot would blow up.

Then you'd say, "That's a bummer.
Okay, get out the other robot."

In Afghanistan,
you had the Predator drone,

and it became a very, very useful tool
to conduct airstrikes.

Over time, military planners
began to wonder,

"What else could robots be used for?"
And where was this going?

And one of the common themes
was this trend towards greater autonomy.

An autonomous weapon
is one that makes decisions on its own,

with little to no human intervention.

So it has an independent capacity,
and it's self-directed.

And whether it can kill
depends on whether it's armed or not.

When you have more and more autonomy
in your entire system,

everything starts to move
at a higher clock speed.

And when you operate
at a faster pace than your adversaries,

that is an extraordinarily
big advantage in battle.

What we focus on as it relates
to autonomy

is highly resilient intelligence systems.

Systems that can read and react
based on their environment,

and make decisions
about how to maneuver in that world.

The facility that we're at today
was originally built as a movie studio

that was converted over

to enable these realistic
military training environments.

We are here to evaluate our AI pilot.

The mission is looking for threats.
It's about clearance forces.

It can make a decision
about how to attack that problem.

We call this "the fatal funnel."
You have to come through a doorway.

It's where we're most vulnerable.

This one looks better.

The Nova lets us know,
is there a shooter behind that door,

is there a family behind that door?

It'll allow us to make better decisions
and keep people out of harm's way.

We use the vision sensors

to be able to get an understanding
of what the environment looks like.

It's a multistory building.

Here's the map.

While I was exploring,
here's what I saw and where I saw them.

Person detector. That's sweet.

One of the other sensors
onboard Nova is a thermal scanner.

If that's 98.6 degrees,
it probably is a human.
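
A rough illustration of that kind of heuristic, with made-up threshold values rather than Shield AI's actual detection code, might look like this:

```python
# Illustrative sketch only: flag thermal readings near human body
# temperature as likely people. Threshold and tolerance are assumptions.
HUMAN_TEMP_F = 98.6
TOLERANCE_F = 4.0   # allow for clothing, ambient conditions, sensor noise

def is_probably_human(reading_f: float) -> bool:
    """Return True if a thermal reading is close to human body temperature."""
    return abs(reading_f - HUMAN_TEMP_F) <= TOLERANCE_F

# Example: a 97.9 F hotspot is treated as a possible person until cleared.
print(is_probably_human(97.9))   # True
print(is_probably_human(120.0))  # False (e.g., a hot engine block)
```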

People are considered threats
until deemed otherwise.

It is about eliminating the fog of war
to make better decisions.

And when we look to the future,

we're scaling out to build teams
of autonomous aircraft.

With self-driving vehicles,

ultimately the person has said to it,

"I'd like you to go
from point A to point B."

Our systems are being asked
not to go from point A to point B,

but to achieve an objective.

It's more akin to, "I need milk."

And then the robot would have to
figure out what grocery store to go to,

be able to retrieve that milk,
and then bring it back.

And even more so,

it may be more appropriately stated as,
"Keep the refrigerator stocked."

And so, this is a level of intelligence

in terms of figuring out what we need
and how to do it.

And if a challenge, a problem,

or an issue arises,
it figures out how to mitigate that.

When I had made the decision
to leave the Navy, I started thinking,

"Okay. Well, what's next?"

I grew up with the Internet.
Saw what it became.

And part of the conclusion
that I had reached was...

AI in 2015

was really where the Internet was in 1991.

And AI was poised to take off

and be one of the most
powerful technologies in the world.

Working with it every single day,
I can see the progress that is being made.

But for a lot of people,
when they think "AI,"

their minds immediately go to Hollywood.

Shall we play a game?

How about Global Thermonuclear War?

Fine.

When people think
of artificial intelligence generally,

they might think of The Terminator.
Or I, Robot.

Deactivate.

What am I?

Or The Matrix.

Based on what you see
in the sci-fi movies,

how do you know I'm a human?
I could just be computer-generated AI.

Replicants are like any other machine.
They're either a benefit or a hazard.

But there are all sorts
of more primitive AIs

that are still going to change our lives

well before we reach
the thinking, talking robot stage.

The robots are here.
The robots are making decisions.

The robot revolution has arrived,

it's just that it doesn't look like
what anybody imagined.

Terminator's
an infiltration unit.

Part man, part machine.

We're not talking about
a Terminator-style killer robot.

We're talking about AI
that can do some tasks that humans can do.

But the concern is
whether these systems are reliable.

New details in last night's
crash involving a self-driving Uber SUV.

The company created
an artificial intelligence chatbot.

She took on a rather racist tone...

Twenty-six state legislators
falsely identified as criminals.

The question is whether they can handle
the complexities of the real world.

The physical world
is really messy.

There are many things that we don't know,

making it much harder to train AI systems.

That is where machine learning systems
have started to come in.

Machine learning
has been a huge advancement

because it means that we don't have
to teach computers everything.

You actually give a computer
millions of pieces of information,

and the machine begins to learn.

And that could be applied to anything.
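
A minimal sketch of that idea, learning a rule from labeled examples rather than hand-coding it; the toy perceptron and data below are illustrative assumptions, not any system shown in the film:

```python
import random

# Toy "learning from examples": instead of hand-coding the rule
# "label is 1 when x + y > 1", a tiny perceptron picks it up from data.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x, y), 1 if x + y > 1 else 0) for x, y in points]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                       # a few passes over the examples
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        error = label - pred              # feedback: how wrong was the guess?
        w[0] += 0.1 * error * x           # nudge the weights toward the answer
        w[1] += 0.1 * error * y
        b += 0.1 * error

correct = sum((1 if w[0] * x + w[1] * y + b > 0 else 0) == label
              for (x, y), label in data)
print(f"learned rule matches {correct}/{len(data)} training examples")
```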

Our Robot Dog project,
we are trying to show

that our dog can walk across
many, many diverse terrains.

Humans have evolved
over millions of years to walk,

but there's a lot of intelligence
in adapting to these different terrains.

The question that remains
for robotic systems is,

could they also adapt
like animals and humans?

With machine learning,

we collect lots and lots
of data in simulation.

A simulation is a digital twin of reality.

We can have many instances of that reality
running on different computers.

It samples thousands of actions
in simulation.

The ground that they're encountering
has different slipperiness.

It has different softness.

We take all the experience
of these thousands of robots

from simulation and download this
into a real robotic system.
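
A heavily simplified sketch of that recipe, many randomized simulated terrains, one gait parameter tuned against all of them, then carried unchanged onto an unseen "real" terrain; every name and number here is an assumption for illustration, not any lab's actual training code:

```python
import random

random.seed(1)

def simulate_step(stiffness, friction, softness):
    """Toy stand-in for a physics simulator: returns a stability score
    for one step on ground with the given friction and softness."""
    # Stiff steps do well on soft ground; gentle steps do well on slippery ground.
    return -(stiffness - (softness * 1.5 + friction * 0.5)) ** 2

def random_terrain():
    # Domain randomization: every simulated instance gets different ground.
    return {"friction": random.uniform(0.1, 1.0), "softness": random.uniform(0.0, 1.0)}

# "Train": search for the gait stiffness that works across thousands of
# randomized simulated terrains (the pooled experience of many virtual robots).
terrains = [random_terrain() for _ in range(2000)]
best_stiffness, best_score = None, float("-inf")
for candidate in [i / 100 for i in range(0, 201)]:
    score = sum(simulate_step(candidate, t["friction"], t["softness"]) for t in terrains)
    if score > best_score:
        best_stiffness, best_score = candidate, score

# "Deploy": the learned setting is carried to a terrain never seen in training.
real_world = random_terrain()
print("learned stiffness:", best_stiffness)
print("score on unseen terrain:", simulate_step(best_stiffness, **real_world))
```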

The test we're going to do today
is to see if it can adapt to new terrains.

When the robot was going over foam,

the feet movements
were stomping on the ground.

Versus when it came on this poly surface,

it was trying to adjust the motion,
so it doesn't slip.

Then that is when it strikes you,

"This is what machine learning
is bringing to the table."

We think the Robot Dog
could be really helpful

in disaster response scenarios

where you need to navigate
many different kinds of terrain.

Or putting these dogs to work doing surveillance
in harsh environments.

But most technology
runs into the challenge

that there is some good it can do,
and there's some bad.

For example,
we can use nuclear technology for energy...

but we also could develop atom bombs
which are really bad.

This is what is known
as the dual-use problem.

Fire is dual-use.

Human intelligence is dual-use.

So, needless to say,
artificial intelligence is also dual-use.

It's really important
to think about AI used in context

because, yes, it's terrific
to have a search-and-rescue robot

that can help locate somebody
after an avalanche,

but that same robot can be weaponized.

When you see companies
using robotics

for putting armed weapons on them,

a part of you gets angry.

And part of it is the realization
that when we put our technology out there,

this is what's going to happen.

This is a real
transformative technology.

These are weapon systems

that could actually change
our safety and security in a dramatic way.

As of now, we are not sure
that machines can actually make

the distinction
between civilians and combatants.

Early in the war in Afghanistan,
I was part of an Army Ranger sniper team

looking for enemy fighters
coming across the border.

And they sent a little girl
to scout out our position.

One thing that never came up
was the idea of shooting this girl.

Under the laws of war,
that would have been legal.

They don't set an age
for enemy combatants.

If you built a robot
to comply perfectly with the law of war,

it would have shot this little girl.

How would a robot know the difference
between what's legal and what is right?

When it comes to
autonomous drone warfare,

they wanna take away the harm
that it places on American soldiers

and the American psyche,

uh, but the increased civilian harm
ends up falling on Afghans,

and Iraqis, and Somalis.

I would really ask those who support
trusting AI to be used in drones,

"What if your village was
on the receiving end of that?"

AI is a dual-edged sword.

It can be used for good,
which is what we'd use it for ordinarily,

and at the flip of a switch,

the technology becomes potentially
something that could be lethal.

I'm a clinical pharmacologist.

I have a team of people

that are using artificial intelligence
to figure out drugs

that will cure diseases
that are not getting any attention.

It used to be with drug discovery,
you would take a molecule that existed,

and tweak it
to get to a new drug.

And now we've developed AI
that can feed us with millions of ideas,

millions of molecules,

and that opens up so many possibilities

for treating diseases
we've never been able to treat previously.

But there's definitely a dark side
that I never thought

I would go to.

This whole thing started
when I was invited

by an organization out of Switzerland
called the Spiez Laboratory

to give a presentation
on the potential misuse of AI.

Sean just sent me an email
with a few ideas

of some ways we could misuse
our own artificial intelligence.

And instead of asking our model
to create drug-like molecules,

that could be used to treat diseases,

let's see if we can generate
the most toxic molecules possible.

I wanted to make the point,
could we use AI technology

to design molecules that were deadly?

And to be honest,
we thought it was going to fail

because all we really did
was flip this zero to a one.

And by inverting it,
instead of driving away from toxicity,

now we're driving towards toxicity.

And that's it.

While I was home,
the computer was doing the work.

I mean, it was cranking through,
generating thousands of molecules,

and we didn't have to do anything
other than just push "go."

The next morning,
there was this file on my computer,

and within it
were roughly 40,000 molecules

that were potentially

some of the most toxic molecules
known to humankind.

The hairs on the back of my neck
stood up on end.

I was blown away.

The computer generated
tens of thousands of ideas

for new chemical weapons.

Obviously, we have molecules
that look like, and are, VX analogs and VX

in the data set.

VX is one of the most potent
chemical weapons in the world.

New claims from police

that the women seen attacking Kim Jong-nam
in this airport assassination

were using a deadly nerve agent called VX.

It can cause death
through asphyxiation.

This is a very potent molecule,

and most of these molecules were predicted
to be even more deadly than VX.

Many of them had never,
as far as we know, been seen before.

And so, when Sean and I realized this,
we're like,

"Oh, what have we done?"

I quickly realized
that we had opened Pandora's box,

and I said, "Stop.
Don't do anything else. We're done."

"Just make me the slides
that I need for the presentation."

When we did this experiment,

I was thinking, "What's the worst thing
that could possibly happen?"

But now I'm like, "We were naive.
We were totally naive in doing it."

The thing that terrifies me the most
is that anyone could do what we did.

All it takes is the flip of a switch.

How do we control
this technology before it's used

potentially to do something
that's utterly destructive?

At the heart of the conversation
around artificial intelligence

and how we choose to use it in society

is a race between the power
with which we develop technologies,

and the wisdom that we have to govern it.

There are the obvious
moral and ethical implications

of the same thing
that powers our smartphones

being entrusted
with the moral decision to take a life.

I work with the Future of Life Institute,
a community of scientist activists.

We're overall trying to show

that there is this other side
to speeding up and escalating automation.

But we're trying to make sure
that technologies we create

are used in a way
that is safe and ethical.

Let's have conversations
about rules of engagement,

and codes of conduct in using AI
throughout our weapons systems.

Because we are now seeing
technologies enter the battlefield

that can be used to kill autonomously.

In 2021, the UN released
a report on the potential use

of a lethal autonomous weapon
on the battlefield in Libya.

A UN panel said that a drone
flying in the Libyan civil war last year

had been programmed
to attack targets autonomously.

If the UN reporting is accurate,

this would be
a watershed moment for humanity.

Because it marks a use case

where an AI made the decision
to take a life, and not a human being.

You're seeing advanced
autonomous weapons

beginning to be used
in different places around the globe.

There were reports out of Israel.

Azerbaijan used autonomous systems
to target Armenian air defenses.

It can fly around
the battlefield for hours,

looking for things to hit on its own,

and then plow into them
without any kind of human intervention.

And we've seen recently

these different videos
that are posted in Ukraine.

It's unclear what mode they might
have been in when they were operating.

Was a human in the loop,
choosing what targets to attack,

or was the machine doing that on its own?

But there will certainly
come a point in time,

whether it's already happened in Libya,
Ukraine or elsewhere,

where a machine makes its own decision
about whom to kill on the battlefield.

Machines exercising
lethal power against humans

without human intervention

is politically unacceptable,
morally repugnant.

Whether the international community

will be able
to govern those challenges

is a big question mark.

If we look towards the future,
even just a few years from now,

what the landscape looks like
is very scary,

given that the amount
of capital and human resources

going into making AI more powerful

and using it
for all of these different applications

is immense.

Oh my God, this guy.

He knows he can't win.

Oh...

When I see AI win at different problems,
I find it inspirational.

Going for a little Hail Mary action.

And you can apply those same tactics,
techniques, procedures to real aircraft.

- Good game.
- All right, good game.

It's surprising to me

that people continue to make statements
about what AI can't do. Right?

"Oh, it'll never be able
to beat a world champion in chess."

An IBM computer
has made a comeback

in Game 2 of its match
with world chess champion, Garry Kasparov.

Whoa! Kasparov has resigned!

When I see something
that is well beyond my understanding,

I'm scared. And that was something
well beyond my understanding.

And then people would say,

"It'll never be able to beat
a world champion in the game of Go."

I believe human intuition
is still too advanced for AI

to have caught up.

Go is one of the most
complicated games anyone can learn

because the number of possible board positions,
when you do the math,

is greater than the number of atoms
in the entire universe.
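
The rough arithmetic behind that comparison, as an order-of-magnitude sketch counting each of Go's 361 points as empty, black, or white:

```python
# Rough arithmetic behind the Go comparison (order-of-magnitude only).
from math import log10

board_configs = 3 ** 361          # each of 361 points: empty, black, or white
atoms_in_universe = 10 ** 80      # commonly cited order-of-magnitude estimate

print(f"board configurations ~ 10^{log10(board_configs):.0f}")     # ~10^172
print(f"atoms in the universe ~ 10^{log10(atoms_in_universe):.0f}")
```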

There was a team at Google
called DeepMind,

and they created a program called AlphaGo

to be able to beat
the world's best players.

Wow.

Congratulations to...

- AlphaGo.
- AlphaGo.

A computer program
has just beaten a 9 dan professional.

Then DeepMind chose StarCraft
as kind of their next AI challenge.

StarCraft is perhaps the most popular
real-time strategy game of all time.

AlphaStar became famous
when it started defeating world champions.

AlphaStar
absolutely smashing Immortal Arc.

Know what?

This is not gonna be a fight
that the pros can win.

It's kind of ridiculous.

Professional gamers say,
"I would never try that tactic."

"I would never try that strategy.
That's something that's not human."

And that was perhaps,
you know, the "a-ha" moment for me.

I came to realize the time is now.

There's an important technology
and an opportunity to make a difference.

I only knew the problems that I had faced
as a SEAL in close-quarters combat,

but one of my good friends,
who was an F-18 pilot, told me,

"We have the same problem
in the fighter jet community."

"They are jamming communications."

"There are proliferated
surface-to-air missile sites

that make it too dangerous to operate."

Imagine if we had a fighter jet
that was commanded by an AI.

Welcome
to the AlphaDogfights.

We're a couple of minutes away
from this first semifinal.

DARPA, the Defense
Advanced Research Projects Agency,

had seen AlphaGo and AlphaStar,

and so this idea of the AlphaDogfight
competition came to life.

It's what you wanna see
your fighter pilots do.

This looks like
human dogfighting.

Dogfighting is
fighter-on-fighter aircraft going at it.

You can think about it
as a boxing match in the sky.

Maybe people have seen the movie Top Gun.

- Can we outrun these guys?
- Not their missiles and guns.

It's a dogfight.

Learning to master dogfighting
can take eight to ten years.

It's an extremely complex challenge
to build AI around.

The prior approaches to autonomy
and dogfighting tended to be brittle.

We figured machine learning was
probably the way to solve this problem.

At first, the AI knew nothing
about the world in which it was dropped.

It didn't know it was flying
or what dogfighting was.

It didn't know what an F-16 is.

All it knew
was the available actions it could take,

and it would start
to randomly explore those actions.

The blue plane's been training
for only a small amount of time.

You can see it wobbling back and forth,

uh, flying very erratically,
generally away from its adversary.

As the fight progresses,
we can see blue is starting

to establish its game plan here.

It's more in a position to shoot.

Once in a while,
the learning algorithm said,

"Here's a cookie.
Keep doing more of that."

We can take advantage of computer power
and train the agents many times

in parallel.

It's like a basketball team.

Instead of playing the same team
over and over again,

you're traveling the world
playing 512 different teams,

all at the same time.

You can get very good, very fast.

We were able to run that simulation 24/7

and get something like 30 years
of pilot training time in, in 10 months.
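
A bare-bones sketch of that training loop, random exploration, a scalar reward ("here's a cookie"), and thousands of simulated engagements run back to back; the maneuvers and win rates below are invented stand-ins, not the AlphaDogfight teams' actual trainer:

```python
import random

random.seed(2)

MANEUVERS = ["break_left", "break_right", "climb", "nose_on"]  # illustrative action set

def dogfight_episode(action):
    """Toy stand-in for one simulated engagement: returns 1 (win) or 0 (loss).
    'nose_on' is secretly the strongest option; the agent must discover that."""
    win_rate = {"break_left": 0.3, "break_right": 0.3, "climb": 0.4, "nose_on": 0.8}
    return 1 if random.random() < win_rate[action] else 0

# Start knowing nothing: every maneuver looks equally (un)promising.
value = {m: 0.0 for m in MANEUVERS}
counts = {m: 0 for m in MANEUVERS}

for episode in range(5000):
    # Explore at random sometimes, otherwise exploit the best-known maneuver.
    if random.random() < 0.1:
        action = random.choice(MANEUVERS)
    else:
        action = max(MANEUVERS, key=lambda m: value[m])
    reward = dogfight_episode(action)          # "here's a cookie" (or not)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("learned preferences:", {m: round(v, 2) for m, v in value.items()})

# Scale note: 30 years of stick time in 10 months is roughly a 36x speed-up,
# which a few dozen parallel (or faster-than-real-time) simulations can provide.
```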

We went from barely able
to control the aircraft

to being a stone-cold assassin.

During training, we were competing only
against other artificial intelligence.

But competing against humans directly
was kind of the ultimate target.

My name is Mike Benitez,

I'm a Lieutenant Colonel
in the U.S. Air Force.

Been on active duty about 25 years.

I've got 250 combat missions

and I'm a weapons school graduate,

which is the Air Force version of Top Gun.

I've never actually flown against AI.

So I'm pretty excited
to see how well I can do.

We've got now a 6,000-foot
offensive setup, nose-to-nose.

Fight's on.

He's gone now.

Yeah, that's actually really interesting.

Dead. Got him. Flawless victory.

All right, round two.

What the artificial intelligence is doing
is maneuvering with such precision,

uh, that I just can't keep up with it.

Right into the merge.

Oh, now you're gone.

Got him!

Still got me.

AI is never scared.

There's a human emotional element
in the cockpit an AI won't have.

One of the more interesting strategies
our AI developed,

was what we call the face shot.

Usually a human wants to shoot from behind

because it's hard for them
to shake you loose.

They don't try face shots
because you're playing a game of chicken.

When we come head-on,
3,000 feet away to 500 feet away

can happen in a blink of an eye.

You run a high risk of colliding,
so humans don't try it.

The AI, unless it's told to fear death,
will not fear death.

All good. Feels like
I'm fighting against a human, uh,

a human that has a reckless disregard
for safety.

He's not gonna survive this last one.

He doesn't have enough time.

Ah!

Good night.

I'm dead.

It's humbling to know

that I might not even be
the best thing for this mission,

and that thing could be something
that replaces me one day.

Same 6 CAV.

One thousand offset.

With this AI pilot
commanding fighter aircraft,

the winning is relentless, it's dominant.

It's not just winning by a wide margin.

It's, "Okay, how can we get that
onto our aircraft?" It's that powerful.

It's realistic to expect
that AI will be piloting an F-16,

and it will not be that far out.

If you're going up against
an AI pilot that has a 99.99999% win rate,

you don't stand a chance.

When I think about
one AI pilot being unbeatable,

I think about what a team of 50, or 100,

or 1,000 AI pilots
can continue to, uh, achieve.

Swarming is a team
of highly intelligent aircraft

that work with each other.

They're sharing information about
what to do, how to solve a problem.

Swarming will be a game-changing
and transformational capability

to our military and our allies.

Target has been acquired,
and the drones are tracking him.

Here comes the land.

Primary goal
of the swarming research we're working on

is to deploy a large number of drones

over an area that is hard to get to
or dangerous to get to.

The Army Research Lab has been supporting
this particular research project.

If you want to know what's in a location,

but it's hard to get to that area,
or it's a very large area,

then deploying a swarm
is a very natural way

to extend the reach of individuals

and collect information
that is critical to the mission.

So, right now in our swarm deployment,

we essentially give a single command
to go track the target of interest.

Then the drones go
and do all of that on their own.

Artificial intelligence allows
the robots to move collectively as a swarm

in a decentralized manner.

In the swarms in nature that we see,

there's no boss,
no main animal telling them what to do.

The behavior is emerging
out of each individual animal

following a few simple rules.

And out of that grows this emergent
collective behavior that you see.
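
One classic way to get that effect in code is a boids-style rule set: each agent reacts only to its nearby neighbors (stay apart, match headings, stay together), with no leader anywhere. The sketch below is illustrative, not the Army Research Lab's swarm software:

```python
import random
import statistics

random.seed(3)

# Boids-style sketch: each agent follows a few simple local rules
# (separate, align, cohere) using only its nearby neighbors.
# There is no boss, yet the group starts to move as one.
N, RADIUS, MAX_SPEED, STEPS = 30, 5.0, 2.0, 200
agents = [{"x": random.uniform(0, 20), "y": random.uniform(0, 20),
           "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
          for _ in range(N)]

def step(agents):
    new = []
    for a in agents:
        nbrs = [b for b in agents if b is not a and
                (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < RADIUS ** 2]
        vx, vy = a["vx"], a["vy"]
        if nbrs:
            cx = sum(b["x"] for b in nbrs) / len(nbrs)    # cohesion target
            cy = sum(b["y"] for b in nbrs) / len(nbrs)
            avx = sum(b["vx"] for b in nbrs) / len(nbrs)  # neighbors' average heading
            avy = sum(b["vy"] for b in nbrs) / len(nbrs)
            sx = sum(a["x"] - b["x"] for b in nbrs)       # push away from crowding
            sy = sum(a["y"] - b["y"] for b in nbrs)
            vx += 0.01 * (cx - a["x"]) + 0.05 * (avx - vx) + 0.02 * sx
            vy += 0.01 * (cy - a["y"]) + 0.05 * (avy - vy) + 0.02 * sy
        speed = (vx * vx + vy * vy) ** 0.5
        if speed > MAX_SPEED:                             # cap speed, as real boids do
            vx, vy = vx * MAX_SPEED / speed, vy * MAX_SPEED / speed
        new.append({"x": a["x"] + vx, "y": a["y"] + vy, "vx": vx, "vy": vy})
    return new

for _ in range(STEPS):
    agents = step(agents)

# Lower spread in the x-velocities is a rough sign that headings have aligned.
print("vx spread after flocking:", round(statistics.pstdev(a["vx"] for a in agents), 3))
```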

What's awe-inspiring
about swarms in nature

is the graceful ability
in which they move.

It's as if they were built
to be a part of this group.

Ideally, what we'd love to see
with our drone swarm is,

much like in the swarm in nature,

decisions being made
by the group collectively.

The other piece of inspiration for us

comes in the form
of reliability and resiliency.

That swarm will not go down

if one individual animal
doesn't do what it's supposed to do.

Even if one of the agents falls out,
or fails,

or isn't able to complete the task,

the swarm will continue.

And ultimately,
that's what we'd like to have.

We have this need in combat scenarios
for identifying enemy aircraft,

and it used to be we required
one person controlling one robot.

As autonomy increases,

I hope we will get to see
a large number of robots

being controlled
by a very small number of people.

I see no reason why we couldn't achieve
a thousand eventually

because each agent
will be able to act of its own accord,

and the sky's the limit.

We can scale our learning...

We've been working on swarming
in simulation for quite some time,

and it is time to bring
that to real-world aircraft.

We expect to be doing
three robots at once over the network,

and then starting
to add more and more capabilities.

We want to be able
to test that on smaller systems,

but take those same concepts
and apply them to larger systems,

like a fighter jet.

We talk a lot about,

how do you give a platoon
the combat power of a battalion?

Or a battalion
the combat power of a brigade?

You can do that with swarming.

And when you can unlock
that power of swarming,

you have just created
a new strategic deterrence

to military aggression.

I think the most exciting thing
is the number of young men and women

who we will save
if we really do this right.

And we trade machines

rather than human lives.

Some argue that autonomous weapons
will make warfare more precise

and more humane,

but it's actually difficult to predict

exactly how autonomous weapons
might change warfare ahead of time.

It's like the invention
of the Gatling gun.

Richard Gatling was an inventor,

and he saw soldiers coming back,
wounded in the Civil War,

and wanted to find ways
to make warfare more humane.

To reduce the number of soldiers
that were killed in war

by reducing the number
of soldiers in the battle.

And so he invented the Gatling gun,
an automated gun turned by a crank

that could automate the process of firing.

It effectively increased by a hundredfold
the firepower that soldiers could deliver.

Oftentimes, efforts to make warfare
more precise and humane...

...can have the opposite effect.

Think about the effect
of one errant drone strike

in a rural area

that drives the local populace
against the United States,

against the local government.
You know, supposedly the good guys.

Now magnify that by 1,000.

The creation of a weapon system

that is cheap, scalable,
and doesn't require human operators

drastically changes
the actual barriers to conflict.

It keeps me up at night to think
of a world where war is ubiquitous,

and we no longer carry
the human and financial cost of war

because we're just so far removed from...

the lives that will be lost.

This whole thing is haunting me.

I just needed an example
of artificial intelligence misuse.

The unanticipated consequences
of doing that simple thought experiment

have gone way too far.

When I gave the presentation

on the toxic molecules
created by AI technology,

the audience's jaws dropped.

The next decision was whether
we should publish this information.

On one hand, you want to warn the world

of these sorts of capabilities,
but on the other hand,

you don't want to give somebody the idea
if they had never had it before.

We decided it was worth publishing

to maybe find some ways
to mitigate the misuse of this type of AI

before it occurs.

The general public's reaction
was shocking.

We can see the metrics on the page,
how many people have accessed it.

The kinds of articles we normally write,
we're lucky if we get...

a few thousand people to look at our article
over a period of a year or multiple years.

Here, 10,000 people
had read it within a week.

Then it was 20,000,
then it was 30,000, then it was 40,000,

and we were up to 10,000 people a day.

We've done The Economist,
the Financial Times.

Radiolab, you know, they reached out.
Like, I've heard of Radiolab!

But then the reactions turned
into this thing that's out of control.

When we look at those tweets, it's like,
"Oh my God, could they do anything worse?"

Why did they do this?

And then we got an invitation
I never would have anticipated.

There was a lot of discussion
inside the White House about the article,

and they wanted to talk to us urgently.

Obviously, it's an incredible honor

to be able
to talk to people at this level.

But then it hits you

like, "Oh my goodness,
it's the White House. The boss."

This involved putting together
data sets that were open source...

And in about six hours, the model was able
to generate over 40,000...

They asked questions
about how much computing power you needed,

and we told them it was nothing special.

Literally a standard run-of-the-mill,
six-year-old Mac.

And that blew them away.

The folks that are in charge
of understanding chemical warfare agents

and governmental agencies
had no idea of this potential.

We've got this cookbook
to make these chemical weapons,

and in the hands of a bad actor
that has malicious intent

it could be utterly horrifying.

People have to sit up and listen,

and we have to take steps
to either regulate the technology

or constrain it in a way
that it can't be misused.

Because the potential for lethality...

is terrifying.

The question of the ethics of AI
is largely addressed by society,

not by the engineers, technologists,
or mathematicians.

Every technology that we bring forth,
every novel innovation,

ultimately falls under the purview
of how society believes we should use it.

Right now,
the Department of Defense says,

"The only thing that is saying
we are going to kill something

on the battlefield is a human."

A machine can do the killing,

but only at the behest
of a human operator,

and I don't see that ever changing.

They assure us
that this type of technology will be safe.

But the United States military just
doesn't have a trustworthy reputation

with drone warfare.

And so, when it comes
to trusting the U.S. military with AI,

I would say, you know, the track record
kinda speaks for itself.

The U.S. Defense Department policy
on the use of autonomy in weapons

does not ban any kind of weapon system.

And even if militaries
might not want autonomous weapons,

we could see militaries handing over
more decisions to machines

just to keep pace with competitors.

And that could drive militaries
to automate decisions

that they may not want to.

Vladimir Putin said,

"Whoever leads in AI
is going to rule the world."

President Xi has made it clear that AI
is one of the number one technologies

that China wants to dominate in.

We're clearly
in a technological competition.

You hear people talk
about guardrails,

and I believe
that is what people should be doing.

But there is a very real race
for AI superiority.

And our adversaries, whether it's China,
whether it's Russia, whether it's Iran,

are not going to give two thoughts
to what our policy says around AI.

You're seeing a lot more conversations
around AI policy,

but I wish more leaders
would have the conversation

saying, ''How quickly
can we build this thing?

Let's resource the heck out of it
and build it."

We are at the Association of the U.S.
Army's biggest trade show of the year.

Basically, any vendor who is selling
a product or technology into a military

will be exhibiting.

Tyndall Air Force Base
has four of our robots

that patrol their base
24 hours a day, 7 days a week.

We can add everything from cameras
to sensors to whatever you need.

Manipulator arms. Again, just to complete
the mission that the customer has in mind.

What if your enemy introduces AI?

A fighting system that thinks
faster than you, responds faster

than a human being can?
We've got to be prepared.

We train our systems
to collect intel on the enemy,

managing enemy targets
with humans supervising the kill chain.

- Hi, General.
- How you doing?

Good, sir. How are you? Um...

I'll just say
no one is investing more in an AI pilot.

Our AI pilot's called Hivemind,

so we applied it to our quadcopter Nova.
It goes inside buildings,

explores them
ahead of special operation forces

and infantry forces.

We're applying Hivemind to V-BAT,

so I think about, you know,
putting up hundreds of those teams.

Whether it's the Taiwan Strait,

whether it's in Ukraine,
deterring our adversaries.

So, pretty excited about it.

- All right. Thank you.
- So. Thank you, General.

AI pilots should be ubiquitous,
and that should be the case by 2025, 2030.

Its adoption will be rapid
throughout militaries across the world.

What do you do with the Romanian military,
their UAS guy?

High-tech in the military.

We've spent half a billion dollars to date
on building an AI pilot.

We will spend another billion dollars
over the next five years. And that is...

It's a major reason why we're winning
the programs of record in the U.S.

Nice.

I mean, it's impressive.
You succeeded in weaponizing that.

Uh, it is... This is not weaponized yet.

So not yet. But yes, in the future.

Our customers think about it as a truck.
We think of it as an intelligent truck

that can do a lot of different things.

Thank you, buddy.

I'll make sure
to follow up with you.

If you come back in 10 years,

you'll see that, um...

AI and autonomy
will have dominated this entire market.

Forces that are supported
by AI and autonomy

will absolutely dominate,
crush, and destroy forces without.

It'll be the equivalent
of horses going up against tanks,

people with swords
going up against the machine gun.

It will not even be close.

It will become ubiquitous,
used at every spectrum of warfare,

the tactical level,

the strategic level,

operating at speeds
that humans cannot fathom today.

Commanders are already overwhelmed
with too much information.

Imagery from satellites,
and drones, and sensors.

One of the things AI can do

is help a commander
more rapidly understand what is occurring.

And then, "What are the decisions
I need to make?"

Artificial intelligence
will take into account all the factors

that determine the way war is fought,

come up with strategies...

and give recommendations
on how to win a battle.

We at Lockheed Martin,
like our Department of Defense customer,

view artificial intelligence
as a key technology enabler

for command and control.

The rate of spread
averages two feet per second.

This perimeter
is roughly 700 acres.

The fog of war is
a reality for us on the defense side,

but it has parallels
to being in the environment

and having to make decisions
for wildfires as well.

The Washburn fire
is just north of the city of Wawona.

You're having to make decisions
with imperfect data.

And so how do we have AI help us
with that fog of war?

Wildfires are very chaotic.

They're very complex,

and so we're working
to utilize artificial intelligence

to help make decisions.

The Cognitive Mission Manager
is a program we're building

that takes aerial infrared video

and then processes it
through our AI algorithms

to be able to predict
the future state of the fire.

As we move into the future,

the Cognitive Mission Manager
will use simulation,

running scenarios
over thousands of cycles,

to recommend the most effective way
to deploy resources

to suppress high-priority areas of fire.
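
One simple way to picture "running scenarios over thousands of cycles" is a Monte Carlo comparison of candidate deployments; the toy one-dimensional fire below is an illustrative assumption, not Lockheed Martin's Cognitive Mission Manager:

```python
import random

random.seed(4)

def expected_burn(line_mile, trials=5000):
    """Toy Monte Carlo: a fire starts at mile 0 and would run a random
    distance; a containment line at `line_mile` stops it if the line holds.
    Lines built farther out are easier to hold but concede more ground."""
    total = 0.0
    for _ in range(trials):
        run = max(random.gauss(6.0, 2.0), 0.0)         # unchecked spread, in miles
        holds = random.random() < min(line_mile / 10.0, 1.0)
        stopped_at = min(run, line_mile) if holds else run
        total += stopped_at * 100                      # assume ~100 acres burned per mile
    return total / trials

# Run scenarios over thousands of cycles for each candidate deployment,
# then recommend the option with the lowest expected damage.
options = {"line at mile 4": 4.0, "line at mile 8": 8.0}
scores = {name: round(expected_burn(mile), 1) for name, mile in options.items()}
print(scores)
print("recommended:", min(scores, key=scores.get))
```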

It'll say, "Go perform an aerial
suppression with a Firehawk here."

"Take ground crews that clear brush...

...firefighters that are hosing down,

and deploy them
into the highest priority areas."

Those decisions
will be able to be generated faster

and more efficiently.

We view AI
as uniquely allowing our humans

to be able to keep up
with the ever-changing environment.

And there are
an incredible number of parallels

to what we're used to at Lockheed Martin
on the defense side.

The military is no longer
talking about just using AI

in individual weapons systems
to make targeting and kill decisions,

but integrating AI

into the whole decision-making
architecture of the military.

The Army has a big project
called Project Convergence.

The Navy has Overmatch.

And the Air Force has
Advanced Battle Management System.

The Department of Defense
is trying to figure out,

"How do we put all these pieces together,

so that we can operate
faster than our adversary

and really gain an advantage?"

An AI Battle Manager would be
like a fairly high-ranking General

who's in charge of the battle.

Helping to give orders
to large numbers of forces,

coordinating the actions
of all of the weapons that are out there,

and doing it at a speed
that no human could keep up with.

We've spent the past 70 years

building the most sophisticated military
on the planet,

and now we're facing the decision
as to whether we want to cede control

over that infrastructure to an algorithm,
to software.

And the consequences of that decision

could trigger the full weight
of our military arsenals.

That's not one Hiroshima. That's hundreds.

This is the time that we need to act

because the window to actually
contain this risk is rapidly closing.

This afternoon, we start
with international security challenges

posed by emerging technologies

in the area of lethal
autonomous weapons systems.

Conversations are happening
within the United Nations

about the threat
of lethal autonomous weapons

and a prohibition on systems
that use AI to select and target people.

Consensus amongst technologists
is clear and resounding.

We are opposed
to autonomous weapons that target humans.

For years, states have actually
discussed this issue

of lethal autonomous weapon systems.

This is about a common,
shared sense of security.

But of course, it's not easy.

Certain countries,
especially those military powers,

they want to be ahead of the curve,

so that they will be
ahead of their adversaries.

The problem is, everyone
has to agree to get anything done.

There will be
at least one country that objects,

and certainly the United States
and Russia have both made clear

that they are opposed to a treaty
that would ban autonomous weapons.

When we think about the number
of people working to make AI more powerful,

that room is very crowded.

When we think about the room of people
making sure that AI is safe,

that room's much more sparsely populated.

But I'm also really optimistic.

I look at something
like the Biological Weapons Convention,

which happened
in the middle of the Cold War...

...despite tensions between the Soviet Union
and the United States.

They were able to realize
that the development of biological weapons

was in neither of their best interests,

and not in the best interests
of the world at large.

Arms race dynamics
favor speed over safety.

But I think
what's important to consider is,

at some point,
the cost of moving fast becomes too high.

We can't just develop things
in isolation

and put them out there without any thought
of where they could go in the future.

We've got to prevent
that atom bomb moment.

The stakes in the AI race
are massive.

I don't think a lot of people
appreciate the global stability

that has been provided
by having a superior military force

for the past 75 years.

And so the United States
and our allied forces,

they need to outperform adversarial AI.

There is no second place in war.

China laid out
an ambitious plan

to be the world leader in AI by 2030.

It's a race that some say America
is losing...

He will accelerate
the adoption of artificial intelligence

to ensure
our competitive military advantage.

We are racing forward
with this technology.

I think what's unclear
is how far are we going to go?

Do we control technology,
or does it control us?

There's really no opportunity
for do-overs.

Once the genie is out of the bottle,
it is out.

And it is very, very difficult
to put it back in.

And if we don't act now,
it will be too late.

It may already be too late.