The A.I. Race (2017) - full transcript

A discussion of artificial intelligence replacing humans, the technological advances (Moore's Law) driving AI's capabilities, and artificial intelligence that learns.

(mystical music)

[Female Robotic voice] Hi, how are you feeling?

I just checked your health data.

Your last meal contributed 60% of your daily nutrients.

And you've completed 11,000 steps towards

your daily fitness goal.

So you can take a seat now.

I've got something for you to watch.

And I'll be watching, too.

Tonight, you're going to see humans take on

the robots that might replace them.



There will always need to be experienced

people on the road.

From truckies to lawyers,

artificial intelligence is coming.

Actually it's already here.

I didn't realize that it would just be able

to tell you, hey, here's the exact answer

to your question.

We'll challenge your thinking about AI.

[Male Robotic Voice] Same category, 1600.

The AI's going to become like electricity.

Automation isn't going to affect

some workers, it's going to affect every worker.

And we let the generation most affected



take on the experts.

I think that the younger generations

probably have a better idea of where things

are going than the older generations.

(laughing)

Tonight, we'll help you get ready

for The AI Race.

(upbeat music)

(mellow music)

Australian truckies often work up to

72 hours a week and are now driving

bigger rigs to try to make ends meet.

I've seen a lot of people go backwards

out of this industry and I'm seeing

a lot of pressures it's caused them,

their family life, especially when you're

paying the rig off.

Now a new and unexpected threat to Frank

and other truck drivers is coming on fast.

Last year this driverless truck in the US

became the first to make an interstate delivery.

It traveled nearly 200 kilometers on the open road

with no one at the wheel, no human that is.

The idea of robot vehicles on the open road

seemed ludicrous to most people just five years ago.

Now just about every major auto and tech company

is developing them.

So what changed?

An explosion in artificial intelligence.

(mellow music)

There's lots of AI already in our lives.

You can already see it on your smartphone

every time you use Siri, every time you ask

Alexa a question.

Every time you actually use your cellphone

navigation, you're using one of these

algorithms, some AI that's

recognizing your speech, answering questions,

giving you search results, recommending books

for you to buy on Amazon.

They're the beginnings of AI everywhere in our lives.

(gentle music)

We don't think about electricity.

Electricity powers our planet.

It powers pretty much everything we do.

It's going to be that you walk into a room

and you say, room, lights on.

You walk and you sit in your car and you say,

take me home.

(whistles)

A driverless car is essentially a robot.

It has a computer that takes input from its senses

and produces an output.

The main sensors are radar, which can be found

in adaptive cruise control.

Ultrasonic sensors, and then there are cameras

that collect images.

And this data is used to control the car,

to slow the car down, to accelerate the car,

to turn the wheels.
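
To make that loop concrete, here is a minimal sense-plan-act sketch in Python. The sensor readings, thresholds and commands are purely illustrative assumptions, not anything from a real vehicle.

```python
# A minimal sense-plan-act loop, illustrating the idea described above.
# The sensor values, thresholds and control commands are invented for
# illustration; a real vehicle fuses radar, ultrasonic and camera data
# with far more sophisticated models.

def read_sensors():
    # Placeholder: in a real system these would come from hardware drivers.
    return {"radar_gap_m": 42.0, "ultrasonic_gap_m": 3.5, "camera_lane_offset_m": 0.1}

def decide(sensors, target_gap_m=30.0):
    """Turn raw readings into simple control commands."""
    commands = {"throttle": 0.0, "brake": 0.0, "steer": 0.0}
    if sensors["radar_gap_m"] < target_gap_m:
        commands["brake"] = 0.5          # too close to the vehicle ahead: slow down
    else:
        commands["throttle"] = 0.3       # gap is comfortable: keep accelerating gently
    commands["steer"] = -sensors["camera_lane_offset_m"]  # steer back toward lane centre
    return commands

if __name__ == "__main__":
    print(decide(read_sensors()))
```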

There's been an explosion in AI now,

because of the convergence of four exponentials.

The first exponential is Moore's Law.

The fact that every two years we've had a doubling

in computing performance.

The second exponential is that every two years

we've had a doubling of the amount of data that we have.

Because these machine learning algorithms

are very hungry for data.

The third exponential is that well we've been

working on AI for 50 years or so now.

And our algorithms are starting to get better.

And then the fourth exponential, which is

over the last few years, we've had a doubling

every two years of the amount of funding

going into AI.

We now have the compute power.

We now have the data.

We now have the algorithms.

We now have a lot of people working on the problems.
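
For a rough sense of what "a doubling every two years" compounds to, here is a small back-of-the-envelope calculation; the baseline of 1 is arbitrary.

```python
# Rough illustration of "a doubling every two years": relative growth
# after N years, starting from an arbitrary baseline of 1.
def doublings(years, period=2):
    return 2 ** (years / period)

for years in (2, 10, 20, 50):
    print(f"after {years:2d} years: {doublings(years):,.0f}x the baseline")
# After 20 years that's roughly a 1,000-fold increase; after 50, about 33 million-fold.
```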

(engine turns)

It could be you just jump into the car.

You assume the car knows where you need to go

because it has access to your calendar, your diary,

where you're meant to be.

And if you did not want the car to go

where your calendar says you ought to be,

then you need to tell the car, oh, by the way,

don't take me to the meeting that's in my calendar,

take me to the beach.

[Host] But Frank Black won't have a bar of it.

I think it's crazy stuff.

You've got glitches in computers now.

The banks are having glitches with their ATMs

and emails are having glitches.

Who's to say this is going to be perfect

and this is a lot more dangerous if there's

a computer glitch.

There'll always need to be experienced people

on the road, not machines.

Frank is going to explain why he believes

robots can never match human drivers.

(mellow music)

Okay then, let's do it.

But Frank is off to a rocky start.

Driverless trucks in Rio Tinto mines

in Western Australia show productivity gains

of 15%.

Frank needs a break every five hours

and rest every 12.

Oh and he needs to eat and he expects to be paid

for his work.

Robots don't need a salary.

Trials also indicate that driverless vehicles

save up to 15% on fuel and emissions.

Especially when driving very close together

in a formation called platooning.

And at first glance, driverless technology

could dramatically reduce road accidents

because it's estimated that 90% of accidents

are due to human error such as fatigue,

or loss of concentration.

Robots don't get tired.

But hang on, Frank's not done.

He's about to launch a comeback using 30 years

of driving experience.

If there's something, say like a group of kids

playing with a ball on the side of the road.

We can see that ball starting to bounce

towards the road.

We anticipate that it would be a strong possibility

that that child will run out in the road,

you know, after that ball.

I can't see how a computer can anticipate

that for a start and even if it did,

what sort of reaction will it take?

Would it say, swerve to the left,

swerve to the right?

Will it just brake and bring the vehicle

to a stop?

What about if it can't stop in time?

In fact, right now, a self-driving

vehicle can only react according to its program.

Anything unprogrammed can create problems.

Like when this Tesla drove into a roadworks barrier,

after the human driver failed to take back control.

And what if some of the sensors fail?

What happens if something gets on the lens

and it doesn't know where it's going?

It's true, currently heavy rain or fog

or even unclear road signs can bamboozle

driverless technology.

And then there's the most unpredictable

element of all.

Human drivers.

Stupidity always finds new forms.

Quite often you see things you've never seen before.

That's why there are no plans to trial

driverless trucks in complex urban settings right now.

They'll initially be limited to predictable

multi-lane highways.

You also still need a human right now

to load and unload a truck.

And a robot truck won't help change your tire.

If someone's in trouble on the road,

you'll usually find that a trucker

will pull over and make sure they're all right.

Finally there are road rules.

Australia requires human hands on the steering wheel

at all times, in every state and territory.

Hey Frank.

You won the race.

One for the human beings.

(laughing)

(upbeat music)

But how long can human drivers

stay on top?

Nearly 400,000 Australians earn their living

from driving, many more when you add part-time drivers.

But the race is on to deliver the first

version of a fully autonomous vehicle

in just four years.

And it might not be hype.

Because AI is getting much better, much faster

every year.

With a version of AI called Machine Learning.

(mellow music)

Machine learning is the little part of AI

that's focused on teaching programs to learn.

If you think about how we got to be intelligent,

we started out not knowing very much

when we were born and most of what we got

is through learning.

And so we write programs that learn

to improve themselves.

They need, at the moment, lots of data.

And they get better and better and in many cases,

for certain narrow, focused domains,

we can often actually exceed human level performance.

When AlphaGo beat Lee Sedol last year,

one of the best Go players on the planet,

that was a landmark moment.

So we've always used games as benchmarks,

both between humans and between humans and machines.

And a quarter century ago, chess fell

to the computers.

And at that time, people thought,

well, Go is not going to be like that.

Because in Go, there are so many more possible moves.

And the best Go players weren't working

by trying all possibilities ahead.

They were working on kind of the gestalt of

what it looked like and working on intuition.

And we didn't have any idea of how to

instill that type of intuition into a computer.

(mesmerizing music)

But what happened is we've got some recent

techniques with deep learning where

we're able to do things like understand

photos, understand speech and so on

and people said, maybe this will be

the key to getting that type of intuition.

So, first it started by practicing on

every game that a master had ever played.

You feed them all in and it practices on that.

The key was to get AlphaGo good enough

from training it on past games by humans

so that it could then start playing itself

and improving itself.
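
As a toy illustration of that two-phase idea, imitate a few recorded games and then keep improving by self-play, here is a sketch for a trivially small game. Everything in it, the game, the update rule, the numbers, is an invented stand-in; it is not AlphaGo's actual algorithm.

```python
import random

# Toy illustration only: a 5-stone take-away game (take 1 or 2 stones each
# turn; taking the last stone wins). The "policy" is a table of move
# preferences per state. Phase 1 imitates a few recorded games; phase 2
# improves by self-play, nudging preferences toward moves that appeared in
# winning games. A cartoon of the idea, not AlphaGo's actual algorithm.

STATES = range(1, 6)
policy = {s: {m: 1.0 for m in (1, 2) if m <= s} for s in STATES}

def choose(stones):
    moves = list(policy[stones])
    weights = [policy[stones][m] for m in moves]
    return random.choices(moves, weights)[0]

def play_game():
    """Self-play one game; return the move history and the winner (0 or 1)."""
    stones, player, history = 5, 0, []
    while True:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            return history, player          # whoever took the last stone wins
        player = 1 - player

def reinforce(history, winner, lr=0.1):
    for player, stones, move in history:
        if player == winner:
            policy[stones][move] += lr      # prefer moves seen in winning play

# Phase 1: "train on past games by humans" (here, one hand-written game).
human_games = [([(0, 5, 2), (1, 3, 1), (0, 2, 2)], 0)]
for history, winner in human_games:
    reinforce(history, winner)

# Phase 2: self-play -- the policy plays itself and learns from the outcomes.
for _ in range(5000):
    history, winner = play_game()
    reinforce(history, winner)

# Show which move the learned policy now prefers in each state.
# (The winning strategy in this game is to leave a multiple of 3 behind.)
print({s: max(policy[s], key=policy[s].get) for s in STATES})
```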

And one thing that's very interesting

is that the amount of time it took,

the total number of person-years invested,

is a tenth or less of the amount of time

it took for IBM to do the chess playing.

So the rate of learning is going to be exponential.

Something that we, as humans, are not used

to seeing.

We have to learn things painfully ourselves.

And the computers are going to learn

on a planet-wide scale, not on an individual level.

(mesmerizing music)

There is this interesting idea

that the intelligence would just suddenly explode

and take us to what's called the singularity,

where machines now improve themselves

almost without end.

There are lots of reasons to suppose that

maybe that might happen, but if it does happen,

most of my colleagues think it's about

50 years away, maybe even 100.

(robot gasps)

I'm not convinced of how important

intelligence is.

So I think that there's lots of different

attributes and intelligence is only one of them

and there certainly are tasks that having

a lot of intelligence would help.

And being able to compute quickly would help

so if I want to trade stocks then having a computer

that's smarter than anybody else is going to give

me a definite advantage.

But I think if I wanted to solve the Middle East

crisis, I don't think it's going unsolved

because nobody's smart enough.

But AI experts believe robot cars

will improve so much that humans will eventually

be banned from driving.

(dramatic music)

Big road blocks remain, not the least

of which is public acceptance.

As we found out after inviting professional

drivers to meet two robot car experts.

How are you doing, Maria?

Straight away, the first thing has to be safety.

You definitely have to have safety paramount.

And obviously efficiency.

So the big question, when is it going to happen?

In the next five to 10 years we will see

highly autonomous vehicles on the road.

If you want to drive from the city to Canberra,

you drive to the freeway, activate autopilot

or whatever it will be called at the time

and by the time you arrive in Canberra,

the car will ask you to take back control.

There are predictions that in 20 years time,

50% of new vehicles will actually be

completely driverless.

What makes us think that these computers

in these vehicles are going to be fool-proof?

Well we were able to send rockets to the moon

and you know, I think that there are ways

of doing it and you can have backup systems

and you have backups for your backups and,

but I agree.

Reliability is kind of a big question mark.

But we're not talking about a phone call dropping out

or an email shutting down, we're talking about

a 60 ton vehicle, in traffic, that's going to

kill people, there will be deaths

if it makes a mistake.

I think we need to accept that there will

still be accidents and a machine can make

a mistake, can shutdown, can fail

and if we reduce accidents by,

say 90%, then 10% of the current

accidents will still occur on the network.

Who said it's going to be 90%?

How do you work that out?

90% is because 90% of the accidents

are because of human error.

And the idea is if we take the human out

we could potentially reduce it by 90%.

Have any of you ever driven a car

available on the market today with all this

technology, autopilot and everything in there?

It's absolutely unbelievable how safe

and comfortable you feel.

I think people will ultimately accept

this technology because we will be going

in steps.

I would say, for me as an Uber driver,

we're providing a passenger service

and those passengers, when they're going to

the airport, have a lot of luggage.

If it's an elderly passenger, they need help

to get into the car, they need help

getting out of the car.

The human factor needs to be there.

I would argue that you can offer

a much better service if you're not

also driving.

So the car's taking care of the journey

and you're taking care of the customer.

And improving the customer experience.

And I think that there's a lot of scope for improvement

in the taxi and Uber customer experience.

You could offer tax advice, you could offer

financial advice.

(laughing)

It's unlimited.

Then we go back though, they're not fully driverless

vehicles anymore, we've still got a babysitter

there, a human being to look after the cars.

So what are we gaining with the driverless technology?

Well, the opportunity to do that.

Yeah, but, that's--

Are you trying to reduce cost by not having

a driver in the vehicle?

Well, it depends on what people are paying for, okay?

And if you are in business, you are trying

to get as many customers as possible.

And if your competitor has autonomous vehicles

and is offering, you know, daycare services

or looking after the disabled, you probably

won't be in business very long if they're able to

provide a much better customer experience.

For my personal use, I like to drive my car.

I want to enjoy driving.

Well I think in 50 years there will be

special places for people with vintage cars

and they can go out and drive around.

(all laugh)

(drowned out by laughter) Someday driving

our vintage car when these autonomous vehicles

have got our roads.

I mean, in the future when all the cars

are autonomous, we won't need traffic lights.

Okay, because the cars will just negotiate

between themselves when they come to intersections,

roundabouts.

Can I ask you a question?

If we would do a trial

with highly automated

platooning of big road trains,

would you like to be involved?

Yes, I would be involved, yeah, yeah,

Why not?

You convinced Frank, yeah.

If you can convince Frank, you can convince anybody.

Do you want to come out with us and I bet

Frank's the same as well, if you want to

come for a drive in the truck and see

exactly what it's like and the little issues

that would never have been thought of,

I mean, my door is always open,

you're more than welcome to come with me.

Oh definitely, I think it's--

It's time for a road trip.

(all laugh)

The drivers aren't the only ones

trying to find their way into the AI future.

(mellow music)

Across town, it's after work drinks

for a group of young and aspiring professionals.

Most have at least one university degree

or are studying for one.

Like Christine Maibom.

I think as law students we know now

that it's pretty tough even to like

get your foot in the door.

I think that at the end of the day,

the employment rate for grads is still pretty high.

Tertiary degrees usually shield against

technological upheaval, but this time

AI will automate not just more physical tasks

but thinking ones.

(dramatic music)

Waiting upstairs for Christine is a new

artificial intelligence application.

One that could impact the research typically done

by paralegals.

We invited her to compete against it

in front of her peers.

Adelaide tax lawyer, Adrian Cartland came up with

the idea for the AI called Ailira.

I'm here with Ailira, the Artificially Intelligent

Legal Information Research Assistant.

And you're going to see if you can beat her.

So what we've got here is a tax question.

Adrian explains to Christine what sounds like

a complicated corporate tax question.

Does that make sense?

Yep, yeah.

Very familiar?

All right, ready?

I'm ready.

Okay guys, ready, set, go.

(upbeat music)

And here we have the answer.

So you've got the answer?

We're done.

That's 30 seconds.

Christine, where are you

up to with the search?

I'm at section 44 of the income tax assessment guide.

(laughing)

Maybe it has the answer, I haven't looked for it yet.

(laughing)

You're in the right act, so now do you want

to keep going or do you wanna give some more time?

I can keep going for a little bit, yeah, sure.

(upbeat music)

No pressure, Christine.

We're at one minute.

(laughs) Okay.

Whew, I might need help on this one.

This is, you know, really complex tax law.

Like I've given you a hard question.

You were in the income tax assessment act,

you were just doing research, what is your process?

Normally what I would do is probably

try to find the legislation first and then

I'll probably look to any commentary on the issue.

Yep.

Find specific keywords so for example,

consolidated group and assessable income

obviously there.

That's a pretty standard way.

That's how I would approach it.

If you put this whole thing into a keyword search,

it's going to break down.

Keyword searches break down after about

four, five, or seven words.

Whereas this is, you know, 300-400 words.

So all I've done is I've entered in the question here,

copied and pasted it.

I've clicked on submit.

And she's read through, literally, millions

of cases as soon as I pressed search

and then she's come through and she said,

here are the answers.

Oh wow.

She's highlighted in there what she thinks

is the answer.
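
A generic illustration of why feeding in the whole question can work where a short keyword search breaks down: score every document against the full question and return the best match. This is not Ailira's actual method; the "cases" below are obvious toy stand-ins, and it assumes scikit-learn is installed.

```python
# Generic illustration (not Ailira's actual method): instead of matching a
# handful of keywords, score every document against the *whole* question
# using TF-IDF cosine similarity and return the best match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "Dividends paid within a consolidated group are disregarded for assessable income purposes...",
    "Capital gains tax applies on the disposal of an asset acquired after 1985...",
    "A deduction is allowed for expenses necessarily incurred in producing assessable income...",
]

question = ("A company that is the head of a consolidated group receives a dividend "
            "from a wholly owned subsidiary; is the dividend assessable income?")

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(cases + [question])
scores = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()

best = scores.argmax()
print(f"best match (score {scores[best]:.2f}):", cases[best])
```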

Yeah, I mean, wow.

I mean even down to the fact that it can

answer those very specific questions,

I didn't realize that it would just be able to

tell you, hey, here's the exact answer

to your question.

It's awesome.

I think obviously, for paralegals,

I think it's particularly scary because I mean

we're already in such a competitive market.

Adrian Cartland believes AI could

blow up lawyers' monopoly on basic legal know-how.

And he has an astonishing example of that.

My girlfriend is a speech pathologist who has

no idea about law and she used Ailira

to pass the Adelaide University tax law exam.

Oh wow.

Automation is moving up in the world.

Here's Claire, a financial planner.

It's estimated that 15% of an average financial planner's

time is spent on tasks that can be done by AI.

What kind of things do you see it ultimately

taking over?

I would say that everything except talking to

your clients.

Yeah.

Here is Simon.

He used to be a secondary school teacher.

One fifth of that job can be done by AI.

Simon's now become a university lecturer,

which is less vulnerable.

I think there's huge potential

for AI and other educational technologies.

Obviously it's a little bit worrying

if we're talking about making a bunch

of people redundant.

And did I mention journalists?

I hope you enjoyed tonight's program.

The percentage figures were calculated

by economist Andrew Charlton and his team

after drilling into Australian workforce

statistics.

For the first time, we broke the Australian

economy down into 20 billion hours of work.

And we asked, what does every Australian do

with their day?

And how will what they do in their job change

over the next 15 years?

I think the biggest misconception is that

everyone talks about automation as destroying jobs.

The reality is that automation changes every job.

It's not so much about what jobs will we do,

but how will we do our jobs because automation

isn't going to affect some workers,

it's going to affect every worker.

But if there's less to do at work,

that's got to mean less work or less pay

or both, doesn't it?

If Australia embraces automation,

there is a 2.1 trillion dollar opportunity for us

over the next 15 years.

But here's the thing.

We only get that opportunity if we do two things.

Firstly, if we manage the transition

and we ensure that all of that time

that is lost to machines, from the Australian workplace

is redeployed and people are found new jobs

and new tasks.

And condition number two is that we embrace automation

and bring it into our workplace and take advantage

of the benefits of technology and productivity.

But Australia's not doing well at either.

Right now, Australia's lagging.

One in 10 Australian companies is embracing

automation and that is roughly half the rate

of some of our global peers.

Australia hasn't been very good historically

at transitioning workers affected by

big technology shifts.

Over the last 25 years, one in 10 unskilled men

who lost their job, never worked again.

Today, four in 10 unskilled men don't participate

in the labor market.

(dramatic music)

We asked a group of young lawyers

and legal students, how they felt about embracing AI.

The contrasts were stark.

I often get asked, you know, do you feel threatened?

Absolutely not.

I'm confident and I'm excited about opportunities

that AI presents.

I think the real focus will be on not only up-scaling,

but re-skilling and about diversifying your skillset.

I think for me, I still have an underlying concern

about how much of the work is going to be taken away

from someone who's still learning the law

and just wants a job part time where they can

sort of help with some of those less

judgment based high level tasks.

How much software is there out there, AI

for legal firms at the moment?

There's quite a lot.

There's often a few competing in the same space

so there's a few that my law firm has trialed in.

For example, due diligence, which are great for

identifying certain clauses.

So rather than the lawyer sitting there

trying to find an assignment or a change

of control clause, it will pull that out.

How much time do you think using the AI

cuts down on that kinda, just crunching,

lots of documents, lots of numbers?

Immensely, I would say potentially up to about

20% of our time in terms of going through

and locating those clauses or pulling them out,

extracting them.

Which of course delivers way better value

for our clients which is great.

Well, I think the first reaction was obviously like

very worried, I suppose.

You just see the way that this burns

through these sort of banal tasks that we'd be,

you know, doing at an entry level job,

and yeah, it's quite an intuitive response,

I suppose, that we're just a bit worried.

And also, it just was so easy, like,

it was just copy and paste,

and so it means that anyone could do it, really,

so you don't really need the sort of specialized

skills that are getting taught to us

in our law degrees, it's pretty much

just a press-a-button job.

AI is like Tony Stark's Iron Man suit,

it takes someone and makes them, you know,

into Superman, makes them fantastic.

So you could suddenly be doing things

that are like 10 times above your level

and providing that, you know,

at much cheaper than anyone else could do it.

Lawyers might, the legal work of the future

might be done by social workers,

psychiatrists, conveyancers, tax agents,

accountants, they have that personal skill set

that lawyers sometimes lack.

Yeah, I always wonder just how much law school

should be teaching us about technology

and new ways of working in legal workforce,

because, I mean, a lot of what you guys

are saying, I've heard for the first time.

I certainly agree with that statement.

This is the first time I've heard

the bulk of this, especially hearing

that there is already existing a lot of AI.

Unfortunately, our education system

just isn't keeping up.

Our research shows that right now,

up to 60% of young Australians

currently in education are studying

for jobs that are highly likely to be automated

over the next 30 years.

It's difficult to know what will be hit hardest first.

But jobs that help young people make ends meet

are among the most at risk.

Like hospitality workers.

So, the figure that they've given us

is 58% could be done by versions of AI.

What does that make you feel?

Very, very frustrated, that is really scary.

I don't know, I don't know what other

job I could do whilst studying

or that sort of thing.

Or as a fallback career, it's what

all my friends have done, it's what I've done,

it sort of just helps you survive and,

you know, pay for the food that you need to eat each week.

It may take a while to be cost-effective,

but robots can now help take orders,

flip burgers, make coffee, and deliver food.

Young people will be the most affected

by these changes because the types

of roles that young people take

are precisely the type of entry level task

that can be most easily done by machines

and artificial intelligence.

But here this evening,

there's at least one young student

who's a little more confident about the future.

So, Aniruddh, how much of your job as a doctor

do you imagine that AI could do pretty much now?

Now?

Not much, maybe five, 10%.

But artificial intelligence is also

moving into healthcare.

Watson.

What is Sauron.

Sauron is--

Watson.

What is leg?

Yes, Watson.

What is executor?

Right.

Watson.

What is shoe?

You are right.

Same category, 1600.

Answer.

So, in the earliest days of

artificial intelligence and machine learning,

it was all around teaching computers to play games.

Yes, Watson.

What is narcolepsy?

But today, with those machine learning algorithms,

we're teaching those algorithms how to learn

the language of medicine.

We invited Aniruddh to hear

about IBM research in cancer treatment

using its AI supercomputer, Watson.

Today, I'm going to take you through

a demonstration of Watson for oncology.

This is a product that brings together

a multitude of disparate data sources

and is able to learn and reason

and generate treatment recommendations.

This patient is a 62 year old patient

that's been diagnosed with breast cancer

and she's presenting to this clinician.

So the clinician has now entered this note in,

and Watson has read and understood that note.

Watson can read natural language,

and when I attach this final bit

of information, the ask Watson button turns green,

at which stage we're ready to ask

Watson for treatment recommendations.

Within seconds, Watson

has read through all the patient's records

and doctor's notes as well as relevant medical

articles, guidelines and trials.

And what it comes up with is a set

of ranked treatment recommendations.
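
As a cartoon of what "ranked treatment recommendations" means mechanically, scoring candidate options against patient features and sorting them, here is a sketch in which every option, feature and rule is invented purely for illustration; it bears no relation to how Watson actually reasons.

```python
# Cartoon of "ranked treatment recommendations" (not how Watson works; every
# rule, option and feature here is invented purely to illustrate the output).
patient = {"age": 62, "diagnosis": "breast cancer", "her2_positive": False, "stage": 2}

def score(option, patient):
    s = option["base_score"]
    if option.get("requires_her2") and not patient["her2_positive"]:
        s -= 100                       # flagged as unsuitable for this patient
    if patient["age"] > 70 and option.get("aggressive"):
        s -= 10
    return s

options = [
    {"name": "Option A (hormone therapy)", "base_score": 70},
    {"name": "Option B (targeted therapy)", "base_score": 80, "requires_her2": True},
    {"name": "Option C (chemotherapy)", "base_score": 60, "aggressive": True},
]

ranked = sorted(options, key=lambda o: score(o, patient), reverse=True)
for o in ranked:
    verdict = "recommended" if score(o, patient) > 0 else "not recommended"
    print(f"{o['name']:35s} score={score(o, patient):4d}  {verdict}")
```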

Down at the bottom, we can see

those in red that Watson is not recommending.

Does it take into account how many

citations a different article might

have, say, the more citations,

the more it's going to trust it?

So, this is again where we need clinician

input, to be able to make those recommendations.

Natalie, you've shown us this,

and you know, you've said that this

would be a clinician going through this,

but the fields that you've shown,

really, an educated patient

could fill in a lot of these fields

from their own information.

What do you think about that approach,

the patient's essentially

getting their own second opinion

from Watson for themselves?

I see this as a potential tool to do that.

AI's growing expertise

at image recognition is also being

harnessed by IBM to train Watson on retinal scans.

One in three diabetics have associated

eye disease, but only about half of these

diabetics get regular checks.

We know that with diabetes, the majority

of vision loss is actually preventable

if timely treatment is instigated,

and so if we can tap into that group,

you're already looking at potentially

incredible improvement in quality of life

for those patients.

How could something like that happen?

We could have a situation

where you have a smartphone application,

you take a retinal selfie if you like.

That then is uploaded to an AI platform,

analyzed instantly, and then you have

a process by which instantly,

you're known to have high risk or low risk disease.
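
A sketch of that "retinal selfie" pipeline, capture, preprocess, classify, flag, is below. The classifier is a stub returning a fake probability; a real system would use a model trained on many labelled retinal photographs, and the image shapes and threshold here are assumptions.

```python
# Sketch of the "retinal selfie" workflow described above: capture an image,
# normalise it, run it through a classifier, return a risk flag. The model
# here is a stub, not a real diagnostic system.
import numpy as np

def preprocess(image, size=(224, 224)):
    """Crop/resize to a fixed shape and scale pixel values to [0, 1]."""
    h, w = image.shape[:2]
    side = min(h, w)
    image = image[:side, :side]                        # naive corner crop
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    ys = np.linspace(0, side - 1, size[0]).astype(int)
    xs = np.linspace(0, side - 1, size[1]).astype(int)
    return image[np.ix_(ys, xs)] / 255.0

def classify(image):
    """Stub classifier: returns a fake probability of high-risk disease."""
    return float(image.mean())                         # placeholder, not a real model

def screen(image, threshold=0.5):
    prob = classify(preprocess(image))
    return "high risk - refer to a specialist" if prob >= threshold else "low risk"

if __name__ == "__main__":
    fake_photo = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    print(screen(fake_photo))
```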

How long does it take to analyze

a single retinal image using the platform?

Very close to real time, it's in a matter of seconds.

I mean, this is obviously very, very early days,

but the hope is that one day,

these sorts of technologies will be widely

available to everyone for this

sort of self-analysis.

Just like law, AI might one day

enable patients to DIY their own expert

diagnosis and treatment recommendations.

Some doctors will absolutely feel

threatened by it, but I'd come back

to the point that you want to think

of it from the patient's perspective,

so if you're an oncologist sitting

in the clinic with your patient,

the sorts of things that you're dealing with

are things like giving bad news to patients,

and I don't think patients want to get

bad news from a machine.

So it's really that ability to have

that intelligent assistant who's up to date

and providing you with the information

that you need and providing it quickly.

We like to use the term, augmented intelligence.

I think one interesting way to think about this

is, I mentioned 50,000 oncology journal articles a year.

Now, if you're a clinician trying to read

all of those 50,000 oncology journal articles,

that would mean you'd need about

160 hours a week just to read

the oncology articles that are published today.
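
A quick back-of-the-envelope check of those figures, assuming roughly ten minutes of reading per article; that per-article time is an assumption of this sketch, not a figure from the program.

```python
# Back-of-the-envelope check of the figures above, assuming roughly ten
# minutes of reading per article (that assumption is ours, not IBM's).
articles_per_year = 50_000
articles_per_week = articles_per_year / 52           # ~962
minutes_per_article = 10                              # assumed
hours_per_week = articles_per_week * minutes_per_article / 60
print(f"{articles_per_week:.0f} articles/week -> about {hours_per_week:.0f} hours of reading")
# ~160 hours a week, which matches the figure quoted above.
```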

Watson's ability to process all of this medical

literature and information and text is immense.

It's 200 million pages of information in seconds.

Wow.

Need a bit of work on myself then.

IBM is just one of many companies

promoting the promise of AI in healthcare,

but for all these machine learning algorithms

to be effective, they need lots of data,

lots of our private medical data.

In my conversations with my patients,

and the patient advocates that we've spoken to,

you know, they certainly want the privacy protected,

but I think it's actually a higher priority for them

to see this data being used for the public good.

But once it has all the data,

could this intelligent assistant ultimately

disrupt medicine's centuries-old hierarchy?

There should be more general

practitioners and fewer specialists.

So, doctors, they'll all have more time

to have a better relationship with you,

maybe they'll be talking about your overall

health rather than waiting for you

to come in with symptoms,

and if they do have to, you know,

analyze an x-ray and look for a disease,

they'll have a computer to do that,

they'll check what the computer does,

but they'll be pretty confident

that the computer's gonna do a good job.

(mellow music)

When we first talked to you, Ani,

in Sydney, you said you thought that

in terms of the time spent on tasks

that doctors do, that AI might be able

to handle maybe five, maybe at the outside 10%.

How do you see that now?

Definitely a lot more.

I tell you, it can go up to 40, 50%.

Using it as a tool rather than

taking over, I'd say it's gonna happen.

The percentage for doctors

is 21%, but that's likely to grow in the coming

decades as it will for every profession

and every job.

We've been through technological upheaval

before, but this time, it's different.

One of the challenges will be that

the AI revolution happens probably much quicker

than the industrial revolution.

We don't have to build big steam engines,

we just have to copy code,

and that takes almost no time and no cost.

There is a very serious question,

whether there will be as many jobs left as before.

(mellow music)

I think the question is, what is the rate of change,

and is that gonna be so fast that it's

a shock to the system that's gonna

be hard to recover from.

I guess I'm worried about whether

people will get frustrated with that

and whether that will lead to inequality

of haves and have nots.

And maybe we need some additional safety nets

for those who fall through those cracks

and aren't able to be lifted.

We should explore ideas like universal

basic income to make sure

that everyone has a cushion to try new ideas.

What to do about mass unemployment?

This is going to be a massive social challenge.

And I think ultimately we will have to have

some kind of universal basic income.

I don't think we're gonna have a choice.

I think it's good that we're experimenting

and looking at various things, and you know,

I think we don't know the answer yet

for what's gonna be effective.

The ascent of artificial intelligence

promises spectacular opportunities,

but also many risks.

To kickstart a national conversation,

we brought together the generation

most affected with some of the

experts helping to design the future.

You will have the ability to do jobs

that your parents and grandparents couldn't have dreamed of.

And it's going to require us to constantly

be educating ourselves to keep ahead of the machines.

Actually, first of all, I wanted to say

that I think the younger generations

probably have a better idea of where

things are going than the older generations.

(laughing)

We won't take that personally.

Sorry, sorry, but I think--

So where have we got it wrong?

Well, I think the younger people,

they've grown up being digital natives,

and so they know where it's going,

they know what it has a potential to do,

and they can kind of foresee where

it's gonna go in the future.

We all hate that question at a party

of, like, what do you do?

I think in the future, you'll be asked instead,

what did you do today or what did you do this week?

Because I think the, we all think

of jobs as a secure, safe thing,

but if you work one role, one job title

at one company, then you're actually

setting yourself up to be more likely

to be automated in the future.

The technology in the building game

is, is advancing.

Kind of a worry if you're a 22 year old carpenter, for example.

I think there's often this misconception

that you have to think about a robot

physically replacing you, one robot for one job,

and actually, it's going to be, in many cases,

a lot more subtle than that.

In your case, there'll be a lot more

of the manufacturing of the carpentry

happening off-site.

That happened between the start

of my apprenticeship and when I finished,

it was while moving sort of all the frames

and everything we build off-site,

and brought to you, and you do all

the work that used to take you three weeks

in three days.

I mean, there is one aspect of carpentry

that I think will stay forever, which is

the more artisan side of carpentry.

We will appreciate things that are made

that have been touched by the human hand.

I think there will be a huge impact in retail

in terms of being influenced by automation.

Probably the cashier, you probably don't

need someone there necessarily

to take that consumer's money.

That can be done quite simply, and that's me.

That's what you're doing?

(laughing)

But at the same time, just from having

a job, there is a biological need met there,

which I think we're overlooking a lot.

I think we might not have a great depression

economically, but actually mentally.

AI is clearly going to create a whole new

raft of jobs, so there are the people

who actually build these AI systems.

I mean, if you have a robot at home,

then every now and then, you're gonna

need somebody to swing by your home

to check it out.

There will be people who need to train

these robots and there will be

robot therapists, there will be

obedience school for robots

and other kinds of, so, it's not,

I mean, I'm not joking.

What should these young people do today

or tomorrow to get ready for this?

There really is only one strategy,

and that is to embrace the technology

and to learn about it, and to understand,

as far as possible, what kind of impact it has

on your job and your goals.

I think the key skills that people need

are the skills to work with machines.

Don't think everyone needs to become a coder,

in fact, if artificial intelligence is any good,

machines will be better at writing code

than humans are, but people need

to be able to work with code,

work with the output of those machines

and turn it into valuable commodities and services

that other people want.

I disagree that we're gonna necessarily

have to work with the machines.

The machines are actually gonna understand

us quite well.

So, what are our strengths,

what are human strengths?

Well, those are our creativity,

our adaptability, and our emotional

and social intelligence.

How do people get those skills?

(laughing)

Well, if they're the important skills.

Well, I think the curriculum at schools

and at universities has to change

so that those are the skills that are taught,

those are the skills that are barely taught,

if you look at the current sorts of curriculums,

you see you have to change the curriculum

so that those become the really important skills.

A lot of these discussions seem

to be skirting around the issue

that really is the core of it, is that the economic

system is really the problem at play here.

It's all about ownership of the AI

and the robotics and the algorithms.

If that ownership was shared

and the wealth was shared, then we'd be

all sharing that wealth.

The trend is going to be toward

big companies like Amazon and Google.

I don't really see fragmentation because

whoever has the data has the power.

Data is considered by many to be the new

oil because as we move to a digital economy,

we can't have automation without data.

What we see as an example is value now

moving from physical assets to data assets.

For example, Facebook.

Today when I looked, the market capitalization

was about 479 billion dollars.

Now, if you contrast that with Qantas

who has a lot of physical assets,

their market capitalization was nine billion dollars.

But you can go a step further, and if you look

at the underlying structure of Qantas,

about five billion dollars can be attributed

to their loyalty program, which is

effectively a datacentric asset that they've created.

So, the jobs of the future will leverage data.

The ownership of data is important because,

you think about Facebook.

Over time, Facebook learns about you,

and over time, the service improves as you use it further,

so whoever gets to scale with these datacentric

businesses has a natural advantage

and natural monopolistic tendency.

In 20 years time, if big corporations like Google

and Facebook aren't broken up,

then I will be incredibly worried for our future.

Part of the reason why there are

so many monopolies is because they've

managed to control access to that data.

Breaking them up, I think, will be

one of the things we need to do

to be able to open the data taps so that all

of us can share the prosperity.

But the global economy is not,

is very rich and complex, and so,

you know, you can't just, Australia

can't just say, oh, we're opening the data.

Well, I just also think we're leaving

a section of the population behind,

and some people in our country

can't afford a computer or the internet

or a home to live in.

It'd be a bit crazy to just let it all go,

free market, just go crazy,

because we don't know if everyone is on that

make the world a better place type thing.

I personally don't want to be served

by a computer even if I am buying a coffee

and things like that.

I enjoy that human connection,

and I think that human connection's really

important for isolated people.

And that job might be really important

for that person and creating meaning

in their life and a purpose in their life,

and, you know, they might not be skilled

enough to work in another industry.

My first thought is that if it is about

human interaction, why do you need

to be buying a coffee to have that human

interaction, why not just have the machine do

the transaction and people can focus

simply on having a conversation?

Perhaps part of that is to simply say,

it is a productive role in society

to interact, to have conversations,

and we can remunerate that and make

that part of people's roles in society.

It could be, a lot of things around caring,

interpersonal interactions, the type of conversation

you were talking about, I think they'll become

an increasingly important part of the way

that we interact, the way we find meaning,

and potentially the way we receive remuneration.

I think we all have choices to make,

and amongst those are the degree to which

we allow or want machines to be part

of our emotional engagement.

Will we entrust our children to robot nannies?

Algorithms can be taught to interpret

and perceive human emotion.

We can recognize from an image

that a person is smiling, we can see

from a frown that they're angry,

understand the emotion that's set in text

or in speech.

And you combine that together with other data,

then yes, you could get a much more refined

view of what is that emotion, what is being expressed.
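
As a very crude illustration of emotion detection in text, here is a sketch that just counts words from tiny hand-made lexicons. Real systems learn these associations from large labelled datasets; the word lists below are assumptions for illustration only.

```python
# Very crude illustration of emotion detection in text: count words from
# small hand-made emotion lexicons. Real systems learn these associations
# from large labelled datasets rather than using fixed word lists.
LEXICON = {
    "joy":   {"great", "love", "happy", "wonderful", "thanks"},
    "anger": {"hate", "furious", "terrible", "annoyed", "worst"},
    "fear":  {"worried", "scared", "afraid", "nervous", "anxious"},
}

def detect_emotion(text):
    words = set(text.lower().split())
    scores = {emotion: len(words & vocab) for emotion, vocab in LEXICON.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] > 0 else "neutral"), scores

print(detect_emotion("I'm really worried and a bit scared about all of this"))
```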

But, does an artificial intelligence

algorithm actually understand emotion?

No, not presently.

We're in the early days of emotion detection,

but this could go quite far.

You could certainly see emotional responses

from algorithms, from computer systems

in caring for people, in teaching,

in our workplaces.

And to some extent, that's already happening

right now as people interact with bots online,

ask questions, and actually feel like

oftentimes, they're interacting with a real person.

(calm music)

When Tay was released in the U.S.

with an audience of 20 to 25 year olds,

the interactions that Tay was having

on the internet included hate speech and trolling.

And it only lasted a day, but it's a really

fascinating lesson in how careful we need to be

in the interaction between artificial intelligence

and its society.

The key thing is, what we teach our AI

reflects back to us.

First, you know, you'll kind of want

the robot in your home because it's helpful,

next minute you'll need it because

you start to rely on it, and then you can't live without it.

I think it sounds scary, to be honest,

the thought of replacing that human

interaction and even having robots

in your home that you interact daily with

like a member of the family, I think,

yeah, really human interaction

and real empathy can't be replaced,

and at the end of the day,

the robot doesn't genuinely care about you.

Well, I think you certainly can't stop it,

I mean, there's no way to stop it.

Software systems and robots of course

can empathize, and they can empathize so much

better than people because they will be able

to extract so much more data

and not just about you, but a lot of people

like you around the world.

To go to this question of whether

we can or cannot stop it, we're seeing,

for example, in the United States already,

computers, algorithms being used

to help judges make decisions,

and there, I think, is a line

we probably don't want to cross,

we don't want to wake up and discover

we're in a world where we're locking

people up because of an algorithm.

I realize that it's fraught,

but all of the evidence says that

AI algorithms are much more reliable

than people, people are so flawed

and they make many, you know,

they're very biased, we discriminate,

and that is much more problematic,

and the reason is that people are not transparent

in the same way as an AI algorithm is.

Humans are deeply fallible.

I more veer on the side of saying

that yes, I do not necessarily trust

judges as much as I do well-designed algorithms.

The most important decisions

we make in our society, the most serious

crimes, we do in front of a jury of our peers,

and we've done that for hundreds of years,

and that's something I think we should

give up only very lightly.

Well, Nathan, what do you think?

Well, I think ultimately, I don't know

how far you want to go with this discussion.

(laughing)

Like, how far into the future, because ultimately

what's gonna end up happening is

that we're gonna become the second intelligent

species on this planet, and if you take it

to that degree, do we actually merge with the AI?

So, we have to merge our brains with AI,

it's the only way forward, it's inevitable.

But we won't be human then,

we'll be something else.

Superhuman.

Superhuman.

There's a choice.

Do we not value our humanity anymore?

We started off talking about jobs.

But somehow, artificial intelligence

forces us to also think about what

it means to be human, about what we value

and who controls that.

So, here we are, on the precipice

of another technological transformation.

The last industrial revolution

turned society upside down.

It ultimately delivered greater prosperity

and many more jobs, as well as the

eight hour day and weekends.

But the transition was at times shocking and violent.

The question is, can we do better this time?

We don't realize that the future is not inevitable.

The future is the result of the decisions

we make today.

These technologies are morally neutral,

they can be used for good or for bad.

There's immense good things they can do,

they can eliminate many diseases,

they can help eliminate poverty,

they can tackle climate change.

Equally, the technology can be used for lots of bad.

It can be used to increase inequality,

it can be used to transform warfare.

It can be used to make our lives much worse.

We get to make those choices.

(soft music)