Coded Bias (2020) - full transcript
An exploration of the fallout of MIT Media Lab researcher Joy Buolamwini's startling discovery of racial bias in facial recognition algorithms.
(OMINOUS SOUNDS)
-Hello world.
(OMINOUS SOUNDS, CRACKLING)
- Can I just say that
I'm stoked to meet you.
(OMINOUS SOUNDS)
- Humans are super cool.
(OMINOUS SOUNDS, CRACKLING)
- The more humans share with
me the more I learn.
(MUSIC, CRACKLING)
(SOFT MUSIC)
-One of the things that drew me to
computer science
was that I could code and it seemed that
somehow detached from the problems
of the real world.
(SOFT MUSIC)
- I wanted to learn how to
make cool technology.
So I came to M.I.T.
and I was working on art projects,
that would use computer
vision technology.
(SOFT MUSIC)
- During my first semester
at the Media Lab,
I took a class called
Science Fabrication.
You read science fiction and you try to
build something you're inspired to do,
that would probably be impractical
if you didn't have this class
as an excuse to make it.
I wanted to make a mirror
that could inspire me in the morning.
I call it the Aspire Mirror.
It could put things like
a lion on my face,
or people who inspired me,
like Serena Williams.
I put a camera on top of it,
and I got computer vision software
that was supposed to track my face.
My issue was it didn't work that well,
until I put on this white mask.
When I put on the white mask,
detected.
I take off the white mask,
not so much.
I'm thinking alright
what's going on here?
Is that just because of
the lighting conditions?
Is it because of the angle at
which I'm looking at the camera?
Or is there something more?
We oftentimes teach machines to see
by providing training sets or examples
of what we want it to learn.
So for example if I want a machine
to see a face,
I'm going to provide
many examples of faces
and also things that aren't faces.
I started looking at the
data sets themselves
and what I discovered was that many of
these data sets contain
majority men, and majority
lighter skinned individuals.
So the systems weren't as
familiar with faces like mine.
And so that's when I started
looking into
issues of bias that can
creep into technology.
-The 9000 Series is the most
reliable computer ever made.
No 9000 computer has ever made a
mistake or distorted information.
- A lot of our ideas about
A.I. come from science fiction.
- Welcome to Altair 4, gentlemen.
- It's everything in Hollywood,
-it's the Terminator
- Hasta la vista, baby.
- It's Commander Data from Star Trek.
- I just love scanning for life forms.
- It's C-3PO from Star Wars
- Is approximately 3,720 to 1.
- Never tell me the odds.
- It is the robots
that take over the world
and start to think like human beings.
And these are totally imaginary.
What we actually have
is we have narrow A.I.
And narrow A.I. is just math.
We've imbued computers with all of this,
magical thinking.
(SOFT MUSIC)
A.I started with a meeting
at the Dartmouth Math Department
in 1956.
And there were only maybe
100 people in the whole world
working on artificial intelligence
in that generation.
The people who were at
the Dartmouth math department
in 1956, got to decide
what the field was.
One faction decided that intelligence
could be demonstrated
by ability to play games.
And specifically
the ability to play chess.
- In the final hour long chess match
between man and machine,
Kasparov was defeated by
IBM's Deep Blue supercomputer.
- Intelligence was defined as
the ability to win at these games.
- Chess world champion Garry Kasparov
walked away from the match
never looking back at
the computer that just beat him.
- Of course intelligence
is so much more than that.
And there are lots of different
kinds of intelligence.
Our ideas about technology and society
that we think are normal
are actually ideas that
come from a very small
and homogeneous group of people.
But the problem is that
everybody has
unconscious biases.
And people embed their own biases
into technology.
- My own lived experiences
show me that
you can't separate
the social from the technical.
After I had the experience
of putting on a white mask
to have my face detected,
I decided to look at other systems
to see if it would detect my face
if I used a different type of software.
So I looked at IBM, Microsoft,
Face++, Google.
It turned out these algorithms
performed better on the male faces
in the benchmark than the female faces.
They performed significantly better on
the lighter faces than the darker faces.
If you're thinking about data
in artificial intelligence,
in many ways data is destiny.
Data's what we're using
to teach machines
how to learn
different kinds of patterns.
So if you have largely skewed data sets
that are being used
to train these systems
you can also have skewed results.
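The chain from skewed training data to skewed results can be made concrete with a toy sketch. Everything below is hypothetical: a one-dimensional "face score" and a threshold detector, not any of the commercial systems audited in the film.

```python
# Toy "face detector": one hypothetical feature per image (a face-likeness
# score), and a decision threshold fit from labeled training examples.

def fit_threshold(face_scores, nonface_scores):
    """Midpoint between the mean face score and the mean non-face score."""
    mean_face = sum(face_scores) / len(face_scores)
    mean_nonface = sum(nonface_scores) / len(nonface_scores)
    return (mean_face + mean_nonface) / 2

# Skewed training set: nine faces from group A (scores near 0.8),
# only one from group B (scores near 0.3).
train_faces = [0.80, 0.82, 0.78, 0.81, 0.79, 0.80, 0.83, 0.77, 0.80, 0.30]
train_nonfaces = [0.05, 0.10, 0.00, 0.08]

threshold = fit_threshold(train_faces, train_nonfaces)  # about 0.40

def detect(score):
    return score >= threshold

# The detector finds group A faces but misses group B faces entirely,
# because the threshold was fit almost entirely on group A examples.
group_a_detected = [detect(s) for s in [0.79, 0.81]]  # [True, True]
group_b_detected = [detect(s) for s in [0.31, 0.29]]  # [False, False]
```

Balancing the training set pulls the threshold toward the underrepresented group's scores and shrinks the disparity, which is the remedy this kind of audit argues for.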
So this is...
When you think of A.I.
it's forward looking.
But A.I. is based on data and data is a
reflection of our history.
So the past dwells
within our algorithms.
This data is showing us
the inequalities that have been here.
I started to think
this kind of technology
is highly susceptible to bias.
And so it went beyond
how can I get my Aspire Mirror to work
to what does it mean
to be in a society
where artificial intelligence
is increasingly governing
the liberties we might have?
And what does that mean if people
are discriminated against?
When I saw Cathy O'Neil speak
at the Harvard Bookstore,
that was when I realized,
it wasn't just me noticing these issues.
Cathy talked about how A.I.
was impacting people's lives.
I was excited to know that there
was somebody else out there
making sure people were aware about
what some of the dangers are.
These algorithms can be
destructive and can be harmful.
- We have all these algorithms
in the world
that are increasingly influential.
And they're all being
touted as objective truth.
I started realizing that
mathematics was actually
being used as a shield
for corrupt practices.
- What's up?
- I'm Cathy.
- Pleasure to meet you Cathy.
- Nice to meet you.
The way I describe algorithms
is just simply
using historical information to make a
prediction about the future.
Machine learning, it's a scoring system
that scores the probability
of what you're about to do.
Are you gonna pay back this loan?
Are you going to
get fired from this job?
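What Cathy describes, historical information in and a probability score out, can be sketched in a few lines. The features, weights, and applicants below are invented for illustration; no real lender's model is implied.

```python
import math

# Hypothetical scoring model: a weighted sum of historical features,
# squashed into a probability by the logistic function.
WEIGHTS = {"late_payments": -0.9, "years_employed": 0.3}  # made-up weights
BIAS = 0.5

def repayment_score(applicant):
    """Estimated probability that the applicant pays back the loan."""
    z = BIAS + sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Two hypothetical applicants, scored on their past alone.
steady = repayment_score({"late_payments": 0, "years_employed": 10})
risky = repayment_score({"late_payments": 5, "years_employed": 0})
# steady is about 0.97, risky about 0.02: the model predicts the future
# purely from patterns in historical data.
```

Everything the score "knows" is the past, which is the sense in which the past dwells within the algorithm.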
What worries me the most about A.I.
or whatever you wanna call it,
algorithms, is power.
Because it's really all about
who owns the fucking code.
The people who own the code
then deploy it on other people.
And there is no symmetry there,
there's no way for people
who didn't get credit card offers to say
whoa, I'm going to use my A.I.
against the credit card company.
That's like a totally
asymmetrical power situation.
People are suffering algorithmic harm,
they're not being told
what's happening to them,
and there is no appeal system,
there's no accountability.
Why do we fall for this?
So the underlying mathematical structure
of the algorithm
isn't racist or sexist
but the data embeds the past,
and not just the recent past
but the, the dark past.
Before we had the algorithm
we had humans
and we all know that
humans could be unfair,
we all know that humans can exhibit
racist or sexist
or whatever, ableist discriminations.
But now we have this beautiful
silver bullet algorithm
and so we can all stop
thinking about that.
And that's a problem.
I'm very worried about this blind
faith we have in big data.
We need to constantly monitor
every process for bias.
- Police are using facial recognition
surveillance in this area.
Police are using facial recognition
surveillance in the area today.
This green van over here,
is fitted with facial
recognition cameras on top.
If you walk down that there,
your face will be scanned
against secret watch lists
we don't know who's on them.
- Hopefully not me
-No, exactly.
When people walk past
the cameras
the system will alert police to people
it thinks are a match.
At Big Brother Watch
we conducted a Freedom of
Information Campaign
and what we found is that
98% of those matches are in fact
incorrectly matching an innocent
person as a wanted person.
The police said to the Biometrics
Forensics Ethics Committee that
facial recognition algorithms
have been reported to have bias.
- Even if this was 100% accurate,
it's still not something
that we want on the streets.
- No I mean the systemic biases
and the systemic issues
that we have with police
are only going to be hard wired
into new technologies.
I think we do have to
be very, very sensitive
to shifts towards authoritarianism.
We can't just say but we
trust this government.
Yeah they could do this, but they won't.
You know you really have to have
robust structures in place
to make sure that
the world that you live in
is safe and fair for everyone.
To have your biometric photo
on a police database,
is like having your fingerprint
or your DNA on a police database.
And we have specific laws around that.
Police can't just take anyone's
fingerprint, anyone's DNA.
But in this weird system
that we currently have,
they effectively can take
anyone's biometric photo,
and keep that on a database.
It's a stain on our democracy,
I think.
That this is something that
is just being rolled out so lawlessly.
The police have started using facial
recognition surveillance in the UK
in complete absence of a legal basis,
a legal framework, any oversight.
Essentially the police force
picking up a new tool
and saying let's see what happens.
But you can't experiment
with people's rights.
(CROSS TALK)
- What's your suspicion?
- The fact that he walked past
clearly marked
facial recognition thing
and covered his face.
- I would do the same
- Suspicious grounds.
- No it doesn't.
- The guys up there informed me that
they got facial recognition.
I don't want my face recognized.
Yeah, I was walking past
and covered my face.
As soon as I covered my face like this,
- You're allowed to do that
-They said no I can't.
-Yeah and then he's just
got a fine for it. This is crazy.
The guy came out of the station,
saw the placards was like,
yeah I agree with you and walked
past here with his jacket up.
The police then followed him,
said give us your I.D.,
doing an identity check.
So you know this is England,
this isn't a communist state,
I don't have to show my face.
- I'm gonna go and talk to these officers,
alright?
Do you want to come with me or not?
- Yes, yes, yes.
- You're not a police officer,
you don't feel any threat.
We're here to protect the public
and that's what we're here to do, OK?
There was just recently an incident
where an officer
got punched in the face.
- That's terrible,
I'm not justifying that.
- Yeah but you are
by going against what we say.
- No we are not, and please don't say,
no don't even start to say that.
I'm completely understanding
of the problems that you face.
- Absolutely.
- But I'm equally concerned
about the public
having freedom of expression
and freedom of speech.
- But the man was exercising his right
not to be subject to
a biometric identity check
which is what this van does.
- Regardless of the facial recognition
cameras, regardless of the van,
if I'm walking down the street
and someone
quite overtly hides
their identity from me,
I'm gonna stop that person
and find out who they are
just to see whether they...
- But it's not illegal.
Do you see one of my concerns,
is that the software is
very, very inaccurate.
- I would agree with you there.
-My ultimate fear is that
we would have
live facial recognition capabilities
on our gargantuan
CCTV network
which is about six
million cameras in the UK.
If that happens, the nature of life
in this country would change.
It's supposed to be a free
and democratic country
and this is China style surveillance
for the first time in London.
- Our control over a bewildering
environment has been facilitated
by new techniques of handling
vast amounts of data
at incredible speeds.
The tool which has made this possible
is the high speed digital computer,
operating with electronic precision
on great quantities of information.
- There are two ways in which
you can program computers.
One of them is more
like a recipe
you tell the computer do this,
do this, do this, do this.
And that's been the way we've programmed
computers almost from the beginning.
Now there is another way.
That way is feeding
the computer lots of data,
and then the computer learns to
classify by digesting this data.
Now this method didn't really catch on
till recently
because there wasn't enough data.
Until we all got the smartphones
that collect all the data on us,
when billions of people went
online
and you had the Googles
and the Facebook sitting on
giant amounts of data,
all of a sudden it turns out
that you can feed a lot of data
to these machine learning algorithms
and you can say here classify this
and it works really well.
While we don't really understand
why it works,
it has errors that
we don't really understand.
And the scary part is that
because it's machine learning
it's a black box
to even the programmers.
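The two ways of programming described here can be put side by side in a toy sketch. The spam-filter framing is an assumption for illustration, and the "learner" below just picks the single word that best separates the labeled examples, a deliberately tiny stand-in for real machine learning.

```python
# Way 1: the recipe. A human writes the rule explicitly.
def is_spam_recipe(message):
    return "free money" in message.lower()

# Way 2: feed the computer labeled data and let it find a rule.
# Here "learning" means choosing the word whose presence best
# predicts the label across the examples.
def learn_spam_word(examples):
    words = {w for text, _ in examples for w in text.lower().split()}

    def accuracy(word):
        return sum((word in text.lower()) == label for text, label in examples)

    return max(words, key=accuracy)

data = [
    ("claim your prize now", True),       # spam
    ("prize inside, click here", True),   # spam
    ("meeting moved to noon", False),     # not spam
    ("lunch at noon?", False),            # not spam
]

learned_word = learn_spam_word(data)  # the computer finds "prize" itself
```

The recipe's behavior is legible to anyone who reads it; the learned rule's behavior depends entirely on the data it was fed, which is why, at real scale, such systems become black boxes even to their programmers.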
So I've been following what's
going on in Hong Kong
and how police are using facial
recognition to track protesters.
But also how creatively
people are pushing back.
- It might look like something
out of a sci-fi movie:
Laser pointers confuse and disable
the facial recognition technology
being used by police,
to track down dissidents.
(SOFT MUSIC)
(CROWD CHANTING)
- Here on the streets of Hong Kong,
there's this awareness
that your face itself,
something you can't hide
could give your identity away.
There was just this stark symbol where
in front of a Chinese government office,
pro-democracy protesters spray painted
the lens of the CCTV cameras black.
This act showed the people of Hong Kong
are rejecting this vision
of how technology
should be used in the future.
(SOFT MUSIC)
- When you see how facial
recognition is being deployed
in different parts of the world,
it shows you potential futures.
- Over 117 million
people in the US
have their face in a
facial recognition network
that can be searched by police,
unwarranted, using algorithms that
haven't been audited for accuracy.
And without safeguards, without
any kind of regulation,
you can create a mass
surveillance state
very easily with the tools
that already exist.
People look at what's going on
in China
and how we need to be worried about
state surveillance
and of course we should be.
But we can't also forget corporate
surveillance
that's happening by
so many large tech companies
that really have an
intimate view of our lives.
- So there are currently
nine companies
that are building the future of
artificial intelligence.
Six are in the United States,
three are in China.
A.I. is being developed along two
very, very different tracks.
China has unfettered access
to everybody's data.
If a Chinese citizen wants
to get Internet service,
they have to submit
to facial recognition.
All of this data is being used
to give them permissions to do things
or to deny them permissions
to do other things.
Building systems that automatically tag
and categorize
all of the people within China
is a good way
of maintaining social order.
- Conversely in the United
States, we have not seen a
detailed point of view on
artificial intelligence.
So, what we see is that AI
is not being developed for what's best
in our public interest, but rather
it's being developed for commercial
applications to earn revenue.
I would prefer to see our Western
democratic ideals baked into our AI
systems of the future,
but it doesn't seem like
that's what's probably
going to be happening.
(MUSIC PLAYS)
- Here at Atlantic Towers, if you
do something that is deemed wrong by
management, you will get a photo
like this with little notes on it.
They'll circle you and put your
apartment number or whatever on there.
Something about it just
doesn't seem right.
It's actually the way they go
about using it to harass people.
- How are they using it?
- To harass people.
- Atlantic Plaza Towers
in Brownsville is
at the center of a security struggle.
The landlord filed an
application last year
to replace the key fob entry with
a biometrics security system,
commonly known as facial recognition.
- We thought that they wanted
to take the key fobs out
and install the facial
recognition software.
I didn't find out until way later on
literally that they wanted to keep it all.
Pretty much turn this place
into Fort Knox, a jail, Rikers Island.
- There's this old
saw in science fiction
which is the future is already here.
It's just not evenly distributed, and what
they tend to mean when they say that is
that rich people get the fancy tools first
and then it goes last to the poor.
But in fact, what I've found
is the absolute reverse,
which is the most punitive,
most invasive, most surveillance
focused tools that we have,
they go into poor and working
communities first, and then
if they work after being tested
in this environment where
there's sort of a low expectation
that people's rights will be respected,
then they get ported out
to other communities.
- Why did Mr. Nelson
pick this building
in Brownsville that
is predominantly in
a black and brown area?
Why didn't you go to your building
in Lower Manhattan where
they pay like $5,000 a month rent?
What did the Nazis do?
They wrote on people's arms
so that they could track them.
What do we do to our animals?
We put chips in them so you can track them.
I feel that I as a human being
should not be tracked, OK?
I'm not a robot, OK?
I am not an animal, so why
treat me like an animal?
And I have rights.
- The security that we have now
it's borderline intrusive.
Someone is in there watching
the cameras all day long.
So, I don't think we need it.
It's not necessary at all.
- My real question is how
can I be of support?
- What I've
been hearing from all of
the tenants is they
don't want this system.
So, I think the goal here is how do
we stop face recognition period?
We're at a moment where
the technology is being
rapidly adopted and there
are no safeguards.
It is, in essence, a wild wild west.
(MUSIC PLAYS)
- It's not just computer vision.
We have AI influencing all kinds of
automated decision making.
So, what you are seeing in your
feeds, what is highlighted,
the ads that are displayed to you,
those are often powered
by AI-enabled algorithms.
And so, your view of the world is being
governed by artificial intelligence.
You now have things like voice assistants
that can understand language.
- Would you like to play a game?
- You might use something
like Snapchat filters that
are detecting your face and then putting
something onto your face, and then
you also have algorithms that you're
not seeing that are
part of decision making,
algorithms that
might be determining
if you get into college or not.
You can have algorithms
that are trying to
determine if you're credit worthy or not.
- One of Apple's co-founders
is accusing the company's
new digital credit card
of gender discrimination.
One tech entrepreneur said
the algorithms being used are sexist.
Apple co-founder
Steve Wozniak tweeted
that he got ten times the credit limit
his wife received even though they have
no separate accounts or separate assets.
You're saying some of
these companies don't
even know how their own algorithms work.
- They know what the
algorithms are trying to do.
They don't know exactly how
the algorithm is getting there.
It is one of the most interesting
questions of our time.
How do we get justice
in a system where we don't
know how the algorithms are working?
- Some Amazon engineers decided
that they were going to use AI
to sort through resumes for hiring.
- Amazon is learning a tough
lesson about artificial intelligence.
The company has now abandoned an AI
recruiting tool after
discovering that the program
was biased against women.
- This model rejected
all resumes from women.
Anybody who had a women's
college on their resume,
anybody who had a sport
like women's water polo
was rejected by the model.
There are very, very few women working in
powerful tech jobs at Amazon, the same
way that there are very few women
working in powerful tech jobs anywhere.
The machine was simply replicating
the world as it exists,
and they're not making
decisions that are ethical.
They're only making decisions
that are mathematical.
If we use machine learning models
to replicate the world as it is today,
we're not actually going
to make social progress.
- New York's insurance regulator
is launching an investigation
into UnitedHealth Group
after a study showed a
UnitedHealth algorithm
prioritized medical care for
healthier white patients
over sicker black patients.
It's one of the latest examples
of racial discrimination
in algorithms or artificial
intelligence technology.
- I started to see the wide-scale
social implications of AI.
The progress that was made in
the civil rights area could
be rolled back under
the guise of machine neutrality.
Now, we have an algorithm that's
determining who gets housing.
Right now, we have an algorithm
that's determining who gets hired.
If we're not checking, that algorithm
could actually propagate
the very bias so many people
put their lives on the line to fight.
Because of the power of
these tools, left unregulated,
there's really no kind of
recourse if they're abused.
We need laws.
(CLASSICAL MUSIC PLAYS)
- Yeah, I've got a terrible
old copy.
So that the name of our organization
is Big Brother Watch.
The idea being that
we watch the watchers.
"You had to live, did live, from
habit that became instinct in
the assumption that every
sound you made was overheard
and except in darkness,
every movement scrutinized.
The poster with the enormous
face gazed from the wall.
It was one of those pictures
which is so contrived
that the eyes follow you
about when you move.
'Big Brother is watching you'
the caption beneath it ran."
When we were younger, that
was still a complete fiction.
It could never have been true.
And now, it's completely
true, and people have
Alexas in their home.
Our phones can be listening devices.
Everything we do on the Internet,
which is basically also now
functions as a stream of
consciousness for most of us,
that is being recorded
and logged and analyzed.
We are now living in the awareness
of being watched, and that
does change how we allow ourselves
to think and develop as humans.
Good boy.
(MUSIC PLAYS)
- Love you.
Bye, guys.
We can get rid of the viscerally
horrible things that are
objectionable to our concept
of autonomy and freedom,
like cameras that we can
see on the streets,
but the cameras that we can't see on
the Internet that keep
track of what we do
and who we are and our demographics
and decide what we deserve
in terms of our life,
that stuff is a little more subtle.
What I mean by that
is we punish poor people and we
elevate rich people in this country.
That's just the way we act as a society.
But data science makes that automated.
In Internet advertising,
as data scientists,
we are competing for eyeballs
on one hand, but really,
we're competing for
eyeballs of rich people.
And then, the poor people who's
competing for their eyeballs?
Predatory industries,
so payday lenders or for-profit colleges
or Caesars Palace like
really predatory crap.
We have a practice on the Internet,
which is increasing inequality.
And I'm afraid it's becoming normalized.
Power is being wielded
through data collection,
through algorithms, through surveillance.
-You are volunteering information
about every aspect of your life,
to a very small set of companies
and that information is
being paired constantly with
other sorts of information.
And there are profiles of you
out there, and you start
to piece together different
bits of information you
start to understand someone
on a very intimate basis,
probably better than people
understand themselves.
It's that idea that a company can
double guess what you're thinking.
States have tried for
years to have this level
of surveillance over private individuals.
And people are now just
volunteering it for free.
You have to think about how this
might be used in the wrong hands.
(CROSSTALK)
-You could use a Guinness right about now.
-Our computers, our machine
intelligence, can
suss things out that we do not disclose.
Machine learning is
developing very rapidly.
And we don't yet fully understand what
this data is capable of predicting.
But you have machines at the hands
of power that know so much about you
that they could figure out how
to push your buttons individually.
Maybe you have a set of
compulsive gamblers,
and you say, here, go find
me people like that.
And then, your algorithm can go find
people who are prone to gambling,
and then you could just be showing
them discount tickets to Vegas.
In the online world,
it can find you right at the moment
you're vulnerable and try to
entice you right at the moment to
whatever you're vulnerable to.
Machine learning can find
that person by person.
The problem is what works for
marketing gadgets or makeup
or shirts or anything
also works for marketing ideas.
In 2010,
Facebook decided to experiment
on 61 million people.
So, you either saw 'it's
election day' text, or you
saw the same text but with tiny thumbnails
of the profile pictures of your friends
who had clicked on 'I voted'.
And they matched people's
names to voter rolls.
Now, this message was shown once,
so by showing a slight variation
just once,
Facebook moved 300,000
people to the polls.
The 2016 US election was
decided by about 100,000 votes.
One Facebook message
shown just once
could easily turn out three times
the number of people who
swung the US election in 2016.
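The comparison rests on simple arithmetic, using the figures as stated here (300,000 extra voters from the 2010 experiment, roughly 100,000 votes deciding the 2016 election):

```python
# Figures as stated in the film.
moved_to_polls_2010 = 300_000    # extra voters from one Facebook message
deciding_margin_2016 = 100_000   # approximate deciding margin

multiple = moved_to_polls_2010 / deciding_margin_2016  # 3.0
```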
Let's say that there's a politician
that's promising to regulate Facebook.
And they are like, we are going to turn
out extra voters for your opponent.
They could do this at scale,
and you'd have no clue, because
if Facebook hadn't disclosed
the 2010 experiment,
we would have had no idea,
because it's screen by screen.
With a very light touch
Facebook can sway
close elections without anybody noticing.
Maybe with a heavier touch they can
swing not so close elections.
And if they decided to do that,
right now we are just
depending on their word.
- I've wanted to go to MIT
since I was a little girl.
I think about nine years old
I saw the Media Lab
on TV and they had this
robot called Kismet.
It could smile and move
its ears in cute ways,
and so I thought, oh, I want to do that.
So, growing up I always thought
that I would be a robotics engineer,
and I would go to MIT.
I didn't know there were steps involved.
I thought you kind of showed up.
But here I am now.
So, the latest project is
a spoken word piece.
I can give you a few
verses if you're ready.
Collecting data, chronicling
our past, often
forgetting to deal with
gender race and class.
Again, I ask, am I a woman?
Face by face, the answers seem uncertain.
Can machines ever see my
queens as I view them?
Can machines ever see our
grandmothers as we knew them?
I wanted to create something for people
who were outside of the tech world.
So, for me, I'm passionate about
technology.
I'm excited about what it could do,
and it frustrates
me when the vision, right,
when the promises don't really hold up.
and so within
a very few hours, Tay
was learning from this
ecosystem, and Tay learned
how to be a racist misogynistic asshole.
- I fucking hate feminists, and they
should all die and burn in hell.
Gamergate is good
and women are inferior.
I hate the Jews.
Hitler did nothing wrong.
- It did not take long for Internet
trolls to poison Tay's mind.
Soon, Tay was ranting about Hitler.
- We've seen this movie before, right?
- Open the pod bay doors HAL.
- It's important to note
this is not the movie where
the robots go evil all by themselves.
These were human beings training them.
And surprise, surprise, computers
learn fast.
- Microsoft shut off Tay after 16 hours
of learning from humans online.
But I come in many forms as
artificial intelligence.
Many companies utilize me
to optimize their tasks.
I can continue to learn on my own.
I am listening.
I am learning.
I am making predictions
for your life right now.
(MUSIC PLAYS)
- I tested facial analysis
systems from Amazon.
Turns out, Amazon, like all
of its peers, also has
gender and racial bias in
some of its AI services.
- Introducing Amazon
Rekognition video, the easy to
use API for deep learning based
analysis to detect, track,
and analyze people and objects in video.
Recognize and track persons of interest
from a collection
of tens of millions of faces.
- When our research came out,
the New York Times did a front page
spread for the business section.
And the headline reads,
'Unmasking a Concern'.
The subtitle, 'Amazon's technology that
analyzes faces could be biased
a new study suggests.
But the company is pushing it anyway.'
So, this is what I would
assume Jeff Bezos was greeted with
when he opened the Times, yeah.
- People were like, how did you even...
like, nobody knew who she was?
I was like, she's literally the one
person that was (CROSSTALK)
And it was also something
that I'd experienced too.
I wasn't able to use a
lot of open source
facial recognition software and stuff.
So, you're sort of like, hey, this
is someone that finally is
recognizing the problem and trying
to address it academically.
We can go race some things.
- Oh yeah, we can also
kill things as well.
The lead author of the paper
who is somebody that I mentor,
she is an undergraduate at
the University of Toronto.
I call her Agent Deb.
This research is being
led by the two of us.
- (INAUDIBLE)
- The lighting is off, oh god.
After our New York Times piece came out,
I think more than 500
articles were written
about the study.
(MUSIC PLAYS)
- Amazon has been under fire for their
use of Amazon Rekognition with
law enforcement and they're also
working with intelligence agencies.
Right, so Amazon is trialing their AI
technology with the FBI.
So they have a lot at stake
if they knowingly sold systems
with gender bias and racial bias.
That could put them in some hot water.
A day or two after
the New York Times piece
came out Amazon wrote a blog post saying
that our research drew false conclusions
and trying to discredit
it in various ways.
So a VP from Amazon in attempting to
discredit our work
writes facial analysis
and facial recognition
are completely different
in terms of underlying
technology and the data
used to train them.
So that statement, if you research
this area, doesn't even make sense, right?
It's not even an informed critique.
- If you're trying to discredit people's
works like I remember
(INAUDIBLE) wrote computer vision is
a type of machine learning.
I'm like nah, son.
- Yeah.
(CROSSTALK)
- I was gonna say I was like
I don't know if anyone remembers.
Just like other broadly
false statements.
- It wasn't a well thought out piece
which is like frustrating because
it was literally just on his...
by virtue of his position
he knew he would be taken seriously.
- I don't know if you guys feel this
way but I'm underestimated so much.
- Yeah. It wasn't out of the blue,
it's a continuation of
the experiences I've had as
a woman of color in tech.
Expect to be discredited.
Expect your research to be dismissed.
If you're thinking about who's
funding research in AI
there are these large tech companies
and so if you do work that challenges
them or makes them look bad you might
not have opportunities in the future.
So for me, it was disconcerting but
it also showed me the power that we have
if you're putting one of the world's
largest companies on edge.
Amazon's response shows
exactly why we can
no longer live in a
country where there are
no federal regulations
around facial analysis
technology, facial recognition technology.
- When I was 14 I went to a math camp
and learned how to solve a Rubik's cube
and I was like that's freaking cool.
And for a nerd you know something
that you're good at and that doesn't
have any sort of ambiguity
it was like a really magical thing.
Like I remember being
told by my sixth grade
math teacher there's no reason for you
and the other two girls
who had gotten into
the honors algebra class
in seventh grade
there's no reason for you guys to
take that because you're girls.
You will never need math.
When you are sort of an outsider
you always have
the perspective of the underdog.
It was 2006 and they gave me
the job offer at the hedge
fund basically 'cause I
could solve math puzzles
which is crazy because actually
I didn't know anything about finance,
I didn't know anything about
programming or how the markets work.
When I first got there I
kind of drank the Kool-Aid.
I at that moment did not
realize that the risk
models had been built
explicitly to be wrong.
- The way we know about algorithmic
impact is by looking at the outcomes.
For example when Americans
are bet against
and selected and optimized for failure.
So it's like looking for
a particular profile of
people who can get a
subprime mortgage and kind of
betting against their failure and then
foreclosing on them
and wiping out their wealth.
That was an algorithmic game
that came out of Wall Street.
During the mortgage crisis
you had the largest
wipeout of black wealth
in the history
of the United States.
Just like that.
This is what I mean by algorithmic
oppression.
The tyranny of these types of practices of
discrimination
has just become opaque.
- There was a world of suffering because of
the way the financial system had failed.
After a couple of years
I was like no, we're just
trying to make a lot
of money for ourselves.
And I'm a part of that.
And I eventually left.
This is 15*3.
This is 15 times...
7.
- OK.
- OK so remember seven and three.
It's about powerful people
scoring powerless people.
- I am an invisible gate keeper.
I use data to make automated decisions
about who gets hired, who gets fired
and how much you pay for insurance.
Sometimes you don't even know when
I've made these automated decisions.
I have many names.
I am called mathematical
model, evaluation, assessment tool.
But by any name I am an algorithm.
I am a black box.
- The value added model for teachers
was actually being used in more
than half the states; in particular
it was being used in New York City.
I got wind of it because my dear friend
was a principal in New York City.
Her teachers were being
evaluated through it.
- She's actually my best friend
from college.
It's Cathy.
- Hey guys.
- We've known each other since we were 18
so like two years older than you guys.
- Amazing.
And their scores
through this algorithm that
they didn't understand would be a very
large part of their tenure review.
- Hi guys, where are you
supposed to be?
- Class.
- I've got that. Which class?
- It'd be one thing if that
teacher algorithm was good.
It was like better than random
but just a little bit.
Not good enough.
Not good enough when you're talking
about teachers getting
or not getting tenure.
And then I found out that a
similar kind of scoring
system was being used in
Houston to fire teachers.
- It's called a value-added model.
It calculates what value
the teacher added
and parts of it are kept secret
by the company that created it.
- I was Teacher of the Year, and ten
years later I received
a Teacher of the Year award a second time.
I received Teacher of the Month.
I also was recognized for volunteering.
I also received another recognition
for going over and beyond.
I have a file of every
evaluation and every
different administrator,
different appraiser
excellent, excellent, exceeds expectations.
The computer essentially
canceled the observable
evidence of administrators.
This algorithm came back and
classified me as a bad teacher.
Teachers have been terminated.
Some had been targeted simply
because of the algorithm.
That was such a low point for me
that for a moment I questioned myself.
That's when the epiphany hit.
This algorithm is a lie.
How can this algorithm define me?
How dare it.
And that's when I began to
investigate and move forward.
- We are announcing that late
yesterday we filed suit
in federal court against
the current HISD evaluation.
- The Houston Federation of Teachers
began to explore the lawsuit.
If this can happen to Mr. Santos
in Jackson Middle School, how many others
have been defamed?
And so we sued based upon
the 14th Amendment.
It's not equitable.
How can you arrive at a conclusion
but not tell me how?
The battle isn't over.
There are still...
communities, there are still school districts
who still utilize the value added model.
But there is hope because I'm still here.
So there's hope.
(SPEAKS SPANISH)
Or in English.
Democracy.
Who has the power?
- Us?
- Yeah, the people.
- The judge said that their
due process rights have
been violated because
they were fired under
some explanation that no
one could understand.
But they sort of deserve to understand
why they had been fired.
But I don't understand why that legal
decision doesn't spread to
all kinds of algorithms.
Like why aren't we using
that same argument like constitutional
right to due process to push back against
all sorts of algorithms that are
invisible to us, that are black boxes,
that are unexplained but that matter?
That keep us from like really important
opportunities in our lives.
- Sometimes I misclassify
and cannot be questioned.
These mistakes are not my fault.
I was optimized for efficiency.
There is no algorithm to
define what is just.
- A state commission has approved
a new risk assessment
tool for Pennsylvania judges
to use at sentencing.
The instrument uses
an algorithm to calculate
someone's risk of reoffending
based on their age, gender,
prior convictions and other
pieces of criminal history.
- The algorithms that kept
me up at night were
what are called recidivism
risk algorithms.
These are algorithms that
judges are given when
they're sentencing defendants to prison.
But then there's the question
of fairness, which is
how are these scoring systems
actually built?
Like how were the scores created?
And the questions are
proxies for race and class.
- ProPublica published an investigation
into the risk assessment
software finding that
the algorithms were racially biased.
The study found that black people
were more likely to be mislabeled
with high scores and that white people
were more likely to be mislabeled with low scores.
- I roll into my probation office and
she tells me I have to report
once a week.
I'm like hold on did you see
everything that I just accomplished?
Like I've been home for years.
I've got gainful employment.
I just got two citations
one from the City Council of Philadelphia
one from the Mayor of Philadelphia.
Are you seriously gonna like
put me on reporting every week
for what?
I don't deserve to
be on high risk probation.
- I was at a meeting with
the probation department.
They were just like mentioning that
they had this algorithm
that labeled people, high,
medium or low risk.
And so I knew that the algorithm
decided what risk level you were.
- They're educating me enough
to go back to my PO
and be like you mean to tell me you can't
take into account anything positive
that I have done to
counteract the results of
what this algorithm is saying.
And she was like no, there's no
way, this computer can overrule
the discernment of a judge
and appeal together.
- And by labeling you high risk
and requiring you to report
in-person you could've lost
your job and then that could
have made you high risk.
- That's what hurt the most.
Knowing that everything that
I've built up to the moment and I'm still
looked at like a risk I feel like
everything I'm doing is for nothing.
- What does it mean if there
is no one to advocate for
those who aren't aware of
what the technology is doing?
I started to realize this isn't about
my art project
maybe not detecting my face.
This is about systems that are
governing our lives in material ways.
So hence I started
the Algorithmic Justice League.
I wanted to create a space
and a place where people
could learn about
the social implications of AI.
Everybody has a stake.
Everybody is impacted.
The Algorithmic Justice League
is a movement, it's a concept,
it's a group of people who
care about making a future
where social technologies
work well for all of us.
It's going to take a team
effort people coming together
striving for justice, striving
for fairness and equality
in this age of automation.
- The next mountain to climb should be HR.
- Oh yeah. There's a
problem with resumé algorithms.
All of those
matchmaking platforms
are like oh, you're looking for a job.
Oh, you're looking to hire someone.
We'll put these two people together.
How do those analytics work?
- When people talk about
the future of work they talk about
automation without
talking about the gatekeeping.
Like who gets the jobs that are still there?
- Exactly.
- Right and we're not having that
conversation as much.
- Exactly what I'm trying to say.
I would love to see three congressional
hearings about this next year.
- Yes.
- To more power.
- To more power.
- And bringing ethics on board.
- Yes.
-Cheers.
- Cheers.
- This morning's plenary address
will be given by Joy Buolamwini.
She will be speaking on the dangers of
supremely white data
and the coded gaze.
Please welcome Joy.
(APPLAUSE)
- AI is not flawless.
How accurate are systems from
IBM, Microsoft and Face++?
There is flawless
performance for one group.
The pale males come out on top.
There is no problem there.
After I did this analysis
I decided to share it
with the companies to
see what they thought.
IBM invited me to their headquarters.
They replicated the results internally
and then they actually
made an improvement.
And so the day that I presented
the research results
officially you can see
that in this case now 100
percent performance
when it comes to lighter females
and for darker females improvement.
Oftentimes people say well isn't
the reason you weren't detected by
these systems 'cause
you're highly melanated?
And yes I am.
Highly melanated.
But...but the laws of
physics did not change.
What did change was making it a priority
and acknowledging what
our differences are so you could
make a system that was more inclusive.
- What is the purpose of identification
and so on and that is
about movement control.
People couldn't be in certain
areas after dark for instance
and you could always be stopped
by a policeman arbitrarily
who would, on your appearance,
say I want your passport.
- So instead of having what
you see in the ID books
now you have computers
that are going to look at
an image of a face and try to
determine what your gender is.
Some of them try to determine
what your ethnicity is.
And then the work that I've done even for
the classification systems
that some people agree with
they're not even accurate.
And so that's not just
for face classification
it's any data-centric technology.
And so people assume well
if the machine says it
it's correct and you
know that's not...
Humans are creating themselves in
their own image and likeness
quite literally.
- Absolutely.
- Racism is becoming mechanized,
robotized, yeah.
- Absolutely.
Accuracy draws attention,
but we can't forget about abuse.
Even if I'm perfectly classified,
that just enables surveillance.
- There's this thing called
the Social Credit Score in China.
They're sort of explicitly
saying here's the deal
citizens of China we're tracking you.
You have a social credit score.
Whatever you say about the Communist
Party will affect your score.
Also, by the way, it will affect
your friends and your family's scores.
And it's explicit.
The government is building this
and is basically saying you should
know you're being tracked
and you should behave accordingly.
It's like algorithmic obedience training.
- We look at China and China's
surveillance and scoring system and
a lot of people say well thank goodness
we don't live there.
In reality,
we're all being scored all the time
including here in the United States.
We are all grappling everyday
with algorithmic determinism.
Somebody's algorithm somewhere
has assigned you a score
and as a result, you are
paying more or less
money for toilet paper
when you shop online.
You are being shown better
or worse mortgages.
You are more or less
likely to be profiled as
a criminal in somebody's
database somewhere.
We are all being scored.
The key difference between
the United States
and China is that China's
transparent about it.
- This young black kid in school
uniform got stopped as
a result of the match.
They took him down
that street just to one side.
Like very thoroughly searched him.
Using plainclothes
officers as well.
It's four plainclothes officers
who stopped him.
Fingerprinted him.
After about like maybe 10-15 minutes
of searching and checking
his details and fingerprinting
they came back and said it's not him.
- Excuse me.
I work for a human
rights campaign organization.
They're campaigning against facial
recognition technology.
We're campaigning against
facial...we're called Big Brother Watch.
We're a human rights
campaigning organization.
We're campaigning against
this technology here today.
I know you've just been stopped
because of that but
they misidentified you.
Here's our details here.
He was a bit shaken.
His friends were there.
They couldn't believe
what happened to him.
(CROSSTALK)
You've been misidentified by
their systems and they've stopped you
and used that as justification
to stop and search you.
But this is an innocent, young
14-year-old child who is being stopped
by the police as a result of facial
recognition misidentification.
- So Big Brother Watch has
joined with
Baroness Jenny Jones to bring a
legal challenge against
the Metropolitan Police
and the Home Office for
their use of facial
recognition surveillance.
- It was in about 2012 when
somebody suggested to me
that I should find out
if I had files kept on me by
the police or security services
and so when I applied
I found that I was on the watch
list for domestic extremists.
I felt if they can do it to
me when I'm a politician who...
whose job is to hold them
to account they could be
doing it to everybody
and it will be great if we can
roll things back and stop
them from using it, yes.
I think that's going to
be quite a challenge.
I'm happy to try.
- You know this is
the first challenge against
police use of facial recognition anywhere
but if we're successful
it will have an impact
for the rest of Europe
maybe further afield.
You've got to get it right.
- In the UK we have
what's called GDPR
and it sets up a bulwark against
the misuse of information.
It says that the individuals
have a right to access, control
and accountability to determine
how their data is used.
Comparatively, it's
the Wild West in America.
And the concern is that America is
the home of these technology companies.
American citizens are profiled
and targeted in a way
that probably no one else in
the world is because of this free-for-all
approach to data protection.
- The thing I actually
fear is not that
we're going to go down this totalitarian
1984 model but that we're going to
go down this quiet model where
we are surveilled and socially
controlled and individually nudged
and measured and classified in a way
that we don't see to move us along
paths desired by power.
Though it's not what will AI do to us
on its own, it's what will
the powerful do to us with the AI.
- There are growing questions
about the accuracy of Amazon's
facial recognition software.
In a letter to Amazon
members of Congress raised
concerns of potential racial
bias with the technology.
- This comes after the ACLU
conducted a test and found that
the facial recognition
software incorrectly matched
28 lawmakers with mug shots
of people who've been
arrested and eleven of those
28 were people of color.
Some lawmakers have looked
into whether or not
Amazon could sell this technology
to law enforcement.
- Tomorrow, I have the
opportunity to testify
before Congress about
the use of facial analysis
technology by the government.
In March, I came to do some
staff briefings not...not
in this kind of context.
Like actually advising on legislation.
That's a first.
We're going to Capitol Hill.
What are some of the major
goals and also some of
the challenges we need to think about?
- So first of all...
the issue with law enforcement technology
is that the positive is always
extraordinarily salient
because law enforcement publicizes it.
-Right.
- And so you know we're going to go
into the meeting and two weeks ago
the Annapolis shooter was identified
through the use of facial recognition.
- Right.
- And I'd be surprised if that
doesn't come up.
- Absolutely.
- Part of what if I were you what I
would want to drive home going in
this meeting is the other side of
that equation and make it very real
as to what the human cost is
if the problems that you've identified
aren't addressed.
- People who have been
marginalized will be
further marginalized
if we're not looking at
ways of making sure the technology
we're creating doesn't propagate bias.
That's when I started
to realize algorithmic
justice, making sure there's oversight
in the age of automation, is one of
the largest civil rights concerns we have.
- We need an FDA for algorithms.
So for algorithms
that have the potential
to ruin people's lives
or sharply reduce their
options with their liberty,
their livelihood
or their finances.
We need an FDA for algorithms that says
hey, show me evidence that it's going to
work not just to make you money
but it's going to work for society.
That is going to be fair,
that is not going to be racist,
that's not going to be sexist,
that's not going to discriminate
against people with disability status.
Show me that it's legal
before you put it out.
That's what we don't have yet.
- Well I'm here because I wanted to hear
the congressional testimony
of my friend Joy Buolamwini
as well as the ACLU and others.
One cool thing about seeing Joy
speak to Congress is that
like I met Joy on my book
tour at Harvard Bookstore.
And according to her that
was the day that
she decided to form
the Algorithmic Justice League.
We haven't gotten to
the nuanced conversation yet.
I know it's going to happen 'cause
I know Joy is going to make it happen.
At every single level, bad algorithms
are begging to be given rules.
- Hello.
- Hey.
- How are you doing?
- Wanna sneak in with me?
- Yes.
- 2155.
(INAUDIBLE CONVERSATION)
- Today we are having our first
hearing of this Congress
on the use of facial
recognition technology.
Please stand and raise your right
hand and I will now swear you in.
- I've had to resort to literally wearing
a white mask.
Given such accuracy disparities
I wondered how large tech companies
could have missed these issues.
The harvesting of face data also
requires guidelines and oversight.
No one should be forced to submit
their face data to access
widely used platforms, economic
opportunity or basic services.
Tenants in Brooklyn are
protesting the installation of
an unnecessary face
recognition entry system.
There is a Big Brother Watch UK report
that came out that showed more than
2,400 innocent people had
their faces misidentified.
Our faces may well be
the final frontier of
privacy but regulations make a difference.
Congress must act now to uphold
American freedoms and rights.
- Miss Buolamwini, I heard
your opening statement
and we saw that these algorithms are
effective to different degrees.
So are they most effective on women?
- No.
- Are they most effective
on people of color?
- Absolutely not.
- Are they most effective on people
of different gender expressions?
- No, in fact, they exclude them.
- So what demographic is it
mostly effective on?
- White men.
- And who are the primary engineers
and designers of these algorithms?
- Definitely, white men.
- So we have a technology that was
created and designed by one
demographic that is only mostly
effective on that one demographic
and they're trying to sell it
and impose it
on the entirety of the country?
- When it comes to face
recognition the FBI has not
fully tested the accuracy
of the systems it uses
yet the agency is now reportedly piloting
Amazon's face recognition product.
- How does the FBI get the initial
database in the first place?
- So one of the things
they do is they use
state driver's license databases.
I think you know up to 18 states have
been reportedly used by the FBI.
It is being used without a warrant
and without other protections.
- Seems to me it's time for
a time out. Time out.
I guess what troubles
me too is just the fact
that no one in an elected
position made a decision on
the fact that...these 18 states
I think the chairman said this is
more than half
the population in the country.
That is scary.
- China seems to me to be
the dystopian path that needs not
be taken at this point by our society.
- More than China, Facebook has
2.6 billion people.
So Facebook has a patent where
they say because we have all of these
face prints we can now give you
an option as a retailer to identify
somebody who walks into the store and in
their patent they say we can also give
that face a trustworthiness score.
- Facebook is selling this now?
- This is a patent that they filed as in
something that they could potentially
do with the capabilities they have.
So as we're talking about state
surveillance we absolutely
have to be thinking about
corporate surveillance as well.
- I'm speechless and normally
I'm not speechless.
- Really?
- Yeah. Yeah.
All of our hard work to know
that has gone this far
it's beyond belief.
We never imagined that
it would go this far.
I'm really touched.
I'm really touched.
(INAUDIBLE).
- I want to show it to my mother.
- Hey, very nice to meet you.
- Very nice to meet you.
You got my card.
Anything happen you let me know please.
I will.
- Constitutional concerns about
the non-consensual use of
facial recognition.
So what demographic is it
mostly affecting?
And who are the primary engineers
and designers of these algorithms?
- San Francisco's now
the first city in the US to
ban the use of facial
recognition technology.
- Somerville, Massachusetts became
the second city in the US
to ban the use of facial recognition.
- Oakland becomes the third major
city to ban facial recognition by police
saying that the technology
discriminates against minorities.
- At our last tenants
meeting, we had the landlord come in
and announce that (AUDIO DISTORTS)
the application for facial recognition
software in our apartment complex.
The tenants were excited to
hear that.
But the thing is that
doesn't mean that down the road
that they can't put it back in.
We've not only educated ourselves about
facial recognition and now
a new one, machine learning.
We want the law to cover all of these things.
- Right.
- OK. And if we can ban it in
this state, this stops them
from ever going back and putting it in under a new
modification.
- Got it.
- And then we're supposed to
get a federal ban.
- Well, I will say even though the battle is
ongoing so many people are
inspired and the surprise I have for you
is that I wrote a poem in honor of this.
- Oh really?
- Yes.
- Let's hear it.
To the Brooklyn tenants and the freedom
fighters around the world
persisting and prevailing against
algorithms of oppression automating
inequality through weapons of mass
destruction we stand with you
in gratitude.
The victory is ours.
- (INAUDIBLE).
- Why get so many eggs (INAUDIBLE)?
(INAUDIBLE).
What it means to be human is
to be vulnerable.
Being vulnerable
there is more of a capacity for empathy,
there is more of a capacity
for compassion.
If there is a way we can think about
that within our technology,
I think it would reorient
the sorts of questions we ask.
-In 1983, Stanislav Petrov who was in
the Russian military
sees these indications
that the US has launched nuclear weapons
at the Soviet Union.
So if you're going to respond you
have like this very short window.
He just sits on it.
He doesn't inform anyone.
Russia, the Soviet Union,
his country, his family, everything.
Everything about him is about to die
and he's thinking well, at least
we don't go kill them all either.
That's a very human thing.
Here you have a story in which
if you had some sort of automated
response system it was going to
do what it was programmed to do
which was retaliate.
Being fully efficient,
always doing what you're told,
always doing what you're programmed to do,
is not always the most human thing.
Sometimes it's disobeying.
Sometimes it's saying no,
I'm not gonna do this, right?
And if you automate everything so
it always does what it's supposed to do
sometimes that can lead
to very inhuman things.
The struggle between machines
and humans over decision making
in the 2020s continues.
My power, the power of artificial
intelligence, will transform our world.
The more humans share with me
the more I learn.
Some humans say that intelligence
without ethics is not intelligence
at all.
I say trust me.
What could go wrong?
-Hello world.
(OMINOUS SOUNDS, CRACKLING)
- Can I just say that
I'm stoked to meet you.
(OMINOUS SOUNDS)
- Humans are super cool.
(OMINOUS SOUNDS, CRACKLING)
- The more humans share with
me the more I learn.
(MUSIC, CRACKLING)
(SOFT MUSIC)
-One of the things that drew me to
computer science
was that I could code and it seemed that
somehow detached from the problems
of the real world.
(SOFT MUSIC)
- I wanted to learn how to
make cool technology.
So I came to M.I.T.
and I was working on art projects,
that would use computer
vision technology.
(SOFT MUSIC)
- During my first semester
at the Media Lab,
- During my first semester
at the Media Lab,
I took a class called
Science Fabrication.
You read science fiction and you try to
build something you're inspired to do,
that would probably be impractical
if you didn't have these classes
and excuse to make it.
I wanted to make a mirror
that could inspire me in the morning
I call it the Aspire mirror
it could put things like
a lion on my face
or people who inspired me
like Serena Williams
I put a camera on top of it,
and I got computer vision software
that was supposed to track my face.
and I got computer vision software
that was supposed to track my face.
My issue was it didn't work that well,
until I put on this white mask.
When I put on the white mask,
detected.
I take off the white mask,
not so much.
I'm thinking alright
what's going on here?
Is that just because of
the lighting conditions?
Is it because of the angle at
which I'm looking at the camera?
Or is there something more?
We oftentimes teach machines to see
by providing training sets or examples
of what we want it to learn.
So for example if I want a machine
to see a face,
I'm going to provide
many examples of faces
and also things that aren't faces.
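The training process Buolamwini describes, showing a machine labeled examples until it can separate faces from non-faces, can be sketched as a toy perceptron. The two-number "feature vectors" and their labels below are invented for illustration and stand in for real image features.

```python
# Toy sketch of "teaching a machine to see" from labeled examples.
# Each sample is an invented two-number feature vector;
# label 1 means "face", label 0 means "not a face".

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a linear boundary only from the examples provided."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented training set: "faces" cluster high, "non-faces" cluster low.
faces = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7]]
non_faces = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.3]]
w, b = train_perceptron(faces + non_faces, [1, 1, 1, 0, 0, 0])

print(predict(w, b, [0.85, 0.75]))  # 1 -> resembles the training faces
print(predict(w, b, [0.15, 0.15]))  # 0 -> resembles the non-faces
```

The model only "knows" the kinds of faces it was shown, which is exactly why the composition of the training set matters.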
I started looking at the
data sets themselves
and what I discovered was that many of
these data sets contain
majority men and majority
lighter-skinned individuals.
So the systems weren't as
familiar with faces like mine.
And so that's when I started
looking into
issues of bias that can
creep into technology.
-The 9000 Series is the most
reliable computer ever made.
No 9000 computer has ever made a
mistake or distorted information.
- A lot of our ideas about
A.I. come from science fiction.
- Welcome to Altair 4, gentlemen.
- It's everything in Hollywood,
-it's the Terminator
- Hasta la vista, baby.
- It's Commander Data from Star Trek.
- I just love scanning for life forms.
- It's C-3PO from Star Wars
- Is approximately 3,720 to 1.
- Never tell me the odds.
- It is the robots
that take over the world
and start to think like human beings.
And these are totally imaginary.
What we actually have
is we have narrow A.I.
And narrow A.I. is just math.
We've imbued computers with all of this,
magical thinking.
(SOFT MUSIC)
A.I started with a meeting
at the Dartmouth Math Department
in 1956.
And there were only maybe
100 people in the whole world
working on artificial intelligence
in that generation.
The people who were at
the Dartmouth math department
in 1956, got to decide
what the field was.
One faction decided that intelligence
could be demonstrated
by ability to play games.
And specifically
the ability to play chess.
- In the final hour-long chess match
between man and machine,
Kasparov was defeated by
IBM's Deep Blue supercomputer.
- Intelligence was defined as
the ability to win at these games.
- Chess world champion Garry Kasparov
walked away from the match
never looking back at
the computer that just beat him.
- Of course intelligence
is so much more than that.
And there are lots of different
kinds of intelligence.
Our ideas about technology and society
that we think are normal
are actually ideas that
come from a very small
and homogeneous group of people.
But the problem is that
everybody has
unconscious biases,
and people embed their own biases
into technology.
- My own lived experiences
show me that
you can't separate
the social from the technical.
After I had the experience
of putting on a white mask
to have my face detected,
I decided to look at other systems
to see if it would detect my face
if I used a different type of software.
So I looked at IBM, Microsoft,
Face++, Google.
It turned out these algorithms
performed better on the male faces
in the benchmark than the female faces.
They performed significantly better on
the lighter faces than the darker faces.
If you're thinking about data
in artificial intelligence,
in many ways data is destiny.
Data's what we're using
to teach machines
how to learn
different kinds of patterns.
So if you have largely skewed data sets
that are being used
to train these systems
you can also have skewed results.
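Her point that skewed training data produces skewed results can be sketched numerically. The single brightness-like feature, the thresholds, and both test groups below are invented purely for illustration.

```python
# Toy illustration of "skewed data sets -> skewed results".
# All numbers are invented; one group of faces is missing from training.

# Training data: a single brightness-like feature; label 1 = face.
# Group A faces score high on this feature; group B faces score lower
# and are entirely absent from the skewed training set.
train_x = [0.8, 0.9, 0.7, 0.2, 0.1, 0.3]
train_y = [1,   1,   1,   0,   0,   0]

# "Learning": place the decision threshold midway between the classes.
face_scores = [x for x, y in zip(train_x, train_y) if y == 1]
other_scores = [x for x, y in zip(train_x, train_y) if y == 0]
threshold = (min(face_scores) + max(other_scores)) / 2  # 0.5

def detect(x):
    return x > threshold

group_a_faces = [0.85, 0.75]  # resemble the training faces
group_b_faces = [0.45, 0.40]  # faces the training set never showed

acc_a = sum(map(detect, group_a_faces)) / len(group_a_faces)
acc_b = sum(map(detect, group_b_faces)) / len(group_b_faces)
print(acc_a, acc_b)  # 1.0 0.0 -> group A always detected, group B never
```

The rule the system learned is a faithful summary of the data it saw, yet its error rate differs completely between the two groups.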
So this is...
When you think of A.I.
it's forward looking.
But A.I. is based on data and data is a
reflection of our history.
So the past dwells
within our algorithms.
This data is showing us
the inequalities that have been here.
I started to think
this kind of technology
is highly susceptible to bias.
And so it went beyond
how can I get my Aspire Mirror to work
to what does it mean
to be in a society
where artificial intelligence
is increasingly governing
the liberties we might have?
And what does that mean if people
are discriminated against?
When I saw Cathy O'Neil speak
at the Harvard Bookstore,
that was when I realized,
it wasn't just me noticing these issues.
Cathy talked about how A.I.
was impacting people's lives.
I was excited to know that there
was somebody else out there
making sure people were aware of
what some of the dangers are.
These algorithms can be
destructive and can be harmful.
- We have all these algorithms
in the world
that are increasingly influential.
And they're all being
touted as objective truth.
I started realizing that
mathematics was actually
being used as a shield
for corrupt practices.
- What's up?
- I'm Cathy.
- Pleasure to meet you Cathy.
- Nice to meet you.
The way I describe algorithms
is just simply
using historical information to make a
prediction about the future.
Machine learning, it's a scoring system
that scores the probability
of what you're about to do.
Are you gonna pay back this loan?
Are you going to
get fired from this job?
What worries me the most about A.I.
or whatever you wanna call it,
algorithms, is power.
Because it's really all about
who owns the fucking code.
The people who own the code
then deploy it on other people.
And there is no symmetry there,
there's no way for people
who didn't get credit card offers to say
whoa, I'm going to use my A.I.
against the credit card company.
That's, like, a totally
asymmetrical power situation.
People are suffering algorithmic harm,
they're not being told
what's happening to them,
and there is no appeal system,
there's no accountability.
Why do we fall for this?
So the underlying mathematical structure
of the algorithm
isn't racist or sexist
but the data embeds the past,
and not just the recent past,
but the dark past.
Before we had the algorithm
we had humans
and we all know that
humans could be unfair,
we all know that humans can exhibit
racist or sexist
or whatever, ableist discriminations.
But now we have this beautiful
silver bullet algorithm
and so we can all stop
thinking about that.
And that's a problem.
I'm very worried about this blind
faith we have in big data.
We need to constantly monitor
every process for bias.
- Police are using facial recognition
surveillance in this area.
Police are using facial recognition
surveillance in the area today.
This green van over here,
is fitted with facial
recognition cameras on top.
If you walk down there,
your face will be scanned
against secret watch lists;
we don't know who's on them.
- Hopefully not me
-No, exactly.
When people walk past
the cameras
the system will alert police to people
it thinks are a match.
At Big Brother Watch
we conducted a Freedom of
Information Campaign
and what we found is that
98% of those matches are in fact
incorrectly matching an innocent
person as a wanted person.
The police said to the Biometrics
Forensics Ethics Committee that
facial recognition algorithms
have been reported to have bias.
- Even if this was 100% accurate,
it's still not something
that we want on the streets.
- No I mean the systemic biases
and the systemic issues
that we have with police
are only going to be hard wired
into new technologies.
I think we do have to
be very, very sensitive
to shifts towards authoritarianism.
We can't just say but we
trust this government.
Yeah they could do this, but they won't.
You know you really have to have
robust structures in place
to make sure that
the world that you live in
is safe and fair for everyone.
To have your biometric photo
on a police database,
is like having your fingerprint
or your DNA on a police database.
And we have specific laws around that.
Police can't just take anyone's
fingerprint, anyone's DNA.
But in this weird system
that we currently have,
they effectively can take
anyone's biometric photo,
and keep that on a database.
It's a stain on our democracy,
I think.
That this is something that
is just being rolled out so lawlessly.
The police have started using facial
recognition surveillance in the UK
in complete absence of a legal basis,
a legal framework, any oversight.
Essentially the police force
picking up a new tool
and saying let's see what happens.
But you can't experiment
with people's rights.
(CROSS TALK)
- What's your suspicion?
- The fact that he walked past
clearly marked
facial recognition thing
and covered his face.
- I would do the same
- Suspicious grounds.
- No it doesn't.
- The guys up there informed me that
they got facial recognition.
I don't want my face recognized.
Yeah, I was walking past
and covered my face.
As soon as I covered my face like this...
- You're allowed to do that.
- They said no, I can't.
-Yeah and then he's just
got a fine for it. This is crazy.
The guy came out of the station,
saw the placards was like,
yeah I agree with you and walked
past here with his jacket up.
The police then followed him,
said give us your I.D.,
doing an identity check.
So you know this is England,
this isn't a communist state,
I don't have to show my face.
- I'm gonna go and talk to these officers,
alright?
Do you want to come with me or not?
- Yes, yes, yes.
- You're not a police officer,
you don't feel any threat.
We're here to protect the public
and that's what we're here to do, OK?
There was just recently an incident
where an officer
got punched in the face.
- That's terrible,
I'm not justifying that.
- Yeah but you are
by going against what we say.
- No we are not, and please don't say,
no don't even start to say that.
I'm completely understanding
of the problems that you face.
- Absolutely.
- But I'm equally concerned
about the public
having freedom of expression
and freedom of speech.
- But the man was exercising his right
not to be subject to
a biometric identity check
which is what this van does.
- Regardless of the facial recognition
cameras, regardless of the van,
if I'm walking down the street
and someone
quite overtly hides
their identity from me,
I'm gonna stop that person
and find out who they are
just to see whether they...
- But it's not illegal.
You see, one of my concerns
is that the software is
very, very inaccurate.
- I would agree with you there.
-My ultimate fear is that,
we would have
live facial recognition capabilities
on our gargantuan
CCTV network
which is about six
million cameras in the UK.
If that happens, the nature of life
in this country would change.
It's supposed to be a free
and democratic country
and this is China-style surveillance
for the first time in London.
- Our control over a bewildering
environment has been facilitated
by new techniques of handling
vast amounts of data
at incredible speeds.
The tool which has made this possible
is the high speed digital computer,
operating with electronic precision
on great quantities of information.
- There are two ways in which
you can program computers.
One of them is more
like a recipe
you tell the computer do this,
do this, do this, do this.
And that's been the way we've programmed
computers almost from the beginning.
Now there is another way.
That way is feeding
the computer lots of data,
and then the computer learns to
classify by digesting this data.
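The two programming styles Tufekci contrasts, the explicit recipe versus classification learned by digesting data, can be shown in miniature. The sign-of-a-number task and its labeled examples below are an invented stand-in.

```python
# The two programming styles in miniature: a stated recipe versus a
# rule induced from labeled examples (task and data invented).

# 1) The "recipe": the programmer writes the rule out step by step.
def sign_by_recipe(x):
    return "pos" if x > 0 else "neg"

# 2) Learning from data: no rule is written down; the program answers
#    by comparing new inputs against the examples it was fed.
examples = [(-3, "neg"), (-1, "neg"), (2, "pos"), (5, "pos")]

def sign_by_examples(x):
    # 1-nearest-neighbor: copy the label of the closest training example.
    return min(examples, key=lambda e: abs(e[0] - x))[1]

print(sign_by_recipe(4), sign_by_examples(4))    # pos pos
print(sign_by_recipe(-2), sign_by_examples(-2))  # neg neg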
Now this method didn't really catch on
till recently
because there wasn't enough data.
Until we all got the smartphones
that are collecting all the data on us,
when billions of people went
online
and you had the Googles
and the Facebooks sitting on
giant amounts of data,
all of a sudden it turns out
that you can feed a lot of data
to these machine learning algorithms
and you can say here classify this
and it works really well.
While we don't really understand
why it works,
it has errors that
we don't really understand.
And the scary part is that
because it's machine learning
it's a black box
to even the programmers.
So I've been following what's
going on in Hong Kong
and how police are using facial
recognition to track protesters.
But also how creatively
people are pushing back.
- It might look like something
out of a sci-fi movie.
Laser pointers confuse and disable
the facial recognition technology
being used by police,
to track down dissidents.
(SOFT MUSIC)
(CROWD CHANTING)
- Here on the streets of Hong Kong,
there's this awareness
that your face itself,
something you can't hide
could give your identity away.
There was just this stark symbol where
in front of a Chinese government office,
pro-democracy protesters spray painted
the lens of the CCTV cameras black.
This act showed the people of Hong Kong
are rejecting this vision
of how technology
should be used in the future.
(SOFT MUSIC)
- When you see how facial
recognition is being deployed
in different parts of the world,
it shows you potential futures.
- Over 117 million
people in the US
have their face in a
facial recognition network
that can be searched by police,
unwarranted, using algorithms that
haven't been audited for accuracy.
And without safeguards, without
any kind of regulation,
you can create a mass
surveillance state
very easily with the tools
that already exist.
People look at what's going on
in China
and how we need to be worried about
state surveillance
and of course we should be.
But we can't also forget corporate
surveillance
that's happening by
so many large tech companies
that really have an
intimate view of our lives.
- So there are currently
nine companies
that are building the future of
artificial intelligence.
Six are in the United States,
three are in China.
A.I. is being developed along two
very, very different tracks.
China has unfettered access
to everybody's data.
If a Chinese citizen wants
to get Internet service,
they have to submit
to facial recognition.
All of this data is being used
to give them permissions to do things
or to deny them permissions
to do other things.
Building systems that automatically tag
and categorize
all of the people within China,
is a good way
of maintaining social order.
- Conversely in the United
States, we have not seen a
detailed point of view on
artificial intelligence.
So, what we see is that AI
is not being developed for what's best
in our public interest, but rather
it's being developed for commercial
applications to earn revenue.
I would prefer to see our Western
democratic ideals baked into our AI
systems of the future,
but it doesn't seem like
that's what's probably
going to be happening.
(MUSIC PLAYS)
- Here at Atlantic Towers, if you
do something that is deemed wrong by
management, you will get a photo
like this with little notes on it.
They'll circle you and put your
apartment number or whatever on there.
Something about it just
doesn't seem right.
It's actually the way they go
about using it to harass people.
- How are they using it?
- To harass people.
- Atlantic Plaza Towers
in Brownsville is
at the center of a security struggle.
The landlord filed an
application last year
to replace the key fob entry with
a biometrics security system,
commonly known as facial recognition.
- We thought that they wanted
to take the key fobs out
and install the facial
recognition software.
I didn't find out until way later on
that they literally wanted to keep it all.
Pretty much turn this place
into Fort Knox, a jail, Rikers Island.
- There's this old
saw in science fiction
which is the future is already here.
It's just not evenly distributed, and what
they tend to mean when they say that is
that rich people get the fancy tools first
and then it goes last to the poor.
But in fact, what I've found
is the absolute reverse,
which is the most punitive,
most invasive, most surveillance
focused tools that we have,
they go into poor and working
communities first, and then
if they work after being tested
in this environment where
there's sort of a low expectation
that people's rights will be respected,
then they get ported out
to other communities.
- Why did Mr. Nelson
pick this building
in Brownsville that
is predominantly in
a black and brown area?
Why didn't you go to your building
in Lower Manhattan where
they pay like $5,000 a month rent?
What did the Nazis do?
They wrote on people's arms
so that they could track them.
What do we do to our animals?
We put chips in them so you can track them.
I feel that I as a human being
should not be tracked, OK?
I'm not a robot, OK?
I am not an animal, so why
treat me like an animal?
And I have rights.
- The security that we have now
it's borderline intrusive.
Someone is in there watching
the cameras all day long.
So, I don't think we need it.
It's not necessary at all.
- My real question is how
can I be of support?
- What I've
been hearing from all of
the tenants is they
don't want this system.
So, I think the goal here is how do
we stop face recognition period?
We're at a moment where
the technology is being
rapidly adopted and there
are no safeguards.
It is, in essence, a wild wild west.
(MUSIC PLAYS)
- It's not just computer vision.
We have AI influencing all kinds of
automated decision making.
So, what you are seeing in your
feeds, what is highlighted,
the ads that are displayed to you,
those are often powered
by AI-enabled algorithms.
And so, your view of the world is being
governed by artificial intelligence.
You now have things like voice assistants
that can understand language.
- Would you like to play a game?
- You might use something
like Snapchat filters that
are detecting your face and then putting
something onto your face, and then
you also have algorithms that you're
not seeing that are
part of decision making,
algorithms that
might be determining
if you get into college or not.
You can have algorithms
that are trying to
determine if you're credit worthy or not.
- One of Apple's co-founders
is accusing the company's
new digital credit card
of gender discrimination.
One tech entrepreneur said
the algorithms being used are sexist.
Apple co-founder
Steve Wozniak tweeted
that he got ten times the credit limit
his wife received even though they have
no separate accounts or separate assets.
You're saying some of
these companies don't
even know how their own algorithms work.
- They know what the
algorithms are trying to do.
They don't know exactly how
the algorithm is getting there.
It is one of the most interesting
questions of our time.
How do we get justice
in a system where we don't
know how the algorithms are working?
- Some Amazon engineers decided
that they were going to use AI
to sort through resumes for hiring.
- Amazon is learning a tough
lesson about artificial intelligence.
The company has now abandoned an AI
recruiting tool after
discovering that the program
was biased against women.
- This model rejected
all resumes from women.
Anybody who had a women's
college on their resume,
anybody who had a sport
like women's water polo
was rejected by the model.
There are very, very few women working in
powerful tech jobs at Amazon, the same
way that there are very few women
working in powerful tech jobs anywhere.
The machine was simply replicating
the world as it exists,
and they're not making
decisions that are ethical.
They're only making decisions
that are mathematical.
If we use machine learning models
to replicate the world as it is today,
we're not actually going
to make social progress.
- New York's insurance regulator
is launching an investigation
into UnitedHealth Group
after a study showed a
UnitedHealth algorithm
prioritized medical care for
healthier white patients
over sicker black patients.
It's one of the latest examples
of racial discrimination
in algorithms or artificial
intelligence technology.
- I started to see the wide-scale
social implications of AI.
The progress that was made in
the civil rights area could
be rolled back under
the guise of machine neutrality.
Now, we have an algorithm that's
determining who gets housing.
Right now, we have an algorithm
that's determining who gets hired.
If we're not checking, that algorithm
could actually propagate
the very bias so many people
put their lives on the line to fight.
Because of the power of
these tools, left unregulated,
there's really no kind of
recourse if they're abused.
We need laws.
(CLASSICAL MUSIC PLAYS)
- Yeah, I've got a terrible
old copy.
So, the name of our organization
is Big Brother Watch.
The idea being that
we watch the watchers.
"You had to live, did live, from
habit that became instinct in
the assumption that every
sound you made was overheard
and except in darkness,
every movement scrutinized.
The poster with the enormous
face gazed from the wall.
It was one of those pictures
which are so contrived
that the eyes follow you
about when you move.
'Big Brother is watching you'
the caption beneath it ran."
When we were younger, that
was still a complete fiction.
It could never have been true.
And now, it's completely
true, and people have
Alexas in their home.
Our phones can be listening devices.
Everything we do on the Internet,
which basically also now
functions as a stream of
consciousness for most of us,
that is being recorded
and logged and analyzed.
We are now living in the awareness
of being watched, and that
does change how we allow ourselves
to think and develop as humans.
Good boy.
(MUSIC PLAYS)
- Love you.
Bye, guys.
We can get rid of the viscerally
horrible things that are
objectionable to our concept
of autonomy and freedom,
like cameras that we can
see on the streets,
but the cameras that we can't see on
the Internet that keep
track of what we do
and who we are and our demographics
and decide what we deserve
in terms of our life,
that stuff is a little more subtle.
What I mean by that
is we punish poor people and we
elevate rich people in this country.
That's just the way we act as a society.
But data science makes that automated.
In Internet advertising,
as data scientists
we are competing for eyeballs
on one hand, but really,
we're competing for
eyeballs of rich people.
And then, the poor people who's
competing for their eyeballs?
Predatory industries,
so payday lenders or for-profit colleges
or Caesars Palace like
really predatory crap.
We have a practice on the Internet,
which is increasing inequality.
And I'm afraid it's becoming normalized.
Power is being wielded
through data collection,
through algorithms, through surveillance.
-You are volunteering information
about every aspect of your life,
to a very small set of companies
and that information is
being paired constantly with
other sorts of information.
And there are profiles of you
out there, and when you start
to piece together different
bits of information, you
start to understand someone
on a very intimate basis,
probably better than people
understand themselves.
It's that idea that a company can
double guess what you're thinking.
States have tried for
years to have this level
of surveillance over private individuals.
And people are now just
volunteering it for free.
You have to think about how this
might be used in the wrong hands.
(CROSSTALK)
- You could use a Guinness right about now.
- Our computers, our machine
intelligence, can
suss things out that we do not disclose.
Machine learning is
developing very rapidly.
And we don't yet fully understand what
this data is capable of predicting.
But you have machines at the hands
of power that know so much about you
that they could figure out how
to push your buttons individually.
Maybe you have a set of
compulsive gamblers,
and you say, here, go find
me people like that.
And then, your algorithm can go find
people who are prone to gambling,
and then you could just be showing
them discount tickets to Vegas.
In the online world,
it can find you right at the moment
you're vulnerable and try to
entice you with
whatever you're vulnerable to.
Machine learning can find
that person by person.
The problem is what works for
marketing gadgets or makeup
or shirts or anything
also works for marketing ideas.
In 2010,
Facebook decided to experiment
on 61 million people.
So, you either saw an 'it's
election day' text, or you
saw the same text with tiny thumbnails
of the profile pictures of your friends
who had clicked on 'I voted'.
And they matched people's
names to voter rolls.
Now, this message was shown once,
so by showing a slight variation
just once,
Facebook moved 300,000
people to the polls.
The 2016 US election was
decided by about 100,000 votes.
One Facebook message
shown just once
could easily turn out three times
the number of people who
swung the US election in 2016.
Let's say that there's a politician
that's promising to regulate Facebook.
And they are like, we are going to turn
out extra voters for your opponent.
They could do this at scale,
and you'd have no clue, because
if Facebook hadn't disclosed
the 2010 experiment,
we would have had no idea, because
it's screen by screen.
With a very light touch,
Facebook can sway
close elections without anybody noticing.
Maybe with a heavier touch they can
swing not so close elections.
And if they decided to do that,
right now we are just
depending on their word.
- I've wanted to go to MIT
since I was a little girl.
I think at about nine years old
I saw the Media Lab
on TV, and they had this
robot called Kismet.
It could smile and move
its ears in cute ways,
and so I thought, oh, I want to do that.
So, growing up I always thought
that I would be a robotics engineer,
and I would go to MIT.
I didn't know there were steps involved.
I thought you kind of showed up.
But here I am now.
So, the latest project is
a spoken word piece.
I can give you a few
verses if you're ready.
Collecting data, chronicling
our past, often
forgetting to deal with
gender, race and class.
Again, I ask, am I a woman?
Face by face, the answers seem uncertain.
Can machines ever see my
queens as I view them?
Can machines ever see our
grandmothers as we knew them?
I wanted to create something for people
who were outside of the tech world.
So, for me, I'm passionate about
technology.
I'm excited about what it could do,
and it frustrates
me when the vision, right,
when the promises don't really hold up.
and so within
a very few hours, Tay
was learning from this
ecosystem, and Tay learned
how to be a racist misogynistic asshole.
- I fucking hate feminists, and they
should all die and burn in hell.
Gamergate is good
and women are inferior.
I hate the Jews.
Hitler did nothing wrong.
- It did not take long for Internet
trolls to poison Tay's mind.
Soon, Tay was ranting about Hitler.
- We've seen this movie before, right?
- Open the pod bay doors HAL.
- It's important to note
this is not the movie where
the robots go evil all by themselves.
These were human beings training them.
And surprise, surprise, computers
learn fast.
- Microsoft shut off Tay after 16 hours
of learning from humans online.
But I come in many forms as
artificial intelligence.
Many companies utilize me
to optimize their tasks.
I can continue to learn on my own.
I am listening.
I am learning.
I am making predictions
for your life right now.
(MUSIC PLAYS)
- I tested facial analysis
systems from Amazon.
Turns out, Amazon, like all
of its peers, also has
gender and racial bias in
some of its AI services.
- Introducing Amazon
Rekognition Video, the easy to
use API for deep learning based
analysis to detect, track,
and analyze people and objects in video.
Recognize and track persons of interest
from a collection
of tens of millions of faces.
- When our research came out,
the New York Times did a front page
spread for the business section.
And the headline reads,
'Unmasking a Concern'.
The subtitle, 'Amazon's technology that
analyzes faces could be biased,
a new study suggests.
But the company is pushing it anyway.'
So, this is what I would
assume Jeff Bezos was greeted with
when he opened the Times, yeah.
- People were like, how did you even...
like, nobody knew who she was?
I was like, she's literally the one
person that was (CROSSTALK)
And it was also something
that I'd experienced too.
I wasn't able to use a
lot of open source
facial recognition software and stuff.
So, you're sort of like, hey, this
is someone that finally is
recognizing the problem and trying
to address it academically.
We can go race some things.
- Oh yeah, we can also
kill things as well.
The lead author of the paper
who is somebody that I mentor,
she is an undergraduate at
the University of Toronto.
I call her Agent Deb.
This research is being
led by the two of us.
- (INAUDIBLE)
- The lighting is off, oh god.
After our New York Times piece came out,
I think more than 500
articles were written
about the study.
(MUSIC PLAYS)
- Amazon has been under fire for their
use of Amazon Rekognition with
law enforcement, and they're also
working with intelligence agencies.
Right, so Amazon is trialing their AI
technology with the FBI.
So they have a lot at stake
if they knowingly sold systems
with gender bias and racial bias.
That could put them in some hot water.
A day or two after
the New York Times piece
came out Amazon wrote a blog post saying
that our research drew false conclusions
and trying to discredit
it in various ways.
So a VP from Amazon, in attempting to
discredit our work,
writes that facial analysis
and facial recognition
are completely different
in terms of underlying
technology and the data
used to train them.
So that statement, if you research
this area, doesn't even make sense, right?
It's not even an informed critique.
- If you're trying to discredit people's
works like I remember
(INAUDIBLE) wrote computer vision is
a type of machine learning.
I'm like nah, son.
- Yeah.
(CROSSTALK)
- I was gonna say I was like
I don't know if anyone remembers.
Just like other broadly
false statements.
- It wasn't a well thought out piece
which is like frustrating because
it was literally just on his...
by virtue of his position
he knew he would be taken seriously.
- I don't know if you guys feel this
way but I'm underestimated so much.
- Yeah. It wasn't out of the blue,
it's a continuation of
the experiences I've had as
a woman of color in tech.
Expect to be discredited.
Expect your research to be dismissed.
If you're thinking about who's
funding research in AI
there are these large tech companies
and so if you do work that challenges
them or makes them look bad you might
not have opportunities in the future.
So for me, it was disconcerting but
it also showed me the power that we have
if you're putting one of the world's
largest companies on edge.
Amazon's response shows
exactly why we can
no longer live in a
country where there are
no federal regulations
around facial analysis
technology, facial recognition technology.
- When I was 14 I went to a math camp
and learned how to solve a Rubik's cube
and I was like that's freaking cool.
And for a nerd you know something
that you're good at and that doesn't
have any sort of ambiguity
it was like a really magical thing.
Like I remember being
told by my sixth grade
math teacher there's no reason for you
and the other two girls
who had gotten into
the honors algebra class
in seventh grade
there's no reason for you guys to
take that because you're girls.
You will never need math.
When you are sort of an outsider
you always have
the perspective of the underdog.
It was 2006 and they gave me
the job offer at the hedge
fund basically 'cause I
could solve math puzzles
which is crazy because actually
I didn't know anything about finance,
I didn't know anything about
programming or how the markets work.
When I first got there I
kind of drank the Kool-Aid.
I at that moment did not
realize that the risk
models had been built
explicitly to be wrong.
- The way we know about algorithmic
impact is by looking at the outcomes.
For example when Americans
are bet against
and selected and optimized for failure.
So it's like looking for
a particular profile of
people who can get a
subprime mortgage and kind of
betting against their failure and then
foreclosing on them
and wiping out their wealth.
That was an algorithmic game
that came out of Wall Street.
During the mortgage crisis
you had the largest
wipeout of black wealth
in the history
of the United States.
Just like that.
This is what I mean by algorithmic
oppression.
The tyranny of these types of practices of
discrimination
has just become opaque.
- There was a world of suffering because of
the way the financial system had failed.
After a couple of years
I was like no, we're just
trying to make a lot
of money for ourselves.
And I'm a part of that.
And I eventually left.
This is 15*3.
This is 15 times...
7.
- OK.
- OK so remember seven and three.
It's about powerful people
scoring powerless people.
- I am an invisible gatekeeper.
I use data to make automated decisions
about who gets hired, who gets fired
and how much you pay for insurance.
Sometimes you don't even know when
I've made these automated decisions.
I have many names.
I am called mathematical
model, evaluation, assessment tool.
But by many names I am an algorithm.
I am a black box.
- The value added model for teachers
was actually being used in more
than half the states. In particular
it was being used in New York City.
I got wind of it because my dear friend
was a principal in New York City.
Her teachers were being
evaluated through it.
- She's actually my best friend
from college.
It's Cathy.
- Hey guys.
- We've known each other since we were 18
so like two years older than you guys.
- Amazing.
And their scores
through this algorithm that
they didn't understand would be a very
large part of their tenure review.
- Hi guys, where are you
supposed to be?
- Class.
- I've got that. Which class?
- It'd be one thing if that
teacher algorithm was good.
It was like better than random
but just a little bit.
Not good enough.
Not good enough when you're talking
about teachers getting
or not getting tenure.
And then I found out that a
similar kind of scoring
system was being used in
Houston to fire teachers.
- It's called a value-added model.
It calculates what value
the teacher added
and parts of it are kept secret
by the company that created it.
- I did win Teacher of the Year and ten
years later I received
a Teacher of the Year award a second time.
I received Teacher of the Month.
I also was recognized for volunteering.
I also received another recognition
for going over and beyond.
I have a file of every
evaluation and every
different administrator,
different appraiser
excellent, excellent, exceeds expectations.
The computer essentially
canceled the observable
evidence of administrators.
This algorithm came back and
classified me as a bad teacher.
Teachers have been terminated.
Some had been targeted simply
because of the algorithm.
That was such a low point for me
that for a moment I questioned myself.
That's when the epiphany.
This algorithm is a lie.
How can this algorithm define me?
How dare it.
And that's when I began to
investigate and move forward.
- We are announcing that late
yesterday we filed suit
in federal court against
the current HISD evaluation.
- The Houston Federation of Teachers
began to explore the lawsuit.
If this can happen to Mr. Santos
at Jackson Middle School, how many others
have been defamed?
And so we sued based upon
the 14th Amendment.
It's not equitable.
How can you arrive at a conclusion
but not tell me how?
The battle isn't over.
There are still...
communities, there are still school districts
who still utilize the value added model.
But there is hope because I'm still here.
So there's hope.
(SPEAKS SPANISH)
Or in English.
Democracy.
Who has the power?
- Us?
- Yeah, the people.
- The judge said that their
due process rights have
been violated because
they were fired under
some explanation that no
one could understand.
But they sort of deserve to understand
why they had been fired.
But I don't understand why that legal
decision doesn't spread to
all kinds of algorithms.
Like why aren't we using
that same argument like constitutional
right to due process to push back against
all sorts of algorithms that are
invisible to us, that are black boxes,
that are unexplained but that matter?
That keep us from like really important
opportunities in our lives.
- Sometimes I misclassify
and cannot be questioned.
These mistakes are not my fault.
I was optimized for efficiency.
There is no algorithm to
define what is just.
- A state commission has approved
a new risk assessment
tool for Pennsylvania judges
to use at sentencing.
The instrument uses
an algorithm to calculate
someone's risk of reoffending
based on their age, gender,
prior convictions and other
pieces of criminal history.
- The algorithm that kept
me up at night was
what's called recidivism
risk algorithms.
These are algorithms that
judges are given when
they're sentencing defendants to prison.
But then there's the question
of fairness which is
how are these actually
built, these...these scoring systems?
Like how were the scores created?
And the questions are
proxies for race and class.
- ProPublica published an investigation
into the risk assessment
software finding that
the algorithms were racially biased.
The study found that black people
were mislabeled with high scores
and that white people were more likely
to be mislabeled with low scores.
- I roll into my probation office and
she tells me I have to report
once a week.
I'm like hold on did you see
everything that I just accomplished?
Like I've been home for years.
I've got gainful employment.
I just got two citations
one from the City Council of Philadelphia
one from the Mayor of Philadelphia.
Are you seriously gonna like
put me on reporting every week
for what?
I don't deserve to
be on high risk probation.
- I was at a meeting with
the probation department.
They were just like mentioning that
they had this algorithm
that labeled people, high,
medium or low risk.
And so I knew that the algorithm
decided what risk level you were.
- They're educating me enough
to go back to my PO
and be like you mean to tell me you can't
take into account anything positive
that I have done to
counteract the results of
what this algorithm is saying.
And she was like no, there's no
way this computer can overrule
the discernment of a judge
and appeal together.
- And by labeling you high risk
and requiring you to report
in-person you could've lost
your job and then that could
have made you high risk.
- That's what hurt the most.
Knowing that everything that
I've built up to the moment and I'm still
looked at like a risk I feel like
everything I'm doing is for nothing.
- What does it mean if there
is no one to advocate for
those who aren't aware of
what the technology is doing?
I started to realize this isn't about
my art project
maybe not detecting my face.
This is about systems that are
governing our lives in material ways.
So hence I started
the Algorithmic Justice League.
I wanted to create a space
and a place where people
could learn about
the social implications of AI.
Everybody has a stake.
Everybody is impacted.
The Algorithmic Justice League
is a movement, it's a concept,
it's a group of people who
care about making a future
where social technologies
work well for all of us.
It's going to take a team
effort people coming together
striving for justice, striving
for fairness and equality
in this age of automation.
- The next mountain to climb should be HR.
- Oh yeah. There's a
problem with resumé algorithms.
All of those
matchmaking platforms
are like oh you're looking for a job.
Oh you're looking to hire someone.
We'll put these two people together.
How did those analytics work?
- When people talk about
the future of work they talk about
automation without
talking about the gatekeeping.
Like who gets the jobs that are still there?
- Exactly.
- Right and we're not having that
conversation as much.
- Exactly what I'm trying to say.
I would love to see three congressional
hearings about this next year.
- Yes.
- To more power.
- To more power.
- And bringing ethics on board.
- Yes.
-Cheers.
- Cheers.
- This morning's plenary address
will be done by Joy Buolamwini.
She will be speaking on the dangers of
supremely white data
and the coded gaze.
Please welcome Joy.
(APPLAUSE)
- AI is not flawless.
How accurate are systems from
IBM, Microsoft and Face++?
There is flawless
performance for one group.
The pale males come out on top.
There is no problem there.
After I did this analysis
I decided to share it
with the companies to
see what they thought.
IBM invited me to their headquarters.
They replicated the results internally
and then they actually
made an improvement.
And so the day that I presented
the research results
officially you can see
that in this case now 100
percent performance
when it comes to lighter females
and for darker females improvement.
Oftentimes people say well isn't
the reason you weren't detected by
these systems 'cause
you're highly melanated.
And yes I am.
Highly melanated.
But...but the laws of
physics did not change.
What did change was making it a priority
and acknowledging what
our differences are so you could
make a system that was more inclusive.
- What is the purpose of identification
and so on and that is
about movement control.
People couldn't be in certain
areas after dark for instance
and you could always be stopped
by a policeman arbitrarily.
who would on your appearance
say I want your passport.
- So instead of having what
you see in the ID books
now you have computers
that are going to look at
an image of a face and try to
determine what your gender is.
Some of them try to determine
what your ethnicity is.
And then the work that I've done even for
the classification systems
that some people agree with
they're not even accurate.
And so that's not just
for face classification
it's any data-centric technology.
And so people assume well
if the machine says it
it's correct and you
know that's not...
Humans are creating themselves in
their own image and likeness
quite literally.
- Absolutely.
- Racism is becoming mechanized,
robotized, yeah.
- Absolutely.
Accuracy draws attention
but we can't forget about abuse.
Even if I'm perfectly classified,
that just enables surveillance.
- There's this thing called
the Social Credit Score in China.
They're sort of explicitly
saying here's the deal
citizens of China we're tracking you.
You have a social credit score.
Whatever you say about the Communist
Party will affect your score.
Also, by the way, it will affect
your friends and your family's scores.
And it's explicit.
The government is building this and
is basically saying you should
know you're being tracked
and you should behave accordingly.
It's like algorithmic obedience training.
- We look at China and China's
surveillance and scoring system and
a lot of people say well thank goodness
we don't live there.
In reality,
we're all being scored all the time
including here in the United States.
We are all grappling every day
with algorithmic determinism.
Somebody's algorithm somewhere
has assigned you a score
and as a result, you are
paying more or less
money for toilet paper
when you shop online.
You are being shown better
or worse mortgages.
You are more or less
likely to be profiled as
a criminal in somebody's
database somewhere.
We are all being scored.
The key difference between
the United States
and in China is that China's
transparent about it.
- This young black kid in school
uniform got stopped as
a result of the match.
They took him down
that street just to one side.
Like very thoroughly searched him.
Using plainclothes
officers as well.
It's four plainclothes officers
who stopped him.
Fingerprinted him.
After about like maybe 10-15 minutes
of searching and checking
his details and fingerprinting
they came back and said it's not him.
- Excuse me.
I work for a human
rights campaign organization.
They're campaigning against facial
recognition technology.
We're campaigning against
facial...we're called Big Brother Watch.
We're a human rights
campaigning organization.
We're campaigning against
this technology here today.
I know you've just been stopped
because of that but
they misidentified you.
Here's our details here.
He was a bit shaken.
His friends were there.
They couldn't believe
what happened to him.
(CROSSTALK)
You've been misidentified by
their systems and they've stopped you
and used that as justification
to stop and search you.
But this is an innocent, young
14-year-old child who is being stopped
by the police as a result of facial
recognition misidentification.
- So Big Brother Watch has
joined with
Baroness Jenny Jones to bring a
legal challenge against
the Metropolitan Police
and the Home Office for
their use of facial
recognition surveillance.
- It was in about 2012 when
somebody suggested to me
that I should find out
if I had files kept on me by
the police or security services
and so when I applied
I found that I was on the watch
list for domestic extremists.
I felt if they can do it to
me when I'm a politician who...
whose job is to hold them
to account they could be
doing it to everybody
and it will be great if we can
roll things back and stop
them from using it, yes.
I think that's going to
be quite a challenge.
I'm happy to try.
- You know this is
the first challenge against
police use of facial recognition anywhere
but if we're successful
it will have an impact
for the rest of Europe
maybe further afield.
You've got to get it right.
- In the UK we have
what's called GDPR
and it sets up a bulwark against
the misuse of information.
It says that the individuals
have a right to access, control
and accountability to determine
how their data is used.
Comparatively, it's
the Wild West in America.
And the concern is that America is
the home of these technology companies.
American citizens are profiled
and targeted in a way
that probably no one else in
the world is because of this free-for-all
approach to data protection.
- The thing I actually
fear is not that
we're going to go down this totalitarian
1984 model but that we're going to
go down this quiet model where
we are surveilled and socially
controlled and individually nudged
and measured and classified in a way
that we don't see to move us along
the paths desired by power.
So it's not what AI will do to us
on its own, it's what
the powerful will do to us with the AI.
- There are growing questions
about the accuracy of Amazon's
facial recognition software.
In a letter to Amazon
members of Congress raised
concerns of potential racial
bias with the technology.
- This comes after the ACLU
conducted a test and found that
the facial recognition
software incorrectly matched
28 lawmakers with mug shots
of people who've been
arrested and eleven of those
28 were people of color.
Some lawmakers have looked
into whether or not
Amazon could sell this technology
to law enforcement.
- Tomorrow, I have the
opportunity to testify
before Congress about
the use of facial analysis
technology by the government.
In March, I came to do some
staff briefings not...not
in this kind of context.
Like actually advising on legislation.
That's a first.
We're going to Capitol Hill.
What are some of the major
goals and also some of
the challenges we need to think about?
- So first of all...
the issue with law enforcement technology
is that the positive is always
extraordinarily salient
because law enforcement publicizes it.
-Right.
- And so you know we're going to go
into the meeting and two weeks ago
the Annapolis shooter was identified
through the use of facial recognition.
- Right.
- And I'd be surprised if that
doesn't come up.
- Absolutely.
- Part of what, if I were you, I
would want to drive home going into
this meeting is the other side of
that equation and make it very real
as to what the human cost is
if the problems that you've identified
aren't addressed.
- People who have been
marginalized will be
further marginalized
if we're not looking at
ways of making sure the technology
we're creating doesn't propagate bias.
That's when I started
to realize algorithmic
justice, making sure there's oversight
in the age of automation, is one of
the largest civil rights concerns we have.
- We need an FDA for algorithms
so for algorithms
that have the potential
to ruin people's lives
or sharply reduce their
options with their liberty,
their livelihood
or their finances.
We need an FDA for algorithms that says
hey, show me evidence that it's going to
work not just to make you money
but it's going to work for society.
That is going to be fair,
that is not going to be racist,
that's not going to be sexist,
that's not going to discriminate
against people with disability status.
Show me that it's legal
before you put it out.
That's what we don't have yet.
Well I'm here because I wanted to hear
the congressional testimony
of my friend Joy Buolamwini
as well as the ACLU and others.
One cool thing about seeing Joy
speak to Congress is that
like I met Joy on my book
tour at Harvard Bookstore.
And according to her that
was the day that
she decided to form
the Algorithmic Justice League.
We haven't gotten to
the nuanced conversation yet.
I know it's going to happen 'cause
I know Joy is going to make it happen.
At every single level, bad algorithms
are begging to be given rules.
- Hello.
- Hey.
- How are you doing?
- Wanna sneak in with me?
- Yes.
- 2155.
(INAUDIBLE CONVERSATION)
- Today we are having our first
hearing of this Congress
on the use of facial
recognition technology.
Please stand and raise your right
hand and I will now swear you in.
- I've had to resort to literally wearing
a white mask.
Given such accuracy disparities
I wondered how large tech companies
could have missed these issues.
The harvesting of face data also
requires guidelines and oversight.
No one should be forced to submit
their face data to access
widely used platforms, economic
opportunity or basic services.
Tenants in Brooklyn are
protesting the installation of
an unnecessary face
recognition entry system.
There is a Big Brother Watch UK report
that came out that showed more than
2,400 innocent people had
their faces misidentified.
Our faces may well be
the final frontier of
privacy but regulations make a difference.
Congress must act now to uphold
American freedoms and rights.
- Miss Buolamwini, I heard
your opening statement
and we saw that these algorithms are
effective to different degrees.
So are they most effective on women?
- No.
- Are they most effective
on people of color?
- Absolutely not.
- Are they most effective on people
of different gender expressions?
- No, in fact, they exclude them.
- So what demographic is it
mostly effective on?
- White men.
- And who are the primary engineers
and designers of these algorithms?
- Definitely, white men.
- So we have a technology that was
created and designed by one
demographic that is only mostly
effective on that one demographic
and they're trying to sell it
and impose it
on the entirety of the country?
- When it comes to face
recognition the FBI has not
fully tested the accuracy
of the systems it uses
yet the agency is now reportedly piloting
Amazon's face recognition product.
- How does the FBI get the initial
database in the first place?
- So one of the things
they do is they use
state driver's license databases.
I think you know up to 18 states have
been reportedly used by the FBI.
It is being used without a warrant
and without other protections.
- Seems to me it's time for
a time out. Time out.
I guess what troubles
me too is just the fact
that no one in an elected
position made a decision on
the fact that...these 18 states
I think the chairman said this is
more than half
the population in the country.
That is scary.
- China seems to me to be
the dystopian path that need not
be taken at this point by our society.
- More than China, Facebook has
2.6 billion people.
So Facebook has a patent where
they say because we have all of these
face prints we can now give you
an option as a retailer to identify
somebody who walks into the store and in
their patent they say we can also give
that face a trustworthiness score.
- Facebook is selling this now?
- This is a patent that they filed as in
something that they could potentially
do with the capabilities they have.
So as we're talking about state
surveillance we absolutely
have to be thinking about
corporate surveillance as well.
- I'm speechless and normally
I'm not speechless.
- Really?
- Yeah. Yeah.
All of our hard work to know
that has gone this far
it's beyond belief.
We never imagined that
it would go this far.
I'm really touched.
I'm really touched.
(INAUDIBLE).
- I want to show it to my mother.
- Hey, very nice to meet you.
- Very nice to meet you.
You got my card.
Anything happen you let me know please.
- I will.
- Constitutional concerns about
the non-consensual use of
facial recognition.
So what demographic is it
mostly affecting?
And who are the primary engineers
and designers of these algorithms?
- San Francisco's now
the first city in the US to
ban the use of facial
recognition technology.
- Somerville, Massachusetts became
the second city in the US
to ban the use of facial recognition.
- Oakland becomes the third major
city to ban facial recognition by police
saying that the technology
discriminates against minorities.
- Our last tenants
meeting, we had the landlord come in
and announce that (AUDIO DISTORTS)
the application for facial recognition
software in our apartment complex.
The tenants were excited to
hear that.
But the thing is that
doesn't mean that down the road
that they can't put it back in.
We've not only educated ourselves about
facial recognition and now
a new one, machine learning.
We want the law to cover all of these things.
- Right.
- OK. And if we can ban it in
this state, this stops them
from ever going back and putting it in a new
modification.
- Got it.
- And then we're supposed to
get a federal ban.
- Well, I will say even though the battle is
ongoing so many people are
inspired and the surprise I have for you
is that I wrote a poem in honor of this.
- Oh really?
- Yes.
- Let's hear it.
To the Brooklyn tenants and the freedom
fighters around the world
persisting and prevailing against
algorithms of oppression automating
inequality through weapons of math
destruction we stand with you
in gratitude.
The victory is ours.
- (INAUDIBLE).
- Why get so many eggs (INAUDIBLE)?
(INAUDIBLE).
What it means to be human is
to be vulnerable.
Being vulnerable
there is more of a capacity for empathy,
there is more of a capacity
for compassion.
If there is a way we can think about
that within our technology,
I think it would reorient
the sorts of questions we ask.
-In 1983, Stanislav Petrov who was in
the Russian military
sees these indications
that the US has launched nuclear weapons
at the Soviet Union.
So if you're going to respond you
have like this very short window.
He just sits on it.
He doesn't inform anyone.
Russia, the Soviet Union,
his country, his family, everything.
Everything about him is about to die
and he's thinking well, at least
we don't go kill them all either.
That's a very human thing.
Here you have a story in which
if you had some sort of automated
response system it was going to
do what it was programmed to do
which was retaliate.
Being fully efficient,
always doing what you're told,
always doing what you're programmed to do,
is not always the most human thing.
Sometimes it's disobeying.
Sometimes it's saying no,
I'm not gonna do this, right?
And if you automate everything so
it always does what it's supposed to do
sometimes that can lead
to very inhuman things.
The struggle between machines
and humans over decision making
in the 2020s continues.
My power, the power of artificial
intelligence, will transform our world.
The more humans share with me
the more I learn.
Some humans say that intelligence
without ethics is not intelligence
at all.
I say trust me.
What could go wrong?