Today we travel to a future where militaries employ killer robots. Should a robot be allowed to take a human life? Is the speed of war increasing too quickly? What does a tech worker do when they find out their work is being used for war?
Guests:
- Kelsey Atherton, a reporter who covers military technology.
- Ryan Calo, a professor of law at the University of Washington.
- Dr. Ryan Jenkins, an associate professor of philosophy at California Polytechnic State University.
- Liz O’Sullivan, the CEO of Parity.
- Dr. Lucy Suchman, a professor emerita of sociology at Lancaster University.
- Dr. Jesse Kirkpatrick, an assistant professor of philosophy at George Mason University.
- Dr. Carlotta Berry, chair of the electrical and computer engineering department at the Rose-Hulman Institute of Technology.
- Kate Conger, a technology reporter at the New York Times.
Voice Actors:
- Rachael Deckard: Richelle Claiborne
- Chad: Brett Tubbs
- Lina: Ashley Kellem
- Brad: Brent Rose
- Halima: Zahra Noorbakhsh
- Malik: Henry Alexander Kelly
- Summer: Shara Kirby
- Ashoka: Anjali Kunapaneni
- Eliza: Chelsey B Coombs
- Dorothy Levitt: Tamara Krinsky
- John Dee: Keith Houston
- Sergeant William Walter: Jarrett Sleeper
Further Reading:
- The First Drone Strike
- Drone Warfare — The Bureau of Investigative Journalism
- Hounds Of The Uncanny Valley Of Death
- Mapping the Development of Autonomy in Weapon Systems
- Next generation military robots have minds of their own
- As 164 Countries Ban Landmines, US Holds Fifth-Largest Stockpile of the Weapon
- Sea Of Lies — 1992 Newsweek investigation on Iran Air Flight 655
- Times Investigation: In U.S. Drone Strike, Evidence Suggests No ISIS Bomb
- 41 men targeted but 1,147 people killed: US drone strikes – the facts on the ground
- Can Americans Resist Surveillance?
- A Dilemma for Moral Deliberation in AI
- Coded Bias
- Development of a Leadership, Policy, and Change Course for Science, Technology, Engineering, and Mathematics Graduate Students
- Cold War Armory: Military Contracting in Silicon Valley
- Strategic Computing
- The United States Spends More On Defense Than The Next 11 Countries Combined
- Google Employees Resign in Protest Against Pentagon Contract
- Google employees’ letter about Project Maven
- Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program
- AI at Google: our principles
- Despite a surge of tech activism, Clarifai plans to push further into government work
- Liz’s open letter to Clarifai
- I Quit My Job to Protest My Company’s Work on Building Killer Robots
- The case for a federal robotics commission
- Google Wants to Work With the Pentagon Again, Despite Employee Concerns
- The problem with AI ethics
- DOD Adopts Ethical Principles for Artificial Intelligence
- Defense Innovation Unit Publishes ‘Responsible AI Guidelines’
- Toward a critique of algorithmic violence
- Drones and the Martial Virtue Courage
- Should a robot decide when to kill?
- Killer Robots and the New Era of Machine-Driven Warfare
Episode Sponsors:
- The Long Time Academy: A new podcast about time, and how we think about time.
- BirdNote Daily: A short, 2-minute daily dose of bird — from wacky facts, to hard science, and even poetry.
- Nature: The leading international journal of science. Get 50% off your yearly subscription when you subscribe at go.nature.com/flashforward.
- BetterHelp: Making professional therapy accessible, affordable, and convenient. Visit betterhelp.com/flashforward and get 10% off your first month.
Flash Forward is hosted by Rose Eveleth and produced by Julia Llinas Goodman. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Mattie Lubchansky. Amanda McLoughlin and Multitude Productions handle our ad sales.
FULL TRANSCRIPT BELOW
(transcript provided by Emily White at The Wordary)
▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹
FLASH FORWARD
S7E16 – “ROBOTS: Should A Robot Be Allowed To Kill?”
[Flash Forward intro music – “Whispering Through” by Asura, an electronic, rhythm-heavy piece]
ROSE EVELETH:
Hello and welcome to Flash Forward! I’m Rose and I’m your host. Flash Forward is a show about the future. Every episode we take on a specific possible… or sometimes not-so-possible future scenario. We always start with a little field trip into the future to check out what is going on, and then we teleport back to today to talk to experts about how that world that we just heard might actually go down. Got it? Great!
Just a quick note, this episode is about war and does include descriptions of violence and death.
This episode, we’re starting in the year 2042.
FICTION SKETCH BEGINS
[retro ‘80s-style poppy techno beat; robot voice sings “X Marks the Bot”]
RACHAEL DECKARD:
Good evening, and welcome to X Marks the Bot! Last time, Team X stunned the competition with their perfect shooting, and Team Double Trouble was sent home for adding one too many horses to the party. Tonight, in the second round of our four-part competition, the remaining teams will have to build a robot that can overcome death itself! It’s time for some BOT-y building! John, you set this challenge, what are the contestants up to today?
JOHN DEE:
Tonight, your challenge is to design a robotic combat medic.
HALIMA:
This is why people hate robots! Because they’re used by horrible militaries in war.
SUMMER:
Our team is all about peace and love, man. So building something that can heal people? I feel like that’s right up our alley.
RACHAEL:
After your robot is completed, it will be put through a series of tests. First: the roll test. Combat situations can involve rough terrain, loose gravel, and other obstacles. Second: the stitch test. One of the key skills for any medic is stitching up open wounds on the battlefield to prevent infection. Your robot must exhibit the dexterity to sew up a gaping wound on our crash test dummy while being pelted with rubber bullets.
SUMMER:
Okay, maybe this is gonna be harder than we thought.
LINA:
I’m looking around at all the other teams, and they all look freaked out. But I feel like we have a secret weapon here because Chad actually used to be in the military.
CHAD:
(laughs) I mean, technically, yes; I was a chef. I know how to cook for the military, not heal people.
RACHAEL:
You will have 15 minutes to grab specialty materials from the Spare Parts Closet, followed by 6 hours to design and build your robot. Your time starts… Now!
CHAD:
So in my mind, I’m like, “We gotta get some tires, right?” I’m rolling out these giant car wheels, tiny little tricycle tires. Whatever I can get, I’m just grabbing it.
HALIMA:
Oh my god, they’re taking ALL of the scrap plastic.
BRAD:
Excuse me, we’ll just take some of that.
MALIK:
Hey man, there’s no need to push. We can all share.
ELIZA:
False, we are not allowed to share supplies.
SUMMER:
Okay, so I feel like we want something with really big legs, right? Something that can step over all of the rocks and stuff?
MALIK:
Yeah, totally, man! It’s gotta have really long fingers too so it can hold the needle for stitches. Like, a big cool spider doctor!
SUMMER:
(laughing) Doctor Long Legs!
HALIMA:
Let’s go for something like a human, right? I mean, humans literally invented war, so it’s probably pretty well designed for our bodies.
BRAD:
What if the other teams come up with something even better, though?
ELIZA:
I find war quite unpalatable.
ASHOKA:
Agreed, but this task should be relatively simple.
ELIZA:
The mechanics yes, but the stitch test will be a challenge. The human body is annoyingly squishy.
ASHOKA:
True.
BRAD:
On your right!
HALIMA:
I can already tell we’re gonna be short on time, so I’m just, like, running around, waving blowtorches all over the place, trying to get the torso finished.
BRAD:
I’m in charge of the limbs, so I’m using this metal tubing, and then we’ll solder them to the torso.
SUMMER:
(sort of frantic) Should the head go over here? Or here? Or… Do we even need a head?!
MALIK:
Hey man, relax. You’re stressing me out.
RACHAEL:
Two hours and 30 minutes to go!
LINA:
Hey Chad, what’s up? You were going to figure out the hands for the sewing. You okay?
CHAD:
Uh, yeah. I mean, no. I mean… I don’t think I can do this.
LINA:
…What?
CHAD:
I don’t want to build a war robot.
LINA:
(trying to be sympathetic but also starting to panic) Chad… we’re sort of in the middle of the competition.
CHAD:
Yeah, and I don’t want to do it!
LINA:
Chad, we can’t just stop.
CHAD:
We totally can, actually, just stop. In fact, the world would be a better place if people just stopped building this stuff.
LINA:
You worked for the military for years, dude, and now you suddenly care, when we’re in the middle of this thing, when there are cameras?
CHAD:
I left for a reason, Lina.
RACHAEL:
Hey guys, what’s going on over here?
LINA:
(trying to save face) Oh, we’re just having a light ethics convo!
RACHAEL:
Oh yeah?
CHAD:
I don’t want to make a war robot.
RACHAEL:
Even if it’s being used to heal people?
CHAD:
Yeah, that’s the line they always give. “You’re helping people. In the end, it’s all just and moral. We’re here to liberate.” Nobody thinks the chefs ever see anything, but we do. And I don’t want to be a part of it.
LINA:
(desperate) Look, this is an important conversation that we should have, but can’t we just finish the challenge first?
CHAD:
You can, but I won’t.
LINA:
You know there’s no way I can finish this thing without you.
CHAD:
I’m sorry, but I can’t help you. I’m out of here.
SUMMER:
Oh my god, did he just leave?
HALIMA:
Hell yes! Stand up for your morals, dude! Wait, should we quit too?
DOROTHY:
What just happened?
RACHAEL:
Chad left the competition over an ethical disagreement.
JOHN:
Well, he picked the wrong field if he wanted to have ethics.
DOROTHY:
John! You don’t mean that. You can definitely be an ethical roboticist.
JOHN:
(skeptical) Yeah, whatever you need to tell yourself to sleep at night, Dorothy.
RACHAEL:
Competitors! You have 15 minutes left! Please put the finishing touches on your machines.
BRAD:
Okay, we just need to put on the hands.
HALIMA:
I’ll grab the fingers!
ASHOKA:
Okay, this goes here…
ELIZA:
…that goes there…
ASHOKA:
…and, done!
RACHAEL:
Aaaand…TIME! Please put down your tools and bring your robots up for testing.
MALIK:
Heck yeah, man! We totally got this!
SUMMER:
Awesome job!
LINA:
(sighs loudly) Well, I am screwed.
[theme music plays]
RACHAEL:
Okay builders, you’ve worked really hard, but let’s see how your robots do, shall we? Our guest judge joining John and Dorothy this week is Sergeant William Walter. Sergeant Walter has over 20 years of experience as a combat medic and a Purple Heart to show for it. Sergeant Walter, what are you looking for in these robots in particular?
SERGEANT WALTER:
Well, you know, we really prioritize durability, efficiency, and friendliness when we’re in the field. A lot of folks are skeptical of our technologies and our uniforms, and we really want it to be clear that we’re here to help.
RACHAEL:
Well, let’s find out what these robots are capable of! First up, the roll test. Competitors, please line up your robots at the starting mark.
[countdown timer, followed by squeaking wheels and crashing sounds]
RACHAEL:
Team Solarpunk is getting off to a… rocky start. (chuckles at her own joke) It looks like those long, spindly legs might not have been the best decision.
Sergeant Walter, what are your thoughts so far?
SGT. WALTER:
Well, Team X’s robot is clearly in the lead. On the other hand, it’s not moving very much like a human would.
ASHOKA:
No one said it had to be humanlike.
ELIZA:
We designed for the task. The human body is not the best design for climbing over rocks. Or… many other things.
RACHAEL:
And Team X’s robot is first to cross the finish line! Followed closely by Team Rust in Peace.
Next up, we have the stitch test. Please collect your robots and bring them to the trauma zone.
SGT. WALTER:
This challenge is all about dexterity. Your robot not only needs to sew up the crash test dummy, but also do so without creating any additional injuries. These robots register pressure and impact that would be traumatic to a human body, so if your robot grabs too hard, you will see fake bruising on these arms.
LINA:
Chad was in charge of the arms and hands, and honestly I can’t even really tell how close he got to finishing so… I don’t know what to expect.
RACHAEL:
On your marks, get set, SEW!!
[mechanical whirring and sewing sounds]
RACHAEL:
Team Solarpunk’s robot is first to get the needle into the wound. Team Rust in Peace appears to be having some trouble actually holding onto the needle.
BRAD:
Copying human hands turned out to be harder than we thought.
RACHAEL:
Team Solarpunk is first to seal the wound!
SGT. WALTER:
There’s clearly some minor bruising, but nothing life-threatening. I’ve seen some human medics do worse, honestly.
RACHAEL:
And… Oh dear, what’s happening over there?
LINA:
I knew this was going to be a disaster…
RACHAEL:
Team Reboot, your robot appears to be repeatedly… stabbing the crash test dummy with the needle rather than actually closing the wound. Oh god. Someone please go turn it off?
LINA:
That was definitely more painful for me than for the dummy.
RACHAEL:
Well, that was exciting, wasn’t it! Now it’s time for the judges to deliberate.
[transition music]
RACHAEL:
This week was clearly much more of a challenge for these teams. Not a lot of perfect performances here.
DOROTHY:
On the other hand, there were some moments that really impressed me. Team Rust in Peace did very well.
JOHN:
Well, they did create the most humanoid robot, which, trust me, is not easy to do.
DOROTHY:
Team X obviously did a great job with both tests. And their crash test dummy clearly came out the best.
JOHN:
I wasn’t impressed with their creativity, though. This is a challenge about combat – blood and guts! I wanted to see more drama from them.
SGT. WALTER:
Well, we do try to minimize medical drama if we can, in the field, you know? There’s usually a lot of other stuff going on already… It’s kind of dramatic. Don’t really need to add to it.
RACHAEL:
So, who were you not impressed with?
SGT. WALTER:
Team Solarpunk definitely had some challenges with the terrain.
JOHN:
Really not sure why they thought those long spider legs were a good idea.
DOROTHY:
And, of course, Team Reboot. Losing a team member halfway through the challenge is hard to come back from.
SGT. WALTER:
It’s hard to compare performances in a tense, fast-paced environment like this. But I think there was a clear loser here… just like in every war I’ve fought in.
RACHAEL:
So you all agree?
JOHN:
Yes, I think we’re all on the same page.
[transition music]
RACHAEL:
What an exciting performance! Teams, you all overcame some serious challenges here. But we do have to say goodbye to one team today. It was a difficult decision, but the judges were unanimous. The team going home is… Team Reboot.
SGT. WALTER:
Lina, you did a great job trying to roll with the punches after losing a team member halfway through. But your robot just didn’t stack up on the second task.
LINA:
Thank you. This was a dream opportunity.
I’m going to kill Chad. Like, genuinely. I’m not sure how we can be friends after this. How could he do this to me?!
RACHAEL:
Join us next week for more daring feats of robotkind, on X Marks the Bot.
[theme plays and fades out]
FICTION SKETCH END
ROSE:
Okay, so today we are tackling war. And war robots are one of the most commonly requested episodes of this show. I get emails from folks all the time asking about weapons, and robots, and autonomous killing, so we are gonna get into it. So let’s start by defining what we actually mean by “war robots.”
KELSEY ATHERTON:
No one in the military describes anything they do as a war robot. They’ll use ‘remotely-controlled combat vehicle’. They’ll use ‘unmanned aerial vehicle’, ‘remotely piloted’… There’s a whole range of slightly more specific and then a lot less widely used terms that the military uses to describe these things.
ROSE:
This is Kelsey Atherton, a reporter who covers military tech.
KELSEY:
But broadly, we are talking about a machine that, under some form of human direction, is used by a military to do a function leading to an effect, which is the big set of euphemisms for “it will find someone and either put a weapon on them, or it will direct other people to put weapons on them.”
ROSE:
There are tons of “war robots” that can do this kind of thing, but the one that most people picture and that has dominated the conversation for a while now is drones.
RYAN CALO:
So, the way I would think about it is, there are robots like drones that are capable of meting out violence at a distance.
ROSE:
This is Ryan Calo, a law professor at the University of Washington.
RYAN:
So the idea is that there’s a human operator sitting in Arizona, and then there is a drone with a missile attached to it in the Theater of War, okay? And say it’s in Afghanistan.
KELSEY:
There’s no human on board. They fly for long times. Like, the Reaper, I believe, can fly for 24 hours and you rotate pilots at remote stations because you can’t legally… you can’t keep a pilot alert for that long.
RYAN:
And so that’s the idea that, without risking any American soldiers, or at least less of a risk, because often soldiers are still involved in terms of painting the target and doing reconnaissance initially right? But anyway, with far less risk to American soldiers’ lives, you’re able to mete out violence at a distance, and that’s a huge piece of what’s going on.
ROSE:
Now, it’s worth pointing out that drones did not always have the ability to kill. At first, their purpose was reconnaissance: scanning the area, gathering information. The first drones were not armed.
KELSEY:
They were armed after the CIA believed it saw Bin Laden, but they didn’t have… They had a Predator over Afghanistan, but they didn’t have a weapon on it. They’re like, “Well, in the future, we need to make sure that we don’t miss this shot.” That was really the impetus.
ROSE:
It’s really hard to put a number on how many drone strikes happen today and how many people are killed by them. But the Bureau of Investigative Journalism estimates that drones have killed at least 9,000 people since 2004.
But drones are just the tip of the war robot iceberg. More recently you’ve probably seen stories about robot dogs being used by the military. In 2006, Boston Dynamics unveiled the first version of their four-legged robot, BigDog. Now, if you look at these early versions of these robot dogs, or even, honestly, the modern versions of them… they’re not actually all that dog-like. They have four legs, that is true, but so do a lot of other animals. The choice to call them dogs is a specific one.
KELSEY:
I think we call them dog-like because that was how Boston Dynamics chose to present their four-legged machine in a way that would make it seem less scary. There’s so many other ways you could frame this, and I think ‘dog’ is a way to make it seem like, “No, no, no, this is a companion to the military. This is a soldier’s companion,” and not, “This is a moving machine we can put all sorts of weapons on.”
ROSE:
Much like early drones, these dogs were first presented not as a way to carry weapons, but instead, more like a pack mule to help carry a soldier’s gear.
But just like with drones, that obviously did not last. Recently, a company called Ghost Robotics demoed their robot dog.
KELSEY:
The Vision 60 QUGV by Ghost Robotics, a name everyone knows and finds just slips off the tongue. This particular system was in the news because it was on display at an arms exposition in Washington, DC, put on by the Army Association, and this one had a rifle mounted on its back.
ROSE:
So, what happens when you take that robot dog and give it a brain? That is what a lot of people see as the future of war. Not bigger bombs, or more deadly missiles, or more accurate guns. But instead, making all that stuff… “smart.”
KELSEY:
And a lot of what is in the works is adding autonomy to, sort of, existing systems, adding autonomy to robots that move in different ways, adding autonomy to robots on, like, ground or underwater. And I think that is sort of the idea: it’s remotely controlled warfare now, and what we might see in the later part of this decade or the early next one is, sort of, remotely directed warfare.
ROSE:
And this brings us to one of the biggest questions right now in the realm of ethics, robotics, and the future of war: Should a robot be able to autonomously make killing decisions? Should a machine be able to pick targets and carry out strikes without a human in the chain at any point? That’s the debate we’re going to wade into when we come back. But first, a quick break.
ADVERTISEMENT: BirdNote
Today’s show is supported by BirdNote. BirdNote makes podcasts about – yes, you guessed it – birds!
On BirdNote Daily, you get a short, two-minute daily dose of bird; from wacky facts, to hard science, and even poetry. You never know what you will learn: some birds, called megapodes, bury their eggs. A four-pound owl can fly with six pounds of prey clutched in its talons. Or, the smallest bird in the world is the bee hummingbird in Cuba, and it weighs less than a dime!
Look, dinner parties are starting to happen again, and I know that I am restocking my mental pantry of weird animal facts, and BirdNote Daily is an incredible place to get them.
Now is the perfect time to catch up on BirdNote’s long-form podcasts too: Threatened and Bring Birds Back. You can find them all in your podcast listening app or at BirdNote.org.
ADVERTISEMENT END
ADVERTISEMENT: THE LONG TIME ACADEMY
The episode is supported by The Long Time Academy, a new podcast about time and how we think about time.
Some of us find it difficult to plan for tomorrow, or next week, let alone a year or ten years from now. If you’ve been listening to Flash Forward for a while, you know that we talk about this concept a lot. Practicing long-term thinking is proven to help you feel happier and be more prepared for what might happen in the future.
If you have ever felt exhausted, worried about the future, anxious about climate change, powerless to change the path that our world is on… I mean, whomst amongst us has not felt any of those things? The Long Time Academy can help.
You’ll hear from really interesting, smart people like Brian Eno, Celeste Headlee, George The Poet, Roman Krznaric, Jay Griffiths, and Adrienne Maree Brown, and learn how they embrace long-term thinking. The Long Time Academy is an audio documentary, but it also includes practical exercises designed to expand your sense of time.
One of the episodes they recently released is about death – which I’m really interested in – and features one of my favorite thinkers on the topic, Alua Arthur. That episode also happens to feature Brian Eno, no big deal. It talks about how engaging with the reality of our own death can help us relate to ourselves and the world in the future, long after we’re gone. And can help us develop a sense of collective responsibility and interdependence.
If you like Flash Forward and thinking about the more, sort of, philosophical, big-picture kind of galaxy brain pieces of the future, I think you’ll like The Long Time Academy. Find The Long Time Academy anywhere you listen to podcasts. We will also include a link in the show notes.
Life is short; time is long. The Long Time Academy.
ADVERTISEMENT END
ROSE:
Today, one of the big buzzwords in considering the future of war robots is autonomy. Autonomous weapons systems, autonomous decisions. So let’s start with some definitions. What even is an autonomous weapon?
DR. RYAN JENKINS:
An autonomous weapon is usually defined as one that can select and engage human targets without direct human oversight or direct human intervention.
ROSE:
This is Dr. Ryan Jenkins, an Associate Professor of Philosophy at California Polytechnic State University.
There are a couple of key pieces to the definition that Ryan gave, including the stipulation that we are talking mostly about weapons that can engage human targets. Today, there are all kinds of systems that are designed to, say, detect and intercept enemy missiles. Which is, sort of, like machine-on-machine violence. It’s when you start to turn those weapons on people… that’s where the big ethical debate really pops up. Then there’s the “direct human oversight” element of the definition.
LIZ O’SULLIVAN:
Rocket defense systems always have somebody who’s supervising and can shut it down if something goes wrong. But if you’re talking about a fleet of 1,000 3D-printed drones that have been flown halfway around the world, your likelihood of being able to step in and prevent catastrophe is much reduced.
ROSE:
This is Liz O’Sullivan, the CEO of a company called Parity and an activist working to ban autonomous weapons.
Today’s drones and robot dogs are not autonomous weapons systems under this definition – there is a human in the loop, making decisions for them. But militaries all around the world are very interested in using AI to make machines that can do this; that can identify, track, and kill targets on their own. And they’re interested in this technology for a couple of reasons.
KELSEY:
One is that the military very much wants to be in a position where it can do the violence that is the job, and the application, and the role of the military without risking its own people. ‘Force protection’ is a term you’ll hear.
ROSE:
This is probably the most common argument you hear; that these systems allow for the US, or whatever other country in question, to do what it supposedly has to do without risking the lives of its soldiers. There’s also just the very basic arms race element here.
LIZ:
As you might imagine, with something this strategic, where every nation is out there claiming that they’re going to be the AI ruler of innovation and all of these, you know, posturing things, it’s been pretty difficult to sway, you know, nations like Russia. China actually supports a ban on the application of force but not on research, and there are a few unexpected players as well. The United States, in fact, is very, kind of, opposed to a strong regulation against them. Australia, and most recently, India also came out in support of these weapons where so many other nations are in a coalition against them.
I think there’s a lot we can learn from that, that it’s kind of the nations who want to use killer robots versus the nations who think they’ll be used on them. And I think… That’s very telling to me about the state of this diplomacy and this issue in the world.
ROSE:
People also sometimes argue that perhaps these machines might actually be better soldiers than humans are.
DR. JENKINS:
So, think of the kinds of mistakes that a human being might make during wartime. They might target the wrong people. They might fire at the wrong time. They might get tired, get stressed, get vengeful, all of these kinds of things. And a lot of people are optimistic that, if we offload the task of killing onto a machine, it wouldn’t be infected with these kinds of human weaknesses.
DR. LUCY SUCHMAN:
And into that come people who would really like there to be a technological solution to that problem.
ROSE:
This is Dr. Lucy Suchman, a Professor Emerita of Sociology at Lancaster University.
LUCY:
And that’s where the idea of creating a, kind of, all-knowing network of sensors that are producing data that is being analyzed with perfect accuracy and that is creating this kind of omniscient, perfect situational awareness… This is the fantasy that’s really driving current investments in the automation of weapons systems.
ROSE:
Lucy uses the word ‘fantasy’ here very deliberately because she says that just automating these decisions doesn’t actually make them more accurate, fair, or correct.
LUCY:
This is completely illogical, right? I mean, you’re taking a system which you can already see is demonstrably full of a lot of noise and uncertainty. And if you’re automating that, all you’re doing is amplifying that uncertainty.
ROSE:
We’re going to return to this question of data, and bias, and uncertainty, and whether these AI systems can ever be trusted to make decisions, in a second.
The last reason you hear for developing these systems is, I’ll be totally honest, the hardest one for me to really take seriously.
KELSEY:
Robots can be faster than people.
LUCY:
Well, we’ve got to automate because things are moving faster and faster. So, the pace of war fighting is getting to the point where it exceeds the ability of humans to respond quickly enough.
KELSEY:
And this is the one that’s, sort of, even more ethically fraught. I don’t think anyone’s particularly upset that bomb squad robots exist, but the idea that if you have soldiers going on patrol, and they have a robot with them, and the robot’s job is to shoot faster…
LUCY:
This, to me, is a completely self-fulfilling cycle, right?
[clip from I Think You Should Leave: “We’re all trying to find the guy who did this!”]
LUCY:
I think we just have to interrupt that. We have to say, “No. If you really are concerned about that, then what we need is to slow things down. Dial things back.”
ROSE:
Now, I’m just going to come out and say that I have a very hard time imagining a situation in which I personally would be comfortable with an autonomous weapons system that identifies, stalks, and kills a target all on its own. I also personally believe that war is terrible and that we should be looking to demilitarize, not invent new ways to kill.
But when you look at the debates that are currently happening around these questions, there are actually a bunch of different ways to think about these technologies and try and figure out if they are ethical, moral, just, acceptable, whatever words you want to use here. And I want to walk through, kind of, a quick and dirty tour of all of the arguments that are at play.
The first place we can look is international law. There are some weapons out in the world that have been banned, that are internationally considered to be evil.
Lucy and Liz both work with the International Committee for Robot Arms Control, trying to convince the international community to agree to a ban on fully autonomous weapons systems.
LIZ:
And the hard work of that is, first of all, convincing major powers that it’s in their best interest, of course. But even more than that, before we can even get there, there’s so much negotiation that needs to take place on simply defining terms. Like, what are we going to call fully autonomous? What are the elements of this weapon that should be outlawed? Or are they even compliant with international humanitarian law?
ROSE:
So how do you get an international body like the United Nations to agree that a weapon should be banned? It’s not easy, especially if that weapon might confer some tactical advantage. And different countries can choose whether or not to be part of these agreements. For example, we mentioned landmines earlier. The US is actually not part of the specific international agreement not to use those. But there are a few key arguments you can make to try and sway a body like the UN, so let’s talk about them.
The first is targeting.
DR. JENKINS:
So one of the widely accepted rules of war is that you should discriminate between people who are legitimate targets and people who are not.
ROSE:
This is why, as we mentioned earlier, landmines are generally not allowed, because they can’t actually discriminate between targets.
DR. JENKINS:
So, nuclear weapons fall into this category. Biological weapons fall into this category. Firebombs like napalm or incendiary bombs would fall into this category because it’s very difficult to control and to direct their effects towards people who are actually legitimate targets. And some people will say autonomous weapons are like that. Why? Because the artificial intelligence that guides their decisions is untrustworthy and will never be confident enough that they can discriminate reliably between people who are soldiers and people who are civilians.
ROSE:
And this isn’t just speculation, right? There are plenty of examples of these systems making mistakes.
DR. JESSE KIRKPATRICK:
So, in 2003 there was a friendly fire incident.
ROSE:
This is Dr. Jesse Kirkpatrick, an Assistant Professor of Philosophy at George Mason University. The incident he’s talking about happened on the fourth official day of the Iraq war. A British fighter jet was heading back to a base in northern Kuwait and the jet flew in range of a US Patriot anti-missile system. That system identified the jet as an enemy missile and presented that information to humans on the ground.
JESSE:
And it presented bad information to these people and they believed it. And with tragic consequences.
ROSE:
The list of these kinds of mistakes is long.
LUCY:
You know, as a tragic example, in the supposed, so-called last strike of the war in Afghanistan, which many of your listeners, I’m sure, will know about… this was in Kabul at the very end of the US withdrawal. There was a drone strike attack in a residential area in the center of Kabul that killed 10 people, including seven children.
ROSE:
And this is just one example; there are, unfortunately, many more.
LUCY:
There have been thousands and thousands of people killed in drone strikes. A very small fraction of the number of people who have been killed, well under 10%, have been actually, positively people who are identified as so-called high-value targets.
ROSE:
If a weapon cannot accurately pick targets that are actually dangerous, it shouldn’t be allowed, right? So that’s one argument.
DR. JENKINS:
Another argument says, even if the weapons are perfectly discriminant, there’s something horrible about the effects they have on their victims. So, napalm also, or flamethrowers fall into this category, that there’s something especially gruesome and excruciating about being burned alive by a flamethrower, or napalm, or something like that such that, you know, it’s a particularly nasty way to die.
ROSE:
And the Convention on Certain Conventional Weapons – yes, that is the real name of it; I had to practice that one a couple of different times – has actually banned weapons based on this logic.
LIZ:
Blinding lasers was one weapon that was unanimously outlawed.
DR. JENKINS:
Which creates the, sort of, bizarre paradox that it can be permissible to kill an enemy but not blind them.
ROSE:
So under this argument, you might say that there is something inherently terrible about being tracked, targeted, and killed by a robot.
DR. JENKINS:
Even if they can discriminate perfectly, even if we’re not worried about them killing civilians, we do think that there’s something monstrous and dehumanizing about being targeted by a machine and killed by a machine. As one author puts it, “It turns war into pest control and it treats your enemies like mere vermin that you can just extinguish,” you know, by just wiping your hand across the battlefield or something like that.
ROSE:
Others also argue that a weapon like this just makes it way too easy to escalate a conflict.
KELSEY:
If you have a weapon that makes it seem like the risk of doing violence is lower, then the possibility of commanders or politicians encouraging it seems higher. And that’s really where everything stems from. We can talk about error, and it’s important to acknowledge machines won’t work as intended, but we should also talk about, right, that the problem is that it’s fundamentally in service of war.
LIZ:
We’re talking about a reality where the technology working well is even scarier than the technology working poorly.
ROSE:
And it’s not just escalation. The ability to offload some of the most challenging parts of war to drones and robots also means you can just… be at war. Like, all the time.
JESSE:
I think one thing that’s become clearer is that there is this worry, you know, that the US and partner nations are going to be engaged in war forever, right? And drone warfare makes that easier. I think of the fact that, you know, I have a son who’s 15 years old and the US has been engaged in armed conflict, you know, declared armed conflict, since he’s been born. And that’s unprecedented and really worrisome.
ROSE:
One thing that I think is really interesting here is the thing that Liz said; this idea that the technology working perfectly is actually scarier than the mistakes that it makes. A lot of debate around killer robots focuses on the mistakes because they are very real and they lead to people’s deaths.
The one downside to this argument is that it can open the door to a response that says, “Well, okay sure. Yes, today the algorithms are not great, but we just have to make them better! If the problem is the mistakes, then we just have to keep at it and fix those mistakes!” You hear this all the time when you talk about AI and bias, not just in military technology. But is that true?
Lucy is unconvinced that fixing these mistakes is actually possible, because the reasons that this data is biased, the reasons that these mistakes are made, are deep and incredibly hard to prevent. They come from things like structural racism, decades of political history, and deep assumptions about how the world works and a country’s place in it.
LUCY:
My question is: How will it get better? What new kinds of deeper understandings of both our relations in the world and the effects of our actions in the world are going to be developed that are going to somehow make these systems better informed? And what kind of outcomes are they designed to produce? And who benefits and who loses?
ROSE:
To make these systems perfectly ethical and discriminate, you would have to, you know, solve systemic racism. Which, for the record, I am absolutely in favor of! But I don’t think that this is what technologists actually have in mind when they say, “We just have to make the system better.” Oh, and the other thing that you would need to do for this to work perfectly is, you know, just a complete surveillance state.
LUCY:
Would we want to militarize our relations with the world to the point where there was a perfect system of surveillance and we believed that that really made us secure? I mean, how would that work?
ROSE:
And even if we did all of that… Just stick with me here. Even if we did all of that, we had these perfectly accurate killing machines, we would then have to ask a much more fundamental, philosophical question: Can robots make moral choices?
DR. JENKINS:
Could a machine answer moral questions in the way that a morally sensitive person would? I think the answer to that might be yes. But even if a computer, you know, a chat bot could answer questions about morality in a way that an intelligent and ethically sensitive person would, that’s not really the same thing as saying that they are thinking about morality or that they understand morality in the way that we do.
ROSE:
Of course, the question that follows from this is: Does it matter? Do we care that the machine making a killing decision understands morality? Should the choice to take a human life come from something that actually, truly understands what’s at stake and grapples with that choice?
Ryan Jenkins is a moral philosopher who has spent years writing about these technologies. And when we talked, he walked me through a ton of different arguments: some people say this, some people say that…
But here’s what he said when I asked him: Okay, but what do you think?
DR. JENKINS:
I think that I’m in a position that a lot of people are in, which is that I harbor a very deep, a very pronounced, profound unease at the thought of autonomous weapons. But it’s hard for me to say why. It’s very hard for me to articulate what it is specifically about an autonomous weapon that I think is so upsetting. And so it leaves me in this position. It’s an uncomfortable position for a philosopher to be in where I basically say, “This is what I think but I’m not sure why.”
I mean, that’s the kind of thing that a professional philosopher should not say. And so, one way of thinking about my project for the last, you know, close to a decade, the last several years, is to explore different candidate arguments against autonomous weapons and to say, “Here’s an argument that would justify a prohibition on autonomous weapons. Does it look like a good one? Does it stand up to scrutiny? It doesn’t really stand up to scrutiny. We’re going to have to keep looking.” That’s the way it goes. And so even though I haven’t come to a firm conclusion yet, I can tell you which arguments are not good arguments and that counts as making progress.
[clip from The Good Place]
Chidi: Making decisions isn’t necessarily my strong suit.
Michael: I know that, buddy. You once had a panic attack at a make-your-own-sundae bar.
ROSE:
These questions are big and hard, and everybody I spoke with for this episode has different answers. Lucy, for example, thinks that banning autonomous weapons systems should just be the first step in a much bigger change.
LUCY:
I personally feel the only way we can begin to increase our security is by radically demilitarizing and redirecting… You know, it’s a kind of abolition. Redirecting that $730 billion that’s just been allocated, you know, as discretionary spending for the US Defense Department to, you know, mitigating the incredible threats, you know, not just to those of us who are lucky enough to be sitting comfortably wherever we’re sitting. But threats to, you know, our planet, to our co-humans, our more-than-human worlds, environments.
ROSE:
For the record, just so you all know where I’m coming from, I happen to agree with Lucy. I will also say that we did reach out to DARPA and some of the companies developing these technologies but nobody agreed to talk to us for the episode. I am going to link in the show notes to a handful of interviews that those people have done on other shows so you can go hear that if you want to.
But what about the people who actually build these systems? Can you ethically build robotic war machines? And what happens when you find out that your work is being used by the military, even when you were told it wouldn’t be?
ROSE (on call):
When you put in your notice, what was that like? What did that feel like?
LIZ:
Oh my God, I cried so hard. I absolutely was hugging people and just sobbing the whole way out the door.
ROSE (mono):
And we are going to talk about that when we come back.
ADVERTISEMENT: NATURE
Flash Forward is supported by Nature.
Want to stay up to date on the latest in global science and research? Subscribe today to Nature, the leading international journal of science. For example, in this episode, you might be interested to learn that in 2018 there was a Nature editorial titled “Military Work Threatens Science and Security,” arguing that asking researchers at universities to develop weapons for the military “breaks down the bonds of trust that connect scientists around the world and undermines the spirit of academic research.”
And Nature is offering a special promotion for Flash Forward listeners until December 31st. Get 50% off your yearly subscription when you subscribe at Go.Nature.com/FlashForward. Your Nature subscription gives you 51 weekly print issues and online access to the latest peer-reviewed research, news, and commentary in science and technology. Visit Go.Nature.com/FlashForward for this exclusive offer.
ADVERTISEMENT END
ADVERTISEMENT: BETTERHELP
This podcast is sponsored by BetterHelp.
Is there something interfering with your happiness or preventing you from achieving your goals? Maybe you’ve just spent the last month, like, I don’t know, just totally hypothetically, reading about the future of war and all the weapons that are currently being developed around the world and you’re feeling some sort of way about that. Just hypothetically.
BetterHelp will assess your needs and match you with your own licensed, professional therapist who you can start communicating with in under 48 hours. It’s not a crisis line, it’s not self-help. It is professional therapy done securely online.
There’s also a broad range of expertise available, which may not be locally available in every area. And service is available to clients worldwide. You can log into your account anytime and send a message to your therapist, and you’ll get timely and thoughtful responses. Plus, you can schedule weekly video or phone sessions if that is more comfortable for you.
Visit BetterHelp.com/FlashForward and join the over 2 million people who have taken charge of their mental health with the help of an experienced professional. And Flash Forward listeners can get 10% off their first month by going to BetterHelp.com/FlashForward.
ADVERTISEMENT END
ROSE:
So, if you’ve been listening to this show, or just paying attention to technology news in general, you probably know that in the last several years conversations about bias in AI have become relatively mainstream.
DR. CARLOTTA BERRY:
And I hear more, really recently, it does seem to lie a lot around software, and I think a lot of it has come because of the work of Joy Buolamwini and Coded Bias. And you know, you hear the stories about police departments using image recognition to arrest people, and then we find out they’re making mistakes.
ROSE:
This is Dr. Carlotta Berry, Chair of the Electrical and Computer Engineering Department at the Rose-Hulman Institute of Technology and the co-founder of the organization Black in Robotics.
CARLOTTA:
But when I teach my students, my freshman design and senior design students, I actually do focus on the hardware aspects.
ROSE:
This is not always a natural instinct for budding roboticists, to think about the potential ways their tech could be used to hurt people.
CARLOTTA:
I understand your algorithm is cool. I understand your hardware is cool. But don’t forget the human aspect of what you’re doing. Is what you’re doing ethical? You know, engineering can sometimes get lost in that it’s a highly technical field and you forget. But engineers are here to design things to improve the world and make the world a better place. Don’t forget that goal as you bog down in your calculus, your kinematics, your math, your coding, and all of those other things.
ROSE:
Science and technology have a long relationship with the military. And not just by inventing new weapons.
KELSEY:
Like there’s a long history of, like, how Silicon Valley gets its origin because they were making the microelectronics and computer parts for missiles. Silicon Valley pretends to not have this origin, but it’s there.
KATE CONGER:
The research that formed the basis of what we now know as the internet was, you know, funded by the US military. And some of that early connectivity came from that source. So there’s a long and deep relationship.
ROSE:
This is Kate Conger, a technology reporter at the New York Times.
And with that relationship, there has always been tension. Lucy has been asking questions about military systems and AI for 40 years, starting with a job at Xerox PARC, this research center that Xerox had.
LUCY:
So this was the early ‘80s when some of your listeners may remember Ronald Reagan launched the Strategic Defense Initiative, the so-called Star Wars initiative, which was this idea of a kind of perfect defensive shield that would protect against nuclear missiles. And I and a number of other colleagues started an organization called Computer Professionals for Social Responsibility, and the focus of that was arguing that the automation of the command and control of nuclear weapons systems was a really bad idea.
ROSE:
Today, as the future of war becomes more and more about software rather than hardware, we’re seeing all kinds of tech companies working with the military to provide things like cloud storage, computer vision systems, and more.
Amazon, Microsoft, Google, pretty much every big tech company that you can think of has at one point worked with, or is currently working with, at least one military to develop technology. But not all employees of these companies are happy about that. And sometimes, like Lucy did in the ‘80s, those employees push back.
In 2018, employees at Google learned that the company had quietly taken a contract with the US military to work on something called Project Maven.
KATE:
Project Maven was an initiative that the Department of Defense started where they wanted to use artificial intelligence to help classify and categorize footage that was being gathered by drones.
ROSE:
When they got the contract, Google actually specifically worked to make sure that its name wasn’t listed.
KATE:
And they were kind of working on it in secret.
ROSE:
But when employees found out about Project Maven, they started asking questions.
KATE:
An employee posted about the project on internal Google Plus, which was the internal employee social network. And then it kind of blew up from there. They started to push back on it and put together a letter asking leadership to pull out of the project. That ended up being signed by over 4,000 employees.
ROSE:
I will post a link to the full letter in the show notes. These employees had a few key arguments as to why they thought Google should not be taking on this kind of work. And a lot of them go back to something that we’ve talked about already: mistakes.
KATE:
You know, there’s a lot of research that shows that AI can be, you know, racially-biased, can be biased on the basis of gender, can misclassify objects. Like, it’s not the most reliable of technologies that we have at this point. And so, you know, I think there was some concern among employees that this technology is not ready to be used in a context where the consequences of mistakes are someone’s life.
ROSE:
Some tech workers also felt that, as an international company with international employees, Google shouldn’t be partnering with any military.
KATE:
You know, it’s a multinational company, and its employees in Ireland, or in Asia, or wherever around the world, have no patriotic duty or allegiance to the US military and should not be compelled to produce work that benefits the military.
ROSE:
It didn’t help that Google had tried to obfuscate its involvement in the project, so employees who were working on this also felt tricked and lied to.
KATE:
Some of the reaction that you see from Google employees to this stuff is thinking, you know, “I’m working here because I believed that it was a different kind of place.”
ROSE (on call):
A little bit of the like, “Are we the baddies?” kind of vibe.
KATE:
Yeah, yeah, there’s a meme generator internal at Google called MemeGen, and that “Are we the baddies?” meme was a popular one during all of this.
ROSE (mono):
And in response to the letter and the outcry from their employees, Google actually changed their minds.
KATE:
A couple of months went by and Google announced, finally, “You know what? Once this contract is complete, we’re not going to seek to renew it, and we are going to set some AI principles that will govern our artificial intelligence work going forward so that we have an ethical framework for deciding what kind of work we will and won’t do.”
ROSE:
This was a big deal when it happened.
KATE:
I was very taken aback when I learned that Google was going to be ending their work on Project Maven. There were just so many people in leadership who were so supportive of the project. You know, over the course of my reporting on it, I reported all these internal emails among very senior people at the company who were backing this project 100 percent. And so it just didn’t really cross my mind that they would stop. And when that happened, you know, it was an exciting moment of breaking news. But I think I was as surprised as anyone else who was clicking on that article. (laughs) I just didn’t expect it to happen.
ROSE:
And in fact, it was Project Maven that turned Liz O’Sullivan into an activist.
LIZ:
I had kind of a roundabout journey to this path and it wasn’t something that I ever expected to happen or see for myself.
ROSE:
In 2017, Liz was hired by a company called Clarifai, which used AI to develop computer vision systems, which basically means teaching computers to recognize the stuff in images. At first, she says Clarifai told her they were not going to pursue any military contracts. But when the opportunity to do just that arose, they changed their mind.
LIZ:
We didn’t know what it was at first, and we didn’t know it was military. And then eventually it had a name and that name was, of course, Project Maven.
ROSE:
Now, Liz had been brought on to help try and train these models with as little bias as possible, which means that she had a front-row seat to just how bad and biased the data and AI really was. So the idea that this model would potentially be used by the military to make life or death decisions really freaked her out.
LIZ:
And it made me so worried, in fact, that I wrote an open letter to the CEO and I asked him to promise, you know, alongside thousands of scientists who had signed Future of Life’s Promise, never to work on fully autonomous weapons systems, realizing that this technology was a core component of this technology that maybe didn’t have all the kinks worked out yet, or maybe never would.
ROSE:
She sent this letter to the CEO and was actually hopeful that he might listen.
LIZ:
I really thought that he would sign the pledge. You know, like, in all honesty, I don’t think it’s too much to ask to decline to contribute in what is, essentially, a great power rivalry and escalation of armament. To me, it’s a no-brainer. I didn’t think it was too much to ask.
ROSE:
Her CEO did not agree.
LIZ:
And when the CEO refused to sign the pledge, I quit my job and I became an activist.
ROSE (on call):
When you put in your notice, what was that like? What did that feel like?
LIZ:
Oh my God, I cried so hard. I absolutely was hugging people and just sobbing the whole way out the door.
ROSE (mono):
Now, I think Liz’s choice brings up a really interesting conundrum. Because computer vision is a technology that has a ton of applications. It can be used to detect climate change, or spot cancer, or see dangerous abnormalities in bridges or other infrastructure. But it can also be used by militaries to try and identify possible targets and kill them.
Now, there are tons of technologies like this, right, that can be used for good… or for evil. Nuclear power is probably the most extreme example of this; it can be used to generate power, or, you know, end the world.
LIZ:
And so, as you might imagine, these technologies are very deeply regulated and there’s a lot of red tape that you have to cut through in order to do these experiments and gain access to the knowledge that they would give you. The difference is that there are no such regulations when it comes to artificial intelligence. And with artificial intelligence, we’re just naturally talking about scale. So in that regard, any research that you’re doing to further this field of computer vision is, by its very nature, furthering the likelihood that these kinds of weapons will come into existence.
ROSE:
For Liz and the employees who spoke up at Google, that simply wasn’t a bargain they were willing to make.
And I think this is a really interesting challenge for a lot of technology employees. How do you grapple with this, ethically? Can you work on tech systems for the military without building something that might be used to kill? Carlotta says that she thinks you can.
CARLOTTA:
For example, if you think about military robotics and, you know, they have these robots that they’re designing that are going to carry a hundred pounds of gear so that the officers and the military personnel don’t have to carry it, that’s a good thing, right? You don’t want them carrying all this stuff. Or a robot that is going to go out and detect IEDs and try to disable them or alert the military personnel where they are. That’s a good thing, right? Because you don’t want to risk a person, you want to risk the robot to do those sort of things.
Those are all good things, but making sure that they don’t have scope creep, so like I said, somebody goes, “Oh, that’s cool, they do that really well, let’s put a gun on it,” you know? So I think it’s important that, you know, we want our students to be in those spaces. We want diverse people to be in those spaces because if it’s going to happen regardless, it’s better for somebody to be there who has the education or the foresight to go, “Hey, have we considered this?” as opposed to just going, “I’m not going and I’m not going to be there because, ethically, I don’t believe in this.”
ROSE:
But not everybody feels this way. Remember, drones were not always armed. Military robots like BigDog, which debuted as something that would carry equipment, are now being mounted with weapons.
KELSEY:
The old quip, right, is that if all you have is a hammer… But you’re specifically building hammers, which means you are shaping more of the possible answers into nails. It’s not just that, like, this happens in a vacuum. It’s that by choosing the kind of hammers you have, you’re setting yourself up for what policy problems you will try to resolve with robot violence in the future.
ROSE:
Google’s AI policy says that it won’t develop AI “whose principal purpose or implementation is to cause or directly facilitate injury to people,” and the words “principal purpose” are doing a lot of heavy lifting there. And obviously, Google can’t keep the military from using the tech it does develop on weapons. If your company has a contract with the military on something like, say, computer vision, Liz says that maybe it’s naïve to think that you can guarantee that it won’t one day be used in a weapon.
LIZ:
I’m not willing to say… It’s a question I’ve mulled over for so, so long because I do think that AI is so promising, and especially computer vision has a lot of really great use cases. But at this moment in time, I don’t see that it is possible to publish new open-source research, to publish new models, that won’t immediately get used by the military. I think it’s just a naïve view.
ROSE:
It’s also worth remembering that, in many places, like the United States, these military machines often wind up in the hands of domestic police forces.
LIZ:
Longer term, we see this alarming trend that sophisticated military technology usually makes its way back to the United States as well in the form of militarized policing.
ROSE:
Law enforcement departments have all kinds of war machines at their disposal that most civilians don’t even know about. And often, there are no written policies around how they can or can’t use those machines.
In 2016, the Dallas Police Department strapped a bunch of C-4 explosives to a robot and sent it into an active shooter situation to blow the guy up.
RYAN:
And while I can understand arguments about how, “Well, he was an active shooter and if they had had a line on him with a sniper, then they could have…” you know what I mean? And so on. I understand it. And then, “This is just functionally equivalent.”
ROSE:
That’s Ryan Calo again.
RYAN:
But the truth, and what was unforgivable about that… I mean, you know, let’s set aside for a moment this violence-at-a-distance and some of these things like that, if that’s possible to bracket. But what was also alarming to me was there was no policy in place for how to use that robot that anybody could point to.
ROSE:
Earlier this year, the New York Police Department revealed that they leased a robotic dog from Boston Dynamics for a cool $94,000, and deployed it in response to a home invasion. When New Yorkers found out about this, there was this big outcry and the city quietly returned the robot.
DR. JENKINS:
You know, when I’m feeling particularly dystopian and I’m teaching this in class, I’ll tell my students, “It’ll probably be in our lifetime that we’ll see robot dogs like this chasing down offenders in the streets, you know, unless there’s a proactive ban placed on them.”
ROSE:
So, not only does working on these systems mean that you’re contributing to the efforts of the US military, it also means that you’re likely developing systems that can wind up back in the hands of local police forces. And perhaps used against your own community. And on top of that, US lawmakers don’t always know how to think about new technologies, and how to draft useful, intelligent laws to regulate them.
RYAN:
The advent of cyber-physical systems, of artificial intelligence, or robotics, their increased prevalence in society, their increased sophistication, has just really dramatized the lack of expertise in government.
ROSE:
You have probably seen cases where lawmakers in DC ask questions of tech companies that… make no sense. Like when Congressman Steve King asked the CEO of Google a question about iPhones.
Imagine then, a local security company or police department getting an autonomous dog robot and letting it loose to apprehend suspects. If Congress doesn’t know what’s an iPhone and what’s an Android, can they really make smart decisions about the nature of machine learning and autonomous robots?
Ryan has argued that the US should put together a special federal robotics commission to help guide lawmakers through these questions so that they can evaluate these technologies better.
RYAN:
And part of what sparked it was not even a robot issue, but like, I remember when the Toyota sudden acceleration thing happened and they couldn’t figure out whether it was the software or possibly, you know, the floor mats or what it was. And Congress ended up asking the Department of Transportation to figure out, you know, what was going on. And DoT was sort of like, “We don’t have the expertise.” So they turned around and they asked NASA. Which is amazing.
Like, imagine that for a moment. You go to NASA and you say, “Hey, would you take a break from putting robots on Mars for a moment and look at this Toyota for us?” Right? So they look at the Toyota and they find that, you know, it turns out it’s probably not a software issue. And it’s clear that’s not a sustainable model, right?
ROSE:
We’re going to talk a lot more about legal liability around these robots on the bonus podcast this week, which, by the way, is going to be very full because there is a lot that we could not fit into this one. If you want that, all you have to do is become a Patron or a member of the Time Traveler’s Club. You can find out more at FlashForwardPod.com/Support.
Going back to tech workers: where does this leave them, many of whom did not go into tech to build military AI systems? Some, like Liz, quit their jobs. But obviously, not everybody can do that.
LIZ:
It’s just not fair of us to say, “I disagree with this thing. And so anybody who’s associated with it is, by default, evil.” I would never say that about anybody, especially with regard to their own, you know, ability to support themselves and have a life. But it was too much for me.
ROSE:
Of course, there are not just two options here. It’s not just “quit your job now” or “stay silent and put your head down no matter what.”
LUCY:
I mean, I wrote many years ago about what I called “located accountability” as a designer of technology. Designers, I think, can rightfully say that they don’t control, they can’t determine the uses to which the things that they’re designing can be put. And I agree with that, but my argument is that, the fact that you can’t control that doesn’t mean that you don’t have some responsibility to stay connected to the things that you’re involved in designing. See where they go, see how they’re being used, and speak about that, and be engaged with that.
You can care about who your company’s customers are and you can care about the life trajectory of the technology that you’re developing, whether it’s computer vision or whatever else. And how do you feel about that? What do you want to say about that?
ROSE:
And this is perhaps increasingly important as it becomes clear that simply hiring ethicists and having an ethical code isn’t actually enough to keep a company honest and ethical. Remember how Google said that it wasn’t going to pursue these kinds of DoD contracts after the employee outcry about Project Maven? Well… it’s changed its mind.
LUCY:
Google’s position is, “Well, there are going to be multiple contracts,” and so, you know, they’re sort of vaguely indicating that somehow Google will be able to navigate this in a way that will allow it to adhere to the AI ethics principles that were set up after Project Maven. And now there’s the, sort of, I think, pretext that “You’re just going to be doing a basic infrastructure. You’re not going to be doing customized AI for particular weapons systems. It’s just going to be a kind of generalized infrastructure.” But it is called the Joint Warfighting Cloud Computing contract, right?
ROSE:
Plenty of tech companies have ethics departments these days. But do they listen to them? What role do they really serve? Often, at least from the stories that I’ve heard and reports in the news, these folks are sidelined and sometimes even fired for speaking up about the very things they were hired to evaluate. Other times, they’re brought in at the last minute when all the decisions have already been made.
And when it comes to a weapon that powerful people might see as the difference between winning and losing a war, one has to wonder if we’re really going to listen to the ethicists on this one.
RYAN:
I mean, part of me, the sort of realpolitik part of me, says that limits on weapons tend to track with their efficacy. And so, I feel very strongly that if other countries are showing that they can gain a military advantage through robotics and AI, no amount of political upheaval would cause the United States military not to adopt that technology.
ROSE:
Right now, on paper, the US military says that they actually do think a human should be involved in killing decisions. And here’s the thing: fully autonomous robots that can actually walk around like Terminator, sensing the world, tracking people, following them, killing them… that’s a long ways away. We are not close to that at all.
That does not mean we shouldn’t consider those technologies seriously and think about the ethics of them. But we also shouldn’t let that focus distract us from the messier and more real next steps that the military will likely take. And those are robot-human hybrid systems.
JESSE:
I think what we’re primarily going to be seeing in the near term, and what’s already happening now, is kind of the human-machine partnership model, right? Some have referred to it as, kind of, the centaur model of humans and robotics and AI, where artificial intelligence is going to just augment human decision making, cognitive capabilities, you know, planning, logistics, and so on and so forth, in which really the humans and machines are going to team up as partners.
DR. JENKINS:
So for example, you might see a weapon that autonomously detects or autonomously matches a face to a list of faces that it’s looking for and then passes that decision on to a human.
KELSEY:
It might show up as a robot walking alongside autonomously with a squad, send a notification to a soldier’s tablet, and the soldier has to click yes or no whether or not it should shoot the thing it thinks it should shoot.
ROSE:
And these robot-human hybrid systems are, frankly, a lot harder to parse, ethically. How do you make sure that we are actually getting meaningful human control here? That the humans understand the information that they are receiving and know how to process it? That they won’t just defer to the machines the way they did in 2003, or get so confused by the data being presented that they shoot down a commercial airliner like they did in the 1980s?
LUCY:
So we’re right back into the problem of how the world is being read. And the more that gets automated, the more reliance there has to be on various kinds of stereotypic profiling.
JESSE:
There are these, sort of like, related kind of issues about, like, the way that AI is designed, whether it’s sort of a black box. Is the algorithm traceable? Is it transparent? Do we know how it got to the decision that it got to?
ROSE:
The US Department of Defense has been trying to figure out where it stands on these questions for years now.
JESSE:
The DoD a couple of years ago stood up a center called the Joint Artificial Intelligence Center, JAIC for short. And JAIC’s task was to really tackle the very issues that we’ve been discussing today.
ROSE:
In February of 2020, JAIC released a set of ethical principles to govern its work in AI. The rules said that all work in this area within the Department of Defense should be: Responsible, Equitable, Traceable, Reliable, and Governable.
There are also conversations about these ethical questions happening within individual departments of the DoD. But while the DoD is putting out these kinds of documents and pulling together these teams, the US is also not currently willing to sign international treaties prohibiting the use of autonomous weapons.
And here is where even Ryan Jenkins, the most optimistic person I talked to for this episode, someone who really does think that philosophy and ethics make a difference in the way that wars are fought, and who said things like this:
DR. JENKINS:
A lot of people think that philosophy is purely abstract or nebulous. But if I were asked to provide one example of how philosophy has had concrete consequences for people to make the world a better place, I would point to the morality of war. Because here you have a conversation that stretches back several millennia, that takes place across cultures and times, and the conclusions that philosophers have come to have found their way into international treaties like The Hague Convention or the Geneva Conventions that are signed by every country in the world, that we take as a kind of uncontroversial baseline for how armies ought to conduct themselves.
And what all these armies are committed to are basically moral principles that philosophers crafted and refined in conversation with each other and in conversation with soldiers and statesmen over thousands of years. And I think that that really gives proof to the project of moral philosophy.
ROSE:
Even he is a little bit pessimistic about the realities of war and the allure of these robots to certain powers.
DR. JENKINS:
If they’re going to be banned, I think they have to be banned before they see battle because it might turn out that their benefits are just too irresistible. And once the powers of the world see the kind of advantages that they confer, then no one’s going to be willing to sign on to a treaty that bans them after that. I think there’s clearly a race going on between people that are lobbying for a proactive ban and the technologists that are trying to develop them and perhaps deploy them before that ban is put into place. And we’ll see who gets there first.
ROSE:
Well, that is a very dark ending. So instead of ending there, let’s end with the note that this is not decided yet. The international community has banned certain weapons before and it can do so again. As much as it might seem like anything goes, that’s actually not necessarily true. Employee activism can make a difference, as can educating lawmakers and trying to get ahead of this technology. If you have a visceral gut reaction to the idea of a robot fed messy and biased information stalking and killing people on its own, you are not alone. And it’s not a done deal. At least not yet.
JESSE:
You know, it’s not a foregone conclusion about how they’re going to be built and, you know, what type of ethics, values, and social mores are going to be instantiated in them. That’s up to us, right? And you know, that might be a little bit too optimistic because, I mean, look at the shit show that is Silicon Valley right now, and it’s not up to us, and surveillance capitalism is just embedded in our lives. No one asked me, right? But you know, there are ways that we can maybe look to, like, the dumpster fire that’s been social media and really think about some lessons learned, and best practices, and ways to avoid some of the pitfalls that have occurred, to the extent that we can.
[Flash Forward closing music begins – a snapping, synthy piece]
Flash Forward is hosted by me, Rose Eveleth, and produced by Julia Llinas Goodman. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Mattie Lubchansky. The voices from this future were played by Richelle Claiborne, Brett Tubbs, Ashley Kellem, Brent Rose, Zahra Noorbakhsh, Henry Alexander Kelly, Shara Kirby, Anjali Kunapaneni, Chelsey B Coombs, Tamara Krinsky, Keith Houston, and Jarrett Sleeper. The theme music for X Marks the Bot is by Ilan Blanck. You can find out more about all of these amazing people in the show notes. Please do check out their work. They’re all awesome.
A quick reminder that this is the beginning of the end of Flash Forward. There are, after this episode, two more episodes left. And I do hope that you can join us at the event to celebrate the end of the show. We’re calling it the Flash Forward 1.0 Wrap Party. That’s on December 17th. If you go to the Flash Forward website, you can find out how to get an invite. You will need to get an invite and sign up so that I can send you the link to the event. That will be, again, on December 17th, and I hope you can join us.
Like I mentioned, this episode is going to have a very long bonus episode attached to it for people who are supporters because there was so, so much stuff we couldn’t get to. So if you want to get that, support the show as it moves into its next phase. You can go to FlashForwardPod.com/Support. If financial giving is not happening for you, that’s totally understandable. A way you can help the show is by leaving a review on Apple Podcasts. It’s our last two months of the show and it would be really nice to end with some nice reviews. So if you haven’t reviewed the show already, now is a great time to do it.
That’s all for this future. Come back next time and we’ll travel to a new one.
[music fades out]