
I, Robot, Am The LAW!

Robot Dogs, Lethal Autonomous Weapons, and the Dawn of Robo-dystopia

by J.D. Schmidt

Back in 1942, when science fiction author Isaac Asimov introduced his “Three Laws of Robotics” in a short story, he was imagining how we could grapple with some of the problems that might arise as human beings worked with increasingly autonomous thinking and acting machines. In Asimov’s vision of a future society in which robots played an ever more important part, his First Law of Robotics states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Asimov might well roll over in his grave at DHS’s February 1, 2022, announcement that its Science and Technology Directorate (“S&T”) is helping Customs and Border Protection field-test 4-legged robot “dogs” for surveillance missions along the U.S.-Mexico border. He might spin a little faster upon learning that Ghost Robotics, the company that makes these digital snoop-hounds, has contracted with another up-and-coming military-industrial company, SWORD Defense Systems, to produce a version of the bot mounted with a “special purpose unmanned rifle.”

These two developments put the robot dogs’ creators on the cutting edge of a global trend in which governments and law enforcement agencies push for the development and deployment of semi- or fully autonomous machines capable of surveilling, targeting, maiming, and killing human beings. In the worst-case scenario, robotic systems currently in development, and some already in use, may even be granted the power to make the choice of when, where, and on whom to deploy lethal force. With this step, governments are ceding the power of judge, jury, and executioner to a mechanical device with a digital “intelligence” composed of algorithms. Human rights groups and civil libertarians are organizing at the local, state, national, and international levels to oppose the crossing of this deadly bright line.

As with most death-dealing technological innovations, fully autonomous killer robots appear to be making their debut in theaters of war. According to a United Nations report from May of 2021, it may already be happening with the deployment of Turkish drone bombs on battlefields in North Africa. A March 17, 2022, newsletter from the Bulletin of the Atomic Scientists reports that analysts have detected the possible use of a Russian-made drone bomb capable of fully autonomous action in Ukraine. The UN classifies machines like these as “Lethal Autonomous Weapons” or LAWs. Their actual use in fully autonomous mode has not yet been conclusively confirmed as of press time, but the high probability of it in both of these cases means that humanity may already have set foot into this dark new frontier.

On the home front, DHS’s decision to field-test its new easily-weaponized digital hounds with Customs and Border Protection—targeting migrants, asylum seekers, and climate refugees under the banner of “drug interdiction”—is perfectly in line with a long-standing tradition of new and dangerous military-style tactics and technologies being tested out on the most vulnerable populations, from militarized SWAT teams developed to quell urban uprisings, to the ever-increasing array of “less lethal” weapons and digital monitoring systems being deployed against those incarcerated in U.S. jails and prisons. Federal law enforcement getting in on the arm-able robot game raises the question: how long until the military-industrial-police pipeline funnels these corporate killing machines into the hands of local law enforcement agencies? As Matthew Guariglia writes in a July 16, 2021, editorial for the Electronic Frontier Foundation (“EFF”), “Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime.”

The DHS blog post describes the unarmed Ghost Robotics Vision 60 robot “dogs” being deployed along the border as “Automated Ground Surveillance Vehicles” or AGSVs. These 100-pound “force multipliers,” as DHS calls them, can be fitted with a variety of surveillance equipment, including radios, cameras, and infrared sensors, and may be monitored or directly controlled via laptop computer or handheld remote. The testing programs include evaluating the robots’ abilities to traverse a wide variety of rugged terrain in the border regions, from desert sands to grasslands, rocky hillsides, and ravines. S&T personnel tout the robots’ legs as their ultimate advantage—existing robots that move on wheels or treads are simply not as adaptable as a legged machine with sensors in its “feet” and limbs that help it move across uneven and angled surfaces. The quadrupeds are also being tested alongside Customs and Border Protection personnel in the types of human-built environments that CBP’s “operators” encounter when hunting human beings in cities and suburbs.

The robo-dogs’ ability to operate semi-autonomously is a huge selling point, according to DHS S&T. They can accompany human personnel or be sent into the field alone to perform programmed patrol missions—and can adapt to changing conditions as they go, all while sending real-time video and other types of data back to their human monitors. The Department of Defense is also testing out Ghost Robotics quadrupeds on similar missions surveilling the perimeters of military bases. These two programs make the quad-bots the federal government’s entry into the field of next-generation robotic policing. As local, state, and now federal law enforcement agencies increasingly embrace semi- and fully autonomous computerized machines for surveillance purposes, civil liberties groups, politicians, and others have been sounding the alarm about the invasions of privacy and other harms that such programs could entail. The warnings these critics put forward stand in direct contradiction to the benign, “cool,” and even cutesy image put forward in promotions such as the DHS blog post, articles by fawning media outlets, and videos pushed online by the robots’ designers.

The EFF is just one of the many voices of popular reaction against the use of robot “dogs” in surveillance and other operations by police forces. A 4-legged bot leased by the New York City Police Department was returned to its manufacturer, Boston Dynamics, when residents, local and national politicians, and others reacted in anger after it was deployed in a public housing project. As reported in a story on the CLN website, the quadruped bot’s use by cops was considered so alarming that New York City council members drafted legislation to prevent police from arming such robots in the future. [See: CLN online, NYPD Forced to Put Down “Digidog” Robot, July 21, 2021.] A similar Boston Dynamics bot generated a storm of criticism when police in Honolulu used it to take the temperatures of residents in a homeless shelter as part of COVID-19 protocols.

Much of the criticism in these cases revolves around the invasion of privacy, coupled with the robo-dogs’ use against already over-policed and criminalized populations. As EFF points out, Customs and Border Protection’s field-test program in the desert Southwest is just one more imposition on the civil liberties of people in what is already one of the nation’s most-surveilled regions. And CBP has an abysmal track record on human rights. A February 2022 EuroNews article notes that Human Rights Watch “uncovered 160 internal reports of misconduct and abuse, including physical and sexual abuse, of asylum applicants at the hands of officers within several DHS components, particularly CBP officers and Border Patrol agents between 2016 and 2021.”

Of course, robotic “dogs” are not the only type of robot being used by U.S. law enforcement to surveil, harass, and even attack human beings. Over a thousand law enforcement agencies in the U.S. have made regular use of flying drones since the mid-2000s. Robots have also become established in the fields of explosives disposal and hostage crisis intervention. In addition to Boston Dynamics’ and Ghost Robotics’ quadrupeds, the new generation of AGSVs used by law enforcement also includes the Knightscope corporation’s large, rolling, dome-shaped robots. All of these robots are being used in increasingly invasive ways. As described in recent CLN articles, these “robo stalkers” are being programmed and deployed to do everything from reading license plate numbers and collecting cell phone data, to scanning and learning to recognize faces while surveilling groups of people in the street for “suspicious behavior.” [See: CLN, March 2021, p.50; April 2021, p.50.] Built into all of this programming is the potential for error and abuse, either through the transmission of human biases such as racism or via software glitches and mechanical failures.

Already, in just the first few years of the dawn of this brave new world, police and private “security” robots are breaking Asimov’s First Law, either by accident or, increasingly, by design. There has been at least one incident in which the most benign-seeming of surveillance robots, a Knightscope corporate-security bot, accidentally injured a human being. According to a 2016 article in the Los Angeles Times, one of the company’s semi-autonomous trashcan-shaped machines bumped into and then ran over a toddler while on patrol at a Silicon Valley mall. Luckily, the child was only slightly injured. The repercussions for Knightscope from this incident, other than feeling compelled to issue a public apology and having its fleet of bots at the mall temporarily docked, are unclear. More disturbingly, that same summer, Dallas police used a bomb-disposal robot armed with a pound of C4 to kill the suspect in a sniper attack on cops during an anti-police brutality protest. A grand jury declined to indict any of the officers involved in that action.

The robot used by Dallas PD, a Northrop Grumman Remotec Andros Mark V A-1, is typical of the wheeled or treaded devices used by many police departments in bomb-disposal scenarios. But such robots have other uses, some of which fly in the face of the benign, even cuddly image cultivated by robot manufacturers and law enforcement. Robots like the one used to kill the sniper suspect are routinely used to “disorient and incapacitate suspects that are barricaded,” according to a July 2016 CNN investigative report. Usually, these attacks are carried out using “less lethal” munitions such as a flash-bang grenade. Militaries the world over are already using wheeled and tracked robots similar to police bomb-squad bots. More are in development every year, with applications running the gamut from communications and surveillance, to logistics support, to mobile, semi-autonomous firearms and rocket launchers.

Enter Ghost Robotics’ semi-autonomous, sniper-rifle-armed 4-legged “dog.” The Ghost Robotics/SWORD Defense Systems sniper-dog differs from other military robots primarily in its size, shape, and mode of transportation. In a puff piece published in Forbes, Ghost Robotics CEO Jiren Parikh defends his company’s trajectory with its legged robots. The article says Parikh “supports Ghost Robotics’ defense customers to outfit the robots as they see fit to keep people safe. Further, he says, the robot dogs with weapons are more akin to drones, since they are not fully autonomous and require a remote human operator to make any decision to fire. ‘They put weapons systems on all sorts of autonomous tanks, autonomous track robots and aerial drones. What is a guided missile? It’s a robot. They’ve been around for a decade. We just happened to build a robot with legs ... [t]he idea that these robots are sentient beings and have AI to do whatever is silly.’”

Nonetheless, as evidenced by New York City lawmakers’ efforts to ban such robots from being armed by police, something about a deliberately bio-mimetic machine being used to injure or kill humans produces a visceral reaction in many people. We have all seen this movie—sci-fi and fantasy films from Blade Runner and Star Wars to Robocop and Chappie depict just such nightmarish scenarios, in which artificially intelligent machines that mimic biological creatures are used to kill actual living beings, including humans. Despite “Pentagon policy ... that all robotic weapons should be under the control of a human operator,” according to an October 2021 New Scientist article on the Ghost Robotics/SWORD collaboration, “the technology for small drones to select and attack targets autonomously has already been developed.”

In fact, the New Scientist article goes on to state, “LAW” technology—a fully autonomous robotic weapons system guided not by a human operator or monitor, but by its own sophisticated “artificial intelligence” programming—may already have been used on human targets in Libya. In a May 2021 article, the Bulletin of the Atomic Scientists cites a United Nations report on a 2020 incident in the Libyan civil war in which a Turkish STM Kargu-2 flying drone allegedly “hunted down and remotely engaged” retreating soldiers. The Kargu-2 is “a ‘loitering’ drone that can use machine learning-based object classification to select and engage targets,” according to the Bulletin. The article cites the drone’s manufacturer, STM, as touting the weapon’s “anti-personnel” capabilities, demonstrated to hideous effect in a promotional video in which the small flying robot makes a steep dive toward a group of mannequins on the ground before detonating a fragmentation explosive that sprays the crowd of human-shaped targets with shrapnel.

Police agencies in the United States are already using flying drones to surveil our streets, most notably above supposedly protected First Amendment group activities such as protests against police brutality in Minneapolis and the anti-pipeline demonstrations at Standing Rock. With ever more flying drones overhead, overgrown R2D2s rolling around malls and parking lots, and robotic dogs roaming woodlands, parks, hillsides, deserts, and cities, and even walking up the stairs of our buildings, is it inevitable that some of these systems will be weaponized, unleashed from any direct control by human handlers, and turned on those of us who openly rebel or who simply look “suspicious” according to their algorithms? Civil libertarians would like to say that it is not.

It is a huge step, legally and ethically, to shift from semi-autonomous lethal weapons to fully autonomous ones, and groups around the United States and around the world are working to fight this shift. Human Rights Watch (“HRW”) is leading a coalition of organizations in the international Campaign to Stop Killer Robots. As HRW’s website explains, “There are serious doubts that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity.” However, as the EFF notes in its July 2021 editorial, the U.S. government is already on a different trajectory. The U.S. Defense Advanced Research Projects Agency (“DARPA”) began testing Lethal Autonomous Weapons in 2020. This testing was reported just weeks after the National Security Commission on Artificial Intelligence recommended against U.S. participation in the international treaty banning LAWs that is being proposed by the Campaign to Stop Killer Robots.

The results of the struggle within the United Nations and other transnational organizations over Lethal Autonomous Weapons may have significant implications for the deployment of potentially “killer” robots in domestic law enforcement as well, thanks to the increasingly interwoven infrastructure of government and private entities that connect military training, technology, and hardware to policing at the federal, state, and local levels. As for DHS/CBP’s currently-unarmed robo-dogs, EuroNews quotes a tweet from the American Civil Liberties Union stating that “DHS’s plan to use robot patrol dogs on its borders is a civil liberties disaster in the making. The government must retract this dangerous proposal, and the Biden administration must put the brakes on our country’s slide into an anti-immigrant dystopia.”  

Sources: Electronic Frontier Foundation, Bulletin of the Atomic Scientists, EuroNews, Los Angeles Times, Human Rights Watch, dhs.gov, criminallegalnews.org, New Scientist, Reuters, cnn.com, democracynow.org, npr.org
