
The U.S. military has been killing people with robots for decades now, and the nation's local police now seem eager to get in on the action.
Drone strikes abroad have become so commonplace that the mainstream news media barely bothers to cover them anymore. For years, the military has also been using bomb disposal robots, which are basically glorified radio-controlled cars with tank treads that can be used to safely dismantle explosives from a distance. These robots have proliferated across local law enforcement in recent years, as military technology so often does. In 2016, the Dallas Police Department took the extraordinary step of mounting a bomb to its bomb disposal robot and using it to kill a mass shooting suspect.
Since that incident, civil liberties groups have raised concerns that law enforcement would seek to use robots to kill people in greater numbers. Police are now showing that those fears were not unfounded. In the last few months, at least two different cities have officially proposed allowing police officers to kill people using robots. In September, the Oakland Police Department discussed the possibility of using a Remotec Andros Mark V-A1 robot equipped with a shotgun to kill suspects during "high-risk, high-threat" events, as local journalist Jaime Omar Yassin and The Intercept reported. After public outcry over potentially being shot to death by a 'roided-out Mars Rover, the department abandoned the idea. For now, anyway.
The same debate is now playing out across the Bay. On Tuesday, the San Francisco Board of Supervisors voted to enact a policy that will allow its officers to kill people with robots. The ordinance's language states that the robots "will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD." The department's policy had previously explicitly banned police from unleashing drones and robots that could use force against people.
After one San Francisco supervisor, Dean Preston, issued some mild pushback to the proposal, the San Francisco Police Officers' Association (POA), the city's police union, posted a scathing Twitter thread claiming Preston did not want to stop mass shooters. The final vote was 8-3.
They are tools that require humans to operate. The proposed policy he's concerned about is for them to be used only as a last resort when risking human life—officers or public—is too great. It's sad that @deanpreston would rather grab a headline than save a life. pic.twitter.com/EddkL5LjAk
— San Francisco POA (@SanFranciscoPOA) November 29, 2022
Other departments seem likely to follow suit. Local cops now own thousands of Andros Mark V-A1 bots, according to The Intercept's report last month. Local police have also increasingly used drones to monitor all sorts of activities, from peaceful protests to drug deals.
The proliferation of police robots has sparked pushback from civilians, elected officials, and civil rights groups alike. In 2021, residents and local politicians castigated the New York Police Department for its use of a so-called "Digidog" (an unarmed, four-legged, robotic "dog" built by Boston Dynamics) after footage of its deployment in two separate incidents went viral. The city has since canceled a $94,000 contract to lease the robo-dog. While Boston Dynamics was one of multiple companies that signed an open letter last week condemning the use of armed robots, other companies haven't drawn such a line in the sand. As TechCrunch noted this week, a Philadelphia company named Ghost Robotics sells its products to the U.S. military and seems totally fine with rifles being strapped to its robo-dogs.
On Monday, the Electronic Frontier Foundation (EFF), a digital privacy nonprofit, issued a scathing statement slamming the San Francisco Police Department's request for killer robots.
"This is a spectacularly dangerous idea and EFF's stance is clear: police should not arm robots," the organization said.
Police technology goes through mission creep, meaning equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. We've already seen this with military-grade predator drones flying over protests, and police buzzing by the window of an activist's home with drones.
As EFF noted, the language governing police use-of-force policies tends to be extremely broad, including for the use of robots. While departments often claim that military technology will only be used in rare circumstances, the actual rules are often written to give cops leeway to do virtually whatever they want with the technology. In San Francisco, the policy would allow the department to deploy armed robots during nearly any situation: cops could use these weapons to kill people remotely so long as they claim to fear for their lives.
Likewise, the American Civil Liberties Union has long warned against the arming of police robots. In addition to the many other reasons these machines could be harmful, the ACLU has noted that officers likely do not have the same tactical or situational awareness when using a remote-controlled device. The simple misreading of a person's movements or other cues could lead to someone getting hurt, or worse.
"When officers are acting remotely, they don't have live, 360-degree, multi-sensory, intuitive situational awareness, and their perception of a situation is more likely to be flawed or confused, and force used inappropriately and/or on the wrong targets," the ACLU warned in 2016 after the Dallas police killing-by-robot.
Signals may also be degraded due to communications and control problems with the robot.
To get a true sense of the terrifying possibilities of this dystopian future, one need only look at the military's propensity for using drones to kill innocent people. Remote weapons make it easier for soldiers to kill people from afar, and news reports have shown us how severe the consequences can be when state actors misread critical evidence.
In 2021, the New York Times released a series of investigative reports detailing horrors committed by military drone pilots. The Times noted that people were often attacked or killed remotely based on misunderstandings or flimsy evidence. In one incident, officials authorized a drone strike on the home of Basim Razzo, an innocent man, after just 95 minutes of surveillance, killing his wife, daughter, brother, and nephew. In that brief window of monitoring, the Times reported, the military made stunning errors, such as interpreting the opening of a gate or the absence of women as "ISIS activity." Razzo survived but needed major surgery to correct multiple shattered bones and remove pieces of shrapnel from his body.
Despite these horrors, the U.S. military continues to use remote weapons with troubling frequency. And now, after Americans failed to stop the widespread use of this technology overseas, it may be coming home.
This story has been updated to reflect the San Francisco Board of Supervisors' approval of the new use of force policy regarding police robots.
Source: Mronline.org