Axon’s Plan to Develop Taser Drone Prompts Mass Exodus from Ethics Board
By Catherine Dorrough
In the wake of the mass shootings in Buffalo, NY, and Uvalde, Texas, Axon announced plans to develop a non-lethal, remotely operated Taser drone capable of incapacitating an active shooter. The company positioned the concept as part of a long-term plan to address mass shootings.
The announcement, however, came on the heels of a vote by the Axon AI Ethics Board in which a majority of members voted to advise Axon not to proceed with the technology. Furthermore, WIRED reported that Axon did not ask the board to consider any scenarios involving schools. The company briefly mentioned the board’s initial objection in its June 2 press release detailing the project.
After a public outcry, the company quickly walked back its plans. In a blog post, founder and CEO Rick Smith promised to pause the drone project “to further engage with key constituencies to fully explore the best path forward.”
The wheels were already in motion, though; on June 6, one day after Smith’s post, nine board members quit in protest. Their letter reads, in part:
“Only a few weeks ago, a majority of this Board—by an 8-4 vote—recommended that Axon not proceed with a narrow pilot study aimed at vetting the company’s concept of Taser-equipped drones. In that limited conception, the Taser-equipped drone was to be used only in situations in which it might avoid a police officer using a firearm, thereby potentially saving a life. We understood the company might proceed despite our recommendation not to, and so we were firm about the sorts of controls that would be needed to conduct a responsible pilot should the company proceed. We just were beginning to produce a public report on Axon’s proposal and our deliberations.
“None of us expected the announcement from Axon last Thursday, June 2 regarding a very different use case. That announcement—that the company’s goal is to entrench countless pre-positioned, Taser-equipped drones in a variety of schools and public places, to be activated in response to AI-powered persistent surveillance—leads us to conclude that after several years of work, the company has fundamentally failed to embrace the values that we have tried to instill.”
The resignation letter also took specific aim at the company’s push to use real-time, persistent surveillance in its products. “This type of surveillance undoubtedly will harm communities of color and others who are overpoliced, and likely well beyond that,” the members wrote.