Artificial Intelligence in Law Enforcement

By C. Ross Brown


A rapidly developing technology will change the world, perhaps in our lifetime. This technology, artificial intelligence (AI), is defined as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”1

Because of AI’s rapid progression, most media reporting lags well behind its development and implementation. Multiple areas related to public safety will benefit from AI’s capabilities; however, police departments must be cautious with the power it brings.


Predictive crime analysis and the subsequent enforcement operations will likely lead to decreased criminal activity. Thus, placing available assets where AI anticipates a rise in criminality is paramount. One expert said, “As threats to public safety mutate faster and faster, AI holds the promise of leveling the playing field for governments.”2 For instance, gunshot detection technology, already in use domestically and abroad, will be further enhanced by AI, allowing police agencies to coordinate and predict where gun violence may occur. 


Chief Deputy Brown serves with the Anderson County, South Carolina, Sheriff’s Office and is a graduate of FBI National Academy Session 281.

AI will also help protect officers and the public in the hazardous and often liability-laden areas of emergency vehicle responses and vehicle pursuits. To this end, scientists at NASA’s Jet Propulsion Laboratory are developing the Trusted and Explainable Police Artificial Intelligence (TruePAL) assistant. This technology aids the driver in operating an emergency vehicle and reduces the need to manually control most installed equipment. Additionally, TruePAL introduces intersection safety measures, roadside warnings, hazardous material identification, first aid assistance, and electric vehicle mitigation safety guidelines.3

One can imagine traffic control devices instantly programmed to vector traffic around or away from an ongoing pursuit. Further, the ability to remotely disable a fleeing vehicle and transmit location-based warning signals to other drivers would enhance survivability for all involved.


With all the benefits and capabilities associated with this developing technology, there are also significant concerns, primarily related to video analytics and facial recognition.

For instance, one company’s facial recognition software had 28 false matches when the images of 535 members of Congress were compared to 25,000 public mugshots. The company explained that the program runs at an 80% confidence level, but it recommends at least 95% for law enforcement applications.4 Given that margin of error, police agencies could conceivably charge the wrong individual with a crime, considering the sheer number of processed photos. 
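The difference between those two confidence settings can be illustrated with a minimal sketch. The probe names, mugshot identifiers, and similarity scores below are entirely hypothetical and do not reflect any vendor’s actual software or output; the point is only that matches which pass an 80% threshold may be discarded at the recommended 95% setting.

```python
# Hypothetical candidate matches: (probe photo, mugshot ID, model confidence).
# These values are invented for illustration, not drawn from any real system.
candidates = [
    ("probe_01", "mugshot_4417", 0.97),
    ("probe_02", "mugshot_0032", 0.83),  # passes at 80%, rejected at 95%
    ("probe_03", "mugshot_1250", 0.81),  # passes at 80%, rejected at 95%
    ("probe_04", "mugshot_9876", 0.62),  # rejected at both settings
]

def matches_at(threshold, pairs):
    """Return only the candidates at or above the confidence threshold."""
    return [(probe, mug) for probe, mug, score in pairs if score >= threshold]

print(len(matches_at(0.80, candidates)))  # 3 matches at the default setting
print(len(matches_at(0.95, candidates)))  # 1 match at the recommended setting
```

Raising the threshold trades away potential matches in exchange for fewer false ones, which is why a setting acceptable for commercial use may be unacceptable when a criminal charge could follow.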

Additionally, because humans control the interpretation of results and dictate where and how AI is used, the potential for misuse and for targeting certain demographics is a legitimate concern. The threshold for criminal activity, when viewed from a predictive perspective, must be clearly defined.

When is gathering data through surveillance cameras considered harassment?5 Video analytics used to pattern and predict the behavior of individuals involved in everyday moments should concern everyone, including those in law enforcement. This country is based on inherent freedoms granted in the Constitution that people, admittedly, sometimes take for granted or overlook in exchange for convenience and the luxury of new technology.  


Policy makers, political leaders, and law enforcement executives must avoid, at all costs, overreach and invasion into the privacy of citizens. The National Defense Authorization Act (NDAA) outlines broad directives and policy guidance for the development of AI.6 A reading of portions of the act, alongside the vast amount of associated information, suggests that AI’s rapid rate of progression outpaces any practical form of protective regulation.

On a granular level, it would be difficult to interdict any inappropriate applications of technology currently available. Law enforcement agencies may already be using AI programs that have raised concerns.7 One agency noted that its facial recognition program returned as many as 49 false results for every true positive.8
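A quick calculation shows why that ratio is alarming. Assuming exactly 49 false results for every one true match (the reported figure, taken at face value), the precision of the system — the share of returned matches that are actually correct — works out as follows:

```python
# Reported ratio: 49 false results for every 1 true match (assumed exact here).
false_per_true = 49
true_matches = 1

# Precision = true matches / all returned matches.
precision = true_matches / (true_matches + false_per_true)
print(f"{precision:.0%}")  # 2% — only 1 in 50 returned matches is correct
```

At that rate, nearly every lead the system generates would point investigators at the wrong person unless independently verified.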


Artificial intelligence will continue in its progression and implementation in police agencies. Although the potential for misuse is a concern, those in the field of serving and protecting others should be encouraged by what is on the horizon. With proper oversight and regulation, the application of AI in law enforcement is destined to offer many significant benefits.  


Chief Deputy Brown can be reached at


1 Michael Haenlein and Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence,” California Management Review 61, no. 4 (2019): 5-14.
2 Carl Ghattas, quoted in George Atalla, “How AI is Establishing Itself as the Newest Public Safety Officer,” EY, January 31, 2020.
3 Rob Lawrence, “IACP Quick Take: Could NASA’s AI Platform for Space Exploration Improve Officer Safety?” Police1, September 24, 2021.
4 Russell Brandom, “Amazon’s Facial Recognition Matched 28 Members of Congress to Criminal Mugshots,” The Verge, July 26, 2018.
5 Niraj Chokshi, “How Surveillance Cameras Could Be Weaponized with A.I.,” New York Times, June 13, 2019.
6 National Defense Authorization Act for Fiscal Year 2022, Public Law 117-81, 117th Cong. (December 27, 2021).
7 Ryan Suppe, “Orlando Police Decide to Keep Testing Controversial Amazon Facial Recognition Program,” USA Today, July 10, 2018.
8 Brandom.