© 2024 WFAE

Big Data And Bad Cops: Can An Algorithm Predict Police Misconduct? (Part 2)

Over the past year, the CMPD and data researchers with the University of Chicago have taken part in an ambitious experiment. Can an algorithm help stop police misconduct before an incident takes place?

In part 1 of our series, we reported on the differences between the algorithm, known as random forest, and the current Early Intervention System used by the CMPD.

In part 2, we look at the results of this experiment.

CREATING A RISK SCORE

For this experiment to work, the algorithm needs to be able to predict, and therefore help prevent, incidents like this.

http://www.youtube.com/watch?v=z6tTfoifB7Q

A police officer from a Dallas suburb yells at a group of African-American teens.

The officers were responding to a disturbance at a pool party.

Seemingly without provocation, one white officer pulls his gun, then holsters it. He then slams an African-American girl to the ground and drives his knee into her back.

All this was captured on video which quickly went viral. The officer resigned soon after.

This incident happened in the summer of 2015, around the same time Joe Walsh and his colleagues started work on an algorithm they hope will predict what makes an officer crack. Walsh opens up his laptop. “I can show you some of the models that have run in the past.”

He launches a program and numbers fill the screen.

Walsh is a researcher with the University of Chicago’s Center for Data Science and Public Policy. He reads off some of the data points. “In your career, how many preventable accidents have you had? What is your assignment history? How many hours have you worked, like secondary employment?”

Plus citizen complaints, Internal Affairs investigations, how and when officers use sick leave and more. The program runs through millions of data points and then, says Walsh, “We can rank every officer in the department from top to bottom on a risk score.”

The higher the score, the higher the risk that the officer will commit misconduct, ranging from crashing their patrol car to firing their gun without cause.

Or so the theory goes.
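The ranking step Walsh describes can be sketched in a few lines. In the real system the score comes from a random forest trained on career-history data; here a stand-in score function with invented weights and made-up officer records is used purely for illustration.

```python
# Stand-in for the model's predicted probability of an adverse incident.
# The weights and feature names here are invented for illustration; the
# actual system uses a random forest, not a hand-weighted sum.
def risk_score(officer):
    return (0.05 * officer["preventable_accidents"]
            + 0.03 * officer["citizen_complaints"]
            + 0.10 * officer["recent_high_risk_calls"])

# Hypothetical officer records.
officers = [
    {"id": 101, "preventable_accidents": 2, "citizen_complaints": 1, "recent_high_risk_calls": 4},
    {"id": 102, "preventable_accidents": 0, "citizen_complaints": 0, "recent_high_risk_calls": 1},
    {"id": 103, "preventable_accidents": 1, "citizen_complaints": 3, "recent_high_risk_calls": 0},
]

# Rank every officer in the department from highest risk to lowest.
ranked = sorted(officers, key=risk_score, reverse=True)
print([o["id"] for o in ranked])  # [101, 103, 102]
```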

TESTING THE THEORY

The CMPD provided Walsh data dating back to at least 2005. So the researchers had the algorithm go back in time as well. “We pretend it's January 1, 2010. And we only use data that was available on January 1, 2010, to build the model. And then we see how well it would have predicted for all of 2010.”

Then tweak the equation and do it again on another year.
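The backtest Walsh describes is a temporal split: train only on data available before a cutoff date, then see how well the model would have predicted the following year. A minimal sketch, with hypothetical records, might look like this:

```python
from datetime import date

def temporal_split(records, cutoff):
    """Split records into a training set (strictly before the cutoff)
    and a one-year evaluation window starting at the cutoff."""
    train = [r for r in records if r["date"] < cutoff]
    end = date(cutoff.year + 1, cutoff.month, cutoff.day)
    test = [r for r in records if cutoff <= r["date"] < end]
    return train, test

# "Pretend it's January 1, 2010": train on everything before that date,
# evaluate on all of 2010, then tweak and repeat with other years.
records = [
    {"date": date(2008, 5, 1), "adverse": 0},
    {"date": date(2009, 11, 3), "adverse": 1},
    {"date": date(2010, 6, 15), "adverse": 0},
    {"date": date(2011, 2, 2), "adverse": 1},
]
train, test = temporal_split(records, date(2010, 1, 1))
# train holds the 2008 and 2009 records; test holds only the 2010 record.
```

The point of the split is to avoid cheating: the model is never shown information that would not have existed on the pretend "today."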

Since this is a score-based system, the overall accuracy depends on how high a score the CMPD sets as the threshold for intervention. Still, the algorithm showed it could better flag problem officers than the current Early Intervention System used by CMPD, says Walsh. “Compared to the current EIS, we can basically increase true positives, which are the officers who are likely to have adverse incidents, by 15 percent.”

CMPD Chief Kerr Putney confirms those results. “There were some people that we’ve had, you know, some behavioral issues with in the past that were at the top of the list, so the science is bearing out some of the things that we could anecdotally see.”

The algorithm also decreases the number of false positives, the officers erroneously flagged by the current system, by about 30 percent. This accuracy allows the department to focus interventions, extra training, and so on, on the officers who really need it. “You’re talking about 300 or 400 cops,” says Putney, “that’s a lot of people that we’re not intervening with who are doing great work.”
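The true-positive and false-positive tradeoff above depends on where the intervention threshold is set. A minimal sketch, with made-up scores and outcomes, shows the mechanics:

```python
def flag_counts(scores, had_incident, threshold):
    """Count true positives (flagged officers who later had an adverse
    incident) and false positives (flagged officers who did not)."""
    tp = sum(1 for s, y in zip(scores, had_incident) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, had_incident) if s >= threshold and not y)
    return tp, fp

# Hypothetical risk scores and next-year outcomes for six officers.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
had_incident = [1, 1, 0, 1, 0, 0]

# A lower threshold flags more officers, catching more true positives
# at the cost of more false positives; a higher threshold does the reverse.
print(flag_counts(scores, had_incident, 0.5))   # (2, 1)
print(flag_counts(scores, had_incident, 0.85))  # (1, 0)
```

This is why the department, not the model, ultimately decides where the cutoff sits: the threshold encodes how many unnecessary interventions it will tolerate to catch one more genuinely at-risk officer.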

PREDICTING WHAT CAUSES GOOD COPS TO BREAK

But for this experiment to be truly successful, the algorithm needs to do more than just flag problem officers. It needs to be able to reliably identify what can push an officer past their breaking point.

So Walsh and the researchers went back through the data provided by the CMPD and looked at all kinds of things, like race. “We have race of the officer, we have race of the member of the public that the officer is interacting with.” But Walsh said race isn’t the best predictor of police misconduct. “We have not seen any evidence that that is predictive of whether an officer is likely to have an adverse incident in the next year or two.”

A better predictor, according to their data, is the number of times the same officer has responded to what’s known as a high-risk call. “It turns out the more suicide calls that you’ve been on, the more domestic violence calls, especially those involving children that you’ve been on recently, the more likely it is that you’re going to have an adverse incident in the future.”

Take that 2015 pool party incident where the officer slammed the African-American girl to the ground. “It turns out he had been on two suicide calls earlier that shift.”

That’s not a justification for the officer’s excessive use of force. But it may have played a part.

Knowing this, Walsh says, police departments can change which officers respond to which calls – thus preventing some misconduct by cutting down on these cumulative effects. “If there’s a suicide call that comes into CMPD and they have an officer that’s one minute away who’s already been on a suicide call that shift. But they have another officer who’s a minute and a half away who’s been on zero, they might want to consider sending the officer who’s a minute and a half away to that suicide call instead.”
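Walsh’s dispatch idea can be sketched as a simple rule: among officers roughly equally close to the call, prefer the one with the fewest high-risk calls this shift. The officer names, travel times, and call counts below are hypothetical.

```python
def choose_officer(candidates, max_extra_minutes=1.0):
    """Pick the officer with the fewest high-risk calls this shift among
    those within max_extra_minutes of the closest responder."""
    closest = min(c["eta_minutes"] for c in candidates)
    nearby = [c for c in candidates
              if c["eta_minutes"] <= closest + max_extra_minutes]
    return min(nearby, key=lambda c: c["high_risk_calls_this_shift"])

# Walsh's example: one officer a minute away who has already handled a
# suicide call this shift, another ninety seconds away who has handled none.
candidates = [
    {"name": "Officer A", "eta_minutes": 1.0, "high_risk_calls_this_shift": 1},
    {"name": "Officer B", "eta_minutes": 1.5, "high_risk_calls_this_shift": 0},
]
print(choose_officer(candidates)["name"])  # Officer B
```

The tradeoff is explicit in the `max_extra_minutes` parameter: how much response time a department is willing to give up to spread the cumulative strain of high-risk calls across its officers.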

The algorithm to predict and prevent misconduct is still just an experiment. The decision to put it in place rests in the hands of one man, CMPD Chief Kerr Putney.

“To be quite honest I was skeptical initially until I saw some of the initial results that they sent us back.”

But Putney adds that doesn’t mean he’s fully on board. He still has questions about the accuracy and reliability of the algorithm. “Our issue right now is that until we have confidence in the formula, it’s not something that we’re going to say hey, this is the basis of our new early intervention system.”

Putney says a decision on whether or not the CMPD will use the algorithm, or pull the plug on this experiment, will come before the end of October.

Either way, the experiment will go on. Three other police forces, including the Los Angeles County Sheriff’s Department, are now taking part.

Tom Bullock decided to trade the khaki clad masses and traffic of Washington DC for Charlotte in 2014. Before joining WFAE, Tom spent 15 years working for NPR. Over that time he served as everything from an intern to senior producer of NPR’s Election Unit. Tom also spent five years as the senior producer of NPR’s Foreign Desk where he produced and reported from Iraq, Afghanistan, Yemen, Haiti, Egypt, Libya, Lebanon among others. Tom is looking forward to finally convincing his young daughter, Charlotte, that her new hometown was not, in fact, named after her.