Crime and Big Data: Autopilot vs. Power Steering

A host of recent articles and books have decried the use of “big data” to make decisions about individual behavior. This is true in commerce (Amazon, Facebook, etc.), but also in criminal justice, my field of research. Moreover, some of the algorithms that forecast dangerousness are proprietary, making it all but impossible to determine the basis for – and therefore to challenge – a sentence that rests on the algorithm’s outcome. Recent books, such as Weapons of Math Destruction and The Rise of Big Data Policing, underscore the dangers of such activity. This is the essence of an autopilot approach to forecasting behavior – hands off the wheel, leave the driving to us.

There is some research that supports this type of algorithmic decision-making. In particular, Paul Meehl, in Clinical versus Statistical Prediction, showed that, overall, clinicians were not as good as statistical methods at forecasting failure on parole or the efficacy of various mental health treatments. True, that book was written over fifty years ago, but it seems to have stood the test of time.

It is dangerous, however, to give the algorithm the last word, which all too many decision-makers are wont to do (and against which Meehl cautioned). Too often, algorithms based on so-so (i.e., same-old, same-old) variables – age, race, sex, income, prior record – are used to “predict” future conduct, ignoring other variables that may be more meaningful at the individual level. And the algorithms may not be sufficiently sensitive to real differences: two people may receive the same score even though one started out committing violent crime and then moved on to petty theft, while the other started out with petty crime and graduated to violence.
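
To make that point concrete, here is a minimal sketch in Python of how a purely additive score built on static variables flattens out trajectory. The function, weights, and offense labels are invented for illustration – no real risk instrument works exactly this way – but the order-blindness is the same.

```python
# Toy additive risk score built only from static "so-so" variables and a count
# of prior offenses. The weights and the function itself are invented for
# illustration; they do not correspond to any real risk instrument.

def static_risk_score(age, priors):
    """Return a hypothetical risk score from age and a list of prior offense types."""
    score = 0
    if age < 25:
        score += 2                      # youth adds risk
    score += len(priors)                # each prior adds a point; order is ignored
    if "violent" in priors:
        score += 3                      # any violent prior adds a flat penalty
    return score

# Offender A began with violence and has since de-escalated to petty theft.
offender_a = ["violent", "petty", "petty"]
# Offender B began with petty theft and has escalated to violence.
offender_b = ["petty", "petty", "violent"]

print(static_risk_score(24, offender_a))  # 8
print(static_risk_score(24, offender_b))  # 8 -- identical scores, opposite trajectories
```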

That is, the fact that a person has a high recidivism score based on the so-so variables should be seen as a threshold issue, a flag of a potential barrier to desistance from crime. It should be followed by a more nuanced look at the individual’s additional life experiences, which do not fit into simple categories and therefore cannot be included as “variables” in the algorithms. Everyone has an age and a race, but not everyone was abused as a child, was born in another country, or spent their teen years shuffling through foster homes. These factors (and, just as important, their timing and sequence) are not part of the algorithm, yet they may be as determinative of future behavior as the aforementioned variables. This is the essence of a power steering approach to forecasting behavior – you crunch the data, but I decide how to use it and where to go.

Regarding power steering, I’m sure that many of you would rather look at an animated map of the weather heading your way than base your decision (umbrella or not?) on a static (autopilot) forecast. (BTW, does a 30 percent chance of rain refer to the likelihood of my getting wet during a given time period, or to the fact that 30% of the area will get rain that may skip me entirely?) The same issues arise in crime analysis. A few years ago I coauthored a book on crime mapping, which introduced the terms that head this post. In that book we described the benefit of giving the crime analyst the steering wheel, to guide the analysis based on his or her knowledge of the unique time and space characteristics of the areas in which the crime patterns developed.

In summary, there’s nothing wrong with using big data to assist with decision-making. The mistake comes in using such data to forecast individual behavior to the exclusion of information that is not amenable to data-crunching because it is highly individualistic – and that may be as important in assessing behavior as the aforementioned variables.

Author: Mike Maltz

Michael D. Maltz is Emeritus Professor of Criminal Justice and of Information and Decision Sciences at the University of Illinois at Chicago. He is currently an adjunct professor of sociology at the Ohio State University. His formal training is in electrical engineering (BEE, Rensselaer Polytechnic Institute, 1959; MS and PhD, Stanford University, 1961 and 1963), and he spent seven years in that field. He then joined the National Institute of Law Enforcement and Criminal Justice (now the National Institute of Justice), where he became a criminologist of sorts. After three years with NIJ, he spent thirty years at the University of Illinois at Chicago, during which time he was a part-time Visiting Fellow at the US Bureau of Justice Statistics. Maltz is the author of Recidivism, coauthor of Mapping Crime in Its Community Setting, and coeditor of Envisioning Criminology.

8 thoughts on “Crime and Big Data: Autopilot vs. Power Steering”

  1. Have you seen the recent NBER working paper on this subject? https://www.nber.org/papers/w23180. They find that, compared to human judges making pre-trial release decisions, their algorithm could either release a larger number of people who go on to commit the same number of crimes while on pre-trial release, or release the same number of people as the judges do while those released commit dramatically fewer crimes. Essentially they can tune their algorithm either to maximize the freedom of pre-trial detainees while holding crime constant, or to maximize public safety while holding the freedom of pre-trial detainees constant. Their data are from NYC and include the most serious instant offense, prior offenses (and their outcomes), and age. No race, sex, or income.

    1. Interesting — but that's using the algorithm alone. I would imagine (hope) that a judge, having the algorithm as a supplementary source of information, would be able to make better decisions than by using just the algorithm. To further the "power steering" metaphor, it would be even better if the judge could tune the algorithm to change weights according to his/her assessment of unmeasured factors, and arrive at an even better outcome (a rough code sketch of this idea follows at the end of this thread).

      1. Wouldn't a judge need big data or the inferences properly drawn from it to make the individualized adjustments you're suggesting? Otherwise he or she is just using fallible intuition to weigh the significance of the factors not included in the algorithm.

        1. I'm not thinking of intuition alone as much as I am of "facts on the ground." Suppose a person has a low risk of recidivism, but it is well-known that there's a gang rivalry in the defendant's neighborhood that has recently been heating up, and the individual is a known gang affiliate. In that case a low recidivism score might be outweighed by local knowledge.

          Or suppose a likely recidivist (according to the algorithm) has a very supportive family (how can this be measured using big data?), and has taken positive steps that the algorithm doesn't catch?

          1. It might be reasonable for a judge to take it as obvious (and not requiring a lot of evidence) that a current gang member is likely to participate in gang conflicts. But you wrote that

            "not everyone was abused as a child, was born in another country, or spent their teen years shuffling through foster homes"

            I doubt that it is reasonable to treat being born in another country, for example, as having some significance for sentencing without a lot of data to back up that judgment.
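
To sketch the "tune the weights" idea from this thread: the algorithm supplies a base score, and the judge adds or subtracts documented, locally known factors the model cannot see. This is purely illustrative Python; the function, point values, and adjustment reasons are hypothetical, not any real instrument.

```python
# Hypothetical "power steering" adjustment: a statistical base score plus
# judge-supplied, on-the-record adjustments for factors the model cannot see.
# All names, point values, and reasons are invented for illustration.

def adjusted_risk(base_score, adjustments):
    """Combine an algorithmic base score with documented local adjustments.

    base_score  -- the model's output (say, on a 0-10 scale)
    adjustments -- dict mapping a stated reason to points added or subtracted
    """
    total = base_score + sum(adjustments.values())
    return max(0, min(10, total)), adjustments   # keep the reasons auditable

# Low algorithmic score, but a known gang affiliate in a rivalry that has
# recently flared up (local knowledge not in the data).
print(adjusted_risk(2, {"active gang rivalry, known affiliate": +4}))   # (6, ...)

# High algorithmic score, but strong family support and documented positive
# steps the model cannot measure.
print(adjusted_risk(8, {"supportive family, steady progress": -3}))     # (5, ...)
```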

  2. The problem is that Big Data is only as good as its training set. So you get things like the facial-recognition result out of China this spring, where an algorithm was able to do better than chance at recognizing pictures of convicted criminals, and no one wondered whether that said something about police, judges, and juries. (My favorite along these lines was a study from the height of the drug craze in Florida — old but still illuminating — where mandatory reporting of suspected drug use by pregnant women ran 5:1 along the racial lines you would suspect, but anonymized urine samples showed no statistically significant difference in use.) Even a nominally objective measure such as being arrested for a new crime can be problematic.

  3. Big data, notably when analysed by artificial intelligence that can escape its programmers' intent, is dangerous to use for predicting individuals' behavior (as distinct from group actions) - and likely to be unfair if the algorithms are kept secret because they are proprietary to their developer. Here is an analysis of how much information should be made available to people whose liberty is at issue, in the US and Canada. http://www.slaw.ca/2017/07/24/proprietary-algorit
