“They have learned nothing, and forgotten nothing”

Are advocates of risk-needs assessment actually unable to understand swift-certain-fair, or just unwilling?

One of the difficult moments in a research career comes when you’ve made an unjustified attack on work you only partly understand (and desperately want not to understand) and get your hand slapped by the people you accused of being “accomplices” to a con job.

When you’ve demonstrably mis-stated the question, gotten the intellectual history completely wrong, missed most of the policy history, ignored almost all of the empirical evidence, and misquoted key implementation details of the idea you’re attacking, prudence generally counsels backing off.

Alas, as Talleyrand said of the restored Bourbons, some researchers learn nothing (about the world) and forget nothing (about their prejudices).

Of course, those are merely general remarks. Since I am an interested party, it would be out of place for me to comment on the rejoinder (pp. 75ff) to the (admirably restrained) critique (pp. 71ff) of an attack (pp. 57ff) on the idea of swift-certain-fair (SCF) sanctioning systems (mislabeled “HOPE”) from advocates of the competing assess-and-treat paradigm, which incorporates the Risk-Needs-Responsivity (RNR) assessment process. So I will outsource the commentary to the colleague who alerted me that the journal Federal Probation had finally published all three items. He summarizes the rejoinder:

The HOPE research is OK so far as it goes (we can’t find any fault with the conduct of the Hawaii Randomized Controlled Trial [RCT]), but it has limited external validity, and there is other research suggesting that threats have limited capacity to influence behavior. And here is a long list of bad things that will probably happen if HOPE is widely adopted.

Meanwhile, we know that RNR works, because we know that it works.

RCTs? We ain’t got no RCTs.  We don’t need no RCTs. We don’t have to show you any stinkin’ RCTs.

I will note, as a mere matter of objectively checkable fact, that the rejoinder addresses none of the substantive points in the critique; rather than either acknowledging or challenging the evidence and analysis that make nonsense of the claims in the original article, the rejoinder merely restates those claims at a higher pitch. And it ignores the suggestion in the critique that the difference of opinion might be adjudicated by doing an experiment, with one group of offenders assigned to RNR and the other to SCF. That might suggest - to someone with a suspicious mind - that the authors share my view about how that experiment would come out.

As Upton Sinclair remarked, it is remarkably hard to get someone to understand a point when his (or her) paycheck (or academic reputation) depends on not understanding it.

Author: Mark Kleiman

Professor of Public Policy at the NYU Marron Institute for Urban Management and editor of the Journal of Drug Policy Analysis. Teaches the methods of policy analysis as applied to drug abuse control and crime control policy, working out the implications of two principles: that swift and certain sanctions don't have to be severe to be effective, and that well-designed threats usually don't have to be carried out.

Books: Drugs and Drug Policy: What Everyone Needs to Know (with Jonathan Caulkins and Angela Hawken); When Brute Force Fails: How to Have Less Crime and Less Punishment (Princeton, 2009; named one of the "books of the year" by The Economist); Against Excess: Drug Policy for Results (Basic, 1993); Marijuana: Costs of Abuse, Costs of Control (Greenwood, 1989).

Contact: Markarkleiman-at-gmail.com

5 thoughts on ““They have learned nothing, and forgotten nothing””

  1. How does the SCF model fit in with prospect theory? People prefer a nice theory to inconvenient facts, and a recent high-prestige theory (Kahneman & Tversky) to an old one (Beccaria), so the reconciliation looks worthwhile from a cost-benefit point of view.

    I like your procedural justice point. Making serious judicial sanctions depend on an unaccountable and pseudoscientific white-coat or tweed-jacket assessment of "risk and need", rather than on facts about what you've done, looks unjust to many. There are cases where nothing else may be feasible - child abuse and psychotic killers - but that's not an argument for generalizing the practice where there are effective and procedurally fairer alternatives.

    1. One of the things that's become clear over the past 30 years or so (I'm going mainly with results as filtered through Schneier) is that people are really, really bad at estimating probabilities and expected values for rare events. No, even worse than that. (Which makes sense, because just to survive to adulthood you have to have less than a 0.01% chance of getting killed each day.) So anything that ramps up frequency and ramps down severity is going to help put things in a statistical regime our brains are better equipped to handle. (A rough numerical sketch of this point appears after the comments.)

  2. Agreed. One of the points I tried to make in When Brute Force Fails is that behavioral econ. puts a firm theoretical foundation under Beccaria's intuitions about swiftness and certainty. But over time I've gotten convinced that the procedural justice aspect is at least as important: thus Swift-Certain-FAIR.

  3. Shorter Duriez et al: "This new thing looks like it works, is simpler and more common-sense than what we're offering, and is rapidly becoming more popular. We're really, really scared."

    You're in good company, Mark. That's been the reaction in academic criminology to pretty much all of what has in fact driven real progress in crime control over the last forty years: general advances in policing (and specific innovations like problem-oriented and community policing); practical heuristics like situational crime prevention; disorder/broken windows theory; CompStat; focused deterrence; and all the rest. What's in fact moved the field along is new-facts-on-the-ground operational work, which has over time produced results and formal evidence that even academic criminology can't ignore. Stay with it.
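To put rough numbers on the arithmetic in comment 1.1 above, here is a minimal Python sketch. The figures are hypothetical and purely illustrative (the 2-day and 100-day sanction lengths, the 2% detection probability, and the 18-year horizon are my assumptions, not drawn from the post or from the HOPE evaluation): it compounds a 0.01% daily risk over a childhood and compares two sanction schedules with identical expected severity.

```python
# Back-of-the-envelope sketch (hypothetical numbers) for the probability points
# made in comment 1.1: rare-event risks compound, and equal expected severity
# can sit in very different probability regimes.

# 1. Survival to adulthood under a constant 0.01% daily risk of being killed.
daily_risk = 0.0001                      # 0.01% per day
days = 18 * 365                          # roughly a childhood
p_survive = (1 - daily_risk) ** days
print(f"P(survive to 18) at 0.01%/day: {p_survive:.2f}")   # about 0.52
# Even a 0.01% daily risk gives only a coin-flip chance of reaching adulthood,
# so the everyday risks our intuitions are tuned to are far smaller than that.

# 2. Two sanction regimes with the same expected number of days in jail.
certain_mild = 1.00 * 2                  # certain, mild: 100% chance of 2 days
rare_severe  = 0.02 * 100                # rare, severe: 2% chance of 100 days
print(f"Expected days -- certain-mild: {certain_mild}, rare-severe: {rare_severe}")
# Both come to 2.0 expected days, but only the certain-mild schedule lives in a
# probability range people judge reliably: the swift-certain-fair argument.
```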

Comments are closed.