AI is sending people to jail—and getting it wrong

AI might not seem to have a huge personal impact if your most frequent brush with machine-learning algorithms is through Facebook’s news feed or Google’s search rankings.

But at the Data for Black Lives conference last weekend, technologists, legal experts, and community activists snapped things into perspective with a discussion of America’s criminal justice system.

There, an algorithm can determine the trajectory of your life.

The US imprisons more people than any other country in the world.

At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were in other correctional facilities.

Put another way, 1 in 38 adult Americans was under some form of correctional supervision.

The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle.

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible.

This is where the AI part of our story begins.


Police departments use predictive algorithms to strategize about where to send their ranks.

Law enforcement agencies use face recognition systems to help identify suspects.

These practices have garnered well-deserved scrutiny over whether they in fact improve safety or simply perpetuate existing inequities.

Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals—even mistaking members of Congress for convicted criminals.

But the most controversial tool by far comes after police have made an arrest.

Say hello to criminal risk assessment algorithms.

Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend.

A judge then factors that score into a myriad of decisions that can determine what type of rehabilitation services particular defendants should receive, whether they should be held in jail before trial, and how severe their sentences should be.

A low score paves the way for a kinder fate.

A high score does precisely the opposite.
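
To make that concrete, here is a deliberately simplified, hypothetical sketch of the kind of mapping such a tool performs. Every feature name, weight, and cutoff below is invented for illustration (real tools are proprietary and far more elaborate), but the shape is the same: a profile goes in, a single number comes out, and a threshold on that number shapes what happens next.

# Hypothetical sketch of a risk assessment tool's core operation:
# a defendant's profile in, a single recidivism score out.
# All feature names, weights, and thresholds here are invented.

def recidivism_score(profile: dict) -> int:
    """Return a score from 1 (low risk) to 10 (high risk)."""
    weights = {  # invented, hand-picked weights for illustration only
        "prior_arrests": 0.6,
        "age_at_first_arrest_under_21": 1.5,
        "unemployed": 1.0,
    }
    raw = sum(weights[k] * profile.get(k, 0) for k in weights)
    return max(1, min(10, round(1 + raw)))

defendant = {"prior_arrests": 3, "age_at_first_arrest_under_21": 1, "unemployed": 1}
score = recidivism_score(defendant)

# A judge might then see guidance attached to the number.
if score >= 7:
    print(f"Score {score}: flagged high risk (e.g., detain before trial)")
else:
    print(f"Score {score}: flagged lower risk (e.g., eligible for release)")

Real tools draw on many more inputs and typically learn their weights from data rather than having them hand-picked, but the output is still a single number attached to a person.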

The logic for using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether for rehabilitation or for prison sentences.

In theory, it also reduces any bias influencing the process, because judges are making decisions on the basis of data-driven recommendations and not their gut.

You may have already spotted the problem.

Modern-day risk assessment tools are often driven by algorithms trained on historical crime data.

As we’ve covered before, machine-learning algorithms use statistics to find patterns in data.

So if you feed an algorithm historical crime data, it will pick out the patterns associated with crime.

But those patterns are statistical correlations—nowhere near the same as causation.

If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime.

But this is precisely what risk assessment tools do: they turn correlative insights into causal scoring mechanisms.
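
A toy example, using synthetic data and invented numbers rather than any real tool, makes the mechanism visible. In the sketch below, the only thing that differs between two groups of defendants is how heavily their neighborhoods are policed, yet a standard classifier trained on the resulting arrest records learns to assign one group a markedly higher "risk" score.

# Toy illustration of correlation becoming a scoring mechanism.
# Synthetic data: both groups behave the same, but heavier policing
# of low-income areas inflates their recorded re-arrest rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
low_income = rng.integers(0, 2, n)          # 1 = low-income defendant
rearrest_prob = 0.15 + 0.25 * low_income    # enforcement gap, not a behavior gap
rearrested = rng.random(n) < rearrest_prob

model = LogisticRegression().fit(low_income.reshape(-1, 1), rearrested)

# The trained model now scores low-income defendants as higher "risk,"
# even though income told us nothing about what caused the arrests.
for flag in (0, 1):
    p = model.predict_proba([[flag]])[0, 1]
    print(f"low_income={flag}: predicted recidivism risk = {p:.2f}")

Nothing about the model is malicious; it is simply echoing the enforcement patterns baked into its training data and presenting them back as a prediction about an individual.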

Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores.

As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.

Because most risk assessment algorithms are proprietary, it’s also impossible to interrogate their decisions or hold them accountable.

The debate over these tools is still raging on.

Last July, more than 100 civil rights and community-based organizations, including the ACLU and the NAACP, signed a statement urging against the use of risk assessment.

At the same time, more and more jurisdictions and states, including California, have turned to them in a Hail Mary effort to fix their overburdened jails and prisons.

Data-driven risk assessment is a way to sanitize and legitimize oppressive systems, Marbre Stahly-Butts, executive director of Law for Black Lives, said onstage at the conference, which was hosted at the MIT Media Lab.

It is a way to draw attention away from the actual problems affecting low-income and minority communities, like defunded schools and inadequate access to health care.

“We are not risks,” she said.

“We are needs.”

This story originally appeared in our AI newsletter The Algorithm.

To have it directly delivered to your inbox, subscribe here for free.
