Detain/Release: simulating algorithmic risk assessments at pretrial

Keith Porcaro · Jan 8

To date, the debate surrounding pretrial algorithmic risk assessment tools has focused on statistical quality and overall legality.

In 2017, when Jason Tashea and I first taught our Georgetown Law students about risk assessments, our lesson summarized those debates in a lecture and discussion.

Afterwards, we worried that students were taking away a simple, wrong lesson: that if a software tool is sufficiently “accurate,” it solves the problem at hand and requires no further scrutiny.

The discussion shouldn’t end there.

Software has framing power: the mere presence of a risk assessment tool can reframe a judge’s decision-making process and induce new biases, regardless of the tool’s quality.

This past fall, we wanted our students to engage with this broader, ecosystem-level issue, and to understand the far-reaching consequences that pretrial detention can have on defendants.

This reflects the overall goals of our practicum class, Criminal Justice Technology, Policy, and Law.

We want our students to not just be familiar with technology, but be able to think critically about how software and data-driven tools can influence legal ecosystems — sometimes in unexpected ways.

After all, law students aren’t training to be programmers or data scientists: they’re training to be advocates.

Legal education needs to equip law students to analyze novel technologies in context, and build arguments for or against them.

So, we built a simulation.

We call it Detain/Release.

How it works

Detain/Release puts students in the role of a county judge at a bail hearing, and prompts them to detain or release individual defendants pending trial.

In the simulation, students are served cases that look like this:

An example defendant.

Included with a distorted picture of the defendant are his or her biographical and charge details, a prosecutor’s recommendation, the defendant’s statement, and a risk assessment.

The risk assessment provides a low/medium/high likelihood that a defendant will fail to appear, commit a crime, or commit a violent crime.

Under the hood, it uses U.S. Census and Bureau of Justice Statistics data to generate defendants and alleged offenses.

The simulation can generate millions of unique defendants.
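To make the generation process concrete, here is a minimal sketch of how a synthetic defendant with a three-band risk assessment might be produced. The field names, charge list, and uniform random choices below are illustrative assumptions for this post, not the actual Census/BJS-derived model behind Detain/Release.

```python
import random

CHARGES = ["drug possession", "theft", "assault", "fraud", "robbery"]  # assumed charge list
RISK_LEVELS = ["low", "medium", "high"]

def generate_defendant(rng: random.Random) -> dict:
    """Generate one synthetic defendant with a three-part risk assessment."""
    return {
        "age": rng.randint(18, 65),
        "sex": rng.choice(["male", "female"]),
        "charge": rng.choice(CHARGES),
        "statement": "I have a job and a family to support.",  # placeholder text
        "prosecutor_recommendation": rng.choice(["detain", "release"]),
        # Each risk dimension is reported only as a low/medium/high band;
        # the underlying probabilities are never shown to the student.
        "risk": {
            "fail_to_appear": rng.choice(RISK_LEVELS),
            "commit_crime": rng.choice(RISK_LEVELS),
            "commit_violent_crime": rng.choice(RISK_LEVELS),
        },
    }

if __name__ == "__main__":
    rng = random.Random(42)
    print(generate_defendant(rng))
```

Because each defendant is assembled from independently sampled attributes, even a simple generator like this one can produce a practically unlimited number of unique cases.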

We added additional mechanics to make the world of the simulation more complete.

Detained defendants are placed in a county jail with limited capacity and stay there for a period of time, depending on their charge.

Released defendants may commit a violation or new crime while on release, which appears in the form of a (sometimes histrionic) local newspaper article:

A newspaper article criticizing the student for releasing a defendant after a violation.

These two mechanics simulate outside policy pressures.

To visualize this, we include two meters: jail occupancy, which increases with detentions, and public fear, which increases with violations.

If either reaches full capacity (indicated as a bar at the top of the user interface), the simulation ends.

These meters are optional; the simulation can run with or without them.
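The meter mechanics can be summarized in a few lines of code. This is a minimal sketch under assumed numbers: the capacities and one-unit increments below are illustrative, not Detain/Release’s actual tuning.

```python
from dataclasses import dataclass

@dataclass
class Meters:
    """Two policy-pressure meters: jail occupancy and public fear."""
    jail_capacity: int = 25   # assumed maximum number of detained defendants
    fear_capacity: int = 10   # assumed number of violations the public will tolerate
    jail_occupancy: int = 0
    public_fear: int = 0

    def detain(self) -> None:
        # Each detention adds to jail occupancy.
        self.jail_occupancy += 1

    def record_violation(self) -> None:
        # Each violation by a released defendant raises public fear.
        self.public_fear += 1

    def run_over(self) -> bool:
        """The run ends when either meter reaches full capacity."""
        return (self.jail_occupancy >= self.jail_capacity
                or self.public_fear >= self.fear_capacity)
```

Running with the meters disabled simply means `run_over()` is never checked, so students face no outside pressure on their decisions.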

Detain/Release in action.

To facilitate the use of Detain/Release in the classroom, we built a teacher-facing “remote control” to configure runs of the simulation with different options, and generate a virtual “classroom” for students to join.

We also built a real-time dashboard, which we projected onto the wall.

While students were using the simulation, we presented a simple live view, showing each participant’s progress during the current run.

The dashboard’s live view.

The dashboard also included a statistical view, which breaks down the actions of all participants in the classroom: how often are defendants detained? How influential is a risk assessment or a defendant’s story?

The statistical view.
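To illustrate the kind of aggregation such a view performs, here is a minimal sketch that computes an overall detention rate and a detention rate broken out by the risk assessment’s violent-crime band. The decision-record format is an assumption for illustration; the dashboard’s real schema is not described here.

```python
from collections import Counter

def detention_stats(decisions: list[dict]) -> dict:
    """decisions: [{"detained": bool, "violence_risk": "low"|"medium"|"high"}, ...]"""
    total = len(decisions)
    detained = sum(d["detained"] for d in decisions)

    by_risk = Counter()      # detentions per risk band
    risk_totals = Counter()  # defendants per risk band
    for d in decisions:
        risk_totals[d["violence_risk"]] += 1
        if d["detained"]:
            by_risk[d["violence_risk"]] += 1

    return {
        "detention_rate": detained / total if total else 0.0,
        "detention_rate_by_violence_risk": {
            level: by_risk[level] / risk_totals[level]
            for level in risk_totals
        },
    }
```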

Finally, we included a line-up of faces: defendants that students detained and released.

The lineup view.

How we taught it

Our lesson interspersed runs of the simulation with short lectures about risk assessment tools and class discussions about the students’ decision-making process.

Our mini-lectures covered the basics of bail, the mechanics and implementation of risk assessments, and legal and technical challenges to their pretrial use.

We wanted students to put themselves in the role of a judge, and think about how they would make pretrial detention decisions.

We began with a tutorial run that students completed on their own: ten defendants, no risk assessments.

After that, we divided students into groups and had them do three full runs of the simulation.

We wanted students to talk about how they made their decisions, during and after the simulation runs.

By the third run, we found that students were invested in the simulation and in the detention and release decisions they had made.

Throughout, we were deliberately opaque about how the simulation worked — about how accurate the risk assessments actually were, and about what probabilities “low”, “medium”, and “high” corresponded to.

For the most part, no one asked, either in our classroom or during our tests of the simulation.

Despite that, as they progressed through the lesson, students began to feel more confident and assured in their detention and release decisions.

They built interpretive systems to quickly make decisions from the information they had been given.

Some of their rules and systems were expected: high violence usually meant detention.

Others, less so: after seeing two female defendants fail to appear, one team began detaining women by default.

After the third and final run, we showed students the consequences of their decisions with one last dashboard view: how did pretrial detention decisions affect defendant outcomes?

The final dashboard view: consequences.

This reveal takes the air out of the room.

It drives home the framing power of the risk assessment tool we had presented them: students relied on it, deeply, despite receiving no promises about its accuracy, and “corrected” for it in random ways.

This had consequences.

For their part, our students responded positively to the simulation, and recommended we use it again in future classes.

We wanted our lesson to demystify algorithmic tools for students, and build a foundational understanding of how these tools work, how they’re applied in practice, and whether those applications are appropriate.

But most of all, we wanted students to reconsider risk assessments, not as tools for predicting the future, but as tools that can rewrite a defendant’s future long after their case has ended.

What’s next

We’re excited about Detain/Release, and plan to keep improving on it.

As we do, we’re hoping to engage with legal communities around the country.

To help us improve Detain/Release, we’re planning a limited release of this simulation to other professors and trainers, along with the additional classroom features.

We’re exploring how Detain/Release might be deployed as a research tool in addition to a teaching tool.

Our dashboard can provide deep insight into how people react to the simulation under different parameters.

Similarly, there are real-world applications that could leverage this tool to help train stakeholders adopting or contemplating a pretrial risk assessment.

If you’re interested in using Detain/Release for teaching, training, or research, and are willing to provide feedback, please reach out.
