# Giving algorithms a sense of uncertainty could make them more ethical

Eckersley puts forth two possible techniques for expressing this uncertainty mathematically.

He begins with the premise that algorithms are typically programmed with clear rules about human preferences.

We’d have to tell it, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers—even if we weren’t actually sure or didn’t think that should always be the case.

The algorithm’s design leaves little room for uncertainty.

The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty.

You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn’t specify a preference between friendly soldiers and friendly civilians.
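A partial ordering like this can be sketched in a few lines of code. This is an illustrative sketch, not Eckersley's implementation: the outcome names and the `compare` helper are invented, and the key point is that some pairs deliberately have no answer.

```python
# Known preferences as (better, worse) pairs. Note there is deliberately
# no pair relating friendly soldiers to friendly civilians.
PREFERENCES = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
}

def compare(a, b):
    """Return 'prefer' if a is preferred to b, 'avoid' if b is preferred
    to a, or None when the pair is incomparable."""
    if (a, b) in PREFERENCES:
        return "prefer"
    if (b, a) in PREFERENCES:
        return "avoid"
    return None  # the system is explicitly uncertain here

print(compare("friendly_soldier", "enemy_soldier"))      # prefer
print(compare("friendly_soldier", "friendly_civilian"))  # None
```

The `None` branch is the substance of the technique: rather than forcing a ranking between every pair of outcomes, the program is allowed to report that no ranking was specified.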

In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it.

Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers.

A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers.

The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
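Uncertain ordering can be sketched similarly. In this hedged example (the probabilities and outcome names are illustrative, taken from the three-quarters/one-quarter split above), each complete ranking carries a weight, and the system can report the probability that one outcome is preferred to another instead of a single verdict.

```python
# Several complete preference rankings, each with a probability attached.
ORDERINGS = [
    # (probability, ranking from most to least preferred)
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def preference_probability(a, b):
    """Probability that outcome a is preferred to outcome b, summed
    over the weighted rankings that place a above b."""
    return sum(p for p, ranking in ORDERINGS
               if ranking.index(a) < ranking.index(b))

print(preference_probability("friendly_soldier", "friendly_civilian"))  # 0.75
print(preference_probability("friendly_soldier", "enemy_soldier"))      # 1.0
```

Where the probability is neither 0 nor 1, the system has a principled reason to surface the disagreement to a human rather than resolve it silently.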

Say the AI system was meant to help make medical decisions.

Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost.
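The menu-of-options idea can be sketched as picking the best candidate per objective rather than collapsing the objectives into one score. The treatments and their scores below are entirely invented for illustration.

```python
# Hypothetical treatments scored on three competing objectives.
TREATMENTS = {
    "treatment_a": {"life_span_years": 9.0, "suffering": 0.7, "cost": 40_000},
    "treatment_b": {"life_span_years": 6.5, "suffering": 0.2, "cost": 25_000},
    "treatment_c": {"life_span_years": 5.0, "suffering": 0.4, "cost": 8_000},
}

def menu_of_options(treatments):
    """Return one best candidate per objective, leaving the
    trade-off between objectives to a human."""
    return {
        "maximize life span":
            max(treatments, key=lambda t: treatments[t]["life_span_years"]),
        "minimize suffering":
            min(treatments, key=lambda t: treatments[t]["suffering"]),
        "minimize cost":
            min(treatments, key=lambda t: treatments[t]["cost"]),
    }

for objective, choice in menu_of_options(TREATMENTS).items():
    print(objective, "->", choice, TREATMENTS[choice])
```

Each entry in the menu is accompanied by its full score vector, so the human making the final call sees what each choice gives up on the other objectives.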

“Have the system be explicitly unsure,” he says, “and hand the dilemma back to the humans.”

Carla Gomes, a professor of computer science at Cornell University, has experimented with similar techniques in her work.

In one project, she’s been developing an automated system to evaluate the impact of new hydroelectric dam projects in the Amazon River basin.

The dams provide a source of clean energy.

But they also profoundly alter sections of river and disrupt wildlife ecosystems.

“This is a completely different scenario from autonomous cars or other [commonly referenced ethical dilemmas], but it’s another setting where these problems are real,” she says.

“There are two conflicting objectives, so what should you do?”

“The overall problem is very complex,” she adds.

“It will take a body of research to address all issues, but Peter’s approach is making an important step in the right direction.”

It’s an issue that will only grow with our reliance on algorithmic systems.

“More and more, complicated systems require AI to be in charge,” says Roman V. Yampolskiy, an associate professor of computer science at the University of Louisville.

“No single person can understand the complexity of, you know, the whole stock market or military response systems. So we’ll have no choice but to give up some of our control to machines.”

An earlier version of this story originally appeared in our AI newsletter The Algorithm.