Hey Google, sorry you lost your ethics council, so we made one for you

We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next.

If these people had been on ATEAC, the story might have had a different outcome.

“Be transparent and specific about the roles and responsibilities ethics boards have” Rashida Richardson, director of policy research at the AI Now Institute

In theory, ethics boards could be a great benefit when it comes to making sure AI products are safe and not discriminatory.

But in order for ethics boards to have any meaningful impact, they must be publicly accountable and have real oversight authority.

That means tech companies should be willing to share the criteria they’re using to select who gets to sit on these ethics boards.

They should also be transparent and specific about the roles and responsibilities their ethics boards have so that the public can assess their efficacy.

Otherwise, we have no insight into whether ethics boards are actually a moral compass or just another rubber stamp.

Given the global influence and responsibility of large AI companies, this level of transparency and accountability is essential.

“Consider what it actually means to govern technology effectively and justly” Jake Metcalf, technology ethics researcher at Data & Society

The ATEAC hullabaloo shows us just how fraught and contentious this new age of tech ethics will likely be.

Google clearly misread the room in this case.

Politically marginal populations that are subject to the classificatory whims of AI/ML technologies are likely to experience the worst ethical harms from automated decision-making.

Google favoring Kay Coles James for “viewpoint diversity” despite her open hatred of transgender people shows that the company is not adequately considering what it actually means to govern technology effectively and justly.

It’s tricky for companies because ethics means two different things that can be contradictory in practice: it is both the daily work of understanding and mitigating consequences (such as running a bias detection tool or hosting a deliberative design meeting) and the judgment about how society can be ordered most justly (such as whether disparate harms to marginalized communities mean a product line should be spiked).

Corporations are amenable to the former, and terrified of the latter.

But if AI ethics isn’t about preventing automated abuse, blocking the transfer of dangerous technologies to autocratic governments, or banning the automation of state violence, then it’s hard to know what tech companies think it is other than empty gestures.

Underneath the nice new ethics report tool that is copacetic with the company’s KPI metrics is a genuine concern that lives are on the line.

Holding both of those in your head at once is a challenge for companies bureaucratically, and for ethicists invested in seeing more just technologies win out.

“First acknowledge the elephant in the room: Google’s AI principles” Evan Selinger, professor of philosophy at Rochester Institute of Technology

Google put the kibosh on ATEAC without first acknowledging the elephant in the room: the AI principles that CEO Sundar Pichai articulated over the summer.

Leading academics, folks at civil society organizations, and senior employees at tech companies have consistently told me that while the principles look good on paper, they are flexible enough to be interpreted in ways that will spare Google from needing to compromise any long-term growth strategies—not least because the enforcement mechanisms for violating the principles aren’t well-defined, and, in the end, the entire enterprise remains a self-regulatory endeavor.

That said, it would certainly help to make leadership more accountable to an ethics board if the group were (a) properly constituted; (b) given clear and robust institutional powers (rather than just being there to offer advice); and (c) itself held to transparent accountability standards to ensure it doesn’t become a cog in a rationalizing, ethics-washing machine.

“Change the people in charge of putting together these groups” Ellen Pao, founder of Project Include

This failed effort shows exactly why Google needs better advisors.

But perhaps they also need to change the people in charge of putting together these groups—and perhaps their internal teams should be doing this work as well.

There were several problems with the outcome, as we’ve all seen, but also problems with the process.

When you haven’t communicated to the whole group about who they will be working with, that’s a huge mistake.

Bringing in people who are more reflective of the world we live in should have happened internally before trying to put together an external group.

Side note: people should be examining the groups they’re joining, the conference panels they’re speaking at, and their teams before they commit, so they know what they’re signing up for.

It’s amazing how much you can influence them and how you can change the makeup of a group just by asking.

“Empower antagonism—not these friendly in-house partnerships and handholding efforts” Meg Leta Jones, assistant professor in Communication, Culture & Technology at Georgetown University

Ethics boards are nobody’s day job, and they only offer a possibility for high-level, infrequent conversations that at best provide insight and, at worst, cover.

If we want to establish trust in institutions including technologies, tech companies, media, and government, our current political culture demands antagonism—not these friendly in-house partnerships and handholding efforts.

Empowering antagonists and supporting antagonism may more appropriately and effectively meet the goals of “ethical AI.”

“Look inward and empower employees who stand in solidarity with vulnerable groups” Anna Lauren Hoffmann, assistant professor with the Information School at the University of Washington

Google’s failed ATEAC board makes clear that “AI ethics” is not just about how we conceive of, develop, and implement AI technologies—it’s also about how we “do” ethics.

Lived vulnerabilities, distributions of power and influence, and whose voices get elevated are all integral considerations when pursuing ethics in the real world.

To that end, the ATEAC debacle and other instances of pushback (for example, against Project Maven, Dragonfly, and sexual harassment policies) make clear that Google already has a tremendous resource in many of its own employees.

While we also need meaningful regulation and external oversight, the company should look inward and empower those already-marginalized employees ready to organize and stand in solidarity with vulnerable groups to tackle pervasive problems of transphobia, racism, xenophobia, and hate.

“A board can’t just be some important people we know. You need actual ethicists” Patrick Lin, director of the Ethics + Emerging Sciences Group at Cal Poly

In the words of Aaliyah, I think the next step for Google is to dust yourself off and try again.

But they need to be more thoughtful about who they put on the board—it can’t just be a “let’s ask some important people we know” list, as version 1.0 of the council seemed to have been.

First, if there’s a sincere interest in getting ethical guidance, then you need actual ethicists—experts who have professional training in theoretical and applied ethics.

Otherwise, it would be a rejection of the value of expertise, which we’re already seeing way too much of these days—for example, when it comes to basic science.

Imagine if the company wanted to convene an AI law council, but there was only one lawyer on it (just as there was only one philosopher on the AI ethics council v1.0).

That would raise serious red flags.

It’s not enough for someone to work on issues of legal importance—tons of people do that, including me, and they can well complement the expert opinion of legal scholars and lawyers.

But for that council to be truly effective, it must include actual domain experts at its core.

“The last few weeks showed that direct organizing works” Os Keyes, a PhD student in the Data Ecologies Lab at the University of Washington

To be honest, I have no advice for Google.

Google is doing precisely what corporate entities in our society are meant to do: working for political (and so regulatory, and so financial) advantage without letting a trace of morality cut into their quarterly results or strategic plan.

My advice is for everyone but Google.

For people outside Google: phone your representatives.

Ask what they’re doing about AI regulation.

Ask what they’re doing about lobbying controls.

Ask what they’re doing about corporate regulation.

For people in academia: phone your instructors.

Ask what they’re doing about teaching ethics students that ethics is only important if it is applied, and lived.

For people inside Google: phone the people outside and ask what they need from you.

The events of the last few weeks showed that direct organizing works; solidarity works.

“Four meetings a year are not likely to have an impact. We need agile ethics input” Irina Raicu, director of the internet ethics program at Santa Clara University

I think this was a great missed opportunity.

It left me wondering who, within Google, was involved in the decision-making about whom to invite.

(That decision, in itself, required diverse input.)

But this speaks to the broader problem here: the fact that Google made the announcement about the creation of the board without any explanation of their criteria for selecting the participants.

There was also very little discussion of their reasons for creating the board, what they hoped the board’s impact would be, etc.

Had they provided more context, the ensuing discussion might have been different.

There are other issues, too; given how fast AI is developing and being deployed, four meetings (even with a diverse group of AI ethics advisors) over the course of a year are not likely to have meaningful impact—i.e., to really change the trajectory of research or product development.

As long as the model is agile development, we need agile ethics input, too.

“The group has to have authority to say no to projects” Sam Gregory, program director at Witness

If Google wants to genuinely build respect for ethics or human rights into its AI initiatives, they need to first recognize that an advisory board, or even a governance board, is only part of a bigger approach.

They need to be clear from the start that the group actually has authority to say no to projects and be heard.

Then they need to be explicit on the framework—we’d recommend it be based on established international human rights law and norms—and therefore an individual or group that has a record of being discriminatory or abusive shouldn’t be part of it.

“Avoid treating ethics like a PR game or a technical problem” Anna Jobin, researcher at the Health Ethics and Policy Lab at the Swiss Federal Institute of Technology

If Google is serious about ethical AI, the company must avoid treating ethics like a PR game or a technical problem and embed it into its business practices and processes.

It may need to redesign its governance structures to create better representation for, and accountability to, both its internal workforce and society at large.

In particular, it needs to prioritize the well-being of minorities and vulnerable communities worldwide, especially people who are or may be adversely affected by its technology.

“Seek not only traditional expertise, but also the insights of people who are experts on their own lived experiences” Joy Buolamwini, founder of the Algorithmic Justice League

As we think about the governance of AI, we must not only seek traditional expertise but also the insights of people who are experts on their own lived experiences.

How might we engage marginalized voices in shaping AI? What could participatory AI that centers the views of those who are most at risk of the adverse impacts of AI look like? Learning from the ATEAC experience, Google should incorporate compensated community review processes in the development of its products and services.

This will necessitate meaningful transparency and continuous oversight.

And Google and other members of the Partnership on AI should set aside a portion of profits to provide consortium funding for research on AI ethics and accountability, rather than focusing only on AI fairness research that elevates technical perspectives alone.

“Perhaps it’s for the best that the fig leaf of ethical development has been whisked away” Adam Greenfield, author of Radical Technologies

Everything we’ve heard about this board has been shameful, from the initial instinct to invite James to the decision to shut it down rather than dedicate energy to dealing with the consequences of that choice.

But being that my feelings about AI are more or less those of the Butlerian Jihad, perhaps it’s for the best that the fig leaf of “ethical development” has been whisked away.

In the end, I can’t imagine any recommendation of such an advisory panel, however it may be constituted, standing in the way of what the market demands, and/or the perceived necessity of competing with other actors engaged in AI development.
