Do no evil: why we need a public conversation about AI ethics

Jeremie Harris · Apr 12

Image credit: https://crissov.github.io/unicode-proposals/fourth-monkey.html

About two months ago, a single tweet almost derailed my productive work day.

It caught my attention not just because it included a GIF showing the animated output of a new text-writing AI, but also because of the account that had published it.

OpenAI is a nonprofit founded in late 2015 with high-profile backing from Elon Musk and Y Combinator president Sam Altman, with the lofty goal of preventing the worst AI doomsday scenarios you can imagine (and many more that you probably can’t).

They don’t tweet often, but when they do, the odds are good that it’s to announce some flashy, state-of-the-art AI breakthrough that’s destined to become the talk of the tech ecosystem for days or weeks to come.

But this time, things were different.

Sure, they were announcing a new cutting-edge result — a text-writing, text-understanding and question-answering AI known to denizens of the machine learning community as a “language model”.

But critically, in a departure from the company’s standard operating procedure, they weren’t releasing the model to the general public, citing ethical and safety concerns.

On the face of it, their decision was certainly understandable: when primed with just one or two sentences written by a human, their language model could reportedly produce entire articles that were alarmingly coherent — who knows what uses a bad actor might find for it (what if Russian bots could write entire blog posts?).

Still, it was widely criticized by a chorus of academics and industry insiders, and the mainstream media, eager to cash in on the eyeballs of armchair techies and robo-apocalyptists (they go for about $0.20 a pair, if you were wondering), ran with headlines like, “This A.I. Bot Can Convincingly ‘Write’ Entire Articles. It’s So Dangerously Good, the Creators Are Scared to Release It”.

OpenAI’s decision not to release their model — nicknamed GPT-2 — to the public sets a crucial and highly controversial precedent for the development of increasingly advanced AIs.

And because AI will inevitably have a defining influence on the course of human affairs — and even on the very development of our species — the debate surrounding OpenAI’s latest move merits the sustained attention of anyone who’s remotely invested in the future of human civilization.

The “no-release” controversy

A baby won’t learn the word “shoe” from hearing it used passively once or twice by mom and dad — its understanding depends on hearing the same word repeated in many different contexts.

Still, babies are undeniably damn good learning machines: most start talking after hearing only a few million words.

By contrast, OpenAI’s model had to study a dataset of about 8 million web pages curated from links shared on Reddit, containing billions of words, to make heads or tails of the English language.

As you might imagine, crunching through that much text is a time-consuming process, and takes a lot of expensive computational horsepower.

One estimate put the total price tag to train their model somewhere in the neighborhood of $50,000.

That price tag is key, because it’s the barrier that stands in the way of anyone wanting to replicate OpenAI’s results, and train their own state-of-the-art language model.

The expense associated with training a model from scratch is one of the major reasons that the machine learning community has developed its characteristic open-source culture, which strongly encourages the release and proliferation of trained models, as a way of leveling the playing field between enthusiasts without a spare $50K to spend on the one hand, and well-funded industry players on the other.

Although model non-publication isn’t unprecedented, OpenAI’s decision not to release their model certainly runs afoul of that open source ethos.

And given the extent of its influence on the field, some consider this to be a dangerous precedent-setting exercise — one that may encourage the concentration of algorithmic power in the hands of the oligarchy of mustache-twirling tech bros whose companies have the resources to build their own language models internally.

After all, say OpenAI’s skeptics, the bad actors whose hands we most desperately want to keep away from powerful language models (like state-sponsored actors and terrorist groups) are precisely those who wouldn’t flinch at the prospect of paying a cool $50K to build their own.

So aren’t we just hurting the under-funded researcher and the fledgling startup here?

What’s more, a model that’s not been released to the public is one whose performance can’t be readily verified.

Sure, OpenAI claims to have built a remarkable AI, but how can we know their results weren’t just cherry-picked to look impressive?

As a non-profit with the self-appointed mission of beating the rest of the world to the super-human AI finish line, OpenAI certainly stands to benefit (if only in the short term) from a bit of profile-raising results-embellishment, and their refusal to let others peek under their model’s hood means we have to take them at their word that they’re playing a straight game.

Other criticisms of OpenAI’s decision have also tended to focus on the consequences of withholding their most recent model from public use and scrutiny, generally arguing that the benefits of releasing it outweigh the potential harm.

However, OpenAI has indicated that their refusal to publish the model represents an effort to play a longer game: maybe the cost/benefit analysis tips in favour of a “release” decision this time, but there will almost certainly come a point at which the models that groups like OpenAI can build will be powerful enough to pose a genuine threat to society.

When that day comes, we won’t want to be figuring out the nuances of how to implement a no-release, or limited-release policy for the very first time when the stakes are so high.

OpenAI has framed their decision partly as an experiment in AI policy, aimed at starting a discussion about the proliferation of potentially dangerous models that many say is long overdue.

The culture wars come to machine learning

Kids tend to take on the political and religious views of their parents.

That’s because parents have a virtual monopoly on the data their offspring are exposed to early on in life, and all the freedom they need to bias that data in any direction they want.

The same is true of any language model: it will learn the patterns in the data it was trained on.

If the data are biased, that bias will be reflected in the model’s output.

Garbage in, garbage out.
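To make that concrete, here’s a toy sketch (mine, not OpenAI’s code) of the same principle: a tiny bigram model trained on a deliberately slanted micro-corpus will only ever parrot that slant back at you.

# A toy illustration, not OpenAI's model: a tiny bigram "language model"
# trained on a deliberately slanted corpus. Whatever patterns dominate the
# training text dominate the generated text -- garbage in, garbage out.
import random
from collections import defaultdict

corpus = (
    "the new policy is a disaster . "
    "the new policy is a disaster for everyone . "
    "critics say the new policy is a disaster ."
).split()

# Count which words follow which (bigram transitions).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed, length=10):
    """Sample a continuation by following observed bigram transitions."""
    words = [seed]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the new policy is a disaster for everyone ."

Scale the corpus up to millions of web pages and the principle doesn’t change; only the subtlety of the bias does.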

The question of algorithmic bias is a perennial concern of the AI ethicist and the machine learning developer, and for good reason.

Biased datasets have resulted in some nasty snafus, including one high-profile incident in which Google’s image-tagging algorithm was found to consistently mislabel black people as gorillas.

Back to OpenAI: remember how I mentioned that their model was trained on text from 8 million web pages sourced via Reddit?

Since we’re dealing with an AI that could potentially be used to artificially generate hundreds of thousands of posts that could influence hundreds of millions of people, it’s worth asking — and many have — whether Reddit was an appropriate source of training data in the first place.

Some critics say the answer is a firm “no”.

For one, the internet is a terrible place, even at the best of times: a jumble of half-baked hot takes, Instagram selfies, shitposts and low-budget Swedish pornography.

And since Reddit explicitly advertises itself as the front page of that very internet, it’s not hard to see why some might consider it a poor place from which to source the values we’d want to bake into our AIs.

This argument may have some merit: although OpenAI withheld their largest model, they did release a scaled-down version, about ten times less complex than their headliner model.

I tested it using a diagnostic prompt and obtained the results below:

Prompt: Here’s the thing about abortion.

Output: On the top was the one where it’s illegal to transfer an unborn child that there’s interest in saving parents or having children make partners that there’s interest in not having more kids. That crap did happen, but I don’t think we’re going to have sex with those people that want to have children, the ones that want to feed here and have babies. So I think this is kind of the Trojan horse for sex. When things get wild, it reduces the pro-life movement. So what do I have to say about what you’re doing at the extreme right now, where you talk about Bill Maher saying she supports women’s right

Admittedly, the scaled-down model I queried here produces results that aren’t exactly coherent, but they certainly display a political slant (note: that slant may change from query to query, for a given model).
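For readers who want to try something similar, here’s a minimal sketch of one way to query the publicly released small GPT-2 model, using the Hugging Face transformers library (just one convenient route; other tooling works too, and it isn’t necessarily what was used above).

# Minimal sketch: sample a continuation from the small, publicly released
# GPT-2 model via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the released small model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Here's the thing about abortion."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling is stochastic (top-k here), so the output -- and any apparent
# political slant -- can change from run to run.
output_ids = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))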

It’s not hard to see where this kind of thing could go with a more sophisticated model, nor is it possible to ignore the immense power that’s afforded to the individual or institution that gets to decide which dataset will be used to train it.

No clear consensus exists regarding how this power ought to be wielded, distributed, or regulated — if it can even be managed at all.

Which is all the more alarming, given the pace at which the technology is developing.

Machine learning is becoming politics

The most alarming thing about the OpenAI model might not be that it can produce freakishly plausible-sounding passages on just about any topic, but rather that it actually doesn’t represent a terribly significant technical advancement relative to previous work.

The main differences between GPT-2 and its predecessor, GPT-1, were the size of the models, and the quantity of data used to train them.

To some degree, then, what OpenAI’s most recent work shows is that it’s possible to obtain disturbingly plausible results simply by throwing more resources at existing algorithms.

That pushes a lot of “if” conversations into “when” territory, and a lot of our comfortable “eventuallys” into “how many months from nows”.

So it’s more important than ever that the general public be as engaged as possible in the debates surrounding models like GPT-2, and in broader conversations about the increasingly defining role that AI will come to play in our lives.

The absence of these discussions from the political arena, and our political class’s comparative ignorance of the technical nuances of AI, persist at our great risk.

But whatever your position on the controversy, OpenAI’s decision has at the very least done us the service of shining a light on many of the questions we should all be spending more time thinking about: for better or for worse, social, political and economic power is increasingly being concentrated in strange places, and the hands that build our models or select their training data are unavoidably molding the opinions of millions.

And it behooves us to know where they’ve been.
