Examining the Transformer Architecture: The OpenAI GPT-2 Controversy 

By Nityesh Agarwal, written in collaboration with, and sponsored by, Exxact (@Exxactcorp).

“Recycling is NOT good for the world.

It is bad for the environment, it is bad for our health, and it is bad for our economy.

I’m not kidding.

Recycling is not good for the environment.

It is destructive to the earth and it is a major contributor to global warming.

Recycling is not good for our health.

It contributes to obesity and diseases like heart disease and cancer.

Recycling is bad for our economy.

It increases the cost of a product, and in turn, the price of everything that is made with that product.

Recycling is not good for our nation.

We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system.

Recycling is a huge, colossal waste of time, energy, money, and resources.

” — GPT-2 model from OpenAI

Yes, these aren’t the words of some anti-recycling freak trying to popularize an anti-recycling manifesto over social media.

This whole thing is written using a language model… an algorithm… an AI, if we are being crass.

GPT-2 is a generative model, created by OpenAI, trained on 40GB of Internet text to predict the next word.

And OpenAI found this model to be SO good that they did not release the fully trained model — which some say is against their founding motto of making AI open to all — due to their “concerns about malicious applications of the technology”.

OpenAI says, “We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate)”: generating misleading news articles, impersonating others online, and automating the production of abusive, spam, or phishing content.

Or are the above concerns merely the opinions of some paranoid people?

This model did not present some big leap on the algorithmic front.

It is just a larger version of the GPT model that was published by the team months ago.

What it did do was show how capable our current language modeling techniques are in text generation.

It is a massively scaled-up version of its predecessor.

GPT-2 has a whopping 1.5 billion parameters (10X more than the original GPT) and is trained on text from 8 million websites.

You can appreciate the scale of this feat once you compare the model with other “popular” generative language models.

   Language models aim to represent the history of observed text succinctly in order to predict the next word.

So, it is basically just learning to predict words.
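In probabilistic terms (my notation, not something from the original article), a language model factorizes the probability of a whole text into a product of next-word predictions:

$$
P(w_1, w_2, \ldots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \ldots, w_{t-1})
$$

Training GPT-2 on its 40GB corpus amounts to maximizing this likelihood; generation then simply samples from the learned conditional distribution one word at a time.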

Give the model a prompt and it will predict the next word, then the next word, then the word after that; pretty soon it will have formed a meaningful sentence. Combine enough of them and you have a coherent paragraph and then… well, pretty much anything you want.
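To make that loop concrete, here is a minimal sketch of the generate-one-word-at-a-time process, using the publicly released small GPT-2 model through the Hugging Face transformers library (the library, model size, and prompt are my choices for illustration, not something the original article uses):

```python
# A minimal sketch: autoregressive next-token generation with the small,
# publicly released GPT-2 model. Assumes `torch` and `transformers` are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small 124M-parameter GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Recycling is NOT good for the world."      # any prompt works
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(50):                               # generate 50 more tokens
        logits = model(input_ids)[0]                  # (1, seq_len, vocab_size)
        next_token_logits = logits[0, -1, :]          # scores for the next token only
        probs = torch.softmax(next_token_logits, dim=-1)
        next_id = torch.multinomial(probs, 1)         # sample one token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop appends one sampled token, which is exactly the “next word, then the next word” process described above; the withheld 1.5 billion-parameter model uses the same decoding loop, just with far more parameters.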

For example, just watch this sci-fi short film released in mid-2016, whose script was created by a generative model using an LSTM architecture trained on the scripts of a lot of sci-fi movies and TV shows. They got Thomas Middleditch — Silicon Valley’s Richard — to star in this!!

Or how about this Harry Potter AI-generated fanfiction that shot to popularity at the end of 2017:

We used predictive keyboards trained on all seven books to ghostwrite this spellbinding new Harry Potter chapter https://t.co/UaC6rMlqTy pic.twitter.com/VyxZwMYVVy — Botnik Studios (@botnikstudios) December 12, 2017

As you can see, both of these are much inferior in quality compared to the GPT-2 example.

OpenAI has cherry-picked and published a few more samples in their blog post — Better Language Models and Their Implications.

The “unicorn” sample reads like a real science press release.

The “theft of nuclear material” sample reads like a real news story.

The “Miley Cyrus shoplifting” sample reads like a real post from a celebrity gossip site.

The “GPT-2” sample reads like a real OpenAI press release.

The “Legolas and Gimli” sample reads like a real fantasy novel.

The “Civil War homework assignment” reads like a real C-student’s paper.

The “JFK acceptance speech” reads like a real politician’s speech.

The “recycling” sample reads like a real right-wing screed.

Here's a ridiculous result from the @OpenAI GPT-2 paper (Table 13) that might get buried — the model makes up an entire, coherent news article about TALKING UNICORNS, given only 2 sentences of context. WHAT??!! pic.twitter.com/G8UtQpefyZ — Ryan Lowe (@ryan_t_lowe) February 14, 2019

And it’s not just these 8 cherry-picked examples.

OpenAI also provides us with a dump of hundreds of raw GPT-2 samples, which can give us a clearer picture of the model’s capabilities.

Yes, each of them “reads like” real, human-generated content.

But they aren’t.

This article argues that “if you skim text, you miss obvious absurdities. The point is OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot.”

So if you aren’t really concentrating and are just skimming through, you won’t be able to spot that this was generated by a language model.

Now, that is definitely not true for the other examples that I presented above.

Even “reading like” normal human-generated content is a big feat.

So, yeah.

I would say that this model is really good.

The fact that OpenAI did not release the model came as a huge shock to the AI community and the media.

Some people argue that this was just a publicity stunt on the part of OpenAI, because there was no algorithmic breakthrough.

And another group of people believes that withholding the model is futile, since the code is open-sourced and big companies (or anyone willing to spend enough on compute) would be able to replicate the results in just a few months’ time.

But there is another group of people who applaud OpenAI for trying to create awareness among researchers about the potential effects of their research.

As AI techniques become more and more powerful, the world will face an increasingly important challenge in fighting synthetic content and misinformation.

I thought @zacharylipton's post was good. It feels like OpenAI's press rollout didn't properly distinguish between: 1. "language models are getting powerful, let's have a conversation, it's important" and 2. "here's our super-powerful world-overtaking model". — Soumith Chintala (@soumithchintala) February 17, 2019

Soumith Chintala is the creator of PyTorch. This is a thread between him, Jack Clark, and Jeremy Howard!

So wouldn’t it be cool to know how it works, to know the algorithm that powers it? I will discuss that in Part Two.

Original.

Reposted with permission.
