What Does GPT-2 Think About the AI Arms Race?

In an obvious nod to April Fools' Day, we decided to ask the GPT-2 language model, which made news in February, what it thought of the impending artificial intelligence arms race.

In case you have not heard of GPT-2, or missed the controversy surrounding its recent release, here's a quick overview: GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages.

GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text.
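
To make that objective a bit more concrete, here is a rough sketch of what the next-word prediction loss looks like in code. I'm using the Hugging Face transformers library for brevity rather than OpenAI's released TensorFlow code, and "gpt2" refers to the small released checkpoint; treat this as an illustration of the idea, not the actual training setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "The advent of modern neural networks and deep learning"
inputs = tokenizer(text, return_tensors="pt")

# Passing the input tokens as labels makes the model shift them by one position
# internally and compute cross-entropy: predict word t+1 given words 1..t.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss.item())  # average next-word prediction loss over the sequence
```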

The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains.

GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.

A sample of text, or prompt, is fed to the model, which then predicts the next word (and so on, from this point).
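
In code, that generation loop looks roughly like the snippet below, again sketched with the transformers library rather than OpenAI's own sampling script; the 40-token continuation length is an arbitrary choice for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The advent of modern neural networks and deep learning"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

with torch.no_grad():
    for _ in range(40):                                     # extend the prompt by 40 tokens
        next_logits = model(input_ids).logits[:, -1, :]     # scores for the next word
        probs = torch.softmax(next_logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)   # sample the next word
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```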

To prime the model and set the stage for the topic at hand, I fed it the following prompt, written for no reason other than to highlight a few important terms for the model:

The advent of modern neural networks and deep learning first resulted in the promise of state of the art results in a vast array of domains, but quickly helped usher in what is now referred to as the artificial intelligence arms race.

While there is no evidence that modern approaches to neural networks will ultimately provide a direct line to so-called artificial general intelligence, nations and non-state actors, equipped with copious amounts of data, hope that these numerous techniques will help them advance their causes.

This may or may not be the case, but it seems as though we are all overlooking something of grave importance in this discussion.

Using defaults on the small GPT-2 model, I selected what I determined to be the “best” of 50 generated samples.
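
For anyone who wants to reproduce the experiment, something like the sketch below would generate a batch of candidate samples from the small model. I'm again using the transformers library as a stand-in for OpenAI's sampling scripts, and the prompt string and length here are placeholders rather than the exact settings I used.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small GPT-2 checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The advent of modern neural networks and deep learning ..."  # full prompt from above
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

samples = model.generate(
    input_ids,
    do_sample=True,            # random sampling rather than greedy decoding
    max_length=300,            # illustrative length; the article does not state one
    num_return_sequences=50,   # the 50 candidate samples mentioned above
    pad_token_id=tokenizer.eos_token_id,
)

for i, sample in enumerate(samples):
    print(f"=== SAMPLE {i + 1} ===")
    print(tokenizer.decode(sample, skip_special_tokens=True))
```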

What makes a sample the “best”? It's pretty subjective, but I picked the one that, on a cursory read, seemed to make the most logical sense.

Sure, I cherry-picked the results, but my goal isn't to prove or disprove that GPT-2 is a competent language generation model; I just wanted to have a bit of fun.

A number of samples were garbage by any stretch of the imagination, from spelling to grammar to word sequencing to sample structure.

So, without further delay, here's what GPT-2 thinks of the AI arms race.

The advent of modern neural networks and deep learning first resulted in the promise of state of the art results in a vast array of domains, but quickly helped usher in what is now referred to as the artificial intelligence arms race.

While there is no evidence that modern approaches to neural networks will ultimately provide a direct line to so-called artificial general intelligence, nations and non-state actors, equipped with copious amounts of data, hope that these numerous techniques will help them advance their causes.

This may or may not be the case, but it seems as though we are all overlooking something of grave importance in this discussion.

It may at the very least be a good idea for us to criticise the way in which humanity is waging a war of attrition against a number of distinct frontiers, trying to decide whether certain fronts are safe or not, deep learning may finally be able to merge disparate forces.

Limiting the overall speed at which AI can deal with tasks, such as completely developing biocatalytic cutting-edge tech, can be fatal to civilization.

A search for alternatives to oil and gas requires repeatedly searching for data, thus preventing perhaps the most powerful and successful AI from overtaking every other domain.

But data on a short schedule can lead to higher readiness and anticipating an ill-fated future.

Evaluating your ability to process long-term data demands if you cant find better ways of dealing with such questions than studying the hundreds of data sets tied to major carbon and energy sources in the world.

Overcoming those hurdles will move us to the next phase of the intelligence process in the long run.

The world is a vast place, full of resources that the normal human mind does not utilise to help determine how many places we want to live.

But as the late Bayesian entrepreneur Robert Parry wrote, “whether human therapies take place in museums or overseas, the human mind will ultimately follow the same process in influencing the mere retrieval of premises,” and these processes will take many more years.

Time is running out for human mental systems.

The coming embrace of deep learning brings a series of important service releases that will be crucial to our understanding not only of the ways life machines perform but also the core theories underpinning our belief that machines are only capable of fully finite calculations.

Martin Shkreli outlined the finance of deep learning: “Everyone who gains might never stop spending time on it, of course.

If we all genuinely pursue common endeavours to better our global futures, then it will no longer be an exercise to perform talents, maintain written records, contemplate how making parts of human needs will have produced such helix-like structures, envisaging new ways of instructing people out of our systems, and ultimately to set ourselves to work on them on a new reality.

“What happens next suggest that failure so far seems like such a mistake that those who might ultimately succeed in implementing deep learning within human systems are unlikely to accept it.

This is because as it turns out, little-understood work has already been co-opted into the effort.

And there you have it.

GPT-2 just dropped a ton of knowledge, with the key ominous takeaway being that “Time is running out for human mental systems.”

If you would like to play around with the language model yourself, try out this GPT-2 playground Colab notebook.

You can find OpenAI's full released code here.
