An Exhaustive Guide to Detecting and Fighting Neural Fake News using NLP

View the code on Gist.

For the above text, Grover fails because it is not trained on these kinds of technical articles. But this is where the GPT-2 detector model shines, since it's trained on a wide variety of webpages (8 million!).

This just goes to show that no tool is perfect and you will have to choose which one to use based on the kind of generated text you are trying to detect.

Case Study 4: Here's the last experiment we will do.

We will test machine-generated news that is not “fake” but just an example of automated news generation.

This post is taken from The Washington Post, which generates automated score updates using a program: View the code on Gist.

Now, the interesting thing here is that the GPT-2 detector model says this isn't machine-generated news at all. But at the same time, Grover is able to identify that it is machine-written text, albeit with a slightly low probability (still, it does figure it out!). Whether or not you consider this "fake" news, the fact is that it was generated by a machine.

How you classify this category of text will depend on your goals and what your project is trying to achieve.

In short, the best way to detect neural fake news is to use a combination of all these tools and reach a comparative conclusion.
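To make this concrete, here is a minimal sketch of what such a comparative check could look like in Python. It assumes the open-sourced GPT-2 output detector ("roberta-base-openai-detector" on the Hugging Face model hub) and its "Real"/"Fake" label names; since Grover is only available through its web demo, its probability has to be passed in by hand. Averaging the scores is my own illustrative choice, not a rule prescribed by either tool.

```python
# A minimal sketch: combine the verdicts of several detectors before judging.
# Assumes the open-sourced GPT-2 output detector on the Hugging Face hub
# ("roberta-base-openai-detector") and its "Real"/"Fake" labels; Grover has
# no installable API, so its score must be read off its web demo and passed in.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

def combined_verdict(text, grover_machine_prob=None, threshold=0.5):
    """Average the 'machine-written' probabilities of all available detectors."""
    result = detector(text, truncation=True)[0]
    # Convert the detector's label into P(machine-written).
    gpt2_prob = result["score"] if result["label"] == "Fake" else 1.0 - result["score"]
    scores = [gpt2_prob]
    if grover_machine_prob is not None:  # score copied from the Grover demo
        scores.append(grover_machine_prob)
    avg = sum(scores) / len(scores)
    return ("machine-generated" if avg > threshold else "human-written"), avg

print(combined_verdict("Scientists have discovered a herd of unicorns...",
                       grover_machine_prob=0.85))
```

A weighted average, or a rule that trusts whichever detector saw the most similar training data, would be an equally defensible way to combine the scores.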

Limitations of Current Fake News Detection Techniques and Future Research Directions

It is clear that current detection techniques aren't perfect and have room to grow.

MIT’s Computer Science & Artificial Intelligence Laboratory (CSAIL) recently conducted a study on existing methods to detect neural fake news and some of their findings are eye-opening.

Limitations of existing techniques to detect Neural Fake News

The main upshot of the study is that the approach that methods like GLTR and Grover use to detect neural fake news is incomplete.

This is because just finding out whether a piece of text is "machine-generated" or not is not enough; there can be legitimate pieces of news that are machine-generated with the help of tools like auto-completion and text summarization.

For example, the famous writing app Grammarly uses some form of GPT-2 to help correct grammatical mistakes in the text.

Another example is case study #4 from the previous section of this article, where The Washington Post used a program to automatically generate sports updates.
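To see why provenance alone is a weak signal, consider that perfectly legitimate text rolls out of summarization models every day. The sketch below (using the Hugging Face summarization pipeline with its default model, and a made-up article snippet) produces text that is machine-generated by definition, yet not fake:

```python
# Legitimate, non-fake text that is nonetheless machine-generated: any
# provenance-based detector would (correctly) flag this summary as machine-
# written, even though it faithfully condenses a real-looking news item.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default DistilBART model

article = (
    "The city council approved the new transit budget on Tuesday, allocating "
    "funds for two additional bus routes and station repairs. Officials said "
    "the changes should cut average commute times by roughly ten minutes on "
    "the busiest corridors."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])  # machine-generated, but perfectly truthful
```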

Vice versa, there can be human-written text that is slightly corrupted or modified by attackers, which existing methods will classify as not being neural fake news.

Here is an illustration that summarizes this dilemma for the detector model. You can clearly see in the figure that since the feature spaces of generated neural fake news and real news are far apart, it's incredibly easy for a model to classify which one is fake.

Additionally, when the model has to distinguish between legitimately generated news and neural fake news, as in case study #4, it isn't able to do so because the feature spaces of the two are very close.

The same behavior is seen when the model has to differentiate between actual human-written news and the same news that has been modified slightly to become fake.
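The feature-space intuition is easy to reproduce on synthetic data. The toy sketch below (scikit-learn, with made-up 2-D blobs standing in for news representations) shows a linear classifier scoring near-perfectly when the two classes are far apart and close to chance when they overlap:

```python
# A toy illustration of the feature-space argument (synthetic data, not news):
# classification is trivial when the class distributions are far apart and
# nearly hopeless when they overlap, mirroring the detector's two regimes.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def separability(center_distance):
    X, y = make_blobs(
        n_samples=1000,
        centers=[(0.0, 0.0), (center_distance, 0.0)],
        cluster_std=1.0,
        random_state=42,
    )
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

print("far apart (real vs. generated fake):", separability(6.0))        # ~1.0
print("overlapping (legit machine news vs. fake):", separability(0.5))  # ~0.6
```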

I won't get into specifics, but the authors conducted multiple experiments to reach these conclusions; you can read their very interesting paper to learn more.

These outcomes led the authors to conclude that to define and detect neural fake news, we have to consider veracity (truthfulness) rather than provenance (the source, i.e. whether the text is machine-written or human-written).

And that, I think, is an eye-opening revelation.

What can be the Future Directions of Research?

One step toward dealing with the issue of neural fake news came when Cambridge University and Amazon released FEVER last year, the world's largest dataset for fact-checking, which can be used to train neural networks to detect fake news.

However, when FEVER was analyzed by the same MIT team (Schuster et al.), they found that the dataset has certain biases that make it easier for a neural network to detect fake text just by using patterns in the text.

When they corrected some of these biases in the dataset, they saw that the accuracy of the models plunged, as expected.
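For intuition about what such a bias probe looks like, here is a minimal sketch in the spirit of their "claim-only" experiments: if a bag-of-words model can predict the label from the claim text alone, without ever seeing the evidence, the dataset is leaking give-away patterns. The six claims below are made up purely for illustration; in practice you would run this over the full FEVER training split.

```python
# A minimal "claim-only" bias probe in the spirit of Schuster et al.: a model
# that predicts SUPPORTS/REFUTES from the claim alone (no evidence shown) is
# exploiting leaked cues, such as negation phrases in REFUTES claims.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

claims = [
    "Paris is the capital of France.",              # SUPPORTS
    "The Nile is a river in Africa.",               # SUPPORTS
    "Mount Everest is Earth's highest peak.",       # SUPPORTS
    "Paris is not the capital of France.",          # REFUTES (negation cue)
    "The Nile is not a river in Africa.",           # REFUTES
    "Mount Everest is not Earth's highest peak.",   # REFUTES
]
labels = ["SUPPORTS"] * 3 + ["REFUTES"] * 3

probe = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
score = cross_val_score(probe, claims, labels, cv=3).mean()
print(f"claim-only accuracy: {score:.2f}")  # far above chance => leaked patterns
```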

They then open-sourced the corrected dataset, FEVER Symmetric, on GitHub as a benchmark for other researchers to test their models against. I think this is a good move for a research community that is actively trying to solve the problem of neural fake news.

If you are interested in finding out more about their approach and experiments, feel free to read their original paper, Towards Debiasing Fact Verification Models.

So I think creating large-scale, unbiased datasets is a good first step for future research on dealing with neural fake news, because as the datasets grow, so will the interest of researchers and organizations in building models that beat the existing benchmarks.

This is the same thing we have seen happen in NLP (GLUE, SQuAD) and computer vision (ImageNet) over the last few years.

Apart from that, taking into account most of the research we have come across, here are some directions we can explore further: I personally believe that tools like Grover and GLTR are a good starting point for detecting neural fake news. They set an example of how we can creatively use our current knowledge to build systems capable of detecting fake news.

So we need to pursue further research in this direction, improve existing tools, and validate them not just against datasets but in real-world settings.

The release of the FEVER dataset is a welcome move, and it would benefit us to explore and build more such datasets covering fake news in a variety of settings, as this will directly fuel further research.

Determining the veracity of a text with a model is a challenging problem, yet we need to structure it in a way that makes it easier to create datasets for training models capable of verifying a text based on its factualness.

Hence, further research in this direction is welcome.
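One plausible way to structure the veracity problem, and the way FEVER frames it, is as textual entailment between a claim and a piece of retrieved evidence. The sketch below uses an off-the-shelf NLI model ("roberta-large-mnli" on the Hugging Face hub) and maps its labels onto FEVER-style verdicts; evidence retrieval, the genuinely hard part, is left out:

```python
# A hedged sketch of claim verification as textual entailment, FEVER-style.
# Uses the public "roberta-large-mnli" checkpoint; retrieval of the evidence
# sentence is assumed to have happened elsewhere.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

LABEL_MAP = {  # NLI labels -> FEVER-style verdicts
    "ENTAILMENT": "SUPPORTS",
    "CONTRADICTION": "REFUTES",
    "NEUTRAL": "NOT ENOUGH INFO",
}

def verify(claim, evidence):
    # NLI convention: the evidence is the premise, the claim the hypothesis.
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    label = model.config.id2label[int(probs.argmax())]
    return LABEL_MAP[label], float(probs.max())

print(verify(
    claim="The Eiffel Tower is located in Berlin.",
    evidence="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))  # expected: ('REFUTES', ...)
```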

As the authors of both Grover and GLTR rightly mention, we need to maintain openness in the research community by releasing large language models responsibly in the future, as was done with GPT-2 and Grover, because we can only build strong defenses if we know how capable our adversary is.

Have you dealt with the problem of fake news before? Have you tried building a model to identify neural fake news? Do you think there are other areas we need to look at when considering future directions? Let me know in the comments below!
