Using Data Science To Uncover State-Backed Trolls On Twitter

Spotting and sorting troll tweets manually isn't practical at scale.

This is where a classifier comes into the picture.

You can build a really complex model to discern the troll/real tweets using a combination of features gleaned from Step 1 as well as the contents of the tweet text.

But that would really complicate the building of the companion web app (more on this later).

It would also make it harder to analyse the individual predictions in Step 3.

For these reasons, I’ve kept the design of my models for this project as simple as possible.

I'm using just one predictor, a "clean tweet" column in which the original tweet text has been cleaned of punctuation and other noise, against a target column of "bot_or_not" (0 for real tweets, 1 for troll tweets).
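As a rough illustration, here is a minimal sketch of the kind of cleaning involved; the exact steps live in the notebooks, and the column names below simply mirror the description above.

```python
import re
import pandas as pd

def clean_tweet(text: str) -> str:
    """Strip links, mentions, punctuation and extra whitespace, then lowercase."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"@\w+", " ", text)              # remove @mentions
    text = re.sub(r"[^A-Za-z\s]", " ", text)       # drop punctuation and digits
    return re.sub(r"\s+", " ", text).strip().lower()

# Toy frame with the two columns used throughout: 'clean_tweet' and 'bot_or_not'
df = pd.DataFrame({
    "tweet": ["@HillaryClinton #HillaryForPrison",
              "Spain, Italy warn against investing in Israeli settlements."],
    "bot_or_not": [1, 1],
})
df["clean_tweet"] = df["tweet"].apply(clean_tweet)
print(df[["clean_tweet", "bot_or_not"]])
```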

The above chart summarizes the workflow for the Russian troll detector model in notebook 3.0, as well as the Iranian troll detector model in notebook 3.2.

It’s a straightforward process involving the standardised cleaning of the raw, English tweets before running them through a pipeline that includes a CountVectorizer, a TFIDF transformer, and a classifier.

In this project, I opted for the more common classifiers: Naive Bayes, Logistic Regression and Random Forest.

You can pick more complex classifiers, but the time needed to complete the pipeline run grows quickly given the size of the training set (50,000 rows).
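For reference, here is a minimal sketch of that pipeline, comparing the three classifiers by cross-validated f1 score and mean fit time. It assumes the labelled training frame df with the "clean_tweet" and "bot_or_not" columns described earlier; the parameters are illustrative rather than the exact ones in the notebooks.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# df: the ~50,000-row labelled training frame prepared in the cleaning step
X, y = df["clean_tweet"], df["bot_or_not"]

candidates = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(solver="liblinear"),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in candidates.items():
    pipe = Pipeline([
        ("vect", CountVectorizer(stop_words="english")),
        ("tfidf", TfidfTransformer()),
        ("clf", clf),
    ])
    scores = cross_validate(pipe, X, y, cv=5, scoring="f1")
    print(f"{name}: f1={scores['test_f1'].mean():.3f}, "
          f"fit_time={scores['fit_time'].mean():.1f}s")
```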

On balance, the Logistic Regression model emerged as the best of the three models that I tried.

It had the best f1 score — the balance of precision and recall — and the fastest mean fit time.

Let's start with Model 1, which was trained on Russian troll tweets, and look at how it performed against three different unseen test sets of 100 tweets, where the proportion of troll tweets was gradually reduced from 50% to about 10%.

L-R: Confusion matrices for Model 1 vs three unseen test sets with 50% Russian troll tweets, 30% troll tweets and 10% troll tweets.

Model 1 was surprisingly good at picking out new Russian troll tweets amid the real ones, even as the proportion of troll tweets was progressively reduced.

Its f1 score stayed at 0.8 to 0.9 throughout the tests.

The model correctly picked out the vast majority of unseen troll tweets, even scoring a perfect recall score of 1.0 in the 90–10 test set (extreme right, above).
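For anyone wanting to reproduce these checks, the scoring itself is just a few lines. The sketch below assumes the fitted Logistic Regression pipeline (best_pipe, built as above) and an unseen, labelled test frame test_df.

```python
from sklearn.metrics import confusion_matrix, f1_score, recall_score

# best_pipe: fitted CountVectorizer -> TFIDF -> Logistic Regression pipeline
# test_df: 100 unseen tweets, with the troll share varied from 50% down to ~10%
y_true = test_df["bot_or_not"]
y_pred = best_pipe.predict(test_df["clean_tweet"])

print(confusion_matrix(y_true, y_pred))         # rows = actual, columns = predicted
print("recall:", recall_score(y_true, y_pred))  # share of troll tweets actually caught
print("f1:", f1_score(y_true, y_pred))
```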

But is it any good against troll tweets by other state operators? I tested Model 1 against unseen test sets with troll tweets from Iran and Venezuela, with predictably terrible results.

L-R: Confusion matrices for Model 1 vs unseen test sets with Iranian troll tweets (L) and Venezuelan troll tweets (R).

Model 1’s recall scores tanked in the tests against unseen test sets with Iranian and Venezuelan troll tweets.

While it continued to pick out the real tweets by American users really well, it failed to catch the majority of the Iranian or Venezuelan troll tweets.

These results seem to suggest that the work of each state-backed operator is quite specific.

Instead of trying to build a massive “global” model that might catch all state-operators, it seems to make more sense to build smaller, more nimble models that can better identify specific state operators.

I built a second model to test this argument, this time training the model on Iranian troll tweets in combination with real tweets from verified American and international users.

Here are the results under similar test conditions, with the proportion of Iranian troll tweets gradually reduced from 50% to about 14%.

L-R: Confusion matrices for Model 2 vs three unseen test sets with 50% Iranian troll tweets, 30% troll tweets and 10% troll tweets.

Model 2 also turned in a sterling performance, picking out Iranian troll tweets it had not seen before.

Its f1 score was above 0.9 for all 3 tests.

And as the 3 confusion matrices above showed, the model was exceedingly good at picking out the troll tweets, scoring a perfect recall score in the 90–10 set where it picked out all 14 troll tweets and misclassified just 1 out of 100 tweets.

The conclusion is obvious (though not entirely apparent at the outset): A model trained on a particular state-backed operator’s tweets won’t generalise well.

To catch the state trolls, you’ll need highly tailored solutions for each market where they are found.

STEP 2.1: USING A WEB APP FOR QUICK CHECK-INS

Go to http://chuachinhon.pythonanywhere.com/ to try out the web app.

Catching these state-backed operators requires team effort, and not everyone involved will have the skills to run a large number of suspicious tweets through a machine learning model.

Neither is it efficient to run a model each time you want to check on a few potentially suspicious tweets.

To this end, I built a simple web app, http://chuachinhon.pythonanywhere.com/, where the user only needs to key in the text of a suspicious tweet to get a quick check on whether it could be a Russian troll tweet or not.

The app is simple and you can easily build 10 different versions if you need to put them in the hands of teams in 10 different countries or markets.

It won’t be as accurate as the latest model on the data scientist’s computer, but it serves as a quick diagnostic tool that would complement other tools used in Step 1 to identify a state troll’s digital fingerprint.
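Under the hood, the app is little more than a small Flask form wrapped around the saved pipeline. Here is a minimal sketch, with the file name, route and form field assumed for illustration:

```python
import joblib
from flask import Flask, render_template, request

app = Flask(__name__)
model = joblib.load("troll_detector.pkl")  # the fitted pipeline saved from the notebook (name assumed)

@app.route("/", methods=["GET", "POST"])
def index():
    verdict = None
    if request.method == "POST":
        tweet = request.form["tweet_text"]  # text keyed in by the user
        verdict = "troll" if model.predict([tweet])[0] == 1 else "real"
    # index.html is assumed to contain a simple form with a 'tweet_text' field
    return render_template("index.html", verdict=verdict)

if __name__ == "__main__":
    app.run()
```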

STEP 3: ANALYSING PREDICTIONS WITH SHAP

There are at least two ways to further analyse the model's predictions, such as by examining the predicted probabilities or plotting the frequencies of key words which appear most often.

But neither method offers the level of granularity that SHAP can provide, in terms of shedding light on what features prompted the model to predict whether a tweet is a real or troll tweet.

SHAP can also be used to see where the model's predictions went wrong and what caused the incorrect classifications, an essential input for updating the model as the trolls change their tactics.

A detailed explanation of SHAP, or SHapley Additive exPlanations, is beyond the scope of this post.

In the context of this project, it is perhaps easier to illustrate how SHAP works with a few examples.

SHAP EXAMPLE 1

Here's a tweet that Model 1 accurately classified as a real tweet: "An announcement of a summit date is likely to come when Trump meets with Chinese vice premier Liu He at the White House."

Each model has a unique base value, based on the average model output over the training data passed to the explainer. In this case, Model 1's base value is 0.4327.

Different vectorized features will push the model’s predictions in different directions.

If the eventual output is below the base value, it is classified as a real tweet.

If the output is above the base value, it is considered a troll tweet (in the context of how I’ve labelled the output in this project).

In the example above, we can see that factual words like “chinese”, “summit”, “premier” pushed the model towards classifying the tweet as a real one, while interestingly the words “trump meets” were pushing the model in the opposite direction.
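For those who want to generate similar plots, here is a rough sketch of how the SHAP values can be computed, assuming the vectoriser, TF-IDF and Logistic Regression steps are pulled out of the fitted pipeline; the exact calls in the notebooks may differ.

```python
import shap

# Unpack the fitted pipeline into its transform steps and the final classifier
vect = best_pipe.named_steps["vect"]
tfidf = best_pipe.named_steps["tfidf"]
clf = best_pipe.named_steps["clf"]  # the Logistic Regression model

X_train_vec = tfidf.transform(vect.transform(X_train))  # background data for the explainer
explainer = shap.LinearExplainer(clf, X_train_vec)

tweet = ["an announcement of a summit date is likely to come when trump meets "
         "with chinese vice premier liu he at the white house"]
x = tfidf.transform(vect.transform(tweet)).toarray()
shap_values = explainer.shap_values(x)

# Features pulling the output below the base value point towards 'real';
# features pushing it above the base value point towards 'troll'
shap.force_plot(explainer.expected_value, shap_values[0],
                feature_names=vect.get_feature_names_out(), matplotlib=True)
```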

SHAP EXAMPLES 2 AND 3

Let's look at two more tweets, which Model 1 correctly classified as Russian troll tweets: "@HillaryClinton #HillaryForPrison" and "@HillaryClinton Fuck off".

In the first tweet above, the hashtag “hillaryforprison” was the strongest feature in pushing the model to classify this as a troll tweet.

In the second tweet, the swear word and its combination with Hillary Clinton’s name were the strongest factors in pushing the model towards classifying it as a troll tweet.

While the model has no innate understanding of American politics or news, it has been fed with enough examples of real and troll tweets to be able to make a distinction between the two sets.

The model can be defeated by troll tweets, of course.

Let’s look at some examples where Model 1 got its predictions wrong.

SHAP EXAMPLES 4 AND 5

Model 1, the Russian troll detector, classified this tweet wrongly, predicting it as a troll tweet (above base value) when it is in fact a real tweet: "When Clinton got caught with her private email server, most of the folks i knew in the NatSec community were pissed…"

The words "pissed", "private email", and "caught" pushed Model 1 towards classifying this as a troll tweet, when it was in fact written by Dan Drezner, a Professor at The Fletcher School and a columnist for the Washington Post.

Model 1 also failed on numerous occasions when exposed to Iranian troll tweets, which it was not trained on.

It classified this tweet as real, when it was in fact a troll tweet: "Spain, Italy warn against investing in Israeli settlements."

Short and factual tweets written like a news headline seem to trip up the machine learning model.

Likewise, the model seems to struggle with slightly more complex tweets like the one involving the email server.

My takeaway from this is a simple one: Effective identification of state-backed disinformation campaigns on social media requires a good combination of human input/analysis with the smart use of machine learning and data analysis tools.

What seems obvious to a human may not be so for a machine with no geopolitical knowledge, while a machine can be far more efficient in spotting patterns which would take a human a long time to sort through manually.

LIMITATIONS

The chart above sums up some of the limitations of my approach to unmasking state-backed trolls on Twitter.

Language is perhaps the trickiest issue to deal with, from the perspective of model building.

On the one hand, you don’t want your troll detector to become a glorified language classifier.

On the other, you are missing out on a key trait of the state-backed trolls by training the model only on English tweets.

Data scientists familiar with deep learning techniques will perhaps have better solutions in this area.

The biggest limitation is the fact that this process is entirely reactive and diagnostic.

There is no way to pre-empt the work of the state trolls, at least not with the tools available in public.

This is my first attempt at applying my nascent data science skills to a complex problem like online disinformation.

Mistakes here and in the notebooks are all mine, and I would welcome feedback from experts in the field, or any corrections in general.

Finally, a big thank you to Benjamin Singleton for his help with this long-suffering project.

Special shoutout also goes out to Susan Li for her excellent NLP and Flask tutorials, which helped me tremendously.

Here again are links to key resources for this project:

Github Repo: https://github.com/chuachinhon

The Mueller Report: "Report On The Investigation Into Russian Interference In The 2016 Presidential Election"

