Analyzing the Potential of Machine Learning in Political Science

Two ethical framings are relevant here.

The first is utilitarianism, which aims for the greatest good for the greatest number. Here it implies that the method is ethical because both stakeholders benefit: people use Twitter to share their opinions, and political campaigns analyze those tweets.

On the other hand, Kant’s theory of morality would argue for limiting access to the data, since people are used as a means to an end: their opinions are analyzed only for the sake of political campaigns.

These two framings are in conflict, and one should consider the context of the situation. Still, the practice can be seen as ethical: by deciding to use social media, one consents to the possibility of one’s posts being used for various purposes.

Natural Language Processing (for Sentiment Analysis)

After gathering the tweets, we need to analyze the language.

For that, we use Natural Language Processing (NLP), which refers to the ability of a computer to understand human language in all its forms.

NLP researchers aim to understand and manipulate natural language to perform desired tasks: in our case, to classify data consisting of people’s opinions and analyze it statistically.

For the purposes discussed here, one needs to examine the sentiment expressed in the tweets.

Sentiment analysis (SA) investigates people’s opinions towards different matters.

The first step of SA is pre-processing the text.

It starts with part-of-speech tagging, in which some nouns, verbs, and adjectives are tagged for later elimination.

Then, words are replaced with their roots, e.g. ‘city’ and ‘cities’ become the same token.

Afterward, prepositions and articles get removed, but negations do not, as they significantly affect the attitude of the user.

Thus, negations are kept together with the word they refer to.

Clauses are also considered because they strengthen or weaken the intensity of the opinion.

Each tweet then gets assigned to a particular category indicating its sentiment or emotion.

Finally, each tweet is classified into one of three categories: -1 for a negative sentiment, 0 for a mixed one, and 1 for a positive one.
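To make these steps concrete, here is a minimal Python sketch of the pre-processing and labelling pipeline described above. The stop-word list, the suffix-stripping stemmer, the negation-merging rule, and the tiny lexicon are all illustrative assumptions; a real system would use full resources (e.g. NLTK’s stop words and stemmers) rather than the toy versions below.

```python
import re

# Illustrative stop words and sentiment lexicon; real systems use full lists.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "to", "for"}
NEGATIONS = {"not", "no", "never"}
LEXICON = {"good": 1, "great": 1, "bad": -1, "terrible": -1,
           "not_good": -1, "not_bad": 1}

def stem(word: str) -> str:
    """Crude root extraction: 'cities' -> 'city' style suffix stripping."""
    if word.endswith("ies"):
        return word[:-3] + "y"
    if word.endswith("s") and len(word) > 3:
        return word[:-1]
    return word

def preprocess(tweet: str) -> list[str]:
    tokens = [stem(t) for t in re.findall(r"[a-z']+", tweet.lower())]
    merged, skip = [], False
    for i, tok in enumerate(tokens):
        if skip:
            skip = False
            continue
        if tok in NEGATIONS and i + 1 < len(tokens):
            # Keep the negation attached to the word it refers to.
            merged.append(f"not_{tokens[i + 1]}")
            skip = True
        elif tok not in STOP_WORDS:
            merged.append(tok)
    return merged

def label(tweet: str) -> int:
    """Map a tweet to -1 (negative), 0 (mixed/neutral), or 1 (positive)."""
    score = sum(LEXICON.get(tok, 0) for tok in preprocess(tweet))
    return 0 if score == 0 else (1 if score > 0 else -1)

print(label("The debate was not good"))    # -1
print(label("Great speech, great ideas"))  # 1
```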

Machine Learning Model

Before discussing the machine learning model, we will define what characterizes an efficient model, using 4 criteria created by Beauchamp, an assistant professor at Northeastern University who specializes in political science and machine learning.

Firstly, the success of a model needs to be measured statistically.

The most common measurement is the Mean Absolute Error (MAE), which does not take the size of the sample into account.

Alternatively, one could use statistical significance, which minimizes the problem of an unrepresentative sample because it puts the sample into the context of the population by considering the sample size.
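As a small illustration of the difference, the sketch below computes the MAE of a set of hypothetical daily errors and then runs a one-sample t-test on the same errors; the p-value, unlike the MAE itself, depends on how many observations we have. All numbers here are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical daily predictions vs. actual poll numbers (percentage points).
predicted = np.array([48.2, 51.0, 49.5, 50.3, 47.8])
actual    = np.array([47.9, 50.1, 50.2, 49.8, 48.5])

errors = predicted - actual

# Mean Absolute Error: the average size of the miss, blind to sample size.
mae = np.abs(errors).mean()
print(f"MAE: {mae:.2f} points")

# A one-sample t-test on the errors puts them in the context of the
# population: its p-value shrinks or grows with the number of observations.
t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```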

Secondly, before the creation of the model, clear benchmarks need to be set.

These will distinguish between successful and unsatisfactory models and help avoid confirmation bias.

In this case, existing polls can be used as a benchmark.

Thirdly, we need to guarantee that the training set for the model is large enough to improve the accuracy of the model.

Finally, “out of sample” analysis needs to be performed on the model to evaluate its performance on future data.

This analysis involves repeating the prediction multiple times.

The model is trained on the data collected before a given day.

Afterwards, the data from that day is fed into the model to make a prediction.

This procedure is repeated, and the errors for each subsequent day are calculated to assess the model’s performance (Beauchamp, 2015).
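A minimal sketch of this rolling, out-of-sample evaluation, assuming a matrix X of daily features and a vector y of daily targets; the synthetic data and the use of scikit-learn’s LinearRegression are illustrative choices, not Beauchamp’s exact setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily data: rows are days, columns are features
# (e.g. tweet volume and sentiment shares); y is the daily target.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X @ np.array([0.5, -0.2, 0.8]) + rng.normal(scale=0.1, size=60)

errors = []
for day in range(30, len(y)):
    # Train only on the data collected before the given day...
    model = LinearRegression().fit(X[:day], y[:day])
    # ...then predict that day and record the error.
    prediction = model.predict(X[day:day + 1])[0]
    errors.append(abs(prediction - y[day]))

print(f"Out-of-sample MAE over {len(errors)} days: {np.mean(errors):.3f}")
```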

Having defined the features of a successful model, we will discuss the model itself.

The goal of the model is to predict the percentage of voters who will vote for a certain candidate; this vote share is the dependent variable.

Since the output is a continuous quantity, we are dealing with a linear regression problem.

The model needs to specify a set of independent variables (predictors) from which the dependent variable can be estimated.

Until now, two main methods have been used for this prediction: volume-based analysis and sentiment analysis.

The volume-based approach involves measuring the number of tweets that mention a candidate.

Sentiment analysis, on the other hand, uses natural language processing to classify each tweet as positive, negative, or mixed.

To achieve maximum accuracy, sentiment analysis needs to be combined with the volume-based approach.

To achieve this, the share of positive and negative volume across all tweets is calculated, in addition to the ratio of positive to negative tweets for each party (Bermingham and Smeaton, 2011).

Based on these variables, we fit a regression model that can make predictions.

This results in a more accurate model because both volume and sentiment have predictive power.
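A sketch of how these variables might be assembled and fed into a regression, assuming tweets have already been labelled -1/0/1 for each party. The helper party_features and all numbers below are hypothetical; the exact feature definitions in Bermingham and Smeaton’s work may differ.

```python
from sklearn.linear_model import LinearRegression

def party_features(labels: list[int], total_tweets: int) -> list[float]:
    """Volume and sentiment features for one party's labelled tweets."""
    pos = sum(1 for s in labels if s == 1)
    neg = sum(1 for s in labels if s == -1)
    volume_share = len(labels) / total_tweets  # volume-based signal
    pos_share = pos / len(labels)              # share of positive tweets
    neg_share = neg / len(labels)              # share of negative tweets
    pos_neg_ratio = pos / max(neg, 1)          # positive-to-negative ratio
    return [volume_share, pos_share, neg_share, pos_neg_ratio]

# Hypothetical training data: one feature row per party per day,
# with the realized vote share (or poll number) as the target.
X = [
    party_features([1, 1, 0, -1], total_tweets=10),
    party_features([1, -1, -1, -1, 0, 0], total_tweets=10),
]
y = [52.0, 48.0]  # illustrative vote shares

model = LinearRegression().fit(X, y)
print(model.predict([party_features([1, 1, 1, -1], total_tweets=8)]))
```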

After implementing the model, it is important to validate it based on the 4 criteria defined in the previous paragraph.

Depending on the exact implementation, the model’s accuracy will vary.

Chandrasekar et al. managed to reach an accuracy of 80% doing the sentiment analysis by hand (Chandrasekar et al., 2012).

Bermingham and Smeaton used volume-based and sentiment analysis and achieved an accuracy of 95% (Bermingham and Smeaton, 2011).

Beauchamp was able to achieve an accuracy of 98% and also validated the model based on his 4 criteria (Beauchamp, 2015).

All of this research shows the tremendous potential of using machine learning to predict elections.

This application would have several advantages over predictive models generated through polling.

Firstly, this would likely be less time-consuming and costly than polling.

Once the algorithm is created, it only needs to be run on the new data.

Additionally, the systemic limitation around conformity that polls struggle with is largely mitigated in this model: we now use a data source where people mostly interact with acquaintances instead of strangers, which reduces conformity pressure (Emam-Djomeh and Lauter, 2016).

Furthermore, machine learning allows for “live updates”, because new tweets are constantly posted on Twitter.

This model also allows for following the voter turnout in specific regions of the country, which is almost impossible to do with polls.

Last but not least, Beauchamp’s research shows that ML can be more accurate than traditional polls (Beauchamp, 2015).

Figure 1: Visualisation of the process our machine learning model goes through.

First, the data is collected and taken through pre-model processing.

Then, the data is analysed through both sentiment analysis and the volume-based approach.

After that, both approaches are ‘weighted’ using a regression method that best fits the current data, with 0.5 and 0.8 used here as examples.

Finally, all analyses are combined into a final score.
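In code, the weighting step of Figure 1 can be pictured as below; the two scores and the 0.5/0.8 weights are the figure’s illustrative examples, and in the actual model the regression learns these weights from the data.

```python
# Illustrative combination step from Figure 1: each sub-signal gets a weight;
# 0.5 and 0.8 mirror the example weights shown in the figure.
volume_score = 0.62      # hypothetical normalized volume-based signal
sentiment_score = 0.55   # hypothetical normalized sentiment signal

weights = {"volume": 0.5, "sentiment": 0.8}

final_score = (weights["volume"] * volume_score
               + weights["sentiment"] * sentiment_score)
print(f"Final score: {final_score:.2f}")
```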

Limitations

There are currently some limitations to our model.

Specifically, the fact that it relies on Twitter data limits our ability to use it in other countries and time periods.

Currently, Twitter is a very popular social media platform in the United States, with only four other countries having a higher ratio of Twitter users than the US (Kuwait, the Netherlands, Brunei, and the UK) (Baronchelli et al., 2013).

However, over time the number of Twitter users may decrease, making the model less reliable.

Similarly, Twitter is much less popular in many other countries, particularly non-Western countries or countries with lower GDPs (Baronchelli et al., 2013).

This makes it significantly harder to use the model in other countries, as it raises the possibility of non-representative samples.

Since we cannot control this aspect of the model, we have to find a way to circumvent this problem.

This could be done by adding other social media to our model to make it more accurate in different countries.

Non-representative samples can result even in countries where Twitter is popular.

If one presidential candidate has significantly more followers on Twitter, our sample can become non-representative.

This difference in followers can have various causes, such as the supporters of one candidate simply using Twitter more.

During the 2012 presidential election in the US, Barack Obama had twice as many followers on Twitter as Mitt Romney (Chandrasekar et al., 2012).

This difference can create a biased sample and affect the accuracy of the model.

Another problem with using social media as a data source is spammers.

If people realize that predictions are made through Twitter, they can purposefully spam tweets to affect the predictions.

These spammers can make the model unreliable and can also bias the sampling.

These limitations need to be addressed before such a model is used in political campaigns.

When doing sentiment analysis, language barriers can also become a problem when using the model in different countries: our current model only considers English when classifying positive and negative emotions.

Thus, the model should be adjusted in each country to ensure the sentiment analysis is in line with the language(s) spoken in that country.
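One way to picture this adjustment is a per-language sentiment lexicon, as in the sketch below; the language codes and word values are invented for illustration.

```python
# Illustrative per-language lexicons; a real deployment would plug in
# curated sentiment resources for each language.
LEXICONS = {
    "en": {"good": 1, "bad": -1},
    "nl": {"goed": 1, "slecht": -1},
}

def score(text: str, language: str) -> int:
    """Score a text using the lexicon for the given language code."""
    lexicon = LEXICONS.get(language)
    if lexicon is None:
        raise ValueError(f"No sentiment lexicon for language: {language}")
    return sum(lexicon.get(word, 0) for word in text.lower().split())

print(score("goed debat", "nl"))  # 1
```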

Finally, there are problems with the nuances of a language even when it is known: slang and sarcasm.

Slang can vary among peoples speaking the same language, and it can change rapidly, with new words emerging and becoming popular.

Additionally, political discussions often involve sarcasm.

At this point in technological development, programs are still unable to analyze these fringe cases.

Other Predictive Methods

An alternative way to predict elections not yet discussed is the 13 Keys to the White House method, devised by American historian Allan Lichtman and Russian scientist Vladimir Keilis-Borok (Lichtman et al., 1981).

The method consists of 13 statements, about either the state of the US (like ‘there is no significant unrest’) or characteristics of the candidates (like ‘the candidate of the incumbent party is charismatic or a national hero’).

Depending on how many of these statements are true, it predicts the outcome of the election.

This method has proven remarkably accurate: it has correctly predicted every election since it was devised in 1981.
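A sketch of the method’s decision rule; Lichtman’s published rule is that six or more false keys predict a loss for the incumbent party, but the key texts below are paraphrased examples rather than the official wording.

```python
# A sketch of the 13 Keys decision rule. The key texts are paraphrased
# examples; the full method lists 13 true/false statements.
KEYS = [
    "There is no significant third-party challenge",
    "There is no significant social unrest",
    "The incumbent-party candidate is charismatic or a national hero",
    # ... the remaining keys would be listed here ...
]

def predict_incumbent_wins(answers: dict[str, bool]) -> bool:
    """Each key is answered True (favors the incumbent party) or False."""
    false_keys = sum(1 for truth in answers.values() if not truth)
    return false_keys < 6  # fewer than six false keys -> incumbent party wins

# Hypothetical answers for illustration only.
answers = {key: True for key in KEYS}
answers["There is no significant social unrest"] = False
print(predict_incumbent_wins(answers))  # True
```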

This data can be useful for our model, too.

Our current model could be combined with other predictive data, like FiveThirtyEight’s poll-based forecasts or these 13 keys, to create an even stronger prediction.

What Else We Can Get from the Data

Besides predicting voting preferences, machine learning can also advise political campaigns on how to persuade voters.

At Northeastern University, Nick Beauchamp is currently developing an algorithm “that could make it easier for politicians to know exactly what to say to make us love them and hate their enemies” (Lapowsky, 2015).

He conducted an experiment using data available online on Obamacare to construct several paragraphs explaining the costs and benefits of the healthcare law.

He then used Amazon’s crowdsourcing platform, Mechanical Turk, to ask different people to rate the paragraphs on a scale of 1 to 9, from strongly disapproving to strongly approving of Obamacare.

He concluded that some of the paragraphs constructed by his algorithm were much more persuasive than others.

Using algorithms like these, campaigns can combine past speeches, their respective responses, and social media posts to figure out what to say and what to avoid in a political speech.

These algorithms can also be used to determine why voters favor the opposition more and what could be said to redirect the support.

But all these potential manipulation tools call into question the very principle of free choice.

Social media information is public, and politicians have a right to access that information too.

Persuasive techniques have always been used in politics.

But if our politicians continue to find more efficient ways of persuading us, what will happen to our ability to choose freely? Lapowsky (2015) claims that “if we’re more aware of how easily we can be manipulated, perhaps we will be more willing to question those who are trying to manipulate us,” and in doing so we become more conscious of our choices than ever.

Future Possibilities

As discussed, the machine learning model solves most of the problems that the survey method creates.

However, it simultaneously creates new challenges as mentioned in the ‘Limitations’ section of the paper.

Further research can be done on combining polls and machine learning, as the two together can yield even more accurate results.

To combine the two methods, polls can either be used as the dependent variable of the machine learning model or be used to train it.
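A minimal sketch of the first option, where daily tweet-derived features predict a daily poll number as the dependent variable; all numbers and feature names are hypothetical, and this mirrors, rather than reproduces, Beauchamp’s setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily tweet features: [volume share, positive share, negative share].
tweet_features = np.array([
    [0.40, 0.55, 0.20],
    [0.45, 0.50, 0.25],
    [0.42, 0.60, 0.15],
    [0.48, 0.58, 0.18],
])
# The daily poll number is the dependent variable the model learns to predict.
daily_polls = np.array([48.0, 47.5, 49.2, 49.8])

model = LinearRegression().fit(tweet_features, daily_polls)

# The fitted model can then estimate the "poll" on days with no actual polling.
today = np.array([[0.46, 0.62, 0.14]])
print(f"Estimated poll number: {model.predict(today)[0]:.1f}%")
```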

Even in their current state, machine learning algorithms show great potential for making accurate predictions about election outcomes.

As the field develops, the accuracy of the models will increase, and the use of ML in political campaigns will be further justified.

This paper aims to inspire further research in the field by showing the current exciting possibilities.

Natural Language Processing Explanation

There are two models used for sentiment analysis.

The first one is called BoW (Bag of Words).

Its goal is to categorize documents by analyzing and classifying different corpora.

BoW usually relies on a large list of words that carry sentiment; each word contributes its value when it is found in the text.

The flaw is that it always treats words as isolated objects and never attempts to understand the structure of the text beyond predefined lexical units.
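The sketch below illustrates that flaw with a naive BoW scorer over a toy lexicon (the words and values are assumptions): because each word is scored in isolation, negation and other structure are invisible to it.

```python
# Toy lexicon; real systems use large curated lists of sentiment-bearing words.
LEXICON = {"good": 1.0, "honest": 1.0, "bad": -1.0, "corrupt": -1.0}

def bow_score(text: str) -> float:
    """Sum the sentiment values of known words, ignoring all structure."""
    return sum(LEXICON.get(word, 0.0) for word in text.lower().split())

# Because each word is scored in isolation, negation is invisible:
print(bow_score("a good honest candidate"))  # 2.0
print(bow_score("not a good candidate"))     # 1.0 -- the 'not' is ignored
```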

Another model uses NLP and attempts to understand the text by tagging parts of speech and entities, as well as taking context into account.

Regarding the algorithm, there are three main classification levels of sentiment analysis (SA): document-level, sentence-level, and aspect-level.

These levels aim to classify, respectively, the whole document, each sentence, and each entity as expressing a positive or negative opinion (Medhat et al., 2014).
