Using Transfer Learning and Pre-trained Language Models to Classify Spam

Steve Mutuvi · Jan 31

Transfer learning, an approach where a model developed for one task is reused as the starting point for a model on a second task, is an important approach in machine learning.

Prior knowledge from one domain and task is leveraged into a different domain and task.

Transfer learning, therefore, draws inspiration from human beings, who are capable of transferring and leveraging knowledge from what they have learned in the past for tackling a wide variety of tasks.

In computer vision, great advances have been made using the transfer learning approach, with pre-trained models serving as a starting point.

This has sped up training and improved the performance of deep learning models.

This is attributed to the availability of huge datasets such as ImageNet, which have enabled the development of the state-of-the-art pre-trained models used for transfer learning.

Until recently, the natural language processing community was lacking its ImageNet equivalent.

But development of transfer learning techniques in NLP continues to gain traction.

In NLP, transfer learning techniques are mainly based on pre-trained language models, which repurpose and reuse deep learning models trained in high-resource languages and domains.

The pre-trained models are then fine-tuned for downstream tasks, often in low-resource settings.

The downstream tasks include part-of-speech tagging, text classification, and named-entity recognition, among others.

Contextualized Embeddings

Word embeddings play a critical role in the realization of transfer learning in NLP.

The intuition behind word embeddings is that words are represented as low-dimensional vectors that capture both the syntax and semantics of the text corpus.

Words with similar meanings tend to occur in similar context.

The word representations are learned by exploiting vast amounts of text corpora.

A popular implementation of word embeddings is the Word2Vec model, which has two training options: Continuous Bag of Words (CBOW) and the Skip-gram model.
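To make the Skip-gram objective concrete, the following sketch generates the (center word, context word) training pairs that the model learns from; the function name and window size are illustrative, not part of any library:

```python
# Sketch of how the Skip-gram variant of Word2Vec frames its training data:
# each center word is used to predict the words in a fixed-size window
# around it. (Illustration only; a real model then learns vectors so that
# words sharing many context words end up with similar representations.)

def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for a list of tokens."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the cat sat on the mat".split(), window=1)
```

With a window of 1, each interior token contributes two pairs and each boundary token one, so the six-word sentence yields ten training pairs such as ("cat", "the") and ("cat", "sat").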

Word embeddings are often used as the first data processing layer in a deep learning model.

One limitation of standard word embedding techniques such as Word2Vec, fastText, and GloVe is that they are unable to disambiguate between the different senses of a given word.

In other words, each instance of a given word ends up having the same representation regardless of the context in which it appears.
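This limitation can be illustrated with a toy lookup table; the vectors below are made-up numbers, not trained embeddings:

```python
# Toy static embedding table (hypothetical 3-d vectors) illustrating the
# limitation: a lookup-based embedding has one vector per word type, so
# "bank" gets the same vector in both sentences below, even though the
# intended senses (financial vs. riverside) differ.
static_embeddings = {
    "bank": [0.2, 0.7, 0.1],
    "river": [0.1, 0.8, 0.3],
    "money": [0.9, 0.2, 0.4],
}

def embed(sentence):
    """Look up a static vector for each known word in the sentence."""
    return [static_embeddings[w] for w in sentence.split() if w in static_embeddings]

v1 = embed("money in the bank")[-1]   # financial sense of "bank"
v2 = embed("bank of the river")[0]    # geographic sense of "bank"
assert v1 == v2  # identical representation despite different senses
```

A contextual model such as ELMo or BERT would instead produce two different vectors here, because the representation is a function of the whole sentence.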

Recently, contextual word embeddings such as Embeddings from Language Models (ELMo) and Bidirectional Encoder Representations from Transformers (BERT) have emerged.

These techniques generate embeddings for a word based on the context in which the word appears, thus producing slightly different embeddings for each occurrence of a word.

ELMo uses a combination of independently trained left-to-right and right-to-left LSTMs to generate features for downstream tasks.

On the other hand, BERT representations are jointly conditioned on both the left and right context and use the Transformer, a neural network architecture based on a self-attention mechanism.

The Transformer has been shown to outperform recurrent neural network architectures at modeling long-term dependencies in text.

The integration of contextual word embeddings into neural architectures has led to consistent improvements in important NLP tasks such as sentiment analysis, question answering, reading comprehension, textual entailment, semantic role labeling, coreference resolution, and dependency parsing.

Language model embeddings can be used as features in a target model or a language model can be fine-tuned on target task data.

Training a model on a large-scale dataset and then fine-tuning the pre-trained model for a target task (transfer learning, if you'll recall) can be particularly beneficial for low-resource languages, where labeled data is limited.

The Flair Library

Flair is a library for state-of-the-art NLP developed by Zalando Research.

It’s built in Python on top of the PyTorch framework.

Flair allows for the application of state-of-the-art NLP models to text, such as named entity recognition (NER), part-of-speech tagging (PoS), sense disambiguation, and classification.

It is multilingual and allows you to use and combine different word and document embeddings, including the BERT embeddings, ELMo embeddings, and their proposed Flair embeddings.

In addition, Flair allows you to train your own language model, targeted to your language or domain, and apply it to the downstream task.

Spam Classification using Flair

While email continues to be the dominant medium for digital communication for both consumer and business uses, unsolicited bulk emails (i.e. spam) make up approximately 53.5% of global email traffic (as of September 2018).

Machine learning-based spam filtering approaches have been applied with success to automatically classify spam and non-spam emails.

A crucial component in such approaches is word embeddings, typically trained over very large collections of unlabeled data to assist learning and generalization.

Contextualized word embeddings have been shown to significantly improve the performance of text classifiers because they are able to capture word semantics in context.

This means that the same word can have different embeddings depending on its contextual use, thus disambiguating words and addressing polysemy which affects the accuracy of text classification models.

The following implementation illustrates how to use the Flair library to train a language model and fine-tune it to classify spam.

Getting started

We begin by installing the Flair library using the pip command (pip install flair). The required Python libraries are then imported.

Loading and Pre-processing the Data

We use the SMS Spam Collection, a public dataset of labeled SMS messages collected for mobile phone spam research.

The data is read using pandas and basic preprocessing is done: removing duplicates, ensuring the labels are prefixed with __label__, and splitting the dataset into train, dev, and test sets using an 80/10/10 split.
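A sketch of this preprocessing with pandas; the tiny inline DataFrame stands in for the real dataset file, and the column names and shuffle seed are assumptions:

```python
import pandas as pd

# Sketch of the preprocessing described above. The SMS Spam Collection is a
# tab-separated file with a label ("ham"/"spam") and a message text; here a
# small inline DataFrame stands in for pd.read_csv("SMSSpamCollection", ...).
def preprocess(df):
    df = df.drop_duplicates(subset="text")               # remove duplicates
    df = df.copy()
    df["label"] = "__label__" + df["label"].astype(str)  # FastText-style labels
    df = df.sample(frac=1, random_state=42)              # shuffle before splitting
    n = len(df)
    train = df.iloc[: int(0.8 * n)]                      # 80% train
    dev = df.iloc[int(0.8 * n) : int(0.9 * n)]           # 10% dev
    test = df.iloc[int(0.9 * n) :]                       # 10% test
    return train, dev, test

df = pd.DataFrame({
    "label": ["ham", "spam"] * 10,
    "text": [f"message {i}" for i in range(20)],
})
train, dev, test = preprocess(df)
```

Each split would then be written out (e.g. with to_csv) so that Flair can read the three files from one data folder.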

Flair’s classification dataset needs to be formatted based on Facebook’s FastText format, which requires labels to be defined at the beginning of each line starting with the prefix __label__.

The next step is to train the model.

The train, dev, and test sets are then loaded into a corpus object.

Finally, we load the pre-trained model and use it to predict if a message is spam or not.

Model Evaluation

The model achieved an F-score of 0.9845 after 10 epochs, using default parameters.

The F-score here is computed globally (micro-averaged), by counting the total true positives, false negatives, and false positives.

It is a measure of a test’s accuracy that considers both the precision and the recall of the test to compute the score.

Precision is the fraction of relevant instances among the retrieved instances, while recall is the fraction of relevant instances that have been retrieved over the total amount of relevant instances.
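These definitions combine into the F-score computation over global counts; the function and example counts below are illustrative:

```python
# Micro-averaged F-score from global counts of true positives (tp),
# false positives (fp), and false negatives (fn).
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of predicted positives that are correct
    recall = tp / (tp + fn)     # fraction of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# e.g. 90 true positives, 10 false positives, 10 false negatives:
round(f_score(90, 10, 10), 4)  # 0.9
```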

In classification tasks for which every test case is guaranteed to be assigned to exactly one class, micro-F is equivalent to accuracy.

This won’t be the case in multi-label classification.

Baseline Model

The logistic regression baseline model achieved an F-score of 0.9668, marginally lower than that of the Flair model above.
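A minimal sketch of such a baseline with scikit-learn; the article does not specify its feature representation, so the TF-IDF features and the toy training texts here are assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the SMS dataset: a few spam and ham messages.
texts = [
    "win a free prize now",
    "free cash prize, claim now",
    "are we meeting tomorrow",
    "see you at lunch tomorrow",
]
labels = ["spam", "spam", "ham", "ham"]

# TF-IDF features feeding a logistic regression classifier.
baseline = make_pipeline(TfidfVectorizer(), LogisticRegression())
baseline.fit(texts, labels)

pred = baseline.predict(["claim your free prize"])[0]
```

Unlike the contextual-embedding model, this baseline represents each word by a single global weight, which is one plausible reason for its slightly lower score.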

Discussion

Machine learning models are data intensive and require access to large annotated datasets to train good predictive NLP models.

The required annotated data will in most cases not be available beforehand for many domains or languages.

Annotating such datasets is a time-consuming, expensive, and challenging exercise.

But sufficient, accurately labeled data is a key determinant of a model's prediction accuracy.

In view of these challenges, transfer learning via pre-trained language models is a promising way forward.

Editor’s Note: Ready to dive into some code? Check out Fritz on GitHub.

You’ll find open source, mobile-friendly implementations of popular machine and deep learning models, along with training scripts, project templates, and tools for building your own ML-powered iOS and Android apps.

Join us on Slack for help with technical problems, to share what you’re working on, or just chat with us about mobile development and machine learning.

And follow us on Twitter and LinkedIn for all the latest content, news, and more from the mobile machine learning world.

Discuss this post on Hacker News and Reddit.
