An Essential Guide to Pretrained Word Embeddings for NLP Practitioners

Overview

- Understand the importance of pretrained word embeddings
- Learn about two popular types of pretrained word embeddings – Word2Vec and GloVe
- Compare the performance of pretrained word embeddings and embeddings learned from scratch

Introduction

How do we make machines understand text data? We know that machines are supremely adept at working with numerical data, but they become sputtering instruments if we feed them raw text.

The idea is to create a representation of words that captures their meanings, their semantic relationships, and the different contexts they are used in.

That's what word embeddings are – numerical representations of text.

And pretrained word embeddings are a key cog in today’s Natural Language Processing (NLP) space.

But the question still remains – do pretrained word embeddings give our NLP model an extra edge? That's an important question, and you should know the answer to it.

So in this article, I will shed some light on the importance of pretrained word embeddings.

We’ll also compare the performance of pretrained word embeddings and learning embeddings from scratch for a sentiment analysis problem.


Make sure you check out Analytics Vidhya's comprehensive and popular Natural Language Processing (NLP) using Python course!

Table of Contents

- What are Pretrained Word Embeddings?
- Why do we need Pretrained Word Embeddings?
- What are the Different Pretrained Word Embeddings?
  - Google's Word2Vec
  - Stanford's GloVe
- Case Study: Learning Embeddings from Scratch vs Pretrained Word Embeddings

What are Pretrained Word Embeddings?

Let's answer the big question straightaway – what exactly are pretrained word embeddings? Pretrained word embeddings are the embeddings learned in one task that are used for solving another similar task.

These embeddings are trained on large datasets, saved, and then used for solving other tasks.

That’s why pretrained word embeddings are a form of Transfer Learning.

  Transfer learning, as the name suggests, is about transferring the learnings of one task to another.

Learnings could be either weights or embeddings.

In our case here, learnings are the embeddings.

Hence, this concept is known as pretrained word embeddings.

In the case of weights, the concept is known as a pretrained model.

You can learn more about Transfer Learning in this in-depth article: Transfer learning and the Art of using Pretrained Models in Deep Learning.

But why do we need pretrained word embeddings in the first place? Why can't we learn our own embeddings from scratch? I will answer these questions in the next section.

Why do we need Pretrained Word Embeddings?

Pretrained word embeddings capture the semantic and syntactic meaning of a word as they are trained on large datasets.

They are capable of boosting the performance of a Natural Language Processing (NLP) model.

These word embeddings come in handy during hackathons and of course, in real-world problems as well.

But why should we not learn our own embeddings? Well, learning word embeddings from scratch is a challenging problem for two primary reasons:

- Sparsity of training data
- Large number of trainable parameters

Sparsity of training data

One of the primary reasons for not learning embeddings from scratch is the sparsity of training data.

Most real-world problems involve small datasets with a large number of rare words. The embeddings learned from such datasets cannot arrive at the right representation for each word. To do so, the dataset needs a rich vocabulary, and a rich vocabulary is built from words that occur frequently.

Large number of trainable parameters

Secondly, the number of trainable parameters increases while learning embeddings from scratch.

This results in a slower training process.

Learning embeddings from scratch might also leave you unsure of how well the learned representations actually capture word meaning.

So, the solution to all the above problems is pretrained word embeddings.

Let us discuss different pretrained word embeddings in the coming section.

What are the Different Pretrained Word Embeddings?

I would broadly divide the embeddings into two classes: word-level and character-level embeddings.

ELMo and Flair embeddings are examples of Character-level embeddings.

In this article, we are going to cover two popular word-level pretrained word embeddings:

- Google's Word2Vec
- Stanford's GloVe

Let's understand how Word2Vec and GloVe work.

Google's Word2Vec Pretrained Word Embedding

Word2Vec is one of the most popular pretrained word embeddings, developed by Google.

Word2Vec is trained on the Google News dataset (about 100 billion words).

It has several use cases such as Recommendation Engines, Word Similarity, and different Text Classification problems as well.

The architecture of Word2Vec is really simple.

It’s a feed-forward neural network with just one hidden layer.

Hence, it is sometimes referred to as a Shallow Neural Network architecture.

Depending on the way the embeddings are learned, Word2Vec is classified into two approaches:

- Continuous Bag-of-Words (CBOW)
- Skip-gram

The Continuous Bag-of-Words (CBOW) model learns to predict the focus word given its neighboring words, whereas the Skip-gram model learns to predict the neighboring words given the focus word.

That's why Continuous Bag-of-Words and Skip-gram are inverses of each other.

For example, consider the sentence: “I have failed at times but I never stopped trying”.

  Let’s say we want to learn the embedding of the word “failed”.

So, here the focus word is “failed”.

The first step is to define a context window.

A context window refers to the number of words appearing on the left and right of a focus word.

The words appearing in the context window are known as neighboring words (or context).

Let's fix the context window to 2 and list the input–output pairs for both approaches:

- Continuous Bag-of-Words: Input = [I, have, at, times], Output = failed
- Skip-gram: Input = failed, Output = [I, have, at, times]

As you can see, CBOW accepts multiple words as input and produces a single word as output, whereas Skip-gram accepts a single word as input and produces multiple words as output.
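As a quick, hands-on aside (not part of the case study later in this article), here is a minimal sketch of training both variants with the gensim library – gensim and its 4.x API are assumptions here; sg=0 selects CBOW and sg=1 selects Skip-gram:

from gensim.models import Word2Vec

# Toy corpus: a single tokenized sentence.
sentences = [["i", "have", "failed", "at", "times", "but", "i", "never", "stopped", "trying"]]

# window=2 matches the context window used above; sg=0 -> CBOW, sg=1 -> Skip-gram.
cbow = Word2Vec(sentences, vector_size=100, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=100, window=2, min_count=1, sg=1)

print(cbow.wv["failed"][:5])       # first few dimensions of the CBOW vector for "failed"
print(skipgram.wv["failed"][:5])   # first few dimensions of the Skip-gram vector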

So, let us define the architecture according to the input and output.

But keep in mind that each word is fed into the model as a one-hot vector.

Stanford's GloVe Pretrained Word Embedding

The basic idea behind the GloVe word embedding is to derive the relationship between words from global statistics.

But how can statistics represent meaning? Let me explain.

One of the simplest ways is to look at the co-occurrence matrix.

A co-occurrence matrix tells us how often a particular pair of words occur together.

Each value in a co-occurrence matrix is a count of a pair of words occurring together.

For example, consider a corpus: “I play cricket, I love cricket and I love football”.

From the co-occurrence matrix of this corpus, we can easily compute the probability of one word appearing in the context of another.

Just to keep it simple, let's focus on the word "cricket":

p(cricket | play) = 1
p(cricket | love) = 0.5

Next, let's compute the ratio of these probabilities:

p(cricket | play) / p(cricket | love) = 2

As the ratio is greater than 1, we can infer that "play" is more relevant to cricket than "love" is.

Similarly, if the ratio were close to 1, both words would be about equally relevant to cricket.

We are able to derive the relationship between the words using simple statistics.
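Here is a minimal sketch of these co-occurrence statistics for the toy corpus. The punctuation is dropped and a context window of 1 is used – both are assumptions, chosen because they reproduce the probabilities quoted above:

from collections import defaultdict

corpus = "I play cricket I love cricket and I love football".lower().split()
window = 1

# Count word frequencies and how often each ordered pair co-occurs within the window.
counts = defaultdict(int)
cooc = defaultdict(int)
for i, word in enumerate(corpus):
    counts[word] += 1
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[(word, corpus[j])] += 1

p_cricket_given_play = cooc[("play", "cricket")] / counts["play"]   # 1.0
p_cricket_given_love = cooc[("love", "cricket")] / counts["love"]   # 0.5
print(p_cricket_given_play / p_cricket_given_love)                  # ratio = 2.0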

This is the idea behind the GloVe pretrained word embedding.

GloVe learns to encode the information of the probability ratio in the form of word vectors.

The most general form of the model, as given in the GloVe paper, is

F(w_i, w_j, w̃_k) = P_ik / P_jk

where w_i and w_j are word vectors, w̃_k is a context word vector, and P_ik is the probability that word k appears in the context of word i.

Case Study: Learning Embeddings from Scratch vs Pretrained Word Embeddings

Let's take a case study to compare the performance of learning our own embeddings from scratch versus using pretrained word embeddings.

We'll also aim to understand whether using pretrained word embeddings actually increases the performance of the NLP model. So, let's pick a text classification problem – sentiment analysis on movie reviews.

Download the movie reviews dataset from here.

Loading the dataset into our Jupyter notebook:
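A minimal sketch with pandas – the file name and column layout are assumptions, so adjust them to the dataset you downloaded:

import pandas as pd

# Hypothetical file name -- replace with the actual path of the downloaded dataset.
df = pd.read_csv("IMDB_Dataset.csv")
print(df.shape)
print(df.head())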

Preparing the data:
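One possible preparation pipeline with tf.keras – the column names, label encoding, split ratio, and maximum sequence length are all assumptions rather than values from the original notebook:

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Assumed column names and label encoding.
x = df["review"].values
y = (df["sentiment"] == "positive").astype(int).values

# Assumed 80/20 train-validation split.
x_tr, x_val, y_tr, y_val = train_test_split(x, y, test_size=0.2, random_state=42)

# Fit the tokenizer on the training reviews only.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(x_tr)

# Convert reviews to integer sequences and pad them to a fixed (assumed) length.
max_len = 100
x_tr_seq = pad_sequences(tokenizer.texts_to_sequences(x_tr), maxlen=max_len)
x_val_seq = pad_sequences(tokenizer.texts_to_sequences(x_val), maxlen=max_len)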

Let's take a glance at the number of unique words in the training data:

size_of_vocabulary = len(tokenizer.word_index) + 1  # +1 for padding
print(size_of_vocabulary)

Output: 112204

We will build two different NLP models of the same architecture.

The first model learns embeddings from scratch and the second model uses pretrained word embeddings.

Defining the architecture – learning embeddings from scratch:
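Here is one architecture consistent with the parameter counts reported below (a 300-dimensional embedding, an LSTM with 128 units, and a 64-unit dense layer reproduce those numbers); the exact layer choices, dropout, and callback settings are otherwise assumptions, written for TensorFlow 2.x:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GlobalMaxPooling1D, Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Trainable embedding layer: 112,204 words x 300 dimensions = 33,661,200 parameters.
model = Sequential([
    Embedding(size_of_vocabulary, 300, input_length=max_len, trainable=True),
    LSTM(128, return_sequences=True, dropout=0.2),
    GlobalMaxPooling1D(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["acc"])

# Callbacks referred to later as `es` and `mc` (assumed configuration).
es = EarlyStopping(monitor="val_acc", mode="max", patience=3, verbose=1)
mc = ModelCheckpoint("best_model.h5", monitor="val_acc", mode="max",
                     save_best_only=True, verbose=1)
model.summary()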

Output: The total number of trainable parameters in the model is 33,889,169. Out of these, the Embedding layer alone contributes 33,661,200 parameters (112,204 words × 300 dimensions).

That's huge!

Training the model:

history = model.fit(np.array(x_tr_seq), np.array(y_tr), batch_size=128, epochs=10, validation_data=(np.array(x_val_seq), np.array(y_val)), verbose=1, callbacks=[es, mc])

Evaluating the performance of the model:
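A minimal sketch, reloading the best checkpoint saved by the (assumed) ModelCheckpoint callback above:

from tensorflow.keras.models import load_model

# Reload the best weights and report validation accuracy.
best_model = load_model("best_model.h5")
_, val_acc = best_model.evaluate(np.array(x_val_seq), np.array(y_val), batch_size=128)
print(val_acc)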

Output: 0.865

Now, it's time to build version II using GloVe pretrained word embeddings.

Let us load the GloVe embeddings into our environment:
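A common way to do this, assuming the 300-dimensional glove.6B file has been downloaded (the file name is an assumption; its 400,000-word vocabulary matches the output below):

import numpy as np

embeddings_index = {}
with open("glove.6B.300d.txt", encoding="utf-8") as f:
    for line in f:
        values = line.split()
        word = values[0]
        embeddings_index[word] = np.asarray(values[1:], dtype="float32")
print("Loaded %s word vectors." % len(embeddings_index))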

Output: Loaded 400,000 word vectors.

Create an embedding matrix that assigns each word in our vocabulary its pretrained GloVe vector:
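A minimal sketch; words missing from the GloVe vocabulary simply keep a zero vector:

# Row i of the matrix holds the GloVe vector of the word with index i in the tokenizer.
embedding_matrix = np.zeros((size_of_vocabulary, 300))
for word, i in tokenizer.word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector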

Defining the architecture – pretrained embeddings:
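The same (assumed) architecture as before, except that the Embedding layer is initialised with the GloVe matrix and frozen, which is what brings the trainable parameter count down to the figure reported below:

# Frozen, GloVe-initialised embedding layer; only the layers above it are trained.
model = Sequential([
    Embedding(size_of_vocabulary, 300, weights=[embedding_matrix],
              input_length=max_len, trainable=False),
    LSTM(128, return_sequences=True, dropout=0.2),
    GlobalMaxPooling1D(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["acc"])
model.summary()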

Output: As you can see, the number of trainable parameters is just 227,969. That's a huge drop compared to the previous model, where the trainable embedding layer alone accounted for over 33 million parameters.

Training the model:

history = model.fit(np.array(x_tr_seq), np.array(y_tr), batch_size=128, epochs=10, validation_data=(np.array(x_val_seq), np.array(y_val)), verbose=1, callbacks=[es, mc])

Evaluating the performance of the model, in the same way as before:

Output: 88.49

Wow – that's pretty awesome! Performance has increased by using pretrained word embeddings (88.49% accuracy) compared to learning embeddings from scratch (86.5%).

End Notes

Pretrained word embeddings are among the most powerful ways of representing text, as they tend to capture the semantic and syntactic meaning of words.

This brings us to the end of the article.

In this article, we learned the importance of pretrained word embeddings and discussed two popular pretrained word embeddings – Word2Vec and GloVe.

Kindly leave your queries/feedback in the comments section, and I will reach out to you! See you in the next article.
