Chars2vec: character-based language model for handling real world texts with spelling errors and human slang

This paper describes our open source character-based language model chars2vec.

This model was developed with the Keras library (TensorFlow backend) and is now available for Python 2.7 and 3.0+.

Introduction

Creating and using word embeddings is the mainstream approach for handling most NLP tasks.

Each word is matched with a numeric vector which is then used in some way whenever the word appears in a text.

Some simple models use one-hot word embeddings or initialise words with random vectors or with integer numbers.

The drawback of such models is obvious – such word vectorisation methods do not represent any semantic connections between words.

There are other language models, called semantic models, that represent words related by sense with close embedding vectors.

In fact these models capture the contextual proximity of words: such models are trained on large corpora of existing texts such as encyclopaedias, news texts or literature.

The result is that the words appearing in similar contexts in these texts are represented by proximal vectors.

The classic examples of semantic language models are Word2Vec and GloVe.

Some of the more modern semantic language models (ULMFiT, ELMo) are based on recurrent neural networks (RNNs) and other neural network architectures.

Semantic models contain some information about the sense of words extracted while training on large text corpora, but they operate with a fixed vocabulary (usually lacking some rarely used words).

This is a significant drawback for solving some NLP problems.

If a number of important words in a text do not belong to the vocabulary of a semantic language model, the model becomes inefficient at some kinds of NLP tasks — it simply cannot interpret these words.

Such a case could arise if we try to process texts written by humans (these could be replies, comments, applications, documents or posts on the web).

These texts could contain slang, some rare words from specific areas, or given names which are not present in the language vocabulary and, accordingly, not in semantic models.

Typos also produce “new” words without any associated embedding.

A good example is the task of processing users’ comments on movies while building a movie recommender system based on the MovieLens dataset.

Phrases such as “like Tarantino” can often be found in people’s comments; errors or typos in the director’s surname “Tarantino” produce a number of “new” words like “Taratino”, “Torantino”, “Tarrantino”, “Torontino”, etc.

The ability to extract this “Tarantino” feature from users’ comments would obviously improve the movie relevance metric and the quality of recommendations, and the same holds for various other features represented by non-vocabulary words, whether very specific ones like surnames or ones produced by typos and erroneous spelling.

To solve the problem we have described, it would be appropriate to use a language model that creates word embeddings based only on spelling and assigns similar vectors to similarly spelled words.

About the chars2vec model

We have developed the chars2vec language model based on symbolic embeddings of words.

The model represents each sequence of symbols of arbitrary length with a fixed-length vector; the similarity in word spelling is reflected by the distance between vectors.

This model is not dictionary based (it does not store a fixed dictionary of words with corresponding vector representations), which is why its initialisation and usage do not require significant computational resources.

The chars2vec library can be installed via pip:

pip install chars2vec

The following code snippet creates chars2vec embeddings (with dimension 50) and projects these embedding vectors onto a plane with the help of PCA, outputting an image that depicts the geometrical meaning of the described model.
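The snippet itself was embedded in the original post and is not reproduced in this text version; a minimal sketch of it, using the load_model and vectorize_words calls from the chars2vec package and an illustrative set of misspelled words (the exact word list is our own), looks like this:

import chars2vec
import sklearn.decomposition
import matplotlib.pyplot as plt

# Load one of the pretrained English models ('eng_50' has embedding dimension 50)
c2v_model = chars2vec.load_model('eng_50')

# A few correctly spelled words plus misspelled variants of them (illustrative set)
words = ['Natural', 'Language', 'Understanding',
         'Naturael', 'Langusge', 'Understending',
         'Motural', 'Lamnguage', 'Understadning']

# Create word embeddings (one 50-dimensional vector per word)
word_embeddings = c2v_model.vectorize_words(words)

# Project the embeddings onto a plane with PCA
projection_2d = sklearn.decomposition.PCA(n_components=2).fit_transform(word_embeddings)

# Draw each word at its projected position
plt.figure(figsize=(8, 6))
for (x, y), word in zip(projection_2d, words):
    plt.scatter(x, y, marker='o', color='black')
    plt.annotate(word, (x, y))
plt.show()

Execution of this code produces an image in which we can see that similarly spelled words are represented by neighbouring vectors.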

Note that the model is based on a recurrent neural network that consumes the sequence of symbols of a word, not on an analysis of the mere presence of particular letters or patterns in a word.

The more changes (additions, deletions or replacements) we make to a word, the further its embedding moves from the original one.

Application of character-based models

The idea of analysing text at the character level is not new: there are models that create an embedding for each symbol and then build a symbolic word embedding with the help of an averaging procedure.

The averaging procedure is the bottleneck. Such models provide some sort of solution to the problems stated above, but unfortunately not an optimal one, because deriving the proper form of a word’s embedding from the symbol embedding vectors requires additional training if we want to encode information about the relative positioning of symbols and about symbolic patterns, rather than just the mere fact of a symbol’s presence in a word.

One of the pioneering NLP models processing text at the character level is karpathy/char-rnn.

This model takes some text as an input and trains a recurrent neural network (RNN) to predict the next symbol of a given sequence of characters.

There are other RNN-based character-level language models such as Character-Level Deep Learning in Sentiment Analysis.

In some cases convolutional neural networks (CNNs) are used to process a sequence of characters; see Character-Aware Neural Language Models. An example of using CNNs for text classification can be found in the paper Character-level Convolutional Networks for Text Classification.

A prime example of a character-based language model is the model implemented in the FastText library by Facebook.

The FastText model creates symbolic word embeddings and solves the text classification problem based on this symbolic representation.

This technology is based on the analysis of the n-grams forming a word rather than on RNNs, which makes the model quite sensitive to typos and spelling errors, since they can significantly change the set of n-grams forming a word.

Yet this model provides embeddings of words missing from the language vocabulary.

Model architecture

Each chars2vec model has a fixed list of characters used for word vectorisation: the characters from this list are represented by distinct one-hot vectors which are fed to the model when the character appears in the text; characters absent from the list are ignored in the vectorisation procedure.

We trained models designed to process English texts; these models use a list of the most popular ASCII characters: all English letters, digits and the most common punctuation marks (an illustration of such a list is given below). The model is not case sensitive; it converts every symbol to lower case.
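The exact character list ships with the pretrained models in the repo; as a hedged illustration only, such a list (lowercase English letters, digits and common punctuation, built programmatically here) could look like this:

import string

# Illustrative character list: lowercase English letters, digits and common
# punctuation. The authoritative list is stored with the pretrained models
# in the chars2vec repository.
model_chars = list(string.ascii_lowercase) \
              + list(string.digits) \
              + list("!\"#$%&'()*+,-./:;<=>?@[]_")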

Chars2vec is implemented with the help of the Keras library on top of TensorFlow.

The neural network creating word embeddings has the following architecture: a sequence of one-hot vectors of arbitrary length, representing the sequence of characters in a word, passes through two LSTM layers; the output is the embedding vector of the word.
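As a rough illustration of this architecture, here is a minimal Keras sketch with an illustrative alphabet size n_chars and embedding dimension dim (the actual layer sizes of the released models may differ):

from tensorflow import keras
from tensorflow.keras import layers

n_chars = 50   # size of the one-hot character alphabet (illustrative)
dim = 50       # embedding dimension (illustrative)

# A word is a variable-length sequence of one-hot character vectors
char_seq = keras.Input(shape=(None, n_chars))

# Two stacked LSTM layers; the final hidden state is the word embedding
lstm_out = layers.LSTM(dim, return_sequences=True)(char_seq)
word_embedding = layers.LSTM(dim)(lstm_out)

embedding_model = keras.Model(inputs=char_seq, outputs=word_embedding)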

To train the model we used an extended neural network that includes the chars2vec model as a part.

To be more specific, we are dealing with a neural network that takes two sequences of one-hot vectors representing two different words as input, creates their embeddings with one shared chars2vec model, calculates the norm of the difference between these embedding vectors and feeds it into the last layer of the network, which has a sigmoid activation function.

The output of this neural network is a number ranging from 0 to 1.
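A hedged Keras sketch of this extended, Siamese-style network, continuing the previous sketch (it reuses embedding_model and n_chars defined above); the exact norm and training settings of the released models may differ:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Two input words, each a variable-length sequence of one-hot character vectors
word_a = keras.Input(shape=(None, n_chars))
word_b = keras.Input(shape=(None, n_chars))

# One shared chars2vec model produces both embeddings
emb_a = embedding_model(word_a)
emb_b = embedding_model(word_b)

# Norm of the difference between the two embedding vectors
diff_norm = layers.Lambda(
    lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True))([emb_a, emb_b])

# A single sigmoid unit maps the norm to a value between 0 and 1
similarity = layers.Dense(1, activation='sigmoid')(diff_norm)

extended_model = keras.Model(inputs=[word_a, word_b], outputs=similarity)
extended_model.compile(optimizer='adam', loss='binary_crossentropy')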

The extended neural network is trained on pairs of words: in the training data a pair of “similar” words is labeled with the value 0, and a pair of “not similar” words with 1.

In fact we define the “similarity” metric via the number of substitutions, additions and removals of characters that have to be performed to turn one word into the other.

This brings us to the way of obtaining such training data — it can be generated by taking a multitude of words and then performing various changes upon them to obtain a new set of words.

A subset of new words produced by changes in one original word would naturally be similar to this original word and such pairs of words would have label 0.

Words from different subsets would obviously have more differences and should be labeled with 1.

The size of the initial word set, the number of words in each subset and the maximum number of editing operations applied to a word within one subset define the result of model training and the model’s vectorisation rigidity.
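A minimal sketch of such a training-pair generator follows; the helper names (make_variant, make_training_pairs) and parameters (n_variants, max_edits) are our own illustrative choices, and the actual generator used for the released models may differ:

import random
import string

def make_variant(word, max_edits=3):
    # Apply 1..max_edits random substitutions, insertions or deletions to a word.
    chars = list(word)
    for _ in range(random.randint(1, max_edits)):
        op = random.choice(['substitute', 'insert', 'delete'])
        pos = random.randrange(len(chars))
        if op == 'substitute':
            chars[pos] = random.choice(string.ascii_lowercase)
        elif op == 'insert':
            chars.insert(pos, random.choice(string.ascii_lowercase))
        elif op == 'delete' and len(chars) > 1:
            del chars[pos]
    return ''.join(chars)

def make_training_pairs(original_words, n_variants=5, max_edits=3):
    # Build word pairs labeled 0 ("similar") or 1 ("not similar").
    subsets = {w: [make_variant(w, max_edits) for _ in range(n_variants)]
               for w in original_words}
    X, y = [], []
    # "Similar" pairs: an original word paired with each of its distorted variants
    for w, variants in subsets.items():
        for v in variants:
            X.append((w, v))
            y.append(0)
    # "Not similar" pairs: words taken from different subsets, same count as above
    n_similar = len(X)
    originals = list(original_words)
    for _ in range(n_similar):
        a, b = random.sample(originals, 2)
        X.append((random.choice(subsets[a]), random.choice(subsets[b])))
        y.append(1)
    return X, y

Generating as many “not similar” pairs as “similar” ones keeps the two classes balanced, which matters for training, as noted below.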

The optimal values of these parameters should be chosen with regard to the specifics of the language and the particular task.

The other important point is keeping the whole training set balanced (neither of the two classes should prevail).

The second part of the extended neural network has only one edge whose weight can be adjusted during the training process; this part of the network applies a monotonic function to the norm of the embedding vectors’ difference.

The training dataset dictates that this second part should output 0 for “similar” word pairs and 1 for “not similar”, so while training the extended model the inner chars2vec model learns to form neighbouring embedding vectors for the “similar” word pairs and distant ones for the “not similar” pairs.

We have trained chars2vec models for English language with embedding dimensions of 50, 100 and 150.

The project source code is available in the repo along with model training and usage examples (in case you want to train a model on your own dataset or for a new language).

Train your own chars2vec model

The following code snippet shows the way to train your own chars2vec model instance.

A list of characters model_chars that the model will use for word vectorisation should be defined, as well as the model’s dimension dim and the path to the model storage folder path_to_model.

The training set (X_train, y_train) consists of pairs of “similar” and “not similar” words.

X_train is a list of word pairs, and y_train is a list of their binary similarity metric scores (0 or 1).
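The training snippet from the original post is not preserved in this text version; a hedged reconstruction, using the train_model and save_model functions exposed by the chars2vec package (their exact signatures should be checked against the repo) and a tiny illustrative training set, could look like this:

import chars2vec

dim = 50                         # embedding dimension
path_to_model = 'path/to/model'  # folder where the trained model will be stored

# Characters the model will use for word vectorisation (illustrative list)
model_chars = list('abcdefghijklmnopqrstuvwxyz') \
              + list('0123456789') \
              + list("!\"#$%&'()*+,-./:;<=>?@[]_")

# Tiny illustrative training set: label 0 for "similar" pairs, 1 for "not similar"
X_train = [('mechanizing', 'mecbanizing'),
           ('discovery', 'dicovery'),
           ('tarantino', 'tarrantino'),
           ('mechanizing', 'discovery'),
           ('tarantino', 'outfly'),
           ('dicovery', 'viewtiful')]
y_train = [0, 0, 0, 1, 1, 1]

# Create and train a chars2vec model on the given training data
my_c2v_model = chars2vec.train_model(dim, X_train, y_train, model_chars)

# Save the trained model to the storage folder
chars2vec.save_model(my_c2v_model, path_to_model)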

It is important that all the characters from the model_chars list should be present in the set of words of the training dataset — if a character is absent or appears very seldom then unstable model behaviour could be observed once this character is met in the test dataset.

The reason is that the corresponding one-hot vector has rarely been fed into the model’s input layer, and some weights of the model’s first layer have always been multiplied by zero, meaning they have never been tuned.

Another advantage of the chars2vec model is the ability to solve various NLP problems for an arbitrary language that lacks open pretrained language models.

This model could provide better solutions to some text classification or clustering problems than classic models if you are processing texts with specific vocabularies.
