Code on GitHub

Natural language processing is, let’s be honest, seriously biased towards English, and if your customer happens to be in a cultural region where English is not the main language, you have an issue. Or an opportunity to articulate something new. In any case, you have a few challenges:

  • finding relevant data: preferably free to use and pre-tagged. A ton of PDFs will not do.
  • some language understanding: stop-words, sentence structure and such.
  • framework understanding: if your favorite NLP framework doesn’t contain pre-trained models for the language you have to dig a bit deeper to build things yourself.

In this article I’ll focus on Dutch since this happens to be one of the common languages here in Belgium. While this isn’t spoken by many (compared to, say, French or Spanish) it nevertheless

  • happens to have some support in frameworks like Gensim, SpaCy and NLTK
  • isn’t as challenging as Chinese, Greek or Bulgarian: Dutch uses the standard Latin alphabet
  • is not wildly different from English when considering things like lemmatization or sentence splitting.

The availability of (trained) data is a real issue however and Dutch is in this respect the prototypical situation: it’s difficult to find quality (tagged) data. The emphasis here is on tagged and quality:

  • of course you have things like Gutenberg but books alone won’t get you very far if you are interested in extracting information out of data. You need labels, tags or something which relates to what you’re after. Machine (and human) learning is based on patterns in relationships.
  • unstructured text means you’ll first spend an eternity cleaning data before effectively turning to the task at hand. A customer handing over a million PDFs and expecting you to magically extract all they need out of them is a common misunderstanding.

Now, assuming the data is present, you can start exploring various NLP tasks. For marketing purposes one can look at sentiment analysis in the context of product placement or customer propensity. If you need to classify documents you can look at summarization or keyword extraction. Note that if your corpus is large you might benefit from something like Lucene rather than a homemade engine. For a real-time NLP engine (e.g. processing telephone calls) you can look at entity extraction, aka named entity recognition (NER).

Every NLP process is really based on the following recipe:

  • gather data, preferably tons of it. As many docs as possible.
  • clean the data: remove stop words, noise, irrelevant bits, whatnot
  • chunk the data in a way that suits your aims
  • label the parts in a way the learning step can use it: if you are interested in sentiments label paragraphs or words with some emotional coefficient, if you are after part-of-speech label words with POS tags etc.
  • find a way to convert words and labels to numbers: machine learning does not handle words or characters but numbers. Use anything which makes it happen: word-to-vectors, feature functions, counting (bag of words, matrix…)
  • figure out what algorithm works best, use whatever framework to train a model
  • define test data or a way to test the accuracy of the model
  • optimize the accuracy through grid search or whatever works best
  • wrap the model in a consumable, say a REST service or server-less micro-service
  • keep trying to improve the model in whatever way you can

Sometimes some of the steps can be skipped. For example, if you find tagged POS data you don’t need to clean/tag things yourself. Similarly, often you have various algorithms at your disposal in an NLP framework which fit perfectly. The devil is in the details, of course. Maybe NLTK is great for a small corpus but will not do if you need a Spark cluster to deal with petabytes. For a medium corpus you might need to develop your own out-of-memory algorithm. Plenty of subtleties indeed and the road to a good model is never linear.

In what follows I’ll focus on NER in Dutch input and show how you can train your own NER model. The recipe described above translates to:

  • the NLP conference of 2002 in Taiwan produced the so-called CoNLL2002 corpus with a mix of Spanish and Dutch tagged data. This corpus sits in the NLTK framework and is easily accessible. A much larger corpus is available but is not free and demands an out-of-memory approach. The so-called SoNaR corpus is a 60 GB compressed corpus with around 500 million words but requires a different approach than the one we outline below.
  • cleaning the data is not necessary in this case because POS and IOB tags are present. If this is not the case in your project you have a major logistic challenge. Annotating text data is in many ways a hurdle.
  • the NLTK framework knows about Dutch stop-words and tokenization. This makes it easy to chunk the raw text into sentences and words.
  • the CoNLL2002 corpus contains IOB and POS labels. It doesn’t mean that the info can be used as such however. Feature extraction and engineering is an art of its own in any machine learning (ML) task.
  • we will use conditional random fields (CRF) to create a NER model. In terms of complexity a CRF sits in between standard ML algorithms (say, SVM) and non-linear algorithms like LSTM neural networks. Just like an LSTM network, a CRF has knowledge about how bits of data are related. It also has similarities with hidden Markov models (HMM) but generalizes the notion of dependency. Using an LSTM would engender a whole playful process of optimizing layers and transition functions. Note, I’m not telling you a GRU or LSTM would not perform better than a CRF, only that they are more complex to optimize and to describe.
  • using scikit-learn’s grid search the accuracy is optimized across a hyperparameter domain
  • we’ll wrap the model in a simple REST service using Flask. This can be deployed on AWS or as a Docker container. In fact, you can find the Dockerfile for dockerization in the GitHub repo.

 

Conditional random fields in a nutshell

For a thorough overview see An introduction to conditional random fields by Charles Sutton and Andrew McCallum in Foundations and Trends in Machine Learning Vol. 4, No. 4 (2011) 267–373. Here I’ll only skim the surface to give you an idea how things function.

When dealing with textual data you need to find a way to convert text to numbers. A common approach is to use embeddings like word2vec or doc2vec. Another, or complementary, approach is to use feature functions: ad-hoc mappings from words to numbers. Say you have a sentence s with word labels (w_i, \lambda_i). These could be POS or IOB tags. A feature function f_j maps (w_i, \lambda_i) \mapsto f_j(w_i, \lambda_i), i.e. to some number. For example, a feature function could emphasize adjectives and assign the number one when an adjective is found and zero otherwise. In general a feature function can take a window into account: it could look at the previous word or the next-next-word as well. This is where it differentiates from an LSTM or HMM. A hidden Markov model only takes the current state into account, while an LSTM tries to remember things in function of the defined window. So the feature function could be f_j(w_i, w_{i-1}, \lambda_i) if the previous word is included.
For a given sentence you can have multiple feature functions: one to pick up names of locations, one for names of organizations, and so on. Each feature function is weighted and for one sentence you get the sum

S(s, \lambda) = \sum_{ij} \rho_j f_j(w_i, w_{i-1}, \lambda_i)

specific to the labeling and the sentence under consideration. Now to turn this into a probability you use the usual softmax and get

p(s, \lambda) = \frac{1}{Z}\exp S(s, \lambda)

with Z the partition function or normalization which ensures that the probabilities sum up to one. The machine learning now consists of optimizing the weights \rho_j to maximize the probabilities of the observed labelings. This happens by means of gradient ascent and is similar to training a neural network and the like. Of course, you need lots of sentences and feature functions which effectively capture what you are looking for.

Assuming the training returns optimal weights one can use (polynomial-time) dynamic programming algorithms to find the optimal labels, similar to the Viterbi algorithm for HMMs.

NLTK in a nutshell

The Natural Language Toolkit is ideal for experimenting with NLP. It’s probably not the tool you’d use for large scale text processing, but that’s another topic. It contains all you need to experiment with text, and there are many subtleties you need to be aware of when looking at something other than English.

  • splitting documents into useful paragraphs or blocks is usually something outside the scope of NLTK. If you want to split stanzas in poems you will have to look at line separations. If you want to extract addresses out of Word documents you will have to find appropriate ways to delete obsolete parts or look at markers which define the begin/end of the blocks.
  • splitting paragraphs in sentences is language dependent. This might be a surprise since you naively could assume that splitting at the ‘.?!’ is all you need to do. Things like ‘Ph.D.’ (English) and ‘dhr.’ (Dutch) spoil the fun however. Language-specific sentence splitting is not too difficult using the NLTK trainer and we’ll highlight the procedure below.
  • splitting sentences into words is also language dependent. The Dutch ‘s avonds is one word, but in English the ‘s would be treated as a contraction of ‘is’ and hence a word on its own. Word tokenization is hence something which has to be trained as well. Here again, there are tools and open source projects which can help you. The problem is usually finding quality data to train the tokenizer.
  • removing punctuation is the easy part and can often even be done with regular expressions
  • removing stop-words is also easy since Dutch, like English, has a limited set of stop words and NLTK actually contains them as a resource
  • verbs and tenses: the proliferation of the same thing in many shapes. The process of normalizing words to a common root shape is called lemmatization or stemming (the difference is subtle). NLTK can help you with this but, like anything else, maybe you need to train your own model. For example, using the 1611 King James Bible with the current English stemmer will not give the expected result. Dialects (the difference between Flemish and Dutch for instance) can also inject mistakes.

At the end of this article you will find a collection of code snippets which show you how NLTK deals with the aspects enumerated above.

 

Named entities

With all of this contextual info out of the way we can focus on training a model for entity recognition.

Named entities are recognized through POS tags and so-called IOB (aka BIO) tags. The IOB tags indicate whether a word is of a particular type (organization, person etc.). The NLTK conll2002 corpus has all you need:
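A minimal sketch to inspect the annotations (which sentence comes out depends on the corpus ordering; the ‘Floralux’ mentioned below is one such organization):

```python
import nltk
nltk.download('conll2002')  # one-off download of the corpus

from nltk.corpus import conll2002

# every sentence is a list of (word, POS tag, IOB tag) triples
for sent in conll2002.iob_sents('ned.train'):
    if any(iob == 'B-ORG' for _, _, iob in sent):
        for word, pos, iob in sent:
            print(word, pos, iob)
        break
```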

The ‘B-ORG’ tag indicates that the word ‘Floralux’ is (the beginning of) an organization. With this type of info one can train a model to recognize entities in unseen sentences. But if recognition is based on IOB tags, how can you use a normal (i.e. untagged) sentence with the model? You need to train a tagger which learns how to attach these tags. To this end you can clone the NLTK trainer project on GitHub, which, dixit its readme, ‘…trains NLTK objects with zero code…’.

This training is a machine learning project on its own but if you are in a hurry, all you need to do is run:
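Assuming you cloned nltk-trainer, the command is something along these lines (script name and flags as per the nltk-trainer documentation; adjust to your checkout):

```
python train_chunker.py conll2002 --fileids ned.train --classifier NaiveBayes
```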

This will create a pickled model which tags arbitrary Dutch sentences with IOB tags. This tagged array can thereafter be used with the NER model we will build.

If you are not in a hurry you should replace the ‘NaiveBayes’ classifier in the instruction above with ‘DecisionTree’. It will take around 15 minutes more but your tagger will be about 4% more accurate (something like 98% accuracy). Besides DecisionTree you can also experiment with Maxent, GIS, IIS, MEGAM and TADM. See the docs for more.

The NER training and testing data is easily extracted from the NLTK resources like so:
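Something along these lines, using the standard train/test fileids of the corpus:

```python
import nltk
nltk.download('conll2002')

from nltk.corpus import conll2002

# lists of sentences, each sentence being a list of (word, POS, IOB) triples
train_sents = list(conll2002.iob_sents('ned.train'))
test_sents = list(conll2002.iob_sents('ned.testb'))
```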

If you’d rather train a Spanish NER model you can replace ‘ned’ with ‘esp’ above.

Referring to the random fields above, we observed that one can use a window (aka n-gram) for the feature functions. In the feature mapping below you can see how we use a 5-gram and collect various bits of info as input for the CRF:
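The sketch below follows the common sklearn-crfsuite tutorial style; the exact features (casing, suffixes, digits…) are choices you can and should tune:

```python
def word2features(sent, i):
    """Features for the i-th token, looking two words back and two ahead (the 5-gram window)."""
    word, pos = sent[i][0], sent[i][1]
    features = {
        'bias': 1.0,
        'word.lower()': word.lower(),
        'word[-3:]': word[-3:],
        'word.istitle()': word.istitle(),
        'word.isdigit()': word.isdigit(),
        'postag': pos,
    }
    # the surrounding words: offsets -2, -1, +1, +2
    for offset in (-2, -1, 1, 2):
        j = i + offset
        if 0 <= j < len(sent):
            w, p = sent[j][0], sent[j][1]
            features.update({
                '%d:word.lower()' % offset: w.lower(),
                '%d:word.istitle()' % offset: w.istitle(),
                '%d:postag' % offset: p,
            })
    if i == 0:
        features['BOS'] = True          # beginning of sentence
    if i == len(sent) - 1:
        features['EOS'] = True          # end of sentence
    return features

def sent2features(sent):
    return [word2features(sent, i) for i in range(len(sent))]

def sent2labels(sent):
    return [iob for _, _, iob in sent]
```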

With this feature extraction we assemble the actual data for the training and testing:
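Using the helper functions sketched above:

```python
X_train = [sent2features(s) for s in train_sents]
y_train = [sent2labels(s) for s in train_sents]

X_test = [sent2features(s) for s in test_sents]
y_test = [sent2labels(s) for s in test_sents]
```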

This type of manipulation is fairly standard if you have used scikit-learn before.

Training then is as simple as this:
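A minimal version, assuming the sklearn-crfsuite package (pip install sklearn-crfsuite):

```python
import sklearn_crfsuite
from sklearn_crfsuite import metrics

crf = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=0.1,                        # L1 regularization weight
    c2=0.1,                        # L2 regularization weight
    max_iterations=100,
    all_possible_transitions=True,
)
crf.fit(X_train, y_train)

# evaluate on the held-out data, ignoring the dominant 'O' label
labels = [l for l in crf.classes_ if l != 'O']
y_pred = crf.predict(X_test)
print(metrics.flat_classification_report(y_test, y_pred, labels=labels))
```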

giving a precision/recall report per entity label.

This takes very little time but it’s not optimized. The optimization consists of tuning the hyperparameters of the algorithm. In the case of CRF, the c1 and c2 params.
The sklearn framework has this wonderful grid-search mechanism which allows you to automatically figure out which parameters maximize a metric. To use it you need:

  • to define the intervals or enumerations inside which the optimization has to search (the hyperparameter space)
  • the metric which tells the optimization what ‘better’ means

You can find plenty of docs and info around this topic.
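A sketch using RandomizedSearchCV (a randomized sibling of grid search which samples the hyperparameter space; GridSearchCV works the same way), with the search space and metric spelled out as described above:

```python
import scipy.stats
import sklearn_crfsuite
from sklearn.metrics import make_scorer
from sklearn.model_selection import RandomizedSearchCV
from sklearn_crfsuite import metrics

# the hyperparameter space: exponentially distributed c1 and c2 values
params_space = {
    'c1': scipy.stats.expon(scale=0.5),
    'c2': scipy.stats.expon(scale=0.05),
}

# the metric which defines what 'better' means: flat F1 over the entity labels
f1_scorer = make_scorer(metrics.flat_f1_score, average='weighted', labels=labels)

rs = RandomizedSearchCV(
    sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100, all_possible_transitions=True),
    params_space,
    cv=3,
    n_iter=50,
    scoring=f1_scorer,
    verbose=1,
    n_jobs=-1,
)
rs.fit(X_train, y_train)

crf = rs.best_estimator_   # the optimized model used in the rest of the article
```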

This will take a while. The ‘crf’ model now contains an optimized NER detection model which can be used independently of the training and everything we’ve done above.

Once the training returns you can save the model via:
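For instance with joblib (the file name is an arbitrary choice):

```python
import joblib

joblib.dump(crf, 'dutch_ner_crf.pkl')
# and later, e.g. inside the REST service:
# crf = joblib.load('dutch_ner_crf.pkl')
```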

One important remark is in order. You can see that the whole training and testing data is loaded in memory. This approach obviously does not work with large datasets and there are so-called out-of-core algorithms which can help. This means however that you need to figure out how training can happen incrementally. Or you need a different approach, for example using MLlib with a Spark cluster which eventually demands some experience with Scala or PySpark.

RESTifying the model

The trained model can be reused in a Python pipeline, but it’s very likely that your architecture is heterogeneous and your consumer is not based on Python. One way to deploy the model is by means of a server-less AWS service.

Another way is to create a docker container with a REST service calling the model. Creating a REST service can be done via Bottle, Django, Flask or whatever your favorite framework is. In the repo you will find a Flask service:
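A stripped-down sketch of such a service; the route, the file names and the sent2features helper are assumptions, not necessarily what sits in the repo:

```python
import pickle

import joblib
import nltk
from flask import Flask, jsonify, request

from features import sent2features   # hypothetical module holding the feature mapping from training

app = Flask(__name__)
crf = joblib.load('dutch_ner_crf.pkl')            # the trained CRF from above
with open('dutch_pos_tagger.pkl', 'rb') as f:     # the pickled Dutch POS tagger
    tagger = pickle.load(f)

@app.route('/ner', methods=['POST'])
def ner():
    text = request.json['text']
    result = []
    for sentence in nltk.sent_tokenize(text, language='dutch'):
        words = nltk.word_tokenize(sentence, language='dutch')
        tagged = tagger.tag(words)               # (word, POS) pairs
        features = sent2features(tagged)         # same feature mapping as during training
        labels = crf.predict([features])[0]
        result.append(list(zip(words, labels)))
    return jsonify(result)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```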

As advertised earlier, the only thing happening on this level is transforming natural language to the format the model expects. The NER service really is just a call to the ‘predict’ method.

In order to dockerize all this you need a very basic Linux with Python image and some YML files. You can find all of this in the repo. The only thing you need to do is to call
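```
docker-compose up    # add --build to rebuild the image after code changes
```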

in the directory where the YML files reside. See the Docker Compose docs for more info.

NLTK snippets

Regarding Dutch there are a few key resources:

  • the CoNLL2002 corpus included with NLTK. It’s however a mixture of Spanish and Dutch so it’s vital to filter out the Dutch sentences only.
  • the Groningen Meaning Bank
  • the SoNaR corpus which seems to be the most complete one can find (500 million words!).

The conll2002 corpus contains both Spanish and Dutch so you need to filter out only the Dutch part, for example
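```python
from nltk.corpus import conll2002

print(conll2002.fileids())
# ['esp.testa', 'esp.testb', 'esp.train', 'ned.testa', 'ned.testb', 'ned.train']

# keep only the Dutch ('ned') files
dutch_sents = conll2002.sents(['ned.train', 'ned.testa', 'ned.testb'])
```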

The default sent_tokenize method uses English, so you need to override this:
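For example, with the Dutch abbreviation ‘dhr.’ mentioned earlier (an illustrative sentence, not from the corpus):

```python
from nltk import sent_tokenize

text = "Morgen komt dhr. Jansen de offerte toelichten."
print(sent_tokenize(text, language='dutch'))
# if the pretrained Dutch Punkt model does not know the abbreviation you get two sentences:
# ['Morgen komt dhr.', 'Jansen de offerte toelichten.']
```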

This is not at all what you want. So you need to train your own tokenizer and explicitly add the abbreviations you consider as not ending a sentence.

The PunktTrainer class is an unsupervised learner which can be used for this purpose:
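A sketch; raw_dutch_text stands for whatever raw Dutch text you have at hand:

```python
from nltk.tokenize.punkt import PunktTrainer, PunktSentenceTokenizer

# placeholder: any reasonably large chunk of raw Dutch text
raw_dutch_text = open('dutch_corpus.txt', encoding='utf-8').read()

trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True   # also collect collocations, not just abbreviations
trainer.train(raw_dutch_text)

tokenizer = PunktSentenceTokenizer(trainer.get_params())
```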

The abbreviations which are considered as not splitting a sentence can be obtained from
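```python
# a private attribute, so subject to change between NLTK versions
print(tokenizer._params.abbrev_types)
```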

Let’s look at an example:
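Say, the same ‘dhr.’ sentence as before:

```python
text = "Morgen komt dhr. Jansen de offerte toelichten."
print(tokenizer.tokenize(text))
# if 'dhr' was not picked up during training this still gives two sentences:
# ['Morgen komt dhr.', 'Jansen de offerte toelichten.']
```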

Not what you want. You can ask NLTK how splitting decisions are made:
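It yields one diagnostic record per candidate sentence break:

```python
for decision in tokenizer.debug_decisions(text):
    print(decision)
```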

Adding your own non-splitting tokens is now as simple as
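```python
# abbreviations are stored lowercased and without the trailing dot
tokenizer._params.abbrev_types.add('dhr')
print(tokenizer.tokenize(text))
```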

which now correctly interprets the whole string as one sentence.

Word tokenization is similar to sentence splitting. Maybe you need to train your own tokenizer, maybe not. The default approach works sometimes:
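For instance:

```python
from nltk import word_tokenize

print(word_tokenize("De kat zat op de mat.", language='dutch'))
# ['De', 'kat', 'zat', 'op', 'de', 'mat', '.']
```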

The REPP parser can help if you need to run your own.

Removing punctuation can be implemented with regular expressions:
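One way of doing it (ignored_punctuation starts from the ASCII set and is meant to be extended):

```python
import re
import string

ignored_punctuation = string.punctuation        # add language specific characters here
punctuation_regex = re.compile('[%s]' % re.escape(ignored_punctuation))

def remove_punctuation(words):
    """Strip punctuation from every token and drop the empty leftovers."""
    stripped = (punctuation_regex.sub('', w) for w in words)
    return [w for w in stripped if w]

print(remove_punctuation(['Dag', ',', 'dhr', '.', 'Jansen', '!']))
# ['Dag', 'dhr', 'Jansen']
```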

You can add whatever language specific char to the ignored_punctuation variable above.

Removing stop-words can be based on the predefined ones:
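Using the Dutch stop-word list that ships with NLTK:

```python
import nltk
nltk.download('stopwords')

from nltk.corpus import stopwords

dutch_stopwords = set(stopwords.words('dutch'))

def remove_stopwords(words):
    return [w for w in words if w.lower() not in dutch_stopwords]
```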

So is normalization:
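For example with the Dutch Snowball stemmer (full lemmatization would need a separate model):

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('dutch')
print([stemmer.stem(w) for w in ['lopen', 'liep', 'gelopen']])
```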

If you assemble the above snippets you can go from raw text to clean arrays of arrays of words. Each array representing one sentence.

At this point you can start converting the arrays to numbers via word2vec, doc2vec and alike. Once the words have become numbers you can use TensorFlow, Gensim or MXNet to learn from the data.