Coreference resolution is the task of finding all expressions in a text that refer to the same (named) entity. It's an important step for higher-level NLP tasks that involve natural language understanding, such as document summarization, question answering, and information extraction. For human beings it's a no-brainer to understand who is meant by 'his', 'her', 'she' and so on; for an NLP system, linking each such expression to the right subject or object is a real challenge.

Take the sentence 'My mother's name is Sasha, she likes dogs.' It contains both a positive and a negative coreference candidate: the token 'she' refers to 'Sasha' and not to 'dogs'. It takes a lot of annotated data and neural training to make this happen. Nowadays a precision of around 75% can be achieved for English, while other languages are slowly catching up.
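To make this concrete, here is a minimal, library-free sketch of how a resolved cluster is commonly represented: each mention as a character span into the text, grouped around the entity it denotes. The offsets below are hand-computed for this one sentence, not the output of any particular tool.

```python
# A coreference cluster groups every mention of one entity.
# Each mention is stored as a (start, end) character span into the text.
text = "My mother's name is Sasha, she likes dogs."

cluster = {
    "entity": "Sasha",
    "mentions": [(20, 25),   # "Sasha"
                 (27, 30)],  # "she"
}

for start, end in cluster["mentions"]:
    print(f"{text[start:end]!r} -> {cluster['entity']}")
```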

There are plenty of linguistic subtleties in coreference (pleonasms, anaphora, cataphora, etc.), but for consumer AI and chatbots it works marvelously well, as long as the given text is not too long. Feed a system twenty pages of Shakespeare and ask who 'him' on the last page refers to, and it will struggle: the span between a mention and its antecedent cannot be too long. Short inputs such as chatbot transcripts, emergency calls, and messaging-app conversations are, however, ideal.

The resolved coreferences can feed into a larger semantic graph (and semantic reasoner) representing the essence of some context, say, an emergency call. Without coreference, every input sentence is a disconnected sub-graph; the references are what tie these sub-graphs together into one connected graph. A bit like substances forming weak bonds through cohesion.
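The snippet below sketches that graph view with networkx: each sentence contributes its own nodes and edges, and a hand-specified, hypothetical coreference link between 'she' and 'Sasha' is what joins the two sentence sub-graphs into one connected component.

```python
import networkx as nx

G = nx.Graph()

# Sentence 1: "My mother's name is Sasha."
G.add_edge("mother", "Sasha", relation="name")
# Sentence 2: "She likes dogs."
G.add_edge("she", "dogs", relation="likes")

print(nx.number_connected_components(G))  # 2: two disconnected sub-graphs

# The coreference link merges the sub-graphs into one connected graph.
G.add_edge("she", "Sasha", relation="coref")
print(nx.number_connected_components(G))  # 1
```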

Implementation and solutions

NeuralCoref is based on a neural network model, production-ready, integrated into spaCy's NLP pipeline, and easily extensible to new training datasets. It's open-source Python and fast. If your codebase is built atop spaCy, look no further.
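A minimal usage sketch, assuming spaCy 2.x with the en_core_web_sm model downloaded and NeuralCoref installed (the extension attributes follow NeuralCoref's documented API; exact behavior may vary by version):

```python
import spacy
import neuralcoref

# Load an English spaCy model and add NeuralCoref to its pipeline.
nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

doc = nlp("My mother's name is Sasha, she likes dogs.")

print(doc._.has_coref)       # True if any cluster was found
print(doc._.coref_clusters)  # e.g. [Sasha: [Sasha, she]]
print(doc._.coref_resolved)  # text with mentions replaced by cluster heads
```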

Cort is another open-source Python toolkit, with the addition of an extensive set of tools for analyzing coreference errors. It's not neural but based on latent-variable structured prediction.

OpeNER is a large collection of NLP projects, one of which focuses on coreference resolution.