There are many resources dealing with how to create word embeddings, but it's generally quite an undertaking: you need serious GPU power and large datasets to make it happen in a timely fashion. Google has models pretrained on huge collections of newsgroup messages, but they are not as easy to use as the GloVe approach. GloVe's pretrained models are based on Wikipedia, Common Crawl, or Twitter, and are more manageable than Google's. If you are interested in simple experiments with embeddings, they are a great stepping stone.

Below you can find such a simple experiment: given a word and a similarity threshold, it returns words in the vicinity of the given one.
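
Here is a minimal sketch of how such a lookup might work, assuming the GloVe vectors have been downloaded as a plain-text file (the file name `glove.6B.50d.txt` and the helper names are illustrative, not part of any library). It loads each line of the file into a dictionary mapping words to vectors, then returns every word whose cosine similarity to the query exceeds the threshold:

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file into a dict of word -> vector."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return embeddings

def neighbors(word, embeddings, threshold=0.6):
    """Return (word, similarity) pairs whose cosine similarity exceeds the threshold."""
    if word not in embeddings:
        return []
    v = embeddings[word]
    v_norm = np.linalg.norm(v)
    result = []
    for other, u in embeddings.items():
        if other == word:
            continue
        sim = float(np.dot(v, u) / (v_norm * np.linalg.norm(u)))
        if sim > threshold:
            result.append((other, sim))
    return sorted(result, key=lambda pair: pair[1], reverse=True)

# Example usage (any of the GloVe text files works here):
# vectors = load_glove("glove.6B.50d.txt")
# print(neighbors("frog", vectors, threshold=0.6))
```

Cosine similarity is the usual choice here because it compares the direction of the vectors rather than their magnitude, which is what the GloVe training objective makes meaningful.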

Note that this is a different thing than synonyms, or than the approach taken by Microsoft with concept graphs; see the article Microsoft Concept Graph in Neo4j, for example. Ultimately someone will come up with a unified approach.