# Natural Language Processing
## Word Embeddings
Word embeddings are a fascinating part of deep learning and a friendly entry point to natural language processing. The basic idea is that each word is represented by an $n$-dimensional vector that is learned from a large corpus of text. Word embeddings have several interesting properties.
- Project the embeddings into two dimensions with t-SNE or PCA and you will find that similar words cluster together (see the first sketch after this list).
- Similar words point in similar directions; this can be measured with the *cosine similarity* between their embeddings (second sketch below).
- The difference between two word embeddings captures a relationship, which can be used to find analogies such as man:woman :: king:queen, since $e_{\text{king}} - e_{\text{man}} + e_{\text{woman}} \approx e_{\text{queen}}$ (also shown in the second sketch).
- Word embeddings trained on a task with a large corpus can be transferred to a task with a smaller corpus and fine-tuned there (last sketch below).
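
As a minimal sketch of the clustering property, the snippet below projects a handful of embedding vectors into two dimensions with scikit-learn's PCA. The vocabulary and vectors here are random stand-ins for illustration; with real pretrained embeddings (e.g. GloVe or word2vec), semantically similar words would land near each other in the 2-D plot.

```python
# Sketch: project toy word embeddings down to 2-D with PCA.
# The vectors are random placeholders; real embeddings would cluster
# "king"/"queen" together and "apple"/"orange" together.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "apple", "orange"]
embeddings = rng.normal(size=(len(vocab), 50))  # pretend 50-d embeddings

coords = PCA(n_components=2).fit_transform(embeddings)
for word, (x, y) in zip(vocab, coords):
    print(f"{word:>8}: ({x:+.2f}, {y:+.2f})")
```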
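The cosine-similarity and analogy properties can both be shown with plain vector arithmetic. Below is a sketch using hand-picked toy 3-vectors (one "royalty" axis, one axis per gender) chosen so the relationships hold; in practice these vectors would be rows of a pretrained embedding matrix.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-picked toy embeddings (hypothetical, for illustration only).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

# Similar direction -> high cosine similarity.
print(cosine_similarity(emb["king"], emb["queen"]))

# Analogy: king - man + woman should land closest to queen.
# (Real systems usually exclude the three input words from the search.)
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine_similarity(emb[w], target))
print(best)  # -> "queen"
```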
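Transfer works by initializing an embedding layer with the pretrained matrix and letting the new task's gradients update it. Here is a sketch using PyTorch's `nn.Embedding.from_pretrained`; the "pretrained" matrix is random for illustration, standing in for one learned on a large corpus.

```python
import torch
import torch.nn as nn

# Stand-in for a matrix learned on a large corpus (GloVe, word2vec, ...),
# shape (vocab_size, embed_dim).
pretrained = torch.randn(10_000, 300)

# freeze=False lets the small-corpus task fine-tune the embeddings;
# freeze=True keeps them fixed, which can help when the new dataset is tiny.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

token_ids = torch.tensor([[1, 5, 42]])  # a toy batch of token indices
vectors = embedding(token_ids)          # shape (1, 3, 300)
print(vectors.shape)
```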