2019-09-05 – Track 1 (Mitxelena)
Neural embeddings are a powerful tool for turning categorical values into numerical ones. Given reasonable training data, the semantics present in the categories can be preserved in the numerical representation.
Symbols, words, categories, etc. need to be converted into numbers before they can be processed by neural networks or used in other ML methods like clustering or outlier detection.
It is desirable to have the converted numbers represent the semantics of the encoded categories; that is, numbers close to each other indicate similar semantics.
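To make "close" concrete, here is a minimal sketch measuring closeness with cosine similarity; the vectors are made up purely for illustration and are not from the session:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction, near 0.0 means unrelated.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-dimensional embeddings for three categories.
cat = np.array([0.9, 0.1, 0.2])
dog = np.array([0.8, 0.2, 0.3])
car = np.array([0.1, 0.9, 0.7])

print(cosine_similarity(cat, dog))  # high: semantically similar categories
print(cosine_similarity(cat, car))  # lower: semantically dissimilar categories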
In this session you will learn what you need to train a neural network for such embeddings. I will bring a complete example, including code that I will share, built with the TensorFlow 2 functional API and the Colab service.
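A minimal sketch of what such a model might look like with the functional API; the vocabulary size, embedding dimension, and downstream task are illustrative placeholders, not the session's actual example:

import tensorflow as tf

NUM_CATEGORIES = 1000   # assumed vocabulary size (illustrative)
EMBEDDING_DIM = 8       # assumed embedding size (illustrative)

# Functional API: an integer category id goes in, a dense vector comes out.
category_id = tf.keras.layers.Input(shape=(1,), dtype="int32")
embedded = tf.keras.layers.Embedding(NUM_CATEGORIES, EMBEDDING_DIM)(category_id)
flattened = tf.keras.layers.Flatten()(embedded)
output = tf.keras.layers.Dense(1, activation="sigmoid")(flattened)

model = tf.keras.Model(inputs=category_id, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")

# After training on a task that relates categories to labels, the learned
# embeddings can be read out of the embedding layer's weights:
embeddings = model.layers[1].get_weights()[0]  # shape: (NUM_CATEGORIES, EMBEDDING_DIM)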
I will also share some tricks for stabilizing embeddings when either the model changes or you get more training data.
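One way to approach this (a sketch of a common warm-starting idea, not necessarily the exact tricks presented in the session) is to copy the previously learned embedding vectors into the new, larger embedding layer so that known categories keep roughly their old positions:

import numpy as np
import tensorflow as tf

# Placeholder for embeddings extracted from the previous model; assume the
# vocabulary has since grown from 1000 to 1200 categories (illustrative numbers).
old_embeddings = np.random.normal(size=(1000, 8)).astype("float32")
NEW_NUM_CATEGORIES, EMBEDDING_DIM = 1200, 8

new_layer = tf.keras.layers.Embedding(NEW_NUM_CATEGORIES, EMBEDDING_DIM)
new_layer.build((None,))  # create the weight matrix

# Copy the old vectors into the first rows; new categories keep their
# random initialization and are learned from scratch during retraining.
weights = new_layer.get_weights()[0]
weights[: old_embeddings.shape[0]] = old_embeddings
new_layer.set_weights([weights])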
Understand the magical powers of neural embeddings, turning categories into numbers while preserving semantics. Useful for an abundance of applications.
Python Skill Level – basic
Domain Expertise – none
Domains – Machine Learning
Oliver Zeigermann is a developer and consultant from Hamburg, Germany. He has written several books and has recently published the "Deep Learning Crash Course" with Manning. More on http://zeigermann.eu/