This talk is an attempt at explaining the power of the Gaussian[tm] by stepping up the ladder from Naive Bayes to Mixtures to Neural Mixtures to Gaussian Processes.
As a machine learning professional you might feel like there are so many algorithms to learn that it is hard to keep up, which can be very demotivating. This talk is not about downplaying that feeling; it is about demonstrating a lovely hack: understanding a mother algorithm.
It turns out that if you appreciate what the Gaussian distribution can do, lots of algorithms become much easier to grasp. This talk is an attempt at explaining the power of the Gaussian[tm] by stepping up a ladder of algorithms of increasing complexity (see the short sketch after this list):
- Naive Bayes
- Mixture Naive Bayes
- Gaussian Mixture Models
- Outlier Detectors
- Neural Mixture Models
- Gaussian Auto Embeddings
- Gaussian Processes
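To make the "mother algorithm" idea concrete, here is a minimal sketch (my own illustration, not part of the talk's supporting material) using scikit-learn's GaussianMixture: the same fitted mixture of Gaussians gives you clusters and, via its log-likelihood, an outlier score. This is the pattern that the linked scikit-lego mixture methods build on.

```python
# Sketch: one fitted Gaussian mixture, used both as a clusterer and as an
# outlier detector. The data and the 1% threshold are illustrative choices.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(200, 2)),
    rng.normal(loc=(3, 3), scale=0.5, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=42).fit(X)

clusters = gmm.predict(X)                    # hard cluster assignments
log_density = gmm.score_samples(X)           # log-likelihood of each point
threshold = np.quantile(log_density, 0.01)   # flag the 1% least likely points
outliers = log_density < threshold
```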
The talk will contain maths, but it will all be (more than) compensated for with xkcd-style images. The goal is to appreciate the intuition, not the details.
basic
Abstract as a tweet: Gaussian progress. It's meta, but also the most normal conference title this year!
Public link to supporting material: https://scikit-lego.readthedocs.io/en/latest/mixture-methods.html
Domains: Artificial Intelligence, Algorithms, Data Science, IDEs/Jupyter, Machine Learning, Statistics
Domain Expertise: some