Validating and monitoring the performance of your ML Applications (v2)
Niño R. Eclarin
With all the hype around applications that use machine learning, there is one key aspect developers tend to forget: performance checking and monitoring.
ML and AI services have become very accessible and can be integrated into almost any application you can think of. But what do you do after you have integrated ML models into your application? How do you know that the output of those models is correct and up to standard? What are the signs that a model's performance is changing, and how do you act on such changes?
Typically, these problems are discussed only at a theoretical and research level. But how can we carry these techniques over and apply them to our applications? Beyond that, how can we make monitoring and performance checks as simple as writing a unit test (or not)?
In this session, we will learn some simple but effective approaches to model performance monitoring, look at Python implementations and architecture considerations, and review best practices and real-life scenarios of how model monitoring works in practice.
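To make the "monitoring as a unit test" idea concrete, here is a minimal sketch (not taken from the talk itself) of two such checks in Python: an accuracy floor on a labelled holdout set and a Population Stability Index (PSI) drift check on a feature. The thresholds, helper names, and synthetic data below are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of "monitoring as a unit test", assuming numpy and
# scikit-learn are available. Thresholds and data are illustrative only.
import numpy as np
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85   # assumed minimum acceptable accuracy
PSI_WARNING = 0.2       # commonly cited warning threshold for drift


def population_stability_index(reference, live, bins=10):
    """Population Stability Index between a reference and a live distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)    # avoid log(0)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))


def check_accuracy(y_true, y_pred):
    """Fail loudly, like a unit test, if live accuracy drops below the floor."""
    score = accuracy_score(y_true, y_pred)
    assert score >= ACCURACY_FLOOR, f"accuracy {score:.3f} below {ACCURACY_FLOOR}"


def check_feature_drift(reference_feature, live_feature):
    """Fail loudly if a monitored feature has drifted past the PSI threshold."""
    psi = population_stability_index(reference_feature, live_feature)
    assert psi < PSI_WARNING, f"PSI {psi:.3f} exceeds {PSI_WARNING}"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for logged labels, predictions, and a feature column.
    y_true = rng.integers(0, 2, size=1000)
    y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)  # ~90% accurate
    reference = rng.normal(0.0, 1.0, size=5000)
    live = rng.normal(0.1, 1.0, size=5000)    # slight shift in the live data

    check_accuracy(y_true, y_pred)
    check_feature_drift(reference, live)
    print("All model performance checks passed.")
```

Checks like these can run on a schedule against logged predictions and features, turning monitoring into something a CI pipeline or cron job can fail on, rather than a purely manual review.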