PyConLT 2021

Fine-tuning large NLP models
09-03, 12:00–12:30 (Europe/Vilnius), Main

Fine-tuning has become the standard approach to transfer learning in natural language processing. Over the last three years, it has been applied to many NLP tasks such as summarization, translation, and natural language inference, and has helped achieve remarkable results. This transfer learning technique can help you in a wide range of NLP and deep learning tasks. If you want to learn more, join this talk at the PyCon Lithuania conference.


NLP models grow larger every year. Over the last three years, model sizes have increased more than 1,000-fold, from BERT with 110 million parameters in 2018 to GPT-3 with 175 billion parameters in 2020. Training GPT-3 in the cloud could cost you around 12 million dollars. So how can we benefit from these models on a small budget? The answer is transfer learning and fine-tuning. Transfer learning is when a model developed for one task is reused for a second, similar task. Fine-tuning is one of the transfer learning techniques that can help you achieve good performance at a small cost.
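As a rough illustration of what fine-tuning looks like in practice, here is a minimal sketch using the Hugging Face Transformers and Datasets libraries; the checkpoint, dataset, and hyperparameters are illustrative assumptions, not the exact setup covered in the talk.

    # Minimal fine-tuning sketch (illustrative choices: BERT checkpoint, IMDB sentiment data).
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    model_name = "bert-base-uncased"  # pretrained checkpoint to start from
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Load a small labeled dataset and tokenize it for the model.
    dataset = load_dataset("imdb")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

    dataset = dataset.map(tokenize, batched=True)

    # Fine-tune the whole pretrained network on the downstream task for a few epochs.
    args = TrainingArguments(
        output_dir="finetuned-bert",
        num_train_epochs=2,
        per_device_train_batch_size=16,
        learning_rate=2e-5,  # small learning rate is typical when fine-tuning
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
        eval_dataset=dataset["test"].select(range(500)),
    )
    trainer.train()

The key idea is that the pretrained weights are only nudged with a small learning rate for a few epochs, so a strong model can be adapted to a new task on a single GPU instead of being trained from scratch.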


What topics define your talk the best?

Machine Learning, Deep Learning, Scientific Research, First-time speaker

I have over 5 years of experience working in the data field. Currently, I work as a Data Scientist at IBM, where I build machine learning and deep learning models that help companies achieve their AI goals! I'm deeply passionate about speech recognition and computer vision applications. In my free time, I am building an automatic speech recognition system for the Lithuanian language.
Previously, I worked at INVL Asset Management as a Data Analyst, improving products and services for customers by using advanced analytics, setting up data analytics tools (e.g. Qlik Sense), creating and maintaining models, and onboarding new data sets.