2023-04-17 – A1
Software engineering is all about getting computers to do what we want them to do. As machine learning methods have improved, they've introduced a new way to specify the desired behaviour. Instead of writing code, you can prepare example data. Large language models are now starting to introduce a third option: instead of example data, you can provide a natural language prompt.
Writing a prompt is far quicker than building a good set of training examples, but it's also a much less precise way to get the behaviour you want. There's also no reliable way to incrementally improve the results, even when better performance would be very valuable to you. Essentially, this new approach has a high floor but a low ceiling.
In this talk, I'll show how large language models such as GPT-3 complement rather than replace existing machine learning workflows. Initial annotations are gathered from the OpenAI API via zero- or few-shot learning, and then corrected by a human decision maker using the Prodigy annotation tool. The resulting annotations can then be used to train and evaluate models as normal. This process results in higher accuracy than can be achieved from the OpenAI API alone, with the added benefit that you'll own and control the model at runtime.
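As a rough illustration of the first step of that workflow, the sketch below builds a zero-shot prompt and maps the model's reply back onto a fixed label set, so drafts can be queued for human review. The label set, prompt wording, and model name are my own assumptions for illustration, not details from the talk, and the actual API call is shown only as a commented-out example since it needs an API key.

```python
# Hypothetical label set for a text-classification annotation task.
LABELS = ["positive", "negative", "neutral"]

def build_prompt(text, labels):
    """Build a zero-shot classification prompt for one example."""
    options = ", ".join(labels)
    return (
        f"Classify the following text as one of: {options}.\n"
        f"Text: {text}\n"
        "Label:"
    )

def parse_label(completion, labels):
    """Map the model's raw completion back onto the label set."""
    answer = completion.strip().lower()
    for label in labels:
        if label in answer:
            return label
    return None  # unmapped replies get flagged for human review

# Illustrative API call (requires the openai package and OPENAI_API_KEY):
# import openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user",
#                "content": build_prompt("Great product!", LABELS)}],
# )
# draft = parse_label(response.choices[0].message.content, LABELS)
```

The drafts produced this way are only a starting point; the point of the workflow is that a human corrects them in an annotation tool before the data is used to train and evaluate a model.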
Advanced
Expected audience expertise: Python – Intermediate
Abstract as a tweet – Large language models like @OpenAI GPT-3 can complement existing machine learning workflows really well. You can get initial annotations from GPT-3, quickly fix them with an annotation tool like https://prodi.gy, and train a cheaper and better model.
Ines Montani is a developer specializing in tools for AI and NLP technology. She’s a Fellow of the Python Software Foundation, the co-founder and CEO of Explosion and a core developer of spaCy, one of the leading open-source libraries for Natural Language Processing in Python, and Prodigy, a modern annotation tool for creating training data for machine learning models.