2026-05-06, Main Stage
Could AI code generation replace Learning to Rank? AI coding tools can generate rankers, but only up to a point. What techniques matter when building an agent-coded ranker? And where do traditional search techniques still work?
Why can’t I just go to Claude Code and say:
“Build a function that returns the most relevant results possible.”
In this talk we’ll give an AI coding agent a few basic primitives - BM25 retrieval, vector retrieval, and query-to-category similarity. Then we let the agent write search functions built on these tools. We measure whether relevance improves on test and holdout sets, and continue to iterate.
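As a rough illustration, here is a minimal sketch (in Python, with stubbed-out primitives) of the kind of scoring function the agent might write. Every name below is hypothetical, not from the talk:

from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    category: str

# Stub primitives - real versions would wrap a search engine (BM25),
# an embedding model (vector similarity), and an embedding comparison
# for query-to-category similarity.
def bm25_score(query: str, doc: Doc) -> float:
    # Term overlap as a stand-in for a real BM25 score
    terms = set(query.lower().split())
    return float(sum(t in doc.text.lower() for t in terms))

def vector_score(query: str, doc: Doc) -> float:
    # A real version would return embedding cosine similarity
    return 0.0

def category_similarity(query: str, category: str) -> float:
    # A real version would compare query and category embeddings
    return 1.0 if category.lower() in query.lower() else 0.0

def agent_ranker(query: str, doc: Doc) -> float:
    # The kind of function the agent might emit: a weighted blend of
    # the primitives, with weights it adjusts between iterations
    return (0.6 * bm25_score(query, doc)
            + 0.3 * vector_score(query, doc)
            + 0.1 * category_similarity(query, doc.category))

Whether a particular blend like this survives depends only on whether the relevance metric improves.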
Informed by data, we'll discuss the best techniques found across several open datasets. We'll see the promise and limitations of agentic rerankers. Where does traditional search experience still matter? Where does it fall apart? Can an approach like this replace Learning to Rank?
We'll see where code generation stops being vibe coding and evolves into actual model training - with code as the model.
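A minimal sketch of that train-by-codegen loop, assuming hypothetical callables for the coding agent and a per-query relevance metric such as NDCG@10 (again, names are illustrative, not from the talk):

import statistics
from typing import Callable, Sequence

def train_by_codegen(
    generate_ranker: Callable[[float], Callable],   # the coding agent
    evaluate: Callable[[Callable, object], float],  # per-query metric, e.g. NDCG@10
    test_queries: Sequence,
    holdout_queries: Sequence,
    iterations: int = 10,
):
    # Treat each generated ranker like a model checkpoint: keep it only
    # if the test metric improves, then judge the survivor on holdout.
    best_ranker, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = generate_ranker(best_score)  # agent writes new code
        score = statistics.mean(evaluate(candidate, q) for q in test_queries)
        if score > best_score:
            best_ranker, best_score = candidate, score
    holdout_score = statistics.mean(
        evaluate(best_ranker, q) for q in holdout_queries
    )
    return best_ranker, best_score, holdout_score

The test set gates each iteration; the holdout set tells you whether the generated code merely overfit to it.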
In 2012, Doug got bitten by the search bug and he's still trying to keep up. From full-text search, to Learning to Rank models, to search agents that generate their own code, he knows the overwhelming landscape firsthand. Yet Doug still works to deeply understand the what / how / why. He helps teams use these technologies practically, distinguishing hype from reality.
He’s led search at Reddit, Shopify, and Wikipedia, authored Relevant Search and AI Powered Search, and advised 100+ organizations over the years - all in pursuit of the same question: how does search actually work?