Serhii Sokolenko
Serhii Sokolenko is a co-founder of Tower, a Pythonic platform for data flows and agents running on top of open analytical storage. Prior to founding Tower, Serhii worked at Databricks, Snowflake, and Google on data processing and databases.
Sessions
Code-generating LLMs have matured to the point where they can reliably scaffold data pipelines and data agents when used in a supervised, engineering-first workflow. This tutorial demonstrates how to combine modern AI coding assistants with a production-ready Python deployment platform (Tower.dev) to build and operate real data systems.
Participants will learn how to structure collaborative human/AI-assistant development loops, in which engineers provide architecture, domain knowledge, and review while the AI accelerates implementation. We will build a data pipeline and a lightweight data agent, iterating with an AI assistant to generate, test, and improve code.
The session also covers critical operational concerns such as:
- Security
- Scaling
- Observability
- Debugging
You will also see how production feedback can be looped back into the assistant to continuously improve generated code.
This is not about “vibe coding” a website. It is about disciplined, review-driven AI collaboration that meaningfully improves productivity for data practitioners at all levels.
Software engineering is changing fast. With AI now writing and reasoning about code, does it still make sense to learn Python or any language at all?
Is this the evolution of our craft, a true revolution, or just hype from those who benefit most? Join us to debate the future of Python, the risks of AI-driven development, and what skills will actually matter next.
The AI world is buzzing with claims about “agentic intelligence” and autonomous reasoning. Behind the hype, however, a quieter shift is taking place: Small Language Models (SLMs) are proving capable of many reasoning tasks once assumed to require massive LLMs. When paired with fresh business data from modern lakehouses and accessed through tool calling, these models can power surprisingly capable agents.
In this talk, we cut through the noise around “agents” and examine what actually works today. You’ll see how compact models such as Phi-2 or xLAM-2 can reason and invoke tools effectively, and how to run them on a development laptop or a modest cluster for fast iteration.
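The tool-calling loop such a model drives can be sketched as follows. The "model" here is a stub that emits a JSON tool call, the common function-calling format that models like xLAM-2 are trained to produce; a real deployment would swap in an actual SLM behind the same interface, and the tool registry shown is an illustrative assumption.

```python
import json

# A minimal sketch of an SLM tool-calling loop. TOOLS is an
# illustrative registry; model_step is a hypothetical stub for
# an actual small language model.

TOOLS = {
    "get_revenue": lambda region: {"EU": 1200, "US": 3400}.get(region, 0),
}

def model_step(question: str) -> str:
    # Stub: a real SLM would decide which tool to call and emit
    # the call as JSON in a function-calling format.
    return json.dumps({"tool": "get_revenue", "arguments": {"region": "EU"}})

def run_agent(question: str) -> int:
    call = json.loads(model_step(question))
    tool = TOOLS[call["tool"]]          # look up the requested tool
    return tool(**call["arguments"])    # execute with model-chosen args

answer = run_agent("What was EU revenue?")  # → 1200
```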
Grounding agents in business facts stored in Iceberg tables reduces hallucinations, while Iceberg’s read scalability enables thousands of agents to operate in parallel on a shared source of truth.
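The grounding idea can be sketched in plain Python: the agent may answer only from rows it can actually find in the shared fact table, and refuses otherwise. The list of dicts below stands in for rows scanned from an Iceberg table (e.g. via a pyiceberg table scan); the table name, columns, and values are illustrative assumptions.

```python
# A minimal sketch of grounding an agent in a fact table.
# FACTS stands in for rows read from a shared Iceberg table;
# the metrics and values are made up for illustration.

FACTS = [
    {"metric": "churn_rate", "quarter": "Q3", "value": 0.042},
    {"metric": "arr", "quarter": "Q3", "value": 18_500_000},
]

def grounded_answer(metric: str, quarter: str):
    # Return a fact only if it exists; refuse rather than guess,
    # which is how grounding curbs hallucinated numbers.
    for row in FACTS:
        if row["metric"] == metric and row["quarter"] == quarter:
            return row["value"]
    return None  # caller surfaces "no data" instead of a made-up figure

grounded_answer("churn_rate", "Q3")  # → 0.042
grounded_answer("churn_rate", "Q1")  # → None
```

Because the table is read-only from the agents' perspective, many agents can scan the same snapshot concurrently, which is what makes the thousands-in-parallel pattern practical.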
Attendees will leave with a practical understanding of data agent architectures, SLM capabilities, Iceberg integration, and a realistic path to deploying useful data agents, all without a GPU farm.