Large Language Models (LLMs) are evolving from text predictors into logical reasoners with the potential to revolutionize how we engage with knowledge, design systems, and create digital tools. Yet their power is constrained by the limits of context: short-term memory and the engineering workarounds built around it restrict their ability to sustain deep reasoning across tasks and over time.
This session features Vince Trost, co-founder of Plastic Labs, whose work explores extending LLM memory and building systems that transcend context engineering. Drawing from his experiments and real-world applications, Vince will outline a new design paradigm for reasoning systems, one that enables continuity, resilience, and human–AI collaboration at scale.