2025-07-24 – Main Room 2
Today, most AI applications send data to cloud LLM providers, raising privacy concerns. This talk introduces a way to build AI applications that keep everything local on your machine. By running LLMs locally with Ollama, driven by a small Julia client script, and managing data with open-source vector databases such as Qdrant, we avoid transmitting sensitive information to external cloud providers. We will also highlight LangChain's ability to create versatile agents capable of handling tasks autonomously.
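To make the setup concrete, here is a minimal sketch of such a Julia client script. It assumes Ollama is running on its default port (11434) with a model already pulled; the model name `llama3` and the use of the HTTP.jl and JSON3.jl packages are illustrative choices, not part of the talk's actual demo code:

```julia
using HTTP, JSON3

# Send a prompt to a locally running Ollama server; nothing leaves the machine.
function ollama_generate(prompt::AbstractString; model::AbstractString="llama3")
    body = JSON3.write(Dict(
        "model"  => model,
        "prompt" => prompt,
        "stream" => false,   # ask for one complete JSON response instead of a stream
    ))
    resp = HTTP.post("http://localhost:11434/api/generate",
                     ["Content-Type" => "application/json"], body)
    return JSON3.read(resp.body).response
end

println(ollama_generate("Write a Julia function that computes a factorial."))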
In this talk, we'll cover:
- Introduction to Local AI (5 minutes): overview of cloud-based AI privacy issues and the importance of local AI.
- Exploring Ollama, open-source vector databases such as Qdrant, and LangChain (10 minutes): detailed insights into each tool, generating embeddings with Ollama for vector searches in Qdrant (see the sketch after this list), and demonstrating how LangChain agents can perform tasks such as document summarization and API interactions, all while maintaining data privacy.
- Live Demo and Applications (15 minutes): a practical demonstration of these tools and a discussion of real-world use cases, implemented with a simple Julia client script that connects to Ollama (a locally running LLM) and generates Julia code.
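As a sketch of the embedding-and-search step above, the snippet below asks Ollama for an embedding and runs a nearest-neighbour query against Qdrant's REST API. The collection name `docs`, the embedding model `nomic-embed-text`, and a Qdrant instance on its default port 6333 are all assumptions for illustration; the collection is assumed to have been populated beforehand:

```julia
using HTTP, JSON3

const OLLAMA = "http://localhost:11434"  # assumed local Ollama endpoint
const QDRANT = "http://localhost:6333"   # assumed local Qdrant endpoint

# Ask Ollama to embed a piece of text locally (model name is an assumption).
function embed(text::AbstractString)
    resp = HTTP.post("$OLLAMA/api/embeddings",
                     ["Content-Type" => "application/json"],
                     JSON3.write(Dict("model" => "nomic-embed-text", "prompt" => text)))
    return collect(Float64, JSON3.read(resp.body).embedding)
end

# Search a Qdrant collection for the points closest to the query embedding.
function search(collection::AbstractString, query::AbstractString; limit::Int=3)
    resp = HTTP.post("$QDRANT/collections/$collection/points/search",
                     ["Content-Type" => "application/json"],
                     JSON3.write(Dict(
                         "vector"       => embed(query),
                         "limit"        => limit,
                         "with_payload" => true,  # return stored payloads, not just ids
                     )))
    return JSON3.read(resp.body).result
end

# Hypothetical usage: assumes a collection named "docs" already exists.
for hit in search("docs", "How do I write a factorial in Julia?")
    println(hit.score, " => ", hit.payload)
end
```

Because both services run on localhost, the query text and the documents it matches never leave the machine, which is the point of the whole setup.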
Shivay Lamba is a software developer specializing in DevOps, machine learning, and full-stack development.
He is an open-source enthusiast who has participated in programs such as Google Code-in and Google Summer of Code as a mentor, and has been an MLH Fellow.
He is also actively involved in community work: he is a TensorFlow.js SIG member, a mentor in OpenMined, the CNCF Service Mesh community, and the SODA Foundation, and has spoken at conferences including GitHub Satellite, Voice Global, the FOSSASIA Tech Summit, and TensorFlow.js Show & Tell.