Siddharth Shankar
Siddharth Shankar is a Machine Learning Engineer working at Mphais.AI. His current work focuses on multimodal fine-tuning for mortgage and investment banking. Before entering financial AI, he worked on optimization modeling for aviation operations and developed MLOps pipelines that enabled scalable, reproducible machine learning deployment across complex systems.
He earned his Master’s in Computer Science and Information Systems from the University of Maryland, where his research interests lay at the intersection of Machine Learning and Human-Computer Interaction.
Siddharth is passionate about designing AI systems that are not just accurate or efficient, but also trustworthy, compliant, and production-ready.
Session
As organizations move from prototyping LLMs to deploying them in production, the biggest challenges are no longer about model accuracy; they’re about trust, security, and control. How do we monitor model behavior, prevent prompt injection, track drift, and enforce governance across environments?
This talk presents a real-world view of how to design secure and governed LLM pipelines, grounded in open-source tooling and reproducible architectures. We’ll discuss how multi-environment setups (sandbox, runner, production) can isolate experimentation from deployment, how to detect drift and hallucination using observability metrics, and how to safeguard against prompt injection, data leakage, and bias propagation.
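As a taste of the prompt-injection safeguards discussed above, here is a minimal, hypothetical sketch of a heuristic pre-filter that flags suspicious inputs before they reach the model. The pattern list and function name are illustrative assumptions, not part of the talk's actual tooling; production systems would layer this with model-based classifiers and output checks.

```python
import re

# Illustrative (hypothetical) patterns commonly seen in injection attempts.
# A real deployment would maintain and evaluate these continuously.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"you are now",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection heuristic."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A filter like this is cheap to run on every request and easy to audit, which is why heuristic layers often sit in front of heavier model-based defenses.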
Attendees will gain insight into how tools like MLflow, Ray, and TensorFlow Data Validation can be combined for **version tracking, monitoring, and auditability**, without turning your workflow into a black box. By the end of the session, you’ll walk away with a practical roadmap for what makes an LLMOps stack resilient: reproducibility by design, continuous evaluation, and responsible governance across the LLM lifecycle.
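To make the drift-detection idea concrete, the sketch below computes a Population Stability Index (PSI) over categorical model outputs (e.g., response labels) between a baseline window and a live window. This is one common observability metric, offered here as an assumed example rather than the specific method the session presents; the 0.25 threshold often cited for "significant drift" is a rule of thumb, not a standard.

```python
import math
from collections import Counter

def psi(baseline, live, eps=1e-6):
    """Population Stability Index between two categorical samples.

    Near 0 means the live distribution matches the baseline;
    larger values indicate drift (0.25 is a common alert threshold).
    """
    categories = set(baseline) | set(live)
    b_counts, l_counts = Counter(baseline), Counter(live)
    score = 0.0
    for c in categories:
        # Clamp proportions to eps so empty bins don't break the log.
        p = max(b_counts[c] / len(baseline), eps)
        q = max(l_counts[c] / len(live), eps)
        score += (q - p) * math.log(q / p)
    return score
```

In an LLMOps pipeline, a metric like this would be logged per time window (e.g., via MLflow) so that governance dashboards can alert when live traffic diverges from the evaluated baseline.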