Security BSides Las Vegas 2024

JIT Happens: How Instacart Uses AI to Keep Doors Open and Risks Closed
2024-08-07, Florentine A

Instacart has been on a journey to migrate employees from long-lived access to just-in-time (JIT) access to our most critical systems. However, we quickly discovered that if the request workflow is inefficient, JIT won’t be adopted widely enough to be useful. How could we satisfy two parties with completely different priorities: employees who want access and want it right now, and auditors who want assurance, control, and oversight? How could we avoid slipping back into old habits of long-lived access and quarterly access reviews?

In this demo-driven technical talk, we’ll show how Instacart developed an LLM-powered AI bot that satisfies these seemingly competing priorities and delivers true, fully automated JIT access. This talk will be informative for anyone curious about how AI bots can be leveraged to automate workflows securely. We’ll step through how best to use LLMs for developing or enhancing internal security tooling by demonstrating what works, what doesn’t, and what pitfalls to watch for. Our goal is to share tactics that others can use to inform their own AI bot development, increase organizational efficiency, and inspire LLM-powered use cases for security teams beyond access controls.


Instacart has a fairly small security team with a wide mandate. To be effective, we have to be scrappy and somewhat startup-minded in the ways that we operate, especially when it comes to processes like access requests or access reviews. When preparing to IPO in 2023, we knew we needed to tighten up access to financial data. That meant either locking things down (and impacting product and engineering teams in a major way) or embracing more efficient access control methods. After discussions with our internal audit team, we found common ground in embracing just-in-time (JIT) access, with manager approvals required for some high-impact roles. Within a week of our JIT roll-out, it was clear that manager approvals weren’t going to work; managers were not seeing the approval requests or were often out of office, leaving time-sensitive requests to sit for days. Being the engineering-minded team that we are, we set out to develop a process we could trust to automate JIT approvals instantly by leveraging an LLM to do what it does best: parse context from natural language.
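
To make that last step concrete, here is a minimal sketch of the core pattern: prompting an LLM to parse the free-text context of an access request into a structured approve-or-escalate decision. The prompt wording, policy summary, and AccessRequest fields below are illustrative placeholders for this write-up, not the production implementation.

import json
from dataclasses import dataclass
from openai import OpenAI  # assumes the official openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@dataclass
class AccessRequest:  # illustrative shape, not the real schema
    requester: str
    role: str
    justification: str

SYSTEM_PROMPT = (
    "You review just-in-time access requests. Using only the policy summary and the "
    "request provided, reply with a JSON object containing 'decision' ('approve' or "
    "'escalate') and 'reason' (one sentence). Escalate anything ambiguous."
)
POLICY_SUMMARY = ("Read-only analytics roles may be auto-approved with a clear business "
                  "reason; roles touching financial data must be escalated to a human.")

def evaluate(request: AccessRequest) -> dict:
    """Ask GPT-4-Turbo for a structured decision on a single access request."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,                            # favor repeatable outputs
        response_format={"type": "json_object"},  # JSON mode keeps the reply parseable
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Policy: {POLICY_SUMMARY}\n"
                                        f"Requester: {request.requester}\n"
                                        f"Role: {request.role}\n"
                                        f"Justification: {request.justification}"},
        ],
    )
    return json.loads(response.choices[0].message.content)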

During this demo-driven talk, we will cover how best to use LLMs for developing internal security tooling (what works, what doesn’t, what pitfalls to watch out for). We assume the audience has used solutions like ChatGPT, or has even dabbled with the GPT-4 API, but hasn’t yet taken the leap into building production-grade solutions. Live demos will step attendees through real examples with real-world(ish) data, iteratively refining the LLM’s raw responses into production-grade output. Our goal is to show the audience that building security tools that incorporate LLMs is actually quite easy, and that they too can find novel ways to empower users without sacrificing security. Finally, by showing a fully operationalized, end-to-end solution, we hope to dispel some of the FUD around LLMs and demonstrate novel ways to increase organizational efficiency.

All demos will be live, but we’ll have pre-recorded versions as well, just in case.

Tools used (a minimal sketch of how these fit together follows the list):
- Python 3 on AWS Lambda
- OpenAI GPT-4-Turbo
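
The following is a hypothetical wiring sketch only: a Python 3 Lambda handler forwarding an access-request event to GPT-4-Turbo. The event shape, the "request" field, and the decision schema are assumptions for illustration, not the actual handler.

import json
import os
from openai import OpenAI  # bundled into the deployment package or a Lambda layer

# Created once per warm container; in practice the key would likely come from a secrets manager.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    """Handle one access-request event (shape assumed; e.g., delivered via API Gateway or SQS)."""
    request = event["request"]  # hypothetical field carrying requester, role, justification
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Reply with a JSON object containing 'decision' "
                                          "('approve' or 'escalate') and 'reason'."},
            {"role": "user", "content": json.dumps(request)},
        ],
    )
    decision = json.loads(response.choices[0].message.content)
    return {"statusCode": 200, "body": json.dumps(decision)}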

Talk Outline:
Intro – 5 Minutes
- Who we are
- How the project started: our obsession with efficiency

IAM and AI: Our vision – 5 Minutes
- The access request model is fundamentally broken yet auditors make us do it anyway
- How AI can be applied to make IAM and access controls more efficient and effective
- Potential security impacts and outcomes

Inputs and outputs: How we get consistent AI data and how we’re actually using it – 25 Minutes
- Data discussion: What we needed to get out of the AI bot, and therefore what we needed to feed into it
- LIVE DEMO: How we built the bot
-- Preparing the data
-- Prompting the LLM to achieve consistent outputs
-- Dealing with LLM hallucinations and other problems (an illustrative guardrail sketch follows this section)
- Integration: A brief summary of how we incorporated the bot with our access control stack
- LIVE DEMO: Real-world applications
-- Feeding real(ish) data into the bot and observing prompt generation and LLM outputs “in slow motion” (i.e., with sleeps added) so we can talk through each step
-- JIT access approval via the bot
-- End-user experience optimization (abstracting away significant complexity for great UX)
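
As referenced above, here is a small illustrative guardrail sketch for the “consistent outputs” and “hallucinations” items: validate the model’s JSON before acting on it, and fail closed to human escalation. The role names, schema, and fallback behavior are placeholder assumptions, not the production logic.

import json

KNOWN_ROLES = {"analytics-readonly", "billing-admin", "deploy-prod"}  # illustrative role names
ALLOWED_DECISIONS = {"approve", "escalate"}

def parse_decision(raw_model_output: str, requested_role: str) -> dict:
    """Validate the LLM's reply before acting on it; fail closed by escalating."""
    fallback = {"decision": "escalate", "reason": "Model output failed validation."}
    try:
        parsed = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return fallback  # the model ignored the requested JSON shape
    if parsed.get("decision") not in ALLOWED_DECISIONS:
        return fallback  # e.g., a hallucinated "conditionally approve"
    if requested_role not in KNOWN_ROLES:
        return fallback  # never auto-approve a role we don't recognize
    return {"decision": parsed["decision"], "reason": str(parsed.get("reason", ""))}

# A hallucinated decision value gets downgraded to a human escalation:
print(parse_decision('{"decision": "auto-grant", "reason": "looks fine"}', "analytics-readonly"))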

What’s next – 5 Minutes
- What we’ll implement next (dynamic updates of role capabilities and more)
- How we’ll measure impacts/success

Conclusion – 5 Minutes
- Where you can follow us/the project and get more info (we will be open-sourcing the code)
- Questions?

Matthew Sullivan leads the infrastructure and identity security functions at Instacart, where he manages a talented team of individual contributors responsible for all cloud platform security controls across all three major cloud providers. Prior to joining Instacart, Matthew spent ten years at Workiva, where he helped establish and mature the company’s security program as a security engineer, infrastructure architect, and then finally as the lead product manager for IAM and security features. He is also the founder of BugAlert.org, a non-profit service that alerts the security community about time-sensitive, high-impact vulnerabilities.

Dom is a New York City-based Senior Security Engineer at Instacart, where he specializes in Cloud Security, Infrastructure, and Identity. His current focus is on developing scalable internal tooling and enhancing automation processes. Before joining Instacart, Dominic led the Security Engineering team at Latch, where he was instrumental in establishing foundational security protocols, with an emphasis on hardware-based controls and Public Key Infrastructure (PKI). Prior to moving into security-focused roles, he served as a Backend Engineer at Microsoft.