
Omkar Joshi, Lead Security Engineer, Coupa Software
Pallavi Deshmukh, Cloud Security Manager, Coupa Software
Omkar Joshi: Over 14 years of experience in the security domain, specializing in Penetration Testing, Application Security, Cloud Security, Architecture and Forensics Investigation.
Leading an Offensive Security (OffSec) and Security Architecture team with a passion for Red Teaming and Security Research.
Reported multiple vulnerabilities in products and applications, recognized with CVEs.
Holds prestigious certifications including GIAC Cloud Penetration Tester (GCPN), Offensive Security Certified Professional (OSCP), Offensive Security Wireless Professional (OSWP), and Certified Red Team Operator (CRTO), among others.
Presented at prominent conferences such as BSides Budapest, BSides Milano, Hacktivity, VulnCon 2024, Hacker Halted, CyberSec Asia, Identity Shield, Microsoft BlueHat 2025, PHDays 2025, VulnCon 2025, OWASP AppSec Days 2025, and Hacker Halted 2025.
Pallavi Deshmukh: Pallavi is a Cloud Security Manager overseeing cloud security operations and IAM, with 15 years of experience in cybersecurity. Passionate about application security, she excels at navigating complex security challenges and consistently works to strengthen defenses against emerging threats. With deep expertise in penetration testing, Pallavi focuses on identifying vulnerabilities in complex and demanding environments. She has spoken at multiple industry-leading conferences, including Hacker Halted, VulnCon, Identity Shield, and BlueHat, and continues sharing her knowledge and expertise in cybersecurity.
Talks & Q&A
When AI Agents Become Insiders: Hidden Access Risks in Agentic Systems
Description
AI agents are rapidly evolving into high-privilege orchestrators—reading sensitive knowledge bases, invoking cloud tools, generating presigned URLs, executing workflow actions, and processing user uploads. These interactions create new attack surfaces never anticipated in traditional threat models. Modern agent frameworks blur boundaries between reasoning, automation, and infrastructure control, enabling attackers to leverage the agent’s privileged environment instead of directly exploiting the underlying system.
This talk presents a forward-looking offensive analysis—rooted in recent research, red-team findings, and cross-industry incidents—of how AI agents can unintentionally expose access pathways that escalate into major compromise. We examine failure modes in which agents leak internal embeddings, produce overly privileged presigned URLs, incorrectly validate uploaded content, hallucinate cloud instructions, or chain tools in unsafe sequences. Importantly, we do not provide exploit code; instead, we analyze structural weaknesses and how attackers might conceptually exploit them.
Through simulated adversarial scenarios, we show how subtle prompt manipulation, crafted file uploads, or poisoned knowledge-base content could influence an AI agent to reveal sensitive data, execute unintended actions, or collapse internal trust boundaries. We introduce a practical framework for modeling “AI-induced access risks” and demonstrate why these agent-driven pathways are fundamentally different from classical attack chains.
Finally, we deliver actionable defensive strategies: permission-scoped tool invocation, agent behavioral monitoring, safe URL governance, isolation for upload pipelines, KB ingestion attestation, and AI-aware detection patterns.
This session is not about exploiting AI—it's about preventing AI from accidentally exploiting you.

