Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake

The MLSecOps Podcast - A podcast by MLSecOps.com

This talk makes it increasingly clear: the time for machine learning security operations (MLSecOps) is now. In "Indirect Prompt Injections and Threat Modeling of LLM Applications" (transcript here -> https://bit.ly/45DYMAG), we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cybersecurity engineer and researcher Kai Greshake centers around the concept of indirect prompt injections, a novel adversarial attack...