EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You

Cloud Security Podcast by Google - A podcast by Anton Chuvakin - Mondays

Guest:
David LaBianca, Senior Engineering Director, Google

Topics:
- The universe of AI risks is broad and deep. We've made a lot of headway with our SAIF framework: can you give us (a) a 90-second tour of SAIF, (b) share how it's gotten so much traction, and (c) talk about where we go next with it?
- The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
- Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
- How do we plan to work with existing organizations, such as the Frontier Model Forum (FMF) and the Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
- AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
- What do we expect to see out of CoSAI's work, and when? What should people be looking forward to, and what are you most looking forward to releasing from the group?
- We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
- An off-the-cuff question: how do you do AI governance well?

Resources:
- CoSAI site, CoSAI 3 projects
- SAIF main site
- "Gen AI governance: 10 tips to level up your AI program"
- "Securing AI: Similar or Different?" paper
- Our Security of AI Papers and Blogs Explained