Declarative Computing in an AI World with Jeff Chou, Co-founder & CEO at Sync Computing
Data Radicals - A podcast by Alation - Wednesdays

Cloud costs are skyrocketing, and for data teams running AI inference, Spark jobs, and big data workloads, optimization is no easy task. Tuning these workloads for efficiency without disrupting production is a major challenge. But what if there were a better way?

In this episode of Data Radicals, Satyen Sangani sits down with Jeff Chou, CEO and co-founder of Sync Computing, to explore a revolutionary approach to cloud optimization. Sync's closed-loop tuning engine continuously fine-tunes workloads in real time, without manual adjustments. The result? 50-60% cost savings on Spark jobs and major efficiency gains for AI workloads.

Listen to this episode to learn:
- Why declarative computing is the future: engineers define their desired outcomes instead of manually configuring infrastructure.
- How Sync Computing slashes cloud costs by dynamically adjusting resources in production, ensuring efficiency without sacrificing reliability.
- The game-changing impact of Sync's partnership with NVIDIA to optimize GPU workloads, where both the stakes and the costs are even higher.

If you're managing cloud workloads, this conversation is a must-listen. Discover how cutting-edge AI-powered optimization is reshaping efficiency for Databricks, AI inference, Spark, and beyond.