Harri Valpola: System 2 AI and Planning in Model-Based Reinforcement Learning
Machine Learning Street Talk (MLST)
In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher, and Connor Shorten interviewed Harri Valpola, CEO and Founder of Curious AI. We continued our discussion of System 1 and System 2 thinking in deep learning, along with a range of topics in model-based reinforcement learning. Dr. Valpola describes some of the challenges of modelling industrial control processes, such as wastewater filters and paper mills, using model-based RL. He and his collaborators recently published "Regularizing Trajectory Optimization with Denoising Autoencoders", which addresses the tendency of planning algorithms to exploit inaccuracies in their world models (a short sketch of this idea follows the links below).

Timestamps:
00:00:00 Intro to Harri and Curious AI, System 1/System 2
00:04:50 Background on model-based RL challenges from Tim
00:06:26 Other interesting research papers on model-based RL from Connor
00:08:36 Intro to Curious AI's recent NeurIPS paper on model-based RL and denoising autoencoders from Yannic
00:21:00 Main show kick-off, System 1/2
00:31:50 Where does the simulator come from?
00:33:59 Evolutionary priors
00:37:17 Consciousness
00:40:37 How does one build a company like Curious AI?
00:46:42 Deep Q Networks
00:49:04 Planning and model-based RL
00:53:04 Learning good representations
00:55:55 Typical problems Curious AI might solve in industry
01:00:56 Exploration
01:08:00 Their paper: Regularizing Trajectory Optimization with Denoising Autoencoders
01:13:47 What is epistemic uncertainty?
01:16:44 How would Curious AI develop these models?
01:18:00 Explainability and simulations
01:22:33 How System 2 works in humans
01:26:11 Planning
01:27:04 Advice for starting an AI company
01:31:31 Real-world implementation of planning models
01:33:49 Publishing research and openness

We really hope you enjoy this episode; please subscribe!

Regularizing Trajectory Optimization with Denoising Autoencoders: https://papers.nips.cc/paper/8552-regularizing-trajectory-optimization-with-denoising-autoencoders.pdf
Pulp, Paper & Packaging: A Future Transformed through Deep Learning: https://thecuriousaicompany.com/pulp-paper-packaging-a-future-transformed-through-deep-learning/
Curious AI: https://thecuriousaicompany.com/
Harri Valpola's publications: https://scholar.google.com/citations?user=1uT7-84AAAAJ&hl=en&oi=ao

Some interesting papers on model-based RL:
GameGAN: https://cdn.arstechnica.net/wp-content/uploads/2020/05/Nvidia_GameGAN_Research.pdf
Plan2Explore: https://ramanans1.github.io/plan2explore/
World Models: https://worldmodels.github.io/
MuZero: https://arxiv.org/pdf/1911.08265.pdf
PlaNet: A Deep Planning Network for RL: https://ai.googleblog.com/2019/02/introducing-planet-deep-planning.html
Dreamer: Scalable RL using World Models: https://ai.googleblog.com/2020/03/introducing-dreamer-scalable.html
Model-Based RL for Atari: https://arxiv.org/pdf/1903.00374.pdf
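
For readers who want a concrete picture of the idea discussed around 01:08:00, here is a minimal, hypothetical sketch of trajectory optimization regularized by a denoising autoencoder (DAE). It is not the authors' implementation: the `dynamics`, `dae`, and `reward_fn` models and all hyperparameters are assumptions, and the DAE reconstruction error is used as a simple stand-in for the familiarity term the paper uses to keep planned trajectories in-distribution.

```python
# Minimal sketch (assumptions, not the paper's code): gradient-based trajectory
# optimization over a learned dynamics model, with a denoising-autoencoder
# reconstruction penalty that discourages the planner from exploiting the model
# in state-action regions it was never trained on.
import torch


def plan(dynamics, dae, reward_fn, s0, action_dim,
         horizon=20, iters=100, lr=0.05, lam=1.0, noise_std=0.1):
    """Optimize an open-loop action sequence: maximize predicted return
    minus a DAE-based familiarity penalty on the imagined trajectory."""
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)

    for _ in range(iters):
        opt.zero_grad()
        s = s0
        ret, penalty = torch.zeros(()), torch.zeros(())
        for a in actions:
            x = torch.cat([s, a], dim=-1)            # (state, action) pair
            x_noisy = x + noise_std * torch.randn_like(x)
            # Reconstruction error is low where training data was dense;
            # penalizing it keeps the imagined rollout in-distribution.
            penalty = penalty + ((dae(x_noisy) - x) ** 2).mean()
            ret = ret + reward_fn(s, a)
            s = dynamics(s, a)                       # imagined next state
        loss = -ret + lam * penalty                  # maximize return, stay familiar
        loss.backward()
        opt.step()

    return actions.detach()
```

The only addition over a plain planner is the `penalty` term: without it, the optimizer is free to push the imagined trajectory into regions the world model has never seen, where its reward predictions cannot be trusted.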