BI NMA 04: Deep Learning Basics Panel

Brain Inspired - A podcast by Paul Middlebrooks

This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.

Guests:
Amita Kapoor
Lyle Ungar @LyleUngar
Surya Ganguli @SuryaGanguli

The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Timestamps: