Attention in Neural Nets
Linear Digressions - A podcast by Ben Jaffe and Katie Malone
There’s been a lot of interest lately in the attention mechanism in neural nets—it’s got a colloquial name (who’s not familiar with the idea of “attention”?), but in practice it’s a technical trick that’s been pivotal to some recent advances in computer vision and especially word embeddings. It’s an interesting example of trying out human-cognitive-ish ideas (like focusing consideration more on some inputs than others) in neural nets, and one of the more high-profile recent successes in playing around with neural net architectures for fun and profit.
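To make the “focusing consideration more on some inputs than others” idea concrete, here’s a minimal sketch of scaled dot-product attention in NumPy—one common formulation of the mechanism, not necessarily the exact variant discussed in the episode. All names here (`softmax`, `attention`, the toy data) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scores: how relevant each key is to each query
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into weights summing to 1 — the "focus"
    weights = softmax(scores, axis=-1)
    # Output: a weighted average of the values
    return weights @ V, weights

# Toy example: 3 input tokens, each a 4-dimensional vector
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = attention(X, X, X)  # self-attention: Q = K = V = X
```

Each row of `w` is a probability distribution over the inputs, so the model can weight some inputs heavily and nearly ignore others—the “attention” the episode’s title refers to.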