Instructors: Alona Fyshe & Alex Murphy
YouTube Class Recording Playlist 📺
❗Post-class Write Up Blog Post: COMING SHORTLY
Course Description: Recent work has shown that the representations learned by machine learning models (in particular, neural network models) bear a remarkable similarity to the representations we can detect in the human brain via brain imaging. This class explores that finding through a group project and through paper readings on language, vision, and reinforcement learning.
Course Prerequisites: Strong programming skills; some exposure to machine learning is advantageous but not required.
Course Objectives and Expected Learning Outcomes:
- Understand the basics of machine learning and neuroscience.
- Understand what a representation is, and how we can compare representational spaces.
- Learn how to read and present a paper, and how to lead an engaging discussion about that paper.
- Get hands-on experience in machine learning and neuroscience as part of a group project. Write a paper about the project, coming as close to a submittable paper as possible by the end of the semester.
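One standard technique for comparing representational spaces, which comes up throughout the readings, is representational similarity analysis (RSA). The sketch below is illustrative only: the array names, shapes, and random data are assumptions for demonstration, not course materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: responses of two systems to the same 20 stimuli.
# Rows are stimuli; columns are units (model features or fMRI voxels).
model_acts = rng.standard_normal((20, 100))   # e.g. a network layer
brain_acts = rng.standard_normal((20, 500))   # e.g. voxel responses

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of stimuli."""
    return 1.0 - np.corrcoef(acts)

def rsa_score(a, b):
    """Correlate the upper triangles of two RDMs. A high score means the
    two systems order stimuli similarly, even though they have different
    numbers of units."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

score = rsa_score(rdm(model_acts), rdm(brain_acts))
print(f"RSA similarity: {score:.3f}")
```

Because RSA compares stimulus-by-stimulus dissimilarity structure rather than raw activations, it sidesteps the mismatch in dimensionality between a network layer and a set of voxels.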
See Course Introduction video for a short overview of each paper's motivation.
- Yamins et al. (2014) - Performance-optimized hierarchical models predict neural responses in higher visual cortex
- Horikawa & Kamitani (2017) - Generic decoding of seen and imagined objects using hierarchical visual features
- Konkle & Alvarez (2022) - A self-supervised domain-general learning framework for human ventral stream representation
- Dobs et al. (2022) - Brain-like functional specialization emerges spontaneously in deep neural networks
- Bashivan et al. (2019) - Neural population control via deep image synthesis
- Wehbe et al. (2014) - Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses
- Hollenstein et al. (2021) - Decoding EEG Brain Activity for Multi-Modal Natural Language Processing
- Jain & Huth (2018) - Incorporating Context into Language Encoding Models for fMRI
- Caucheteux & King (2022) - Brains and algorithms partially converge in natural language processing
- Toneva & Wehbe (2019) - Interpreting and improving NLP (in machines) with NLP (in the brain)
- Tuckute et al. (2023) - Driving and suppressing the human language network using large language models
- Gläscher et al. (2011) - States versus Rewards: Dissociable neural prediction error signals underlying model-based and model-free RL
- Banino et al. (2018) - Vector-based navigation using grid-like representations in artificial agents
- Stachenfeld et al. (2017) - The hippocampus as a predictive map
- Wang et al. (2018) - Prefrontal cortex as a meta-reinforcement learning system
- Cross et al. (2021) - Using deep RL to reveal how the brain encodes abstract state-space representations in high-dimensional environments
We hope that by making these resources available online, a wider range of people interested in Machine Learning and the Brain can get up to speed with current methods in this emerging and exciting interdisciplinary field.
Alex Murphy