Galen Pogoncheff is a Computer Science PhD candidate researching how behavioral and processing biases in biological neural systems can inform the development of human-centered deep learning systems. He brings industry experience researching and developing machine learning models for neural interfaces, integrating multimodal signal data for real-time decoding of motor intent and cognitive state.
Prior to this work, Galen completed his B.S. and M.S. in Computer Science at the University of Colorado, specializing in Data Science and Engineering.
Outside of the lab, you can find Galen in the mountains or at the gym.
PhD in Computer Science, 2027 (expected)
University of California, Santa Barbara
MS in Computer Science, 2020
University of Colorado, Boulder
BS in Computer Science, 2018
University of Colorado, Boulder
Understanding the visual system in health and disease is a key issue for neuroscience and neuroengineering applications such as visual prostheses.
Rather than aiming to one day restore natural vision, we might be better off thinking about how to create practical and useful artificial vision now.
We introduce BIRD (Behavior Induction via Representation-structure Distillation), a flexible framework for transferring aligned behavior by matching the internal representation structure of a student model to that of a teacher.
Galen Pogoncheff, Michael Beyeler. arXiv:2505.23933
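As a rough illustration of the idea, representation-structure distillation can be sketched as training the student so that the pairwise similarity structure of its hidden representations matches the teacher's. The snippet below is a minimal, hypothetical PyTorch rendering of such a loss; the function names and the choice of cosine similarity are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of representation-structure distillation in the spirit
# of BIRD: rather than matching raw activations, the student is trained so
# that the pairwise similarity structure of its hidden representations
# matches the teacher's. All names here are illustrative, not BIRD's API.
import torch
import torch.nn.functional as F

def similarity_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between samples in a batch.

    feats: (batch, dim) hidden representations.
    Returns a (batch, batch) similarity matrix.
    """
    feats = F.normalize(feats, dim=-1)
    return feats @ feats.T

def structure_distillation_loss(student_feats: torch.Tensor,
                                teacher_feats: torch.Tensor) -> torch.Tensor:
    """MSE between the student's and teacher's similarity structure."""
    s = similarity_matrix(student_feats)
    t = similarity_matrix(teacher_feats).detach()  # teacher stays frozen
    return F.mse_loss(s, t)

# Usage: combine with the ordinary task loss during student training, e.g.
# loss = task_loss + lambda_distill * structure_distillation_loss(hs, ht)
```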
We present a series of analyses of the shared representations between neural activity evoked in the primary visual cortex of a blind human with an intracortical visual prosthesis and latent visual representations computed in deep neural networks.
Jacob Granley, Galen Pogoncheff, Alfonso Rodil, Leili Soo, Lily M. Turkstra, Lucas Nadolskis, Arantxa Alfaro Saez, Cristina Soto Sanchez, Eduardo Fernandez Jover, Michael Beyeler. Workshop on Representational Alignment (Re-Align), ICLR ‘24
(Note: JG and GP contributed equally to this work.)
We present explainable artificial intelligence (XAI) models fit on a large longitudinal dataset that can predict perceptual thresholds on individual Argus II electrodes over time.
Galen Pogoncheff, Zuying Hu, Ariel Rokem, Michael Beyeler. Journal of Neural Engineering
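To give a sense of what an explainable threshold-prediction model of this kind might look like, the sketch below fits a gradient-boosted regressor on placeholder electrode features and reports permutation importances as the explanation. The feature names, synthetic data, and model choice are illustrative assumptions, not the study's actual dataset or pipeline.

```python
# Hypothetical sketch of an explainable per-electrode threshold model:
# a gradient-boosted regressor plus feature importances as the explanation.
# Features and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder predictors, e.g. electrode impedance, electrode-retina
# distance, and time since implantation; 500 synthetic samples.
X = rng.normal(size=(500, 3))
y = 50 + 10 * X[:, 0] + 5 * X[:, 2] + rng.normal(scale=2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Which features drive the predicted thresholds?
imp = permutation_importance(model, X_te, y_te, random_state=0)
for name, score in zip(["impedance", "retina_distance", "time_since_implant"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```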
We systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that comprehensively explain neural activity in V1.
Galen Pogoncheff, Jacob Granley, Michael Beyeler. 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
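One example of a neuroscience-derived component in this spirit is divisive normalization, in which each unit's response is divided by pooled activity across channels. The module below is a minimal, hypothetical PyTorch sketch of such a layer, not the paper's exact implementation.

```python
# Hypothetical divisive-normalization layer: each channel's rectified,
# squared response is divided by a learned pooling of activity across
# channels plus a semi-saturation constant. Illustrative only.
import torch
import torch.nn as nn

class DivisiveNormalization(nn.Module):
    def __init__(self, channels: int, eps: float = 1e-6):
        super().__init__()
        # Learned 1x1 pooling of squared activity across channels.
        self.pool = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.sigma = nn.Parameter(torch.ones(1))  # semi-saturation constant
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        drive = x.relu() ** 2            # rectified, squared responses
        pooled = self.pool(drive).abs()  # keep the denominator positive
        return drive / (pooled + self.sigma ** 2 + self.eps)

# Such a layer can be inserted after an early convolutional block of a CNN
# to test whether it improves the fit to measured V1 responses.
```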