We are an interdisciplinary group interested in the computational modeling of human, animal, computer, and prosthetic vision to elucidate the science behind bionic technologies that may one day restore useful vision to people living with incurable blindness.
Our group combines expertise in computer science/engineering, neuroscience, and psychology. All our team members are computationally minded and have a keen interest in vision and medical applications. Our research projects range from predicting neurophysiological data with deep learning to building biophysical models of electrical brain stimulation, and from studying perception in people with visual impairment to developing prototypes of novel visual accessibility aids using virtual and augmented reality.
Importantly, our ongoing collaborations with several visual prosthesis manufacturers put our laboratory in a unique position to empirically validate our theoretical findings across multiple bionic eye technologies.
We systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that comprehensively explain neural activity in V1.
Galen Pogoncheff, Jacob Granley, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
We propose a personalized stimulus encoding strategy that combines state-of-the-art deep stimulus encoding with preferential Bayesian optimization.
Jacob Granley, Tristan Fauvel, Matthew Chalk, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
We introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice.
Aiwen Xu, Yuchen Hou, Cristopher M. Niell, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
We present a biophysically detailed in silico model of retinal degeneration that simulates the network-level response to both light and electrical stimulation as a function of disease progression.
Aiwen Xu, Michael Beyeler Frontiers in Neuroscience: Special Issue “Rising Stars in Visual Neuroscience”
We present a systematic literature review of 227 publications from 106 different venues, assessing the potential of XR technology to advance visual accessibility.
Justin Kasowski, Byron A. Johnson, Ryan Neydavood, Anvitha Akkaraju, Michael Beyeler Journal of Vision 23(5):5, 1–24
(Note: JK and BAJ are co-first authors.)
We present a spiking neural network (SNN) model that uses spike-latency coding and winner-take-all inhibition to efficiently represent visual objects with as few as 15 spikes per neuron.
Melani Sanchez-Garcia, Tushar Chauhan, Benoit R. Cottereau, Michael Beyeler Biological Cybernetics
(Note: MSG and TC are co-first authors. BRC and MB are co-last authors.)
We show that a neurologically inspired decoding of CNN activations produces qualitatively accurate phosphenes, comparable to those reported by real patients.
Jacob Granley, Alexander Riedel, Michael Beyeler Shared Visual Representations in Human & Machine Intelligence (SVRHM) Workshop, NeurIPS ‘22
We used a neurobiologically inspired model of simulated prosthetic vision in an immersive virtual reality environment to test the relative importance of semantic edges and relative depth cues for avoiding obstacles and identifying objects.
Alex Rasla, Michael Beyeler 28th ACM Symposium on Virtual Reality Software and Technology (VRST) ‘22
We present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to ‘see through the eyes’ of a bionic eye user.
Justin Kasowski, Michael Beyeler ACM Augmented Humans (AHs) ‘22
We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler ACM Augmented Humans (AHs) ‘21