We are an interdisciplinary group that builds computational models of human, animal, computer, and prosthetic vision to elucidate the science behind bionic technologies that may one day restore useful vision to people living with incurable blindness.
Our group combines expertise in computer science/engineering, neuroscience, and psychology. All our team members are computationally minded and have a keen interest in vision and medical applications. Our research projects range from predicting neurophysiological data with deep learning to building biophysical models of electrical brain stimulation, and from studying perception in people with visual impairment to developing prototypes of novel visual accessibility aids using virtual and augmented reality.
Importantly, our ongoing collaborations with several visual prosthesis manufacturers put our laboratory in a unique position to empirically validate our theoretical findings across multiple bionic eye technologies.
We present an SNN model that uses spike-latency coding and winner-take-all inhibition to efficiently represent visual objects with as few as 15 spikes per neuron (see the sketch below).
Melani Sanchez-Garcia, Tushar Chauhan, Benoit R. Cottereau, Michael Beyeler Biological Cybernetics
(Note: MSG and TC are co-first authors. BRC and MB are co-last authors.)
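To make the coding scheme concrete, here is a minimal NumPy sketch of spike-latency coding feeding a hard winner-take-all stage. All names, sizes, and parameters (`latency_encode`, `wta_layer`, the 16-pixel patch, the threshold) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def latency_encode(intensities, t_max=20.0):
    """Spike-latency code: stronger inputs fire earlier."""
    intensities = np.clip(intensities, 1e-6, 1.0)
    return t_max * (1.0 - intensities)

def wta_layer(spike_times, weights, threshold=1.0):
    """Integrate inputs in order of arrival; the first output neuron
    to cross threshold fires and suppresses the rest (hard WTA)."""
    potentials = np.zeros(weights.shape[0])
    for j in np.argsort(spike_times):      # earliest inputs first
        potentials += weights[:, j]        # integrate synaptic drive
        winner = int(np.argmax(potentials))
        if potentials[winner] >= threshold:
            return winner, spike_times[j]  # winner fires, others inhibited
    return None, None                      # no neuron reached threshold

x = rng.random(16)                         # toy 16-pixel input patch
w = rng.random((4, 16)) / 4.0              # 4 competing output neurons
winner, t = wta_layer(latency_encode(x), w)
print("winner: neuron", winner, "decided at t =", t, "ms")
```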
We show that a neurologically inspired decoding of CNN activations produces qualitatively accurate phosphenes, comparable to those reported by real patients.
Jacob Granley, Alexander Riedel, Michael Beyeler Shared Visual Representations in Human & Machine Intelligence (SVRHM) Workshop, NeurIPS ‘22
We used a neurobiologically inspired model of simulated prosthetic vision in an immersive virtual reality environment to test the relative importance of semantic edges and relative depth cues for obstacle avoidance and object identification.
Alex Rasla, Michael Beyeler 28th ACM Symposium on Virtual Reality Software and Technology (VRST) ‘22
What stimulus is required to produce a desired percept? Here we frame this question as an end-to-end optimization problem, in which a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system (see the sketch below).
Jacob Granley, Lucas Relic, Michael Beyeler 36th Conference on Neural Information Processing Systems (NeurIPS) ‘22
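The inversion idea fits in a few lines of PyTorch. In this minimal sketch, a trainable encoder maps target percepts to stimuli, a frozen linear layer stands in for the known forward model, and the reconstruction loss is backpropagated through the frozen model into the encoder. The network sizes, the linear forward model, and the toy data are all assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

N_PIXELS, N_ELECTRODES = 28 * 28, 60

encoder = nn.Sequential(                   # trainable: percept -> stimulus
    nn.Linear(N_PIXELS, 256), nn.ReLU(),
    nn.Linear(256, N_ELECTRODES), nn.Sigmoid(),
)

forward_model = nn.Linear(N_ELECTRODES, N_PIXELS)  # stand-in for the known
for p in forward_model.parameters():               # biological forward model
    p.requires_grad_(False)                        # kept fixed during training

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    target = torch.rand(32, N_PIXELS)      # toy batch of target percepts
    stimulus = encoder(target)             # predicted electrode stimuli
    percept = forward_model(stimulus)      # simulated elicited percept
    loss = loss_fn(percept, target)        # percept should match target
    opt.zero_grad()
    loss.backward()                        # gradients flow through the
    opt.step()                             # frozen forward model
```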
We optimize the electrode arrangement of epiretinal implants to maximize visual subfield coverage (see the sketch below).
Ashley Bruce, Michael Beyeler Medical Image Computing and Computer Assisted Intervention (MICCAI) ‘22
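As a toy illustration of coverage maximization (not the paper's method), the sketch below greedily places electrodes at whichever candidate site adds the most uncovered area of a discretized visual subfield. The grid size, electrode radius, and candidate sites are all assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
field = np.zeros((50, 50), dtype=bool)     # discretized visual subfield
sites = rng.random((200, 2)) * 50          # candidate electrode sites
yy, xx = np.mgrid[0:50, 0:50]

def coverage(site, radius=6.0):
    """Boolean mask of field locations within one electrode's reach."""
    return (yy - site[0]) ** 2 + (xx - site[1]) ** 2 <= radius ** 2

placed = []
for _ in range(16):                        # place a 16-electrode array
    gains = [np.sum(coverage(s) & ~field) for s in sites]
    best = int(np.argmax(gains))           # site adding most new area
    field |= coverage(sites[best])
    placed.append(sites[best])

print(f"covered {field.mean():.0%} of the subfield with 16 electrodes")
```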
We explored the causes of high thresholds and poor spatial resolution within the Argus II epiretinal implant.
Ezgi I. Yücel, Roksana Sadeghi, Arathy Kartha, Sandra R. Montezuma, Gislin Dagnelie, Ariel Rokem, Geoffrey M. Boynton, Ione Fine, Michael Beyeler Frontiers in Neuroscience
We propose a perceptual stimulus encoder based on convolutional neural networks, trained end to end to predict the electrode activation patterns required to produce a desired visual percept (see the sketch below).
Lucas Relic, Bowen Zhang, Yi-Lin Tuan, Michael Beyeler ACM Augmented Humans (AHs) ‘22
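A minimal sketch of what such an encoder could look like, assuming a 60-electrode array and toy 32x32 percepts; the architecture, layer sizes, and names are illustrative, not the published network.

```python
import torch
import torch.nn as nn

class StimulusEncoder(nn.Module):
    """Toy CNN mapping a target percept image to one activation
    value per electrode (normalized to [0, 1] by the sigmoid)."""

    def __init__(self, n_electrodes=60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, n_electrodes),
            nn.Sigmoid(),                  # normalized electrode amplitudes
        )

    def forward(self, percept):
        return self.head(self.features(percept))

amps = StimulusEncoder()(torch.rand(1, 1, 32, 32))  # toy 32x32 percept
print(amps.shape)                                   # torch.Size([1, 60])
```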
We present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to ‘see through the eyes’ of a bionic eye user.
Justin Kasowski, Michael Beyeler ACM Augmented Humans (AHs) ‘22
We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler ACM Augmented Humans (AHs) ‘21