We are an interdisciplinary group interested in the computational modeling of human, animal, computer, and prosthetic vision to elucidate the science behind bionic technologies that may one day restore useful vision to people living with incurable blindness.
Our group combines expertise in computer science/engineering, neuroscience, and psychology. All our team members are computationally minded and have a keen interest in vision and medical applications. Our research projects range from predicting neurophysiological data with deep learning to building biophysical models of electrical brain stimulation, and from studying perception in people with visual impairment to developing prototypes of novel visual accessibility aids using virtual and augmented reality.
Importantly, our ongoing collaborations with several visual prosthesis manufacturers put our laboratory in a unique position to empirically validate our theoretical findings across multiple bionic eye technologies.
We present a spiking neural network (SNN) model that uses spike-latency coding and winner-take-all inhibition to efficiently represent visual stimuli from the Fashion-MNIST dataset.
Melani Sanchez Garcia, Tushar Chauhan, Benoit R. Cottereau, Michael Beyeler. NeuroVision Workshop, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) ’22
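A minimal NumPy sketch of the two ingredients named above, spike-latency coding and winner-take-all inhibition. The linear latency rule, the drive-based firing-time proxy, and all parameter values are illustrative assumptions, not the model from the paper:

    import numpy as np

    def latency_encode(image, t_max=100.0):
        # Brighter pixels spike earlier; zero-intensity pixels never spike.
        intensity = image.astype(float) / 255.0
        return np.where(intensity > 0, t_max * (1.0 - intensity), np.inf)

    def winner_take_all(spike_times, weights):
        # Crude proxy for integrate-and-fire dynamics: earlier, stronger
        # inputs pull an output neuron's firing time forward.
        finite = np.isfinite(spike_times)
        drive = weights[:, finite] / (1.0 + spike_times[finite])
        firing_time = 1.0 / (drive.sum(axis=1) + 1e-12)
        winner = int(np.argmin(firing_time))     # first neuron to fire wins
        out = np.full(len(firing_time), np.inf)  # lateral inhibition silences
        out[winner] = firing_time[winner]        # all other output neurons
        return out

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(28, 28))  # stand-in for a Fashion-MNIST image
    times = latency_encode(img).ravel()
    W = rng.random((10, times.size))           # 10 competing output neurons
    print(winner_take_all(times, W))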
Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations by means of artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind.
Michael Beyeler, Melani Sanchez Garcia. OSF Preprints
What is the required stimulus to produce a desired percept? Here we frame this as an end-to-end optimization problem, where a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
Jacob Granley, Lucas Relic, Michael Beyeler. arXiv:2205.13623
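The end-to-end formulation lends itself to a short PyTorch sketch: a trainable encoder is composed with a frozen, differentiable forward model, and training minimizes the difference between the predicted percept and the target. The blur-based forward model and all hyperparameters below are stand-ins, not the paper's actual networks:

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)  # predicted stimulus, same spatial size as input

    # Known, fixed forward model (stand-in for the biological system): a blur.
    forward_model = nn.Conv2d(1, 1, 7, padding=3, bias=False)
    nn.init.constant_(forward_model.weight, 1.0 / 49.0)
    for p in forward_model.parameters():
        p.requires_grad = False  # the forward model stays frozen

    encoder = Encoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    target = torch.rand(8, 1, 28, 28)  # batch of desired percepts

    for step in range(100):
        stimulus = encoder(target)         # what to put on the electrodes
        percept = forward_model(stimulus)  # what the model predicts is seen
        loss = nn.functional.mse_loss(percept, target)
        opt.zero_grad()
        loss.backward()                    # gradients flow through the frozen model
        opt.step()                         # only the encoder is updated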
We propose a perceptual stimulus encoder based on convolutional neural networks that is trained in an end-to-end fashion to predict the electrode activation patterns required to produce a desired visual percept.
Lucas Relic, Bowen Zhang, Yi-Lin Tuan, Michael Beyeler. ACM Augmented Humans (AHs) ’22
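A rough sketch of what such a convolutional stimulus encoder could look like; the 15x15 electrode grid, layer sizes, and sigmoid output range are illustrative assumptions rather than the published architecture:

    import torch
    import torch.nn as nn

    class StimulusEncoder(nn.Module):
        def __init__(self, n_electrodes=(15, 15)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(n_electrodes),  # one cell per electrode
            )
            self.head = nn.Conv2d(64, 1, 1)  # one activation per electrode

        def forward(self, img):
            # Sigmoid keeps amplitudes in [0, 1]; rescale to device limits later.
            return torch.sigmoid(self.head(self.features(img)))

    amps = StimulusEncoder()(torch.rand(1, 1, 128, 128))
    print(amps.shape)  # torch.Size([1, 1, 15, 15])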
We present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to ‘see through the eyes’ of a bionic eye user.
Justin Kasowski, Michael Beyeler. ACM Augmented Humans (AHs) ’22
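For intuition, the rendering step at the heart of simulated prosthetic vision can be caricatured as placing one Gaussian blob of brightness per active electrode. VR-SPV's psychophysically validated model is considerably richer (phosphene shape depends on retinal anatomy, e.g., axon-aligned streaks); the 6x10 grid and all parameters below are illustrative, merely echoing a common epiretinal layout:

    import numpy as np

    def render_phosphenes(amps, grid=(6, 10), size=(240, 320), sigma=8.0):
        # amps: per-electrode amplitudes in [0, 1], shape == grid.
        h, w = size
        ys = np.linspace(h * 0.2, h * 0.8, grid[0])  # electrode centers
        xs = np.linspace(w * 0.2, w * 0.8, grid[1])
        yy, xx = np.mgrid[0:h, 0:w]
        img = np.zeros(size)
        for i, cy in enumerate(ys):
            for j, cx in enumerate(xs):
                # Each electrode adds an isotropic Gaussian blob of brightness.
                img += amps[i, j] * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                           / (2 * sigma ** 2))
        return np.clip(img, 0, 1)

    frame = render_phosphenes(np.random.rand(6, 10))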
We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler. ACM Augmented Humans (AHs) ’21
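The paper pairs deep learning-based simplification (e.g., segmentation or depth cues) with a validated retinal model; as a lightweight stand-in, the sketch below substitutes classical Canny edge detection for the deep network and hands the simplified frame to a phosphene renderer like the toy one above:

    import cv2
    import numpy as np

    def simplify_scene(frame_bgr):
        # Stand-in for deep learning-based simplification: keep only edges.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)        # binary edge map, 0 or 255
        return edges.astype(np.float32) / 255.0  # normalized importance map

    frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    importance = simplify_scene(frame)
    # Downsample to a 6x10 electrode grid, then render with a phosphene
    # model (see the Gaussian-blob sketch above).
    amps = cv2.resize(importance, (10, 6), interpolation=cv2.INTER_AREA)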