Who We Are

We are an interdisciplinary group interested in exploring the mysteries of human, animal, and artificial vision. Our passion lies in unraveling the science behind bionic technologies that may one day restore useful vision to people living with incurable blindness.

At the heart of our lab is a diverse team that integrates computer science and engineering with neuroscience and psychology. What unites us is a shared fascination with the intricacies of vision and its potential public health applications. But we are not just about algorithms and data: our research projects range from studying perception in individuals with visual impairments to crafting biophysical models of brain activity and harnessing virtual and augmented reality to build novel visual accessibility tools.

What sets our lab apart is our connection to the community of implant developers and bionic eye recipients. We don't just theorize; we are committed to transforming our ideas into practical solutions that are rigorously tested across different bionic eye technologies. Our goal is not just to advance scientific understanding, but to foster greater independence in the lives of people with visual impairments.

Award-Winning Papers

We propose a perceptual stimulus encoder based on convolutional neural networks that is trained in an end-to-end fashion to predict the electrode activation patterns required to produce a desired visual percept.
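
Schematically, such an encoder maps a target percept image to per-electrode activation values. The sketch below is a hypothetical PyTorch architecture, not the one from the paper; the 15x15 electrode grid and all layer sizes are made up for illustration.

```python
import torch
import torch.nn as nn

class StimulusEncoder(nn.Module):
    """Map a target percept image to per-electrode stimulation amplitudes."""

    def __init__(self, n_electrodes=225):  # hypothetical 15x15 grid
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, n_electrodes),
            nn.Sigmoid(),  # normalized activation amplitudes in [0, 1]
        )

    def forward(self, percept):
        return self.head(self.features(percept))

encoder = StimulusEncoder()
targets = torch.rand(8, 1, 64, 64)   # batch of desired percepts
amplitudes = encoder(targets)        # shape: (8, 225)
```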

We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
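
As a rough sketch of that pipeline, assuming the open-source pulse2percept simulator (API details may differ across versions): a simplified scene is downsampled onto the electrode grid and passed through the axon map model, which predicts the elongated phosphenes characteristic of epiretinal stimulation. The file name, implant choice, and parameter values below are placeholders.

```python
import pulse2percept as p2p

# "scene.png" stands in for the output of a deep-learning-based scene
# simplification model (e.g., an edge or saliency map of the camera feed).
stim = p2p.stimuli.ImageStimulus("scene.png")
stim = stim.rgb2gray().resize((6, 10))  # Argus II has a 6 x 10 grid

implant = p2p.implants.ArgusII()
implant.stim = stim

# Psychophysically validated phosphene model: rho controls phosphene size,
# axlambda the elongation along retinal nerve fiber bundles (in microns).
model = p2p.models.AxonMapModel(rho=200, axlambda=500)
model.build()
percept = model.predict_percept(implant)
```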

In the Spotlight

We present a series of analyses on the shared representations between evoked neural activity in the primary visual cortex of a blind human with an intracortical visual prosthesis, and latent visual representations computed in deep neural networks.
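
Representational similarity analysis (RSA) is one standard way to make such a comparison; the sketch below illustrates the method on random placeholder data and is not the paper's actual analysis pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: responses to 50 stimuli from 96 cortical electrodes
# and from a 512-unit layer of a deep neural network.
neural = rng.standard_normal((50, 96))
dnn = rng.standard_normal((50, 512))

# Representational dissimilarity matrices (condensed form): one entry per
# stimulus pair, using correlation distance.
rdm_neural = pdist(neural, metric="correlation")
rdm_dnn = pdist(dnn, metric="correlation")

# Second-order similarity: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_neural, rdm_dnn)
print(f"RSA correlation: rho={rho:.3f} (p={p:.3g})")
```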

We present a systematic literature review of 227 publications from 106 different venues, assessing the potential of XR technology to further visual accessibility.

We developed a spiking neural network model showing that MSTd-like response properties can emerge from evolving the parameters of spike-timing-dependent plasticity with homeostatic synaptic scaling (STDP-H) in the connections between area MT and MSTd.
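
For intuition, here is a toy numpy version of an STDP-H weight update: STDP potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, while homeostatic scaling multiplicatively nudges weights toward a target firing rate. In the published model these plasticity parameters are evolved rather than hand-tuned; every constant below is illustrative.

```python
import numpy as np

def stdp_h_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                  target_rate=5.0, rate=None, beta=0.001):
    """Toy STDP update with homeostatic synaptic scaling (STDP-H).

    w    : synaptic weights (e.g., MT -> MSTd), shape (n_syn,)
    dt   : t_post - t_pre spike-timing differences in ms, shape (n_syn,)
    rate : recent postsynaptic firing rate in Hz
    """
    # STDP: potentiate when pre precedes post (dt > 0), depress otherwise.
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),
                  -a_minus * np.exp(dt / tau))
    w = np.clip(w + dw, 0.0, 1.0)
    # Homeostasis: scale weights toward the target firing rate.
    if rate is not None:
        w *= 1.0 + beta * (target_rate - rate)
    return w

w = np.random.rand(100) * 0.5
w = stdp_h_update(w, dt=np.random.uniform(-50, 50, 100), rate=8.0)
```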

In the News

What is the required stimulus to produce a desired percept? Here we frame this as an end-to-end optimization problem, where a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
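
A minimal PyTorch sketch of this setup, with a frozen linear map standing in for the fixed forward model (in reality a nonlinear phosphene model), might look like this; all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in differentiable forward model: maps 225 electrode amplitudes to a
# 64x64 predicted percept. It stays fixed; only the encoder is trained.
forward_model = nn.Linear(225, 64 * 64)
for p in forward_model.parameters():
    p.requires_grad_(False)

encoder = nn.Sequential(  # deep encoder; architecture is illustrative
    nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 225)
)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    target = torch.rand(16, 1, 64, 64)              # desired percepts
    stim = encoder(target)                          # predicted stimulus
    percept = forward_model(stim).view_as(target)   # simulated percept
    loss = nn.functional.mse_loss(percept, target)  # percept-level error
    opt.zero_grad()
    loss.backward()
    opt.step()
```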

We optimize the electrode arrangement of epiretinal implants to maximize coverage of the visual subfield.
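
As a toy version of such an optimization, ignoring retinal anatomy and using made-up dimensions, one can greedily place electrodes to maximize the fraction of a discretized visual subfield lying within a fixed radius of any electrode; coverage is submodular, so greedy selection carries the classic (1 - 1/e) approximation guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized visual subfield (degrees of visual angle) and candidate sites;
# all sizes here are made up for illustration.
xs, ys = np.meshgrid(np.linspace(-15, 15, 31), np.linspace(-15, 15, 31))
field = np.c_[xs.ravel(), ys.ravel()]
candidates = rng.uniform(-15, 15, size=(200, 2))

def coverage(electrodes, radius=3.0):
    """Fraction of field points within `radius` deg of any electrode."""
    d = np.linalg.norm(field[:, None, :] - electrodes[None, :, :], axis=-1)
    return (d.min(axis=1) <= radius).mean()

# Greedy placement: repeatedly add the candidate that most improves coverage.
placed = []
for _ in range(20):  # hypothetical 20-electrode array
    gains = [coverage(np.array(placed + [c])) for c in candidates]
    placed.append(candidates[int(np.argmax(gains))])

print(f"Visual subfield coverage: {coverage(np.array(placed)):.1%}")
```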

Contact