Predicting Visual Outcomes for Visual Prostheses

A major outstanding challenge is predicting what people “see” when they use their devices.

Instead of seeing focal spots of light, current visual implant users perceive highly distorted percepts that vary in shape not only across subjects but also across electrodes, and often fail to assemble into more complex percepts. Furthermore, phosphenes appear fundamentally different depending on whether they are generated with retinal or cortical implants.

The goal of this project is thus to combine psychophysical and neuroanatomical data to inform phosphene models that link electrical stimulation directly to perception.

Project Team

Project Leads:

Jacob Granley

PhD Candidate

Hannah Stone

PhD Student

Project Affiliates:

Yuchen Hou

PhD Student

Magnolia Saur

UC LEADS Scholar

Principal Investigator:

Michael Beyeler

Assistant Professor

Collaborators:

Gislin Dagnelie

Professor
Johns Hopkins University

James D. Weiland

Professor
University of Michigan, Ann Arbor

Sandra Rocio Montezuma

Professor
University of Minnesota

Eduardo Fernández Jover

Professor
Universidad Miguel Hernández, Spain

Project Funding

R00EY029329: Virtual prototyping for retinal prosthesis patients
PI: Michael Beyeler (UCSB)

September 2020 – August 2023
National Eye Institute (NEI)
National Institutes of Health (NIH)

Publications

We introduce two computational models designed to accurately predict phosphene fading and persistence under varying stimulus conditions, cross-validated on behavioral data reported by nine users of the Argus II Retinal Prosthesis System.

We present a series of analyses on the shared representations between evoked neural activity in the primary visual cortex of a blind human with an intracortical visual prosthesis and latent visual representations computed in deep neural networks.

We retrospectively analyzed phosphene shape data collected from three Argus II patients to investigate which neuroanatomical and stimulus parameters predict paired-phosphene appearance and whether phosphenes add up linearly.

We present explainable artificial intelligence (XAI) models fit on a large longitudinal dataset that can predict perceptual thresholds on individual Argus II electrodes over time.

We show that a neurologically inspired decoding of CNN activations produces qualitatively accurate phosphenes, comparable to those reported by real patients.

We optimize the electrode arrangement of epiretinal implants to maximize visual subfield coverage.

We explored the causes of high thresholds and poor spatial resolution within the Argus II epiretinal implant.

We show that sighted individuals can learn to adapt to the unnatural on- and off-cell population responses produced by electronic and optogenetic sight recovery technologies.

We present a phenomenological model that predicts phosphene appearance as a function of stimulus amplitude, frequency, and pulse duration.

We present an explainable artificial intelligence (XAI) model fit on a large longitudinal dataset that can predict electrode deactivation in Argus II.

We systematically explored the space of possible implant configurations to make recommendations for optimal intraocular positioning of Argus II.

We show that the perceptual experience of retinal implant users can be accurately predicted using a computational model that simulates each individual patient’s retinal ganglion axon pathways.

The goal of this review is to summarize the vast basic science literature on developmental and adult cortical plasticity with an emphasis on how this literature might relate to the field of prosthetic vision.

pulse2percept is an open-source Python simulation framework used to predict the perceptual experience of retinal prosthesis patients across a wide range of implant configurations.
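
To give a flavor of how such predictions work in practice, here is a minimal sketch that uses pulse2percept to simulate a single-electrode percept with the axon map model (the patient-specific model referenced above). The rho and axlambda values are illustrative placeholders, not parameters fit to any particular patient:

```python
# Minimal sketch: predict a single-electrode percept with pulse2percept.
# Parameter values are illustrative, not fit to any patient.
from pulse2percept.implants import ArgusII
from pulse2percept.models import AxonMapModel

# Axon map model: rho (microns) sets the radial current spread (phosphene
# size); axlambda (microns) sets elongation along the simulated retinal
# ganglion axon pathways, which makes percepts streaky rather than focal.
model = AxonMapModel(rho=200, axlambda=500)
model.build()

# Place a simulated Argus II array on the retina and drive electrode
# 'A5' with a suprathreshold current amplitude.
implant = ArgusII()
implant.stim = {'A5': 30}

# Predict the spatial percept and visualize it.
percept = model.predict_percept(implant)
percept.plot()
```

This sketch covers only the spatial (axon map) component. For stimulus-dependent effects such as those captured by the phenomenological model above (amplitude, frequency, and pulse duration), newer pulse2percept releases also provide a BiphasicAxonMapModel that accepts BiphasicPulseTrain stimuli.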
