Towards a Smart Bionic Eye

Rather than aiming to one day restore natural vision (which may remain elusive until we fully understand the neural code of vision), we might be better off thinking about how to create practical and useful artificial vision now. Specifically, a visual prosthesis has the potential to provide visual augmentations by means of artificial intelligence (AI)-based scene understanding (e.g., by highlighting important objects), tailored to specific real-world tasks that are known to affect the quality of life of people who are blind (e.g., face recognition, outdoor navigation, self-care).

In the future, these visual augmentations could be combined with GPS to give directions, warn users of impending dangers in their immediate surroundings, or even extend the range of visible light with an infrared sensor (think bionic night-time vision). Once the quality of the generated artificial vision reaches a certain threshold, there are many exciting avenues to pursue.

Smart Bionic Eye concept

Project Team

Project Affiliates:

Sangita Kunapuli, Research Assistant
Eyob Teshome, SEEDS Fellow
Kanav Arora, Research Assistant

Principal Investigator:

Michael Beyeler, Assistant Professor

Collaborators:

Eduardo Fernández Jover, Professor, Universidad Miguel Hernández, Spain
Gislin Dagnelie, Associate Professor, Johns Hopkins University
James D. Weiland, Professor, University of Michigan, Ann Arbor
Sandra Rocio Montezuma, Associate Professor, University of Minnesota

Project Funding

DP2-LM014268: Towards a Smart Bionic Eye: AI-Powered Artificial Vision for the Treatment of Incurable Blindness
PI: Michael Beyeler (UCSB)

September 2022 - August 2027
Common Fund, Office of the Director (OD); National Library of Medicine (NLM)
National Institutes of Health (NIH)

Publications

We present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility.

We used a neurobiologically inspired model of simulated prosthetic vision in an immersive virtual reality environment to test the relative importance of semantic edges and relative depth cues for supporting obstacle avoidance and object identification.
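For illustration, the sketch below shows one way such cues could be computed from a rendered frame. The Canny edge detector and the inverted, normalized depth buffer used here are simple stand-ins for the semantic-edge and relative-depth representations from the study, and the input file names are hypothetical.

```python
# Illustrative stand-ins for the two cue types compared in this study:
# a Canny edge map approximates "semantic edges" and an inverted,
# normalized depth buffer approximates "relative depth" (nearer = brighter).
import cv2
import numpy as np

def edge_cue(frame_bgr: np.ndarray) -> np.ndarray:
    """Edge map (stand-in for semantic object boundaries)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)

def depth_cue(depth_m: np.ndarray, max_depth_m: float = 5.0) -> np.ndarray:
    """Relative depth cue: clip to a working range, map nearer objects to brighter."""
    rel = np.clip(depth_m, 0.0, max_depth_m) / max_depth_m
    return ((1.0 - rel) * 255).astype(np.uint8)

# Hypothetical inputs: an RGB frame and its depth buffer exported from the VR scene.
frame = cv2.imread("vr_frame.png")
depth = np.load("vr_depth.npy")

edges = edge_cue(frame)
near = depth_cue(depth)
# Either cue map would then be downsampled to the electrode grid and passed
# to a phosphene model (see the simulation sketch further below).
```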

Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations by means of artificial intelligence–based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind.

We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
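A minimal sketch of the simulation step is shown below, using the lab's open-source pulse2percept library, which implements a psychophysically validated axon-map model of epiretinal prosthetic vision. This is not the exact pipeline from the paper: the input image name (the assumed output of a scene-simplification network) is hypothetical, and the model parameter values are illustrative.

```python
# Minimal sketch: feed a (pre-simplified) image to a simulated Argus II implant
# and predict the resulting percept with the axon-map phosphene model.
import matplotlib.pyplot as plt
from pulse2percept.implants import ArgusII
from pulse2percept.models import AxonMapModel
from pulse2percept.stimuli import ImageStimulus

# Hypothetical output of a deep learning-based scene simplification step,
# converted to grayscale and downsampled to the 6 x 10 Argus II electrode grid.
stim = ImageStimulus("simplified_scene.png").rgb2gray().resize((6, 10))

# Psychophysically validated axon-map model: rho sets phosphene size,
# axlambda sets elongation along retinal nerve fiber bundles (both in microns).
model = AxonMapModel(rho=200, axlambda=500)
model.build()

# Assign the stimulus to the implant and predict what the user would see.
implant = ArgusII()
implant.stim = stim
percept = model.predict_percept(implant)

percept.plot()
plt.show()
```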
