Topic: ML/AI

Research Projects

Rather than aiming to one day restore natural vision, we might be better off thinking about how to create practical and useful artificial vision now.

Rather than predicting perceptual distortions, we need to solve the inverse problem: What is the best stimulus to generate a desired visual percept?
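As a toy illustration of what this inverse problem looks like, the sketch below optimizes a stimulus by gradient descent through a forward model. The forward model here is a hypothetical stand-in (a simple Gaussian blur), not any lab's actual phosphene model; the target pattern, learning rate, and iteration count are likewise illustrative assumptions.

```python
import numpy as np

def forward_model(stimulus, kernel):
    """Hypothetical forward model: the 'percept' is a blurred version of the
    stimulus (a toy stand-in for a real phosphene model)."""
    return np.real(np.fft.ifft2(np.fft.fft2(stimulus) * kernel))

def optimize_stimulus(target, kernel, lr=0.5, n_iter=200):
    """Gradient descent on the squared error ||forward(stim) - target||^2."""
    stim = np.zeros_like(target)
    for _ in range(n_iter):
        err = forward_model(stim, kernel) - target
        # Gradient through the linear blur: correlate the error with the
        # kernel (multiply by its conjugate in Fourier space).
        grad = np.real(np.fft.ifft2(np.fft.fft2(err) * np.conj(kernel)))
        stim -= lr * grad
    return stim

# Example: find the stimulus whose blurred "percept" best matches a cross.
size = 32
yy, xx = np.mgrid[:size, :size]
blur = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / 8.0)
kernel = np.fft.fft2(np.fft.ifftshift(blur / blur.sum()))
target = np.zeros((size, size))
target[size // 2, :] = 1.0
target[:, size // 2] = 1.0
stim = optimize_stimulus(target, kernel)
percept = forward_model(stim, kernel)
print(np.mean((percept - target) ** 2))  # prints the remaining error
```

The optimized stimulus reproduces the target far better than presenting the target itself: the descent pre-compensates for the distortions the forward model introduces, which is exactly the intuition behind stimulus optimization for visual prostheses.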

How does the brain extract relevant visual features from the rich, dynamic visual input that typifies active exploration, and how does the neural representation of these features support visual navigation?

Neuromorphic event-based vision sensors may soon power low-vision aids and retinal implants, where the visual scene has to be processed quickly and efficiently before it is displayed.
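To make the event-based encoding concrete, here is a minimal sketch of how such a sensor represents a scene: instead of full frames, each pixel emits a timestamped event only when its (log) intensity changes by more than a threshold. The function name, threshold, and test pattern are illustrative assumptions, not a specific sensor's API.

```python
import numpy as np

def to_events(frames, threshold=0.2):
    """Emit (t, y, x, polarity) events whenever a pixel's log intensity
    changes by more than `threshold` since its last event."""
    frames = np.log1p(np.asarray(frames, dtype=float))
    ref = frames[0].copy()  # per-pixel reference level (last event)
    events = []
    for t in range(1, len(frames)):
        diff = frames[t] - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1  # ON or OFF event
            events.append((t, y, x, polarity))
            ref[y, x] = frames[t, y, x]  # update reference at this pixel
    return events

# A bright dot moving rightward: each step yields one ON event at the
# dot's new location and one OFF event where it left.
frames = np.zeros((4, 8, 8))
for t in range(4):
    frames[t, 4, 2 + t] = 1.0
events = to_events(frames)
print(len(events))
```

Because static pixels stay silent, the event stream is sparse, which is what makes this representation attractive when a scene must be processed quickly and with little power before being rendered on a prosthesis or low-vision display.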

Researchers Interested in This Topic

Kimia Afshari, MS Student (CS)
Sriya Aluru, Research Assistant (CS)
Michael Beyeler, Assistant Professor
Cheng Han (Johnson) Chan, Research Assistant (CS)
Justin Chung, Research Assistant (CS)
Jacob Granley, PhD Candidate (CS)
Isaac Hoffman, Research Assistant (CS)
Nandini Iyer, SEEDS Fellow (PSTAT, ECON)
Ethan Meade, DIMAP Student (CS)
Julia Novick, Research Assistant (CCS)
Galen Pogoncheff, PhD Student (CS)
Laya Pullela, Research Assistant (CCS)
Shivani Sista, Research Assistant (CS)
Madori Spiker, MS Student (CS)
Jiaxin Su, Research Assistant (PSTAT, MATH)
Dharynka Tapia, SEEDS Fellow (PSTAT, ECON)
Eyob Teshome, SEEDS Fellow (CS)
Aiwen Xu, PhD Candidate (CS)
Luke Yoffe, Research Assistant (CS)