Rethinking sight restoration through models, data, and lived experience.
We are an interdisciplinary group exploring the science of human, animal, and artificial vision. Our mission is twofold: to understand how vision works, and to use those insights to build the next generation of visual neurotechnologies for people living with incurable blindness. This means working at the intersection of neuroscience, psychology, and computer science, where questions about how the brain sees meet advances in AI and extended reality (XR).
Our work spans the full spectrum from behavior to computation. We study how people with visual impairment perceive and navigate the world, using psychophysics, VR/AR, and ambulatory head/eye/body tracking. We probe visual system function with EEG, TMS, and physiological sensing. And we design biophysical and machine learning models to simulate, evaluate, and optimize visual prostheses, often embedding these models directly into real-time XR environments. This blend of approaches lets us connect brain, behavior, and technology in ways no single discipline can achieve alone.
What sets our lab apart is our close collaboration with both implant developers and bionic eye recipients. We aim to unify efforts across the field by creating open-source tools and standardized evaluation methods that generalize across devices and patient populations. Our ultimate goal is to reshape how vision restoration technologies are conceptualized, tested, and translated, while also pushing the frontiers of AI and XR, so that people with vision loss can live more independent and connected lives.
We compare two complementary approaches to semantic preprocessing in immersive virtual reality: SemanticEdges, which highlights all relevant objects at once, and SemanticRaster, which staggers object categories over time to reduce visual clutter.
Justin M. Kasowski, Apurv Varshney, Michael Beyeler. 31st ACM Symposium on Virtual Reality Software and Technology (VRST) ’25
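As a rough illustration of the two strategies, the sketch below renders per-category outlines either all at once (SemanticEdges) or one category per time window (SemanticRaster). It is a minimal numpy sketch assuming binary segmentation masks are already available; the function names and the dwell time are illustrative, not the paper's implementation.

```python
import numpy as np

def mask_edges(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of a binary mask (4-neighborhood; wraps at
    image borders, which is fine for a sketch)."""
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    return mask & ~interior

def semantic_edges(masks: dict[str, np.ndarray]) -> np.ndarray:
    """SemanticEdges: outline all relevant categories at once."""
    frame = np.zeros(next(iter(masks.values())).shape, dtype=bool)
    for mask in masks.values():
        frame |= mask_edges(mask)
    return frame

def semantic_raster(masks: dict[str, np.ndarray], frame_idx: int,
                    dwell: int = 30) -> np.ndarray:
    """SemanticRaster: cycle categories over time, one per dwell
    window, to reduce visual clutter."""
    names = sorted(masks)
    active = names[(frame_idx // dwell) % len(names)]
    return mask_edges(masks[active])
```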
We propose the Mouse vs. AI: Robust Foraging Competition at NeurIPS ‘25, a novel bioinspired visual robustness benchmark to test generalization in reinforcement learning (RL) agents trained to navigate a virtual environment toward a visually cued target.
Marius Schneider, Joe Canzano, Jing Peng, Yuchen Hou, Spencer LaVere Smith, Michael Beyeler. arXiv:2509.14446
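The competition's own environment and API are defined by the organizers; as a sketch of how such a robustness benchmark is typically scored, the loop below evaluates a trained policy under held-out visual perturbations using the Gymnasium interface. The environment id, the perturbation keyword, and the target_reached info key are placeholders, not the competition's actual interface.

```python
import gymnasium as gym

ENV_ID = "VisualForaging-v0"   # placeholder, not the competition's id

def success_rate(policy, perturbation: str, episodes: int = 100) -> float:
    """Fraction of episodes in which the agent reaches the cued target."""
    env = gym.make(ENV_ID, perturbation=perturbation)
    wins = 0
    for ep in range(episodes):
        obs, info = env.reset(seed=ep)
        terminated = truncated = False
        while not (terminated or truncated):
            obs, reward, terminated, truncated, info = env.step(policy(obs))
        wins += bool(info.get("target_reached", reward > 0))
    env.close()
    return wins / episodes

# Robustness = performance retained under visual shifts unseen in training:
# for shift in ("none", "fog", "novel_textures"):
#     print(shift, success_rate(my_policy, shift))
```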
We propose a Gaussian Process Regression (GPR) framework to predict perceptual thresholds at unsampled locations while leveraging uncertainty estimates to guide adaptive sampling.
Roksana Sadeghi, Michael Beyeler. IEEE EMBC ’25
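A minimal sketch of the uncertainty-guided loop, using scikit-learn's GaussianProcessRegressor: fit the GP to the thresholds measured so far, then measure next wherever the posterior standard deviation is largest. The grid geometry, kernel settings, and the synthetic measure_threshold stand-in are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Candidate sites, e.g., (x, y) electrode positions on a 10 x 6 array.
grid = np.array([(x, y) for x in range(10) for y in range(6)], dtype=float)

def measure_threshold(xy):
    # Synthetic stand-in for the real psychophysical measurement.
    return np.sin(xy[0] / 3.0) + 0.5 * xy[1] + 0.1 * np.random.randn()

X = [grid[i] for i in (0, 17, 34, 51)]   # a few seed measurements
y = [measure_threshold(xy) for xy in X]

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                               normalize_y=True)
for _ in range(20):                      # adaptive sampling budget
    gpr.fit(np.asarray(X), np.asarray(y))
    mu, sd = gpr.predict(grid, return_std=True)
    nxt = grid[int(np.argmax(sd))]       # sample where the GP is least sure
    X.append(nxt)
    y.append(measure_threshold(nxt))
```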
We evaluate HILO using sighted participants viewing simulated prosthetic vision to assess its ability to optimize stimulation strategies under realistic conditions.
Eirini Schoinas, Adyah Rastogi, Anissa Carter, Jacob Granley, Michael Beyeler. IEEE EMBC ’25
We present insights from 16 semi-structured interviews with individuals who are either legally or completely blind, highlighting both current uses and potential future applications of technologies for home-based instrumental activities of daily living (iADLs).
Lily M. Turkstra, Tanya Bhatia, Alexa Van Os, Michael Beyeler. Scientific Reports
We propose a perceptual stimulus encoder based on convolutional neural networks that is trained in an end-to-end fashion to predict the electrode activation patterns required to produce a desired visual percept.
Lucas Relic, Bowen Zhang, Yi-Lin Tuan, Michael Beyeler. ACM Augmented Humans (AHs) ’22
We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler. ACM Augmented Humans (AHs) ’21
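Simulations like these can be reproduced with the lab's open-source pulse2percept library. Below is a minimal sketch assuming its AxonMapModel and ArgusII APIs; the hand-made "scene" stands in for the output of the deep learning-based simplification step, and all parameter values are illustrative rather than the paper's fits.

```python
import numpy as np
import pulse2percept as p2p

# Axon map model of epiretinal stimulation (values illustrative):
# rho controls phosphene size, axlambda elongation along axon bundles.
model = p2p.models.AxonMapModel(rho=150, axlambda=500)
model.build()

# Argus II: 6 x 10 epiretinal electrode grid (60 electrodes).
implant = p2p.implants.ArgusII()

# A simplified scene (stand-in for the deep learning output),
# downsampled to one brightness value per electrode:
scene = np.zeros((6, 10))
scene[2:4, :] = 1.0                       # horizontal bar
implant.stim = scene.ravel()              # 60 electrodes, one frame

percept = model.predict_percept(implant)  # simulated prosthetic vision
```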
We introduce two computational models designed to accurately predict phosphene fading and persistence under varying stimulus conditions, cross-validated on behavioral data reported by nine users of the Argus II Retinal Prosthesis System.
Yuchen Hou, Laya Pullela, Jiaxin Su, Sriya Aluru, Shivani Sista, Xiankun Lu, Michael Beyeler. IEEE EMBC ’24
(Note: YH and LP contributed equally to this work.)
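The paper's model equations are not reproduced here; as a sketch of the phenomenon being captured, the toy model below uses a leaky integrator gated by a slow adaptation state, which yields both fading during sustained stimulation and persistence after stimulus offset. All time constants are illustrative.

```python
import numpy as np

def phosphene_brightness(stim, dt=0.001, tau_rise=0.05,
                         tau_decay=0.2, tau_adapt=2.0, w_adapt=1.5):
    """Toy model: brightness tracks the stimulus with its own time
    constants (persistence) while a slow adaptation state builds up
    and suppresses the response (fading)."""
    b = a = 0.0
    out = np.empty_like(stim, dtype=float)
    for i, s in enumerate(stim):
        drive = max(s - w_adapt * a, 0.0)           # adapted input
        tau = tau_rise if drive > b else tau_decay  # asymmetric dynamics
        b += dt / tau * (drive - b)                 # leaky integration
        a += dt / tau_adapt * (s - a)               # slow adaptation
        out[i] = b
    return out

# 5 s pulse train: brightness rises, fades during stimulation,
# and persists briefly after offset.
t = np.arange(0, 8, 0.001)
stim = (t < 5).astype(float)
brightness = phosphene_brightness(stim)
```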
We present a series of analyses on the shared representations between evoked neural activity in the primary visual cortex of a blind human with an intracortical visual prosthesis, and latent visual representations computed in deep neural networks.
Jacob Granley, Galen Pogoncheff, Alfonso Rodil, Leili Soo, Lily M. Turkstra, Lucas Nadolskis, Arantxa Alfaro Saez, Cristina Soto Sanchez, Eduardo Fernandez Jover, Michael Beyeler. Workshop on Representational Alignment (Re-Align), ICLR ’24
(Note: JG and GP contributed equally to this work.)
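The analyses themselves are in the paper; as a sketch of one standard way to quantify shared representations, the snippet below computes linear centered kernel alignment (CKA) between two response matrices with one row per stimulus. The data here are synthetic stand-ins, not recordings.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two response matrices with matched rows
    (one row per stimulus; columns are electrodes or DNN units)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return hsic / norm

rng = np.random.default_rng(0)
v1 = rng.normal(size=(200, 96))          # synthetic "evoked activity"
latents = v1 @ rng.normal(size=(96, 512)) \
    + 0.1 * rng.normal(size=(200, 512))  # synthetic "DNN latents"
print(linear_cka(v1, latents))           # near 1: strongly shared structure
```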
We present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility.
Justin Kasowski, Byron A. Johnson, Ryan Neydavood, Anvitha Akkaraju, Michael Beyeler. Journal of Vision 23(5):5, 1–24
(Note: JK and BAJ contributed equally to this work.)
What is the required stimulus to produce a desired percept? Here we frame this as an end-to-end optimization problem, where a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
Jacob Granley, Lucas Relic, Michael Beyeler. 36th Conference on Neural Information Processing Systems (NeurIPS) ’22
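A minimal PyTorch sketch of this framing: a toy differentiable network stands in for the known, fixed forward model, and the encoder is trained so that pushing its predicted stimulus through the frozen forward model reproduces the target percept. Shapes, architectures, and the random "targets" are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Toy differentiable forward model phi: stimulus -> percept.
# Stands in for a fixed, known model of the biological system.
forward_model = nn.Sequential(nn.Linear(60, 256), nn.ReLU(),
                              nn.Linear(256, 28 * 28))
for p in forward_model.parameters():
    p.requires_grad_(False)              # the forward model stays fixed

# Encoder: target percept -> stimulus that should reproduce it.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                        nn.ReLU(), nn.Linear(256, 60))

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(1000):
    target = torch.rand(32, 1, 28, 28)   # stand-in for target percepts
    stim = encoder(target)               # predicted electrode activations
    percept = forward_model(stim)        # what the patient would see
    loss = nn.functional.mse_loss(percept, target.flatten(1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```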
We optimize the electrode arrangement of epiretinal implants to maximize visual subfield coverage.
Ashley Bruce, Michael Beyeler. Medical Image Computing and Computer Assisted Intervention (MICCAI) ’22
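As a toy version of the underlying combinatorial problem, the sketch below greedily places electrodes on a candidate grid so that their (here, circular) activation footprints cover as much of a target visual subfield as possible. The geometry and footprint model are stand-ins; the paper's objective and retinal constraints are more detailed.

```python
import numpy as np

yy, xx = np.mgrid[0:50, 0:50]
subfield = (xx - 25) ** 2 + (yy - 25) ** 2 <= 22 ** 2   # target region

candidates = [(x, y) for x in range(5, 50, 5) for y in range(5, 50, 5)]

def footprint(x, y, r=6):
    """Pixels a single electrode's phosphene plausibly covers."""
    return (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2

covered = np.zeros_like(subfield)
layout = []
for _ in range(16):                       # electrode budget
    gains = [np.sum(subfield & ~covered & footprint(x, y))
             for (x, y) in candidates]
    layout.append(candidates.pop(int(np.argmax(gains))))
    covered |= footprint(*layout[-1])

frac = (subfield & covered).sum() / subfield.sum()
print(f"{len(layout)} electrodes cover {frac:.0%} of the subfield")
```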