We are an interdisciplinary group interested in exploring the mysteries of human, animal, and artificial vision. Our passion lies in unraveling the science behind bionic technologies that may one day restore useful vision to people living with incurable blindness.
At the heart of our lab is a diverse team that blends computer science and engineering with the perspectives of neuroscience and psychology. What unites us is a shared fascination with the intricacies of vision and its potential medical applications. But we are not just about algorithms and data: our research ranges from understanding perception in individuals with visual impairments to crafting biophysical models of brain activity and harnessing virtual and augmented reality to create novel visual accessibility tools.
What sets our lab apart is our connection to the community of implant developers and bionic eye recipients. We don't just theorize; we are committed to transforming our ideas into practical solutions that are rigorously tested across different bionic eye technologies. Our goal is not just to advance scientific understanding, but to foster greater independence in the lives of those with visual impairments.
We present a series of analyses on the shared representations between evoked neural activity in the primary visual cortex of a blind human with an intracortical visual prosthesis, and latent visual representations computed in deep neural networks.
Galen Pogoncheff, Jacob Granley, Alfonso Rodil, Leili Soo, Lily M. Turkstra, Lucas Gil Nadolskis, Arantxa Alfaro Saez, Cristina Soto Sanchez, Eduardo Fernandez Jover, Michael Beyeler Workshop on Representational Alignment (Re-Align), ICLR ‘24
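To give a flavor of the kind of analysis involved, here is a minimal representational similarity analysis (RSA) sketch in Python, using randomly generated stand-ins for the V1 recordings and DNN activations; the paper's actual alignment metrics and data may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: responses to the same 50 stimuli.
rng = np.random.default_rng(0)
v1_responses = rng.standard_normal((50, 96))    # 50 stimuli x 96 electrodes
dnn_features = rng.standard_normal((50, 512))   # 50 stimuli x 512 DNN units

# Representational dissimilarity matrices (condensed form):
# pairwise correlation distance between stimulus response patterns.
rdm_v1 = pdist(v1_responses, metric="correlation")
rdm_dnn = pdist(dnn_features, metric="correlation")

# Alignment score: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_v1, rdm_dnn)
print(f"RSA alignment: Spearman rho = {rho:.3f} (p = {p:.3g})")
```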
Our interview study found a significant gap between researcher expectations and implantee experiences with visual prostheses, underscoring the importance of focusing future research on usability and real-world application.
Lucas Gil Nadolskis, Lily M. Turkstra, Ebenezer Larnyo, Michael Beyeler medRxiv
(Note: LGN and LMT contributed equally to this work.)
We used immersive virtual reality to develop a novel behavioral paradigm for examining navigation in dynamically changing, high-stress situations.
Apurv Varshney, Mitchell Munns, Justin Kasowski, Mantong Zhou, Chuanxiuyue He, Scott Grafton, Barry Giesbrecht, Mary Hegarty, Michael Beyeler Scientific Reports
(Note: AV and MM contributed equally to this work.)
We retrospectively analyzed phosphene shape data collected from three Argus II patients to investigate which neuroanatomical and stimulus parameters predict paired-phosphene appearance and whether phosphenes add up linearly.
Yuchen Hou, Devyani Nanduri, Jacob Granley, James D. Weiland, Michael Beyeler Journal of Neural Engineering
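For intuition, a linearity check of this kind can be as simple as regressing paired-phosphene measurements on the sum of the corresponding single-electrode measurements; the sketch below uses made-up phosphene areas, and the paper's predictors and statistics are richer.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: for each electrode pair, the area of the phosphene drawn
# during paired stimulation and the areas drawn when each electrode was
# stimulated alone (units: deg^2 of visual angle).
rng = np.random.default_rng(1)
area_single_a = rng.uniform(0.5, 3.0, size=40)
area_single_b = rng.uniform(0.5, 3.0, size=40)
area_paired = 0.9 * (area_single_a + area_single_b) + rng.normal(0, 0.2, 40)

# If paired phosphenes add up linearly, regressing the paired area on the sum
# of single-electrode areas should give a slope near 1 and a high R^2.
X = (area_single_a + area_single_b).reshape(-1, 1)
reg = LinearRegression().fit(X, area_paired)
print(f"slope = {reg.coef_[0]:.2f}, R^2 = {reg.score(X, area_paired):.2f}")
```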
We systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that comprehensively explain neural activity in V1.
Galen Pogoncheff, Jacob Granley, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
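One example of such a neuroscience-derived component is divisive normalization, in which each unit's response is suppressed by the pooled activity of other units. Below is a toy PyTorch layer illustrating the idea; the pooling scheme and parameters are our simplification, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class DivisiveNormalization(nn.Module):
    """Toy divisive-normalization layer: each channel's response is divided
    by a learned, weighted pool of squared activity across channels (one
    V1-inspired mechanism in the spirit of the paper; details differ)."""

    def __init__(self, n_channels: int, sigma: float = 1.0):
        super().__init__()
        # Learnable (kept nonnegative via abs) cross-channel pooling weights.
        self.pool_weights = nn.Parameter(torch.ones(n_channels, n_channels) / n_channels)
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        energy = x ** 2
        # Weighted sum of squared activity over channels at each location.
        pool = torch.einsum("bchw,dc->bdhw", energy, self.pool_weights.abs())
        return x / torch.sqrt(self.sigma ** 2 + pool)

# Drop-in use after a convolution:
layer = nn.Sequential(nn.Conv2d(3, 32, 5), DivisiveNormalization(32))
out = layer(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 32, 60, 60])
```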
We propose a personalized stimulus encoding strategy that combines state-of-the-art deep stimulus encoding with preferential Bayesian optimization.
Jacob Granley, Tristan Fauvel, Matthew Chalk, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
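To give a feel for the preferential (duel-based) side of the approach, here is a toy Python loop that recovers a hidden 1-D utility from pairwise comparisons, via a Bradley-Terry likelihood with a Gaussian-process smoothness prior. The random challenger selection and MAP refit are simplifications; the paper couples a proper acquisition strategy with a deep stimulus encoder.

```python
import numpy as np
from scipy.optimize import minimize

def true_utility(x):
    # Hidden "patient" utility over a 1-D encoder parameter (unknown to the loop).
    return -(x - 0.7) ** 2

grid = np.linspace(0, 1, 25)  # candidate parameter settings
K = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.1) ** 2)  # RBF prior covariance
K_inv = np.linalg.inv(K + 1e-6 * np.eye(len(grid)))
duels = []  # (winner_idx, loser_idx) comparison outcomes

def neg_map(u):
    # Bradley-Terry log-likelihood of the duels + GP smoothness prior (MAP objective).
    nll = sum(np.logaddexp(0.0, -(u[w] - u[l])) for w, l in duels)
    return nll + 0.5 * u @ K_inv @ u

rng = np.random.default_rng(0)
u_hat = np.zeros(len(grid))
for _ in range(30):
    # Duel: current best estimate vs. a random challenger (a real acquisition
    # function would pick the challenger more cleverly).
    best, challenger = int(np.argmax(u_hat)), int(rng.integers(len(grid)))
    if true_utility(grid[best]) >= true_utility(grid[challenger]):
        duels.append((best, challenger))
    else:
        duels.append((challenger, best))
    u_hat = minimize(neg_map, u_hat).x  # refit the latent utility

print(f"preferred setting ~ {grid[np.argmax(u_hat)]:.2f} (true optimum at 0.70)")
```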
We introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice.
Aiwen Xu, Yuchen Hou, Cristopher M. Niell, Michael Beyeler 37th Conference on Neural Information Processing Systems (NeurIPS) ‘23
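A minimal PyTorch sketch of the general idea appears below: convolutional features of gaze-contingent frames are concatenated with behavioral variables and fed to a GRU with a nonnegative rate readout. All layer sizes and the specific fusion scheme are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalRNN(nn.Module):
    """Toy model: fuse gaze-contingent visual frames with behavioral variables
    (e.g., running speed, pupil size) in a recurrent network that predicts
    V1 firing rates."""

    def __init__(self, n_behavior=4, n_neurons=100, hidden=128):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(1, 16, 7, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        self.rnn = nn.GRU(512 + n_behavior, hidden, batch_first=True)
        self.readout = nn.Sequential(nn.Linear(hidden, n_neurons), nn.Softplus())

    def forward(self, frames, behavior):
        # frames: (batch, time, 1, H, W); behavior: (batch, time, n_behavior)
        b, t = frames.shape[:2]
        feats = self.vision(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        h, _ = self.rnn(torch.cat([feats, behavior], dim=-1))
        return self.readout(h)  # nonnegative predicted firing rates

model = MultimodalRNN()
rates = model(torch.randn(2, 10, 1, 64, 64), torch.randn(2, 10, 4))
print(rates.shape)  # torch.Size([2, 10, 100])
```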
We present a mixed-methods approach that combines semi-structured interviews with a follow-up behavioral study to understand current and potential future use of technologies for daily activities around the home, especially for cooking.
Lily M. Turkstra, Lexie Van Os, Tanya Bhatia, Michael Beyeler arXiv:2305.03019
We present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility.
Justin Kasowski, Byron A. Johnson, Ryan Neydavood, Anvitha Akkaraju, Michael Beyeler Journal of Vision 23(5):5, 1–24
(Note: JK and BAJ are co-first authors.)
We combined deep learning-based scene simplification strategies with a psychophysically validated computational model of the retina to generate realistic predictions of simulated prosthetic vision.
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler ACM Augmented Humans (AHs) ‘21
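One way to generate such predictions is with the open-source pulse2percept library; the sketch below is illustrative rather than the paper's exact pipeline, with electrode names and parameter values chosen for demonstration (API as of recent pulse2percept versions).

```python
from pulse2percept.implants import ArgusII
from pulse2percept.models import AxonMapModel

# Psychophysically validated phosphene model (axon map): rho controls spatial
# spread, axlambda the streakiness along retinal axon bundles (values illustrative).
model = AxonMapModel(rho=200, axlambda=500)
model.build()

# Argus II epiretinal array; stimulate one column of electrodes as a stand-in
# for a simplified scene (here, a vertical bar).
implant = ArgusII()
implant.stim = {name: 1 for name in ["A2", "B2", "C2", "D2", "E2", "F2"]}

percept = model.predict_percept(implant)  # simulated prosthetic vision
percept.plot()
```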