Percept-aware surgical planning for visual cortical prostheses with vascular avoidance

We present a percept-aware framework for surgical planning of cortical visual prostheses that formulates electrode placement as a constrained optimization problem in anatomical space.

SymbolSight: Minimizing inter-symbol interference for reading with prosthetic vision

We present SymbolSight, a computational framework that selects symbol-to-letter mappings to minimize confusion among frequently adjacent letters. Using simulated prosthetic vision (SPV) and a neural proxy observer, we estimate pairwise symbol confusability and optimize assignments using language-specific bigram statistics.

Network-adaptive cloud preprocessing for visual neuroprostheses

We present a network-adaptive pipeline for cloud-assisted visual preprocessing in artificial vision, in which real-time round-trip-time (RTT) feedback dynamically modulates image resolution, compression, and transmission rate, explicitly prioritizing temporal continuity under adverse network conditions.

Gamification enhances user engagement and task performance in prosthetic vision testing

We found that gamification can influence measured performance and user experience in prosthetic vision testing, but benefits are not universal and depend on task demands and cognitive load.

Look, predict, intercept: Visual exposure seeds model-based control in moving-target interception

We found that when intercepting moving targets that disappear from view, humans use a two-stage, effector-invariant interception strategy in which brief visual exposure seeds a predictive controller, allowing the action to continue after visual information is lost.

Deep learning-based control of electrically evoked activity in human visual cortex

We developed a data-driven neural control framework for a visual cortical prosthesis in a blind human, showing that deep learning can synthesize efficient, stable stimulation patterns that reliably evoke percepts and outperform conventional calibration methods.

Mouse vs. AI: A neuroethological benchmark for visual robustness and neural alignment

We propose the Mouse vs. AI: Robust Foraging Competition at NeurIPS '25, a novel bio-inspired benchmark of visual robustness that tests generalization in reinforcement learning (RL) agents trained to navigate a virtual environment toward a visually cued target.

Distinct roles of central and peripheral vision in rapid scene understanding

We used a real-time, gaze-contingent simulation to examine how central vision loss and peripheral vision loss alter eye movements and scene understanding.

A deep learning framework for predicting functional visual performance in bionic eye users

We introduce a computational virtual patient (CVP) pipeline that integrates anatomically grounded phosphene simulation with task-optimized deep neural networks to forecast patient perceptual capabilities across diverse prosthetic designs and tasks.

Beyond physical reach: Comparing head- and cane-mounted cameras for last-mile navigation by blind users

We evaluate head- and cane-mounted cameras for blind navigation and show that combining both yields superior spatial perception, guiding the design of hybrid, user-aligned assistive systems.