
BIRD: Behavior induction via representation-structure distillation

We introduce BIRD (Behavior Induction via Representation-structure Distillation), a flexible framework for transferring aligned behavior by matching the internal representation structure of a student model to that of a teacher.
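
As a minimal sketch of what matching representation structure could look like, the snippet below penalizes differences between the batchwise cosine-similarity matrices of teacher and student hidden states; the loss form and the choice of layer are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def similarity_matrix(h: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between examples in a batch of hidden states."""
    h = F.normalize(h.flatten(1), dim=1)  # (batch, features)
    return h @ h.T                        # (batch, batch)

def representation_structure_loss(student_h, teacher_h):
    # Match the pairwise geometry of the two representations rather than the
    # raw activations, so student and teacher need not share dimensionality.
    return F.mse_loss(similarity_matrix(student_h),
                      similarity_matrix(teacher_h))
```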

Static or temporal? Semantic scene simplification to aid wayfinding in immersive simulations of bionic vision

We compare two complementary approaches to semantic preprocessing in immersive virtual reality: *SemanticEdges*, which highlights all relevant objects at once, and *SemanticRaster*, which staggers object categories over time to reduce visual clutter.
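
A toy sketch of the two modes, assuming per-category binary masks from an upstream semantic segmenter (function and variable names are hypothetical, and SemanticEdges' actual edge rendering is omitted):

```python
import numpy as np

def semantic_edges(masks: dict) -> np.ndarray:
    """Highlight all relevant object categories at once."""
    return np.clip(sum(masks.values()), 0, 1)

def semantic_raster(masks: dict, t: float, period: float = 1.0) -> np.ndarray:
    """Stagger categories over time: only one category is shown per time slice,
    cycling through them to reduce simultaneous visual clutter."""
    cats = sorted(masks)
    active = cats[int(t / period) % len(cats)]
    return masks[active]
```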

Efficient spatial estimation of perceptual thresholds for retinal implants via Gaussian process regression

We propose a Gaussian Process Regression (GPR) framework to predict perceptual thresholds at unsampled locations while leveraging uncertainty estimates to guide adaptive sampling.
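
A minimal sketch of this kind of pipeline with scikit-learn, using synthetic stand-in data: fit a GP to thresholds measured at a few electrode locations, predict the full grid, and pick the most uncertain site as the next measurement. The kernel choice and grid are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 2-D electrode grid; a handful of sites have measured thresholds.
rng = np.random.default_rng(0)
grid = np.array([[x, y] for x in range(10) for y in range(6)], dtype=float)
sampled_idx = rng.choice(len(grid), size=8, replace=False)
thresholds = rng.uniform(20, 80, size=8)  # stand-in measurements (uA)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1.0),
                               normalize_y=True)
gpr.fit(grid[sampled_idx], thresholds)

mean, std = gpr.predict(grid, return_std=True)  # thresholds at unsampled sites
next_site = grid[np.argmax(std)]                # adaptive sampling: most uncertain
```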

Evaluating deep human-in-the-loop optimization for retinal implants using sighted participants

We evaluate human-in-the-loop optimization (HILO) with sighted participants viewing simulated prosthetic vision to assess its ability to optimize stimulation strategies under realistic conditions.

Single-spike artificial neural networks

We propose a novel temporal-digital architecture that encodes ANN weights as delays and activations as signal arrival times, enabling full ANN execution with temporal reuse, noise-tolerant summation, and hybrid memory. The design achieves up to 11× energy and 4× latency improvements over SNNs, and 3.5× energy savings over 8-bit digital systolic arrays.
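
To make the encoding concrete, here is a toy time-coding sketch in which values map to spike times within a window and weights act as added propagation delays; the additive temporal algebra shown is our illustrative assumption, not the paper's circuit.

```python
import numpy as np

T = 1.0  # coding window (arbitrary units)

def encode(v):            # activation -> arrival time (larger value, earlier spike)
    return T * (1.0 - v)

def decode(t):            # arrival time -> value
    return 1.0 - t / T

x = np.array([0.2, 0.9, 0.5])     # input activations
w = np.array([0.7, 0.1, 0.4])     # weights in [0, 1], stored as delays encode(w)
arrival = encode(x) + encode(w)   # each spike is delayed by its weight's delay
print(decode(arrival))            # = x + w - 1: delays compose additively in time
```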

VisionAI: Shopping assistance for people with vision impairments

We introduce VisionAI, a mobile application designed to enhance the in-store shopping experience for individuals with vision impairments.

Predicting the temporal dynamics of prosthetic vision

We introduce two computational models designed to accurately predict phosphene fading and persistence under varying stimulus conditions, cross-validated on behavioral data reported by nine users of the Argus II Retinal Prosthesis System.
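
As a rough illustration of the phenomena being modeled (not the paper's two models), a simple exponential-decay sketch of brightness during and after a stimulus might look like this:

```python
import numpy as np

def phosphene_brightness(t, stim_dur, tau_fade=2.0, tau_persist=0.5):
    """Brightness fades during sustained stimulation (tau_fade) and, after
    stimulus offset, persists briefly while decaying (tau_persist). The
    functional form and time constants are illustrative assumptions."""
    b = np.empty_like(t)
    on = t <= stim_dur
    b[on] = np.exp(-t[on] / tau_fade)
    b[~on] = np.exp(-stim_dur / tau_fade) * np.exp(-(t[~on] - stim_dur) / tau_persist)
    return b

t = np.linspace(0, 5, 501)  # seconds
brightness = phosphene_brightness(t, stim_dur=3.0)
```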

Eye tracking performance in mobile mixed reality

We conducted user studies on the Magic Leap One, the HoloLens 2, and the Meta Quest Pro to show how locomotion influences eye tracking performance in these headsets.

Explaining V1 properties with a biologically constrained deep learning architecture

We systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that comprehensively explain neural activity in V1.
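
For a flavor of what such a component can look like, here is a generic divisive-normalization layer, a textbook V1-inspired mechanism; the formulation is assumed for illustration rather than copied from the paper.

```python
import torch
import torch.nn as nn

class DivisiveNormalization(nn.Module):
    """Divisive normalization: each unit's response is divided by the pooled
    activity of neighboring units. Generic textbook formulation, assumed here."""
    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.relu().mean(dim=1, keepdim=True)  # cross-channel pool
        return x / (self.sigma + pooled)

# e.g., inserted after an early conv block:
# block = nn.Sequential(nn.Conv2d(3, 64, 7, padding=3), DivisiveNormalization())
```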

Human-in-the-loop optimization for deep stimulus encoding in visual prostheses

We propose a personalized stimulus encoding strategy that combines state-of-the-art deep stimulus encoding with preferential Bayesian optimization.
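
A crude sketch of a preferential (duel-based) optimization loop appears below; the win-rate surrogate and exploit-vs-explore pairing heuristic are our substitutions, since the paper's preference model and acquisition function are not specified here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

candidates = np.linspace(0.0, 1.0, 25).reshape(-1, 1)  # 1-D stand-in for encoder params
wins = np.zeros(len(candidates))
trials = np.zeros(len(candidates))

def ask_user(i, j):
    """Placeholder for the human judgment ('which percept looks better?')."""
    quality = lambda th: -(th - 0.7) ** 2  # hidden toy ground truth
    return quality(candidates[i, 0]) > quality(candidates[j, 0])

gp = GaussianProcessRegressor(kernel=Matern(length_scale=0.2, nu=2.5),
                              optimizer=None)  # fixed kernel for simplicity
for step in range(30):
    gp.fit(candidates, wins / np.maximum(trials, 1.0))  # surrogate over win rates
    mu, sd = gp.predict(candidates, return_std=True)
    i = int(np.argmax(mu))   # exploit: current best
    sd[i] = -np.inf          # don't duel i against itself
    j = int(np.argmax(sd))   # explore: most uncertain rival
    winner, loser = (i, j) if ask_user(i, j) else (j, i)
    wins[winner] += 1
    trials[winner] += 1
    trials[loser] += 1

best = candidates[int(np.argmax(wins / np.maximum(trials, 1.0)))]
print("preferred encoder parameter:", best)
```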