Mouse vs. AI: A neuroethological benchmark for visual robustness and neural alignment

Marius Schneider, Joe Canzano, Jing Peng, Yuchen Hou, Spencer LaVere Smith, Michael Beyeler

arXiv:2509.14446

Abstract

Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environmental changes, maintaining stable performance even under degraded visual input with minimal exposure. Motivated by this gap, we propose the Mouse vs. AI: Robust Foraging Competition, a novel bioinspired visual robustness benchmark that tests generalization in reinforcement learning (RL) agents trained to navigate a virtual environment toward a visually cued target. Participants train agents to perform a visually guided foraging task in a naturalistic 3D Unity environment and are evaluated on their ability to generalize to unseen, ecologically realistic visual perturbations. What sets this challenge apart is its biological grounding: real mice performed the same task, and participants receive both behavioral performance data and large-scale neural recordings (over 19,000 neurons across visual cortex) for benchmarking. The competition features two tracks: (1) Visual Robustness, assessing generalization across held-out visual perturbations; and (2) Neural Alignment, evaluating how well agents' internal representations predict mouse visual cortical activity via a linear readout. We provide the full Unity environment, a fog-perturbed training condition for validation, baseline proximal policy optimization (PPO) agents, and a rich multimodal dataset. By bridging reinforcement learning, computer vision, and neuroscience through a shared, behaviorally grounded task, this challenge advances the development of robust, generalizable, and biologically inspired AI.
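To make the Neural Alignment track concrete: a linear readout typically means fitting a regularized linear map from an agent's internal activations to recorded neural responses and scoring held-out predictivity. The sketch below is a minimal, self-contained illustration with synthetic data; the array shapes, the ridge penalty, and the per-neuron correlation score are all illustrative assumptions, not the competition's actual evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: stimuli/frames x agent features x recorded neurons
# (downsized here for the sketch; the real dataset has >19,000 neurons).
n_samples, n_features, n_neurons = 1000, 64, 200

X = rng.standard_normal((n_samples, n_features))       # agent activations
W_true = rng.standard_normal((n_features, n_neurons))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_neurons))  # synthetic "neural" responses

# Train/test split on samples.
split = 800
X_tr, X_te, Y_tr, Y_te = X[:split], X[split:], Y[:split], Y[split:]

# Ridge regression in closed form: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)
Y_pred = X_te @ W

def pearson_per_neuron(a, b):
    """Pearson correlation between columns of a and b (one score per neuron)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

scores = pearson_per_neuron(Y_te, Y_pred)
print(f"mean held-out correlation: {scores.mean():.3f}")
```

In practice one would cross-validate the ridge penalty and apply noise-ceiling corrections to the correlations; this sketch only shows the basic fit-then-predict structure of a linear readout benchmark.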
