Look, predict, intercept: Visual exposure seeds model-based control in moving-target interception

Lauren T. Eckhardt, Anvitha Akkaraju, Tori N. LeVier, Justin M. Kasowski, Michael Beyeler
PsyArXiv 56jsn_v1

Abstract

Humans frequently pursue moving objects that temporarily disappear from view, yet the control strategies underlying successful interception remain debated. While some theories emphasize on-line guidance via optic flow, others propose that interception is maintained through internal models of target motion. We tested a two-stage hybrid control hypothesis in which individuals initially rely on optic flow to guide movement but switch to model-based prediction when visual input is withdrawn. In two immersive virtual reality experiments, participants intercepted a moving target (a bunny or hare) that disappeared partway through its trajectory. In Experiment 1 (N = 18), participants walked to intercept a cartoon bunny under varying path (linear or zigzag) and visibility conditions (fully visible, disappear-reappear, or disappear). Interception performance decreased following disappearance but remained above chance, and curved walking trajectories only emerged when participants began moving before the target appeared, challenging classic optic-flow–based models. In Experiment 2 (N = 41), participants attempted to “shoot” a hare after it disappeared, with visible time and distance factorially manipulated. Capture success increased linearly with visible time but plateaued with visible distance, suggesting that temporal exposure, not spatial extent, determines the quality of predictive control. Together, the findings support a two-stage, effector-invariant interception strategy in which brief visual exposure (~1–1.5 s) seeds a predictive controller that allows action to continue when visual information is lost.
