(Note: PM, JG, and FG are co-first authors. SL, MB, and EF are co-last authors.)
Visual cortical prostheses offer a promising path to sight restoration, but current systems elicit crude, variable percepts and rely on manual electrode-by-electrode calibration that does not scale. This work introduces an automated, data-driven neural control method for a visual neuroprosthesis, using a deep learning framework to generate optimal multi-electrode stimulation patterns that evoke targeted neural responses. Using a 96-channel Utah electrode array implanted in the occipital cortex of a blind participant, we trained a deep neural network to predict single-trial evoked responses. The network was used in two complementary control strategies: a learned inverse network for real-time stimulation synthesis and a gradient-based optimizer for precise targeting of desired neural responses. Both approaches significantly outperformed conventional methods in controlling neural activity, required lower stimulation currents, and adapted stimulation parameters to resting-state data, reliably evoking more stable percepts. Crucially, recorded neural responses predicted perceptual outcomes better than stimulation parameters alone, underscoring the value of our neural population control framework. This work demonstrates the feasibility of data-driven neural control in a human implant and offers a foundation for next-generation, model-driven neuroprosthetic systems capable of enhancing sensory restoration across a range of clinical applications.
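The gradient-based control strategy described above can be illustrated with a minimal sketch: given a forward model that predicts evoked responses from a stimulation pattern, one can descend the gradient of a response-matching loss to find the stimulation that best evokes a target response. Everything here is a simplification for illustration — the stand-in forward model (a fixed random linear map with a sigmoid), the loss, the learning rate, and the current penalty are all assumptions; the actual system uses a deep network trained on single-trial recordings from the 96-channel array.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES, N_CHANNELS = 96, 96  # Utah array size from the paper

# Hypothetical stand-in for the trained forward model: a fixed random
# linear map followed by a sigmoid. The real model is a deep network
# fit to single-trial evoked responses.
W = rng.normal(scale=0.2, size=(N_CHANNELS, N_ELECTRODES))

def forward(stim):
    """Predict the evoked response (per-channel activation in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-W @ stim))

def optimize_stimulation(target, steps=500, lr=0.5, lam=1e-3):
    """Gradient-descend a stimulation pattern toward a target response.

    The L2 penalty (lam) discourages large total current, loosely
    mirroring the paper's finding that optimized patterns require
    lower stimulation amplitudes. Hyperparameters are illustrative.
    """
    stim = np.zeros(N_ELECTRODES)
    best, best_loss = stim.copy(), np.inf
    for _ in range(steps):
        pred = forward(stim)
        err = pred - target
        loss = 0.5 * np.sum(err ** 2)
        if loss < best_loss:
            best, best_loss = stim.copy(), loss
        # Chain rule through the sigmoid and the linear map.
        grad = W.T @ (err * pred * (1.0 - pred))
        stim = stim - lr * (grad + lam * stim)
        stim = np.clip(stim, 0.0, None)  # stimulation currents are non-negative
    return best

target = rng.uniform(0.1, 0.9, size=N_CHANNELS)
stim = optimize_stimulation(target)
```

In the real system the same idea applies with automatic differentiation through the trained deep network; the learned inverse network in the paper instead amortizes this optimization into a single forward pass for real-time use.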