The lab had three papers accepted at NeurIPS ’23:
PhD students Aiwen Xu and Yuchen Hou developed a multimodal recurrent neural network that accurately describes V1 activity in freely moving mice, revealing that some neurons lack pronounced visual receptive fields and that most neurons exhibit mixed selectivity (a toy sketch of the model setup follows the citation):
A Xu, Y Hou, CM Niell, M Beyeler (2023). Multimodal deep learning model unveils behavioral dynamics of V1 activity in freely moving mice. 37th Conference on Neural Information Processing Systems (NeurIPS ’23).
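To give a flavor of this kind of model, here is a minimal PyTorch sketch of a multimodal recurrent network that fuses visual features with behavioral state variables to predict firing rates. The layer sizes, fusion scheme, and input features are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultimodalV1RNN(nn.Module):
    """Toy multimodal recurrent net: fuses visual input with behavioral
    state variables (e.g., running speed, pupil size, head orientation)
    to predict V1 firing rates. Illustrative only; sizes and the fusion
    scheme are assumptions, not the paper's architecture."""

    def __init__(self, n_visual=64, n_behavior=8, n_hidden=128, n_neurons=100):
        super().__init__()
        self.visual_enc = nn.Sequential(nn.Linear(n_visual, n_hidden), nn.ReLU())
        self.rnn = nn.GRU(n_hidden + n_behavior, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_neurons)

    def forward(self, frames, behavior):
        # frames: (batch, time, n_visual); behavior: (batch, time, n_behavior)
        z = torch.cat([self.visual_enc(frames), behavior], dim=-1)
        h, _ = self.rnn(z)
        # Softplus keeps predicted firing rates nonnegative.
        return torch.nn.functional.softplus(self.readout(h))

model = MultimodalV1RNN()
rates = model(torch.randn(2, 50, 64), torch.randn(2, 50, 8))
print(rates.shape)  # torch.Size([2, 50, 100])
```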
The latest work by PhD students Galen Pogoncheff and Jacob Granley enriches ResNet50 (previously the deep net with the best V1 alignment) with layers that simulate processing hallmarks of the early visual system, and assesses how these layers affect model-brain alignment (an example layer is sketched after the citation):
G Pogoncheff, J Granley, M Beyeler (2023). Explaining V1 properties with a biologically constrained deep learning architecture. 37th Conference on Neural Information Processing Systems (NeurIPS ’23).
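As one example of such a hallmark, here is a minimal PyTorch sketch of a divisive-normalization layer of the kind that can be inserted into a convolutional front end. The pooling window and learnable gain below are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DivisiveNormalization(nn.Module):
    """Toy divisive-normalization layer: each unit's response is divided
    by pooled local activity, implementing contrast gain control. The
    window size and learnable sigma are illustrative assumptions."""

    def __init__(self, channels, kernel_size=5):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)
        self.sigma = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        # Pool squared responses over a local spatial neighborhood,
        # then normalize each unit by that local energy.
        local_energy = self.pool(x.pow(2))
        return x / torch.sqrt(self.sigma.pow(2) + local_energy)

# Drop-in example: normalize the output of a first conv layer.
front_end = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    DivisiveNormalization(64),
    nn.ReLU(),
)
print(front_end(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])
```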
And last but not least, Jacob Granley (in collaboration with Tristan Fauvel and Matthew Chalk from Sorbonne University) combined deep stimulus encoding with preferential Bayesian optimization to develop personalized stimulation strategies for neural prostheses (a toy version of the preference loop follows the citation):
J Granley, T Fauvel, M Chalk, M Beyeler (2023). Human-in-the-loop optimization for deep stimulus encoding in visual prostheses. 37th Conference on Neural Information Processing Systems (NeurIPS ’23).
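To illustrate the human-in-the-loop idea, here is a toy preference-learning loop in plain NumPy. It substitutes a linear Bradley-Terry utility model and a crude local-search proposal for the paper's deep stimulus encoder, Gaussian-process preference model, and acquisition function; every name and parameter below is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4                        # hypothetical stimulus-parameter dimension
true_w = rng.normal(size=dim)  # simulated (unknown) patient preference

def simulated_user_prefers(a, b):
    """Simulated noisy comparison: does the 'patient' prefer stimulus a over b?"""
    return (true_w @ (a - b) + rng.logistic()) > 0

def fit_bradley_terry(X, y, lr=0.1, steps=500):
    """Fit Bradley-Terry utility weights by gradient ascent on the
    logistic log-likelihood of the observed pairwise comparisons."""
    X, y = np.asarray(X), np.asarray(y)
    w = np.zeros(dim)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

X_diff, y = [], []             # feature differences and preference labels
best = rng.normal(size=dim)    # current best stimulus parameters
for trial in range(50):
    # Propose a challenger near the current best (a crude stand-in for
    # the acquisition function used in preferential Bayesian optimization).
    challenger = best + rng.normal(scale=0.5, size=dim)
    label = simulated_user_prefers(challenger, best)
    X_diff.append(challenger - best)
    y.append(float(label))
    w_hat = fit_bradley_terry(X_diff, y)
    if label:
        best = challenger

print("cosine(true, est):",
      true_w @ w_hat / (np.linalg.norm(true_w) * np.linalg.norm(w_hat) + 1e-12))
```

After enough comparisons, the estimated utility direction should align with the simulated patient's true preference, which is the basic mechanism the paper exploits to personalize stimulation without ever measuring utilities directly.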