Lily Turkstra is a PhD student in the Bionic Vision Lab at UC Santa Barbara.
She has extensive research experience in human psychophysics, has worked with clinical populations, and is well versed in statistical analysis software and programming.
Before joining the PBS department as a graduate student, Lily served as lab manager from Fall ’22 to Summer ’23. Earlier, she was a Behavioral Health and Performance Intern at NASA and a Software Quality Assurance Specialist at Tapestry Solutions. She also worked as a member of the Multisensory Perception Lab at Cal Poly and as a children’s behavioral therapist with California PsychCare.
PhD in Psychological & Brain Sciences, 2028 (expected)
University of California, Santa Barbara
BS in Research Psychology, 2022
California Polytechnic State University (Cal Poly), San Luis Obispo, CA
A nuanced understanding of the strategies that people who are blind or visually impaired employ to perform different instrumental activities of daily living (iADLs) is essential to the success of future visual accessibility aids.
We present insights from 16 semi-structured interviews with individuals who are either legally or completely blind, highlighting both the current use and potential future applications of technologies for home-based iADLs.
Lily M. Turkstra, Tanya Bhatia, Lexie Van Os, Michael Beyeler
Scientific Reports
Our interview study revealed a substantial gap between researcher expectations and implantee experiences with visual prostheses, underscoring the importance of focusing future research on usability and real-world application.
Lucas Nadolskis, Lily M. Turkstra, Ebenezer Larnyo, Michael Beyeler
Translational Vision Science & Technology (TVST) 13(28)
(Note: LN and LMT contributed equally to this work.)
We present a series of analyses of the shared representations between evoked neural activity in the primary visual cortex of a blind human with an intracortical visual prosthesis and latent visual representations computed by deep neural networks.
Jacob Granley, Galen Pogoncheff, Alfonso Rodil, Leili Soo, Lily M. Turkstra, Lucas Nadolskis, Arantxa Alfaro Saez, Cristina Soto Sanchez, Eduardo Fernandez Jover, Michael Beyeler
Workshop on Representational Alignment (Re-Align), ICLR ’24
(Note: JG and GP contributed equally to this work.)