What we do

Modeling eye movements in scene viewing

Visual attention guides much of human behavior; being able to predict movements of attention would allow applications to anticipate how people respond to image content. We take a synergistic computational and behavioral approach to modeling and predicting the movements of human attention in different visual tasks: categorical visual search and free viewing.

  • Modeling eye movements in a categorical search task
    • Zelinsky, G., Yang, Z., Huang, L., Chen, Y., Ahn, S., Wei, Z., … & Hoai, M. (2019). Benchmarking Gaze Prediction for Categorical Visual Search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). (paper)
    • Adeli, H., & Zelinsky, G. (2018). Deep-BCN: Deep Networks Meet Biased Competition to Create a Brain-Inspired Model of Attention Control. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1932-1942. (paper)
    • Adeli, H., Vitu, F., & Zelinsky, G. J. (2017). A model of the superior colliculus predicts fixation locations during scene viewing and visual search. Journal of Neuroscience, 37(6), 1453-1467. (paper)
  • Modeling eye movements in free viewing
    • Chen, Y., & Zelinsky, G. J. (2019). Is there a shape to the attention spotlight? Computing saliency over proto-objects predicts fixations during scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 45(1), 139-154. (paper)
    • Chen, Y., & Zelinsky, G. J. (2019). Multiple-object Control Predicts Movements of Attention During Free Viewing. Vision Sciences Society Meeting (VSS), FL, USA.


Modeling eye movements in web viewing

Web browsing is now one of the most common ways people acquire information. By leveraging gaze information during webpage viewing, we aim to systematically improve the usability and accessibility of the Web.

    • Kelton, C., Wei, Z., Ahn, S., Balasubramanian, A., Das, S. R., Samaras, D., & Zelinsky, G. (2019). Reading Detection in Real-time. In 2019 Symposium on Eye Tracking Research and Applications (ETRA ’19). https://doi.org/10.1145/3314111.3319916 (paper)


Modeling eye movements in reading

A computational model of attention in the superior colliculus: reading without a lexicon. Most models of eye-movement control during reading assume that saccadic behavior primarily reflects ongoing word-identification processing. We show, in contradiction to this view, that an image-based model of saccade programming in the superior colliculus (SC) can predict the highly stereotyped saccadic behavior observed during reading simply by averaging early visual signals (a simplified sketch of this averaging idea follows the reference below).

    • Vitu, F., Adeli, H., & Zelinsky, G. (2016). Reading without a lexicon: An illiterate model of saccade programming in the superior colliculus predicts where readers move their eyes! Vision Sciences Society Meeting (VSS), FL, USA.
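
The sketch below is our own minimal illustration of the averaging idea, not the published SC model: it treats an image-based activity map as a stand-in for early visual signals, applies Gaussian blurring as a crude analogue of SC population averaging, and takes the peak of the blurred map as the predicted landing position. The function name, sigma value, and toy input are illustrative assumptions.

```python
# Minimal sketch (not the published SC model): predict a saccade landing
# position by spatially averaging an image-based activity map, in the
# spirit of population averaging in the superior colliculus.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict_landing_position(activity_map, sigma=20.0):
    """Blur the map (a crude stand-in for SC population averaging) and
    return the (x, y) location of peak activity as the predicted fixation."""
    averaged = gaussian_filter(activity_map.astype(float), sigma=sigma)
    y, x = np.unravel_index(np.argmax(averaged), averaged.shape)
    return x, y

# Toy usage: a random map standing in for early visual signals from a line of text.
rng = np.random.default_rng(0)
print(predict_landing_position(rng.random((120, 640))))
```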

Predicting cognitive/mental states from eye movements during reading: As readers of a language, we all move our eyes in roughly the same way. Yet hidden within this self-similar behavior there may be subtle clues as to how a reader is engaging the material being read. We use machine learning to decode a reader’s eye movements and reveal their cognitive/mental states, e.g., level of text comprehension, text difficulty, and sleepiness (a sketch of this decoding idea follows the reference below).

    • Ahn, S., & Zelinsky, G. (2019). Predicting Mental States from Eye Movements During Reading. Vision Sciences Society Meeting (VSS), FL, USA.
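
The sketch below illustrates the decoding idea under stated assumptions: each reading trial is summarized by simple fixation and saccade statistics, and a classifier is cross-validated on labeled trials. The feature set, the logistic-regression model, and the synthetic data are hypothetical choices for illustration, not the pipeline from the work cited above.

```python
# Hypothetical sketch of decoding a mental state (e.g., comprehended the
# text: yes/no) from per-trial eye-movement summary features. Features,
# model, and data are illustrative assumptions, not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def trial_features(fix_durations_ms, saccade_amplitudes_deg):
    """Summarize one reading trial with simple fixation/saccade statistics."""
    return [np.mean(fix_durations_ms), np.std(fix_durations_ms),
            np.mean(saccade_amplitudes_deg), np.std(saccade_amplitudes_deg),
            len(fix_durations_ms)]  # number of fixations

# Synthetic stand-in data: 40 trials with binary mental-state labels.
rng = np.random.default_rng(1)
X = np.array([trial_features(rng.gamma(2.0, 100.0, 30), rng.gamma(2.0, 2.0, 29))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```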

Others

  • The interplay between visual representation and semantics
  • Object and non-object visual search in adults with Autism Spectrum Disorder

Previous Projects …