Publications

2023

Adeli, H., Ahn, S., Kriegeskorte, N., & Zelinsky, G. J. (August 2023). Self-supervised transformers predict dynamics of object-based attention in humans. In Proceedings of the Cognitive Computational Neuroscience (CCN 2023) conference (pp. 1-3). (paper)

Ahn, S., Adeli, H., & Zelinsky, G. J. (August 2023). Using generated object reconstructions to study object-based attention. In Proceedings of the Cognitive Computational Neuroscience (CCN 2023) conference (pp. 1-3). (paper)

Adeli, H., Ahn, S., & Zelinsky, G. (2023). A brain-inspired object-based attention network for multi-object recognition and visual reasoning. Journal of Vision, 23(5), 16-16. (paper) 

Mondal, S., Yang, Z., Ahn, S., Samaras, D., Zelinsky, G., & Hoai, M. (2023). Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1441-1450). (paper) (supplemental)

2022

Yang, Z., Mondal, S., Ahn, S., Zelinsky, G., Hoai, M., & Samaras, D. (2022). Target-absent Human Attention. In European Conference on Computer Vision (pp. 52-68). (paper)

Chakraborty, S., Gupta, R., Ma, K., Govind, D., Sarder, P., Choi, W. T., … & Samaras, D. (2022). Predicting the Visual Attention of Pathologists Evaluating Whole Slide Images of Cancer. In International Workshop on Medical Optical Imaging and Virtual Microscopy Image Analysis (pp. 11-21). (paper) (supplemental)

Ahn, S., Adeli, H., & Zelinsky, G. (2022). Reconstruction-guided attention improves the robustness and shape processing of neural networks. In SVRHM 2022 Workshop @ NeurIPS. (paper)

Chakraborty, S., Wei, Z., Kelton, C., Ahn, S., Balasubramanian, A., Zelinsky, G. J., & Samaras, D. (2022). Predicting visual attention in graphic design documents. IEEE Transactions on Multimedia. (paper)

Miuccio, M. T., Zelinsky, G. J., & Schmidt, J. (2022). Are all real‐world objects created equal? Estimating the “set‐size” of the search target in visual working memory. Psychophysiology, 59(4), e13998. (paper)

Chakraborty, S., Ma, K., Gupta, R., Knudsen, B., Zelinsky, G. J., Saltz, J. H., & Samaras, D. (2022). Visual attention analysis of pathologists examining whole slide images of prostate cancer. In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE. (paper)

Chakraborty, S., Samaras, D., & Zelinsky, G. J. (2022). Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty. Journal of Vision, 22(4), 13-13. (paper) (supplemental)

Chen, Y., Yang, Z., Chakraborty, S., Mondal, S., Ahn, S., Samaras, D., … & Zelinsky, G. (2022). Characterizing Target-Absent Human Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR workshops). (paper) (supplemental)

2021

Ahn, S., Zelinsky, G. J., & Lupyan, G. (2021). Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks. Journal of Vision, 21(13), 13-13. (paper)

Chen, Y., Yang, Z., Ahn, S., Samaras, D., Hoai, M., & Zelinsky, G. (2021). COCO-Search18 fixation dataset for predicting goal-directed attention control. Scientific Reports, 11(1), 1-11. (paper) (supplemental)

Zelinsky, G., Chen, Y., Ahn, S., Adeli, H., Yang, Z., Huang, L., Samaras, D. & Hoai, M. (2021). Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning. Neurons, Behavior, Data analysis, and Theory. (paper)

2020 

Zelinsky, G., Chen, Y., Ahn, S., & Adeli, H. (2020). Changing perspectives on goal-directed attention control: The past, present, and future of modeling fixations during visual search. Psychology of Learning and Motivation, pp. 231-286. Elsevier. (paper)

Chen, Y., Yang, Z., Ahn, S., Samaras, D., Hoai, M., & Zelinsky, G. (2020). COCO-Search18: A dataset for predicting goal-directed attention control. bioRxiv. (paper)

Chen, Y., & Zelinsky, G. (2020). Predicting Goal-directed Attention Control Using Inverse Reinforcement Learning and COCO-Search18. Journal of Vision, 20(11):1632. (abstract)

Ahn, S., Zelinsky, G., & Lupyan, G. (2020). Exploring the effects of linguistic labels on learned visual representations using convolutional neural networks. Journal of Vision, 20(11):612. (abstract)

Ahn, S., Kelton, C., Balasubramanian, A., & Zelinsky, G. (2020, June). Towards Predicting Reading Comprehension From Gaze Behavior. In ACM Symposium on Eye Tracking Research and Applications (pp. 1-5). (paper)

Yang, Z., Huang, L., Chen, Y., Wei, Z., Ahn, S., Zelinsky, G., … & Hoai, M. (2020). Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 193-202). (paper)

2019 

Zelinsky, G. J. (2019). Using Inverse Reinforcement Learning to Predict Goal-directed Shifts of Attention. In 2019 Conference on Cognitive Computational Neuroscience (CCN 2019), 13-16 September 2019, Berlin, Germany. https://doi.org/10.32470/CCN.2019 (paper)

Alexander, R. G., Nahvi, R. J., & Zelinsky, G. J. (in press). Specifying the Precision of Guiding Features for Visual Search. Journal of Experimental Psychology: Human Perception and Performance. https://doi.org/10.1037/xhp0000668 (paper)

Yu, C. P., Liu, H., Samaras, D., & Zelinsky, G. J. (2019). Modelling attention control using a convolutional neural network designed after the ventral visual pathway. Visual Cognition. DOI: 10.1080/13506285.2019.1661927 (paper)

Zelinsky, G., Yang, Z., Huang, L., Chen, Y., Ahn, S., Wei, Z., … & Hoai, M. (2019). Benchmarking Gaze Prediction for Categorical Visual Search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. (paper)

Chen, Y., & Zelinsky, G. J. (2019). Is there a shape to the attention spotlight? Computing saliency over proto-objects predicts fixations during scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 45(1), 139-154. (paper)

Kelton, C., Wei, Z., Ahn, S., Balasubramanian, A., Das, S. R., Samaras, D., & Zelinsky, G. (2019). Reading Detection in Real-time. In 2019 Symposium on Eye Tracking Research and Applications (ETRA ’19). https://doi.org/10.1145/3314111.3319916 (paper)

2018

Alexander, R., & Zelinsky, G. J. (2018). Occluded information is restored at preview but not during visual search. Journal of Vision, 18(11):4, 1-16. (paper)

Adeli, H., & Zelinsky, G. (2018). Deep-BCN: Deep Networks Meet Biased Competition to Create a Brain-Inspired Model of Attention Control. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRw) (pp. 1932-1942). (paper)

Adeli, H., & Zelinsky, G. J. (2018). Learning to attend in a brain-inspired deep neural network. In Proceedings of the Cognitive Computational Neuroscience (CCN) conference (pp. 1-4). (paper)

Chen, Y., & Zelinsky, G. (2018). A CNN Model of “Objectness” Predicts Fixations During Free Viewing. Journal of Vision, 18(10), 314. doi:10.1167/18.10.314 (abstract)

Zelinsky, G., & Adeli, H. (2018). Emergence of visuospatial attention in a brain-inspired deep neural network. Journal of Vision, 18(10), 898. doi:10.1167/18.10.898 (abstract)

2017

Vitu, F., Casteau, S., Adeli, H., Zelinsky, G. J., & Castet, E. (2017). The magnification factor accounts for the greater hypometria and imprecision of larger saccades: Evidence from a parametric human-behavioral study. Journal of Vision, 17(4):2, 1-38. (Paper)

Adeli, H., Vitu, F., & Zelinsky, G. J. (2017). A model of the superior colliculus predicts fixation locations during scene viewing and visual search. Journal of Neuroscience, 37(6), 1453-1467. (Paper)

Chen, Y., & Zelinsky, G. (2017). Computing Saliency over Proto-Objects Predicts Fixations During Scene Viewing. Journal of Vision, 17(10), 209-209. (Abstract)

Le, H., Yu, C. P., Samaras, D., & Zelinsky, G. (2017). Object detection and localization for free from category-consistent CNN features. Journal of Vision, 17(10), 1248-1248. (Abstract)

Wei, Z., Adeli, H., Hoai, M., Zelinsky, G., & Samaras, D. (2017). Predicting Scanpath Agreement during Scene Viewing using Deep Neural Networks. Journal of Vision, 17(10), 749-749. (Abstract)

Zelinsky, G., & Yu, C. P. (2017). Modeling categorical search guidance using a convolutional neural network designed after the ventral visual pathway. Journal of Vision, 17(10), 88-88. (Abstract)

2016

Wei*, Z., Adeli*, H., Zelinsky, G., Samaras, D., & Hoai, M. (2016). Learned region sparsity and diversity also predicts visual attention. In Neural Information Processing Systems (NIPS) (pp. 1894-1902). *Equal Contribution (Paper)

Yu, C. P., Maxfield, J. T., & Zelinsky, G. J. (2016). Searching for category-consistent features: A computational approach to understanding visual category representation. Psychological Science, 27(6), 870-884. (Paper)

Ryoo, J., Yun, K., Samaras, D., Das, S. R., & Zelinsky, G. (2016). Design and evaluation of a foveated video streaming service for commodity client devices. Proceedings of the 7th International Conference on Multimedia Systems (p. 6). ACM. (Paper)

Adeli, H., & Zelinsky, G. (2016). A Computational Biased Competition Model of Visual Attention using Deep Neural Networks. Journal of Vision, 16(12), 193-193. (Abstract)

Chen, Y., Yu, C. P., & Zelinsky, G. (2016). Adding Shape to Saliency: A Proto-object Saliency Map for Predicting Fixations during Scene Viewing. Journal of Vision, 16(12), 1309-1309. (Abstract)

Hout, M. C., Maxfield, J., Robbins, A., & Zelinsky, G. (2016, August). Object categorization performance modeled using multidimensional scaling and category-consistent features. Journal of Vision, 16(12), 250-250. (Abstract) (Poster)

Maxfield, J., Yu, C. P., & Zelinsky, G. (2016). Predicting Categorical Search Behavior on Individual Trials Using Category-Consistent Features. Journal of Vision, 16(12), 404-404. (Abstract)

van der Linden, L., Zelinsky, G., & Vitu, F. (2016). Viewing-position effects in meaningless object viewing. Journal of Vision, 16(12), 847-847. (Abstract)

Vitu, F., Adeli, H., & Zelinsky, G. (2016). Reading without a lexicon: An illiterate model of saccade programming in the superior colliculus predicts where readers move their eyes! Journal of Vision, 16(12), 933-933. (Abstract)

Yu, C. P., Maxfield, J., & Zelinsky, G. (2016). Generating the features for category representation using a deep convolutional neural network. Journal of Vision, 16(12), 251-251. (Abstract) (Poster)

Zelinsky, G., Adeli, H., & Vitu, F. (2016). The new best model of visual search can be found in the brain. Journal of Vision, 16(12), 996-996. (Abstract)

2015

Yu, C. P., Le, H., Zelinsky, G., & Samaras, D. (2015). Efficient video segmentation using parametric graph partitioning. In Proceedings of the IEEE International Conference on Computer Vision (pp. 3155-3163). (Paper)

Zelinsky, G. J., & Yu, C. P. (2015). Clutter perception is invariant to image size. Vision Research, 116, 142-151. (Paper)

Ge, G., Yun, K., Samaras, D., & Zelinsky, G. J. (2015). Action classification in still images using human eye movements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 16-23). (Paper)

Zelinsky, G. J., & Bisley, J. W. (2015). The what, where, and why of priority maps and their interactions with visual working memory. Annals of the New York Academy of Sciences, 1339(1), 154-164. (Paper)

Adeli, H., Vitu, F., & Zelinsky, G. (2015). A model of saccade programming during scene viewing based on population averaging in the superior colliculus. Journal of Vision, 15(12), 365-365. (Abstract)

van der Linden, L., Zelinsky, G., & Vitu, F. (2015). On saccade programming as a function of stimulus complexity: Estimating the population-averaging window for simple shapes, textured discs and natural objects. Journal of Vision, 15(12), 605-605. (Abstract)

Maxfield, J., Yu, C. P., & Zelinsky, G. (2015). Searching through the hierarchy: Modeling categorical search using class-consistent features. Journal of Vision, 15(12), 9-9. (Abstract)

Yu, C. P., & Zelinsky, G. J. (2015). Effects of Image Size on Clutter Perception: More Evidence for Proto-Object Segmentation. Journal of Vision, 15(12), 120-120. (Abstract)

Yun, K., Ge, G., Samaras, D., & Zelinsky, G. (2015). How we look tells us what we do: Action recognition using human gaze. Journal of Vision, 15(12), 121-121. (Abstract)

Zelinsky, G., Sherman, A., & Yago, T. (2015). Simulating multiple object tracking performance using a Kalman filter model. Journal of Vision, 15(12), 465-465. (Abstract)

Zelinsky, G. (2015). Combining behavioral and computational tools to study mid-level vision in a complex world. Journal of Vision, 15(12), 1396-1396. (Abstract)

2014

Alexander, R. G., Schmidt, J., & Zelinsky, G. J. (2014). Are summary statistics enough? Evidence for the importance of shape in guiding visual search. Visual Cognition, 22(3-4), 595-609. (Paper)

Maxfield, J. T., Stadler, W. D., & Zelinsky, G. J. (2014). Effects of target typicality on categorical search. Journal of Vision, 14(12):1, 1-11. (Paper)

Yu, C-P., Samaras, D., & Zelinsky, G. J. (2014). Modeling visual clutter perception using proto-object segmentation. Journal of Vision, 14(7):4, 1-16. (Paper)

Schmidt, J., MacNamara, A., Proudfit, G. H., & Zelinsky, G. J. (2014). More target features in visual working memory leads to poorer search guidance: Evidence from contralateral delay activity. Journal of Vision, 14(3):8, 1-19. (Paper)

Adeli, H., Casteau, S., Vitu, F., & Zelinsky, G. J. (2014). An image-based population model of human saccade programming in the Superior Colliculus. Journal of Vision, 14(10): 1215. (Abstract)

Alexander, R. G., & Zelinsky, G. J. (2014). The dominance of color in guiding visual search: Evidence for mismatch effects. Journal of Vision, 14(10): 218. (Abstract)

Maxfield, J. T., Yu, C-P., & Zelinsky, G. J. (2014). Searching through the hierarchy: A behavioral and computational approach to understanding categorical search. Journal of Vision, 14(10): 940. (Abstract)

Sherman, A. M., Yago Vicente, T. F., & Zelinsky, G. J. (2014). Replacing the spotlight with a Kalman filter: A prediction error model of multiple object tracking. Journal of Vision, 14(10): 358. (Abstract)

Zelinsky, G. J., & Adeli, H. (2014). Does an interaction catch the eye? Decoding movements to predict scene understanding. Journal of Vision, 14(10): 763. (Abstract)

2013

Zelinsky, G. J., Peng, Y., & Samaras, D. (2013). Eye can read your mind: Decoding gaze fixations to reveal categorical search targets. Journal of Vision, 13(14), 10-10. (Paper)

Zelinsky, G. J. (2013). Understanding scene understanding. Frontiers in Psychology, 4, 954. (Paper) 

Yu, C. P., Hua, W. Y., Samaras, D., & Zelinsky, G. (2013). Modeling clutter perception using parametric proto-object partitioning. In Advances in Neural Information Processing Systems (pp. 118-126). (Paper)

Zelinsky, G. J., Peng, Y., Berg, A. C., & Samaras, D. (2013). Modeling guidance and recognition in categorical search: Bridging human and computer object detection. Journal of Vision, 13(3), 30-30. (Paper)

Zelinsky, G. J., Adeli, H., Peng, Y., & Samaras, D. (2013). Modelling eye movements in a categorical search task. Phil. Trans. R. Soc. B, 368(1628), 20130058. (Paper)

Yun, K., Peng, Y., Samaras, D., Zelinsky, G. J., & Berg, T. L. (2013). Studying relationships between human gaze, description, and computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 739-746). (Paper)

Yun, K., Peng, Y., Samaras, D., Zelinsky, G. J., & Berg, T. L. (2013). Exploring the role of gaze behavior and object detection in scene understanding. Frontiers in Psychology, 4. (Paper)

Dickinson, C. A., & Zelinsky, G. J. (2013). New evidence for strategic differences between static and dynamic search tasks: an individual observer analysis of eye movements. Frontiers in Psychology, 4. (Paper)

Alexander, R., & Zelinsky, G. (2013). The P300 is an electrophysiological correlate of semantic similarity. Journal of Vision, 13(9), 501-501.

Sherman, A. M., Horowitz, T. S., Shitrit, H. B., & Zelinsky, G. J. (2013). Are basketball players just dots? Comparing multiple object tracking in real and simple contexts. Journal of Vision, 13(9), 1286-1286.

Yun, K., Peng, Y., Adeli, H., Berg, T., Samaras, D., & Zelinsky, G. (2013). Specifying the relationships between objects, gaze, and descriptions for scene understanding. Journal of Vision, 13(9), 1309-1309. (Poster)

2012

Maxfield, J. T., & Zelinsky, G. J. (2012). Searching through the hierarchy: How level of target categorization affects visual search. Visual Cognition, 20(10), 1153-1163. (Paper)

MacNamara, A., Schmidt, J., Zelinsky, G. J., & Hajcak, G. (2012). Electrocortical and ocular indices of attention to fearful and neutral faces presented under high and low working memory load. Biological Psychology, 91(3), 349-356. (Paper)

Alexander, R. G., & Zelinsky, G. J. (2012). Effects of part-based similarity on visual search: The Frankenbear experiment. Vision Research, 54, 20-30. (Paper)

Brennan, S. E., Hanna, J. E., Zelinsky, G. J., & Savietta, K. J. (2012). Eye gaze cues for coordination in collaborative tasks. In DUET 2012 Workshop: Dual eye tracking in CSCW, 2012 ACM Conference on Computer Supported Cooperative Work (Vol. 9). (Paper)

Zelinsky, G. J. (2012). TAM: Explaining off-object fixations and central fixation tendencies as effects of population averaging during search. Visual Cognition, 20(4-5), 515-545. (Paper)

Alexander, R., & Zelinsky, G. (2012). Hide and Seek: Amodal Completion During Visual Search. Journal of Vision, 12(9), 736-736.

Huang, A., Schmidt, J., Yang, H., Oh, H., Leung, H. C., & Zelinsky, G. (2012). Search is guided by two targets: Evidence from a combined fMRI and eye movements study. Journal of Vision, 12(9), 737-737.

Maxfield, J., & Zelinsky, G. (2012). Searching Through the Hierarchy: How a Target’s Categorization Level Affects Categorical Search. Journal of Vision, 12(9), 738-738. (Paper)

Schmidt, J., MacNamara, A., Hajcak, G., & Zelinsky, G. (2012). A neural marker of the representation used to guide visual search. Journal of Vision, 12(9), 729-729.

2011

Neider, M. B., & Zelinsky, G. J. (2011). Cutting through the clutter: Searching for targets in evolving complex scenes. Journal of Vision, 11(14):7, 1-16. (Paper)

Alexander, R. G., & Zelinsky, G. J. (2011). Visual similarity effects in categorical search. Journal of Vision, 11(8):9, 1-15. (Paper)

Schmidt, J. & Zelinsky, G. J. (2011). Visual search guidance is best after a short delay. Vision Research, 51, 535-545. (Paper)

Zelinsky, G. J., Loschky, L. C., & Dickinson, C. A. (2011). Do object refixations during scene viewing indicate rehearsal in visual working memory? Memory & Cognition, 39, 600-613. (Paper)

Alexander, R. G., & Zelinsky, G. J. (2011). Searching for target parts. Journal of Vision, 11(11): 1321a. (Abstract)

Chavaillaz, A., & Zelinsky G. J. (2011). Redundancy gains using real-world objects. Journal of Vision, 11(11): 1332a. (Abstract)

Schmidt, J., MacNamara, A., Hajcak, G., & Zelinsky, G. J. (2011). ERP correlates of the target representation used to guide search. Journal of Vision, 11(11): 1345a. (Abstract)

Zelinsky, G. J., & Neider, M. B. (2011). The evolution of clutter effects in visual search. Journal of Vision, 11(11): 1331a. (Abstract)

2010

Alexander, R. G., Zhang, W., & Zelinsky, G. J. (2010). Visual similarity effects in categorical search. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1222-1227). Austin, TX: Cognitive Science Society. (Abstract)

Neider, M. B., Chen, X., Dickinson, C. A., Brennan, S. E. & Zelinsky, G. J. (2010). Coordinating spatial referencing using shared gaze. Psychonomic Bulletin & Review, 17(5), 718-724. (Paper)

Neider, M. B., & Zelinsky, G. J. (2010). Exploring the perceptual causes of search set-size effects in complex scenes. Perception, 39, 780-794. (Paper)

Alexander, R. G., & Zelinsky, G. J. (2010). Visual similarity predicts categorical search guidance. Journal of Vision, 10(7), 1316a. (Abstract)

Schmidt, J., & Zelinsky, G. J. (2010). Searching for two objects: Does knowing their relational alignment produce greater search guidance? Journal of Vision, 10(7), 1310a. (Abstract)

Zelinsky, G. J., & Todor, A. (2010). The role of “rescue saccades” in tracking objects through occlusions. Journal of Vision, 10(7), 132a. (Paper)

2009

Schmidt, J., & Zelinsky, G. J. (2009). Search guidance is proportional to the categorical specificity of a target cue. Quarterly Journal of Experimental Psychology, 62 (10), 1904-1914. (Paper)

Yang, H., Chen, X., & Zelinsky, G. J. (2009). A new look at novelty effects: Guiding search away from old distractors. Attention, Perception, & Psychophysics, 71 (3), 554-564. (Paper)

Yang, H., & Zelinsky, G. J. (2009). Visual search is guided to categorically-defined targets. Vision Research, 49, 2095-2103. (Paper)

Zelinsky, G. J., & Loschky, L. C. (2009). Using eye movements to study working memory rehearsal for objects in scenes. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Conference of the Cognitive Science Society (pp. 1312-1317). Austin, TX: Cognitive Science Society. (Paper)

Zelinsky, G. J., & Schmidt, J. (2009). An effect of referential scene constraint on search implies scene segmentation. Visual Cognition, 17(6), 1004-1028. (Paper)

Alexander, R., & Zelinsky, G. J. (2009). The Frankenbear Experiment: Looking for part-based similarity effects on search guidance with complex objects. Journal of Vision, 9(7), 273a. (Abstract)

Schmidt, J., & Zelinsky, G. J. (2009). Visual search guidance is best shortly after target preview offset. Journal of Vision, 9(7), 272a. (Abstract)

Zelinsky, G. J., & Schmidt, J. (2009). Searching aerial images: Evidence for scene constraints in the absence of global context. Journal of Vision, 9(7), 160a. (Abstract)

2008

Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B., & Zelinsky, G. J. (2008). Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition, 106, 1465-1477. (Paper)

Neider, M. B., & Zelinsky, G. J. (2008). Exploring set size effects in scenes: Identifying the objects of search. Visual Cognition, 16(1), 1-10. (Paper)

Zelinsky, G. J. (2008). A theory of eye movements during target acquisition. Psychological Review, 115(4), 787-835. (Paper)

Zelinsky, G. J., & Neider, M. B. (2008). An eye movement analysis of multiple objects tracking in a realistic environment. Visual Cognition, 16(5), 553-566. (Paper)

Zhang, W., Samaras, D., & Zelinsky, G. J. (2008). Classifying objects based on their visual similarity to target categories. Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 1856-1861). (Paper)

Schmidt, J., & Zelinsky, G. J. (2008). Visual search guidance increases with a delay between target cue and search. Journal of Vision, 8(6), 317a. (Paper)

Yang, H., Oh, H., Leung, H. C., & Zelinsky, G. J. (2008). An effect of WM load on visual search guidance: Evidence from eye movements and functional brain imaging. Journal of Vision, 8(6), 316a. (Paper)

Zelinsky, G. J., Zhang, W., & Samaras, D. (2008). Eye can read your mind: Decoding eye movements to reveal the targets of categorical search tasks. Journal of Vision, 8(6), 380a. (Paper)

2007

Dickinson, C. A., & Zelinsky, G. J. (2007). Memory for the search path: Evidence for a high-capacity representation of search history. Vision Research, 47, 1745-1755. (Paper)

Zhang, W., Zelinsky, G. J., & Samaras, D. (2007). Real-time accurate object detection using multiple resolutions. Proceedings of the IEEE International Conference on Computer Vision (pp. 1-8). (Paper)

Chen, X., & Zelinsky, G. J. (2007). Dividing the labor of search: It’s not just space anymore. Journal of Vision, 7(9), 1055a. (Paper)

Neider, M. B., Brotzen, S., & Zelinsky, G. J. (2007). Cutting through the clutter: Searching for targets in evolving realistic scenes. Journal of Vision, 7(9), 1056a. (Paper)

Schmidt, J., & Zelinsky, G. J. (2007). Manipulating the availability of visual information in search. Journal of Vision, 7(9), 715a. (Paper)

Yang, H. & Zelinsky, G. J. (2007). Visual memory of visual features coded verbally? An effect of working memory load on guidance during visual search. Journal of Vision, 7(9), 686a. (Paper)

Zelinsky, G. J., Neider, M. B., & Todor, A. (2007). Multi-object tracking in a realistic 3D environment. Journal of Vision, 7(9), 895a. (Paper)

2006

Brennan, S. E., Mueller, K., Zelinsky, G. J., Ramakrishnan, I. V., Warren, D. S., & Kaufman, A. (2006). Toward a Multi-Analyst, Collaborative Framework for Visual Analytics. IEEE Symposium on Visual Analytics Science and Technology (VAST) (pp. 129-136). Baltimore, MD. (Paper)

Chen, X., & Zelinsky, G. J. (2006). Real-world visual search is dominated by top-down guidance. Vision Research, 46, 4118-4133. (Paper)

Neider, M. B., & Zelinsky, G. J. (2006a). Scene context guides eye movements during visual search. Vision Research, 46, 614-621. (Paper)

Neider, M. B., & Zelinsky, G. J. (2006b). Searching for camouflaged targets: Effects of target-background similarity on visual search. Vision Research, 46, 2217-2235. (Paper)

Neider, M. B., & Zelinsky, G. J. (2006). Exploring set size effects in realistic scenes. Journal of Vision, 6(6), 448a. (Paper)

Schmidt, J., & Zelinsky, G. J. (2006). How is eye gaze affected by cognitive load and visual complexity? Journal of Vision, 6(6), 363a. (Paper)

Yang, H., & Zelinsky, G. J. (2006). Evidence for guidance in categorical visual search. Journal of Vision, 6(6), 449a. (Paper)

Zelinsky, G. J., Zhang, W., Yu, B., Chen, X., & Samaras, D. (2006). The role of top-down and bottom-up processes in guiding eye movements during visual search. In Y. Weiss, B. Scholkopf, & J. Platt (Eds.), Advances in Neural Information Processing Systems Vol. 18 (pp. 1569-1576). Cambridge, MA: MIT Press. (Paper)

Zhang, W., Yang, H., Samaras, D., & Zelinsky, G. J. (2006). A computational model of eye movements during object class detection. In Y. Weiss, B. Scholkopf, & J. Platt (Eds.), Advances in Neural Information Processing Systems Vol. 18 (pp. 1609-1616). Cambridge, MA: MIT Press. (Paper)

2005

Dickinson, C. A., & Zelinsky, G. J. (2005). Marking rejected distractors: A gaze-contingent technique for measuring memory during search. Psychonomic Bulletin & Review, 12(6), 1120-1126. (Paper)

Neider, M. B., & Zelinsky, G. J. (2005). Effects of scene-based contextual guidance on search. Journal of Vision, 5(8), 414-414. (Abstract)

Zelinsky, G. J., Dickinson, C. A., Chen, X., Neider, M. B., & Brennan, S. E. (2005). Collaborative search using shared eye gaze. Journal of Vision, 5(8), 700-700. (Abstract)

Zelinsky, G. J., & Loschky, L. C. (2005). Eye movements serialize memory for objects in scenes. Attention, Perception, & Psychophysics, 67(4), 676-690. (Paper)

Zhang, W., Yu, B., Zelinsky, G. J., & Samaras, D. (2005, June). Object class recognition using multiple layer boosting with heterogeneous features. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005) (Vol. 2, pp. 323-330). IEEE. (Paper)

2004

Chen, X., & Zelinsky, G. J. (2004). Monitoring the use of target memory during visual search. Journal of Vision, 4(8), 683-683. (Abstract)

Dickinson, C. A., Chen, X., & Zelinsky, G. J. (2004). Is memory during search memory for where we’ve been? Journal of Vision, 4(8), 681-681. (Abstract)

Neider, M. B., & Zelinsky, G. J. (2004). Searching for Camouflaged Real-World Objects. Journal of Vision, 4(8), 335-335. (Abstract)

2003

Chen, X., & Zelinsky, G. J. (2003). How “visual” is visual search? Dissociating visual from categorical factors in a search task. Journal of Vision, 3(9), 625-625. (Abstract)

Dickinson, C. A., Chen, X., & Zelinsky, G. J. (2003). Explicitly marking rejected distractors in an overt visual search task. Journal of Vision, 3(9), 158-158. (Abstract)

Shimozaki, S. S., Hayhoe, M. M., Zelinsky, G. J., Weinstein, A., Merigan, W. H., & Ballard, D. H. (2003). Effect of parietal lobe lesions on saccade targeting and spatial memory in a naturalistic visual search task. Neuropsychologia, 41(10), 1365-1386. (Paper)

Zelinsky, G. J. (2003). Detecting changes between real-world objects using spatiochromatic filters. Psychonomic Bulletin & Review, 10(3), 533-555. (Paper)

Zelinsky, G. J., & Loschky, L. C. (2003). Fuzzy object file theory: A framework for understanding recency effects for objects in scenes. Journal of Vision, 3(9), 644-644. (Abstract)

2002

Aks, D. J., Zelinsky, G. J., & Sprott, J. C. (2002). Memory across eye-movements: 1/f dynamic in visual search. Nonlinear Dynamics, Psychology, and Life Sciences, 6(1), 1-25. (Paper)

Dickinson, C. A., & Zelinsky, G. J. (2002). The “flicker” search task: A paradigm for investigating memory in visual search. Journal of Vision, 2(7), 539-539. (Abstract)

Irwin, D. E., & Zelinsky, G. J. (2002). Eye movements and scene perception: Memory for things observed. Attention, Perception, & Psychophysics, 64(6), 882-895. (Paper)

Rao, R. P., Zelinsky, G. J., Hayhoe, M. M., & Ballard, D. H. (2002). Eye movements in iconic visual search. Vision Research, 42(11), 1447-1463. (Paper)

Zelinsky, G. J. (2002). A theory of gaze behavior during real-world visual search. Journal of Vision, 2(7), 726-726. (Abstract)

2001

Aks, D. J., Zelinsky, G., & Sprott, J. C. (2001). Memory across eye-movements: 1/f Dynamic in visual search. Journal of Vision, 1(3), 230-230. (Abstract)

Dickinson, C. A., & Zelinsky, G. J. (2001, March). Is search amnesic or schizophrenic? Investigative Ophthalmology & Visual Science, 42(4), S867.

Zelinsky, G. J. (2001, March). Visual priming contributes to set size effects. Investigative Ophthalmology & Visual Science, 42(4), S927.

Zelinsky, G. J. (2001). Eye movements during change detection: Implications for search constraints, memory limitations, and scanning strategies. Attention, Perception, & Psychophysics, 63(2), 209-225. (Paper)

2000

Zelinsky, G. J. (2000, March). Set size effects without visual search. Investigative Ophthalmology & Visual Science, 41(4), S759.

Zelinsky, G. J., & Murphy, G. L. (2000). Synchronizing visual and language processing: An effect of object name length on eye movements. Psychological Science, 11(2), 125-131. (Paper)

From last millennium

Kramer, A. F., Hahn, S., Irwin, D. E., & Zelinsky, G. J. (1999). Influence of attentional capture on oculomotor control. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1595-1608. (Paper)

Zelinsky, G. J. (1999). Exorcising the devil: Adding details to a descriptive account of oculomotor control. Behavioral and Brain Sciences, 22(4), 703-704. (Paper)

Zelinsky, G. J. (1999). Precuing target location in a variable set size ‘nonsearch’ task: Dissociating search-based and interference-based explanations for set size effects. Journal of Experimental Psychology: Human Perception and Performance, 25(4), 875-903. (Paper)

Zelinsky, G. J., & Sheinberg, D. L. (1997). Eye movements during parallel–serial visual search. Journal of Experimental Psychology: Human Perception and Performance, 23(1), 244. (Paper)

Zelinsky, G. J., Rao, R. P. N., Hayhoe, M. M., & Ballard, D. H. (1997). Eye movements reveal the spatiotemporal dynamics of visual search. Psychological Science, 8(6), 448-453. (Paper)

Karn, K. S., & Zelinsky, G. J. (1996). Driving and dish-washing: Failure of the correspondence metaphor for memory. Behavioral and Brain Sciences, 19(2), 198-198. (Paper)

Rao, R. P., Zelinsky, G. J., Hayhoe, M. M., & Ballard, D. H. (1996). Modeling saccadic targeting in visual search. In Advances in Neural Information Processing Systems (pp. 830-836). (Paper)

Zelinsky, G. J. (1996). Using eye saccades to assess the selectivity of search movements. Vision Research, 36(14), 2177-2187. (Paper)

Zelinsky, G., & Sheinberg, D. (1995). Why some search tasks take longer than others: Using eye movements to redefine reaction times. Studies in Visual Information Processing, 6, 325-336. (Paper)

Sheinberg, D. L., & Zelinsky, G. J. (1993). A cortico–collicular model of saccadic target selection. Perception and cognition: Advances in eye movement research, 4, 333-348. (Paper)