Who we are

Our VSS lab dinner, 2023


Principal Investigator

Greg Zelinsky, Ph.D., Brown University, 1994
Professor, Cognitive Science
Associate Professor, Computer Science
gregory.zelinsky@stonybrook.edu

My goal is to better understand visual cognition by following two interrelated research paths. First, we monitor and analyze how people move their eyes as they perform various visual search and visual working memory tasks. We do this to obtain an online and directly observable measure of how a behavior intimately associated with the selection and accumulation of information (i.e., eye movements) changes in space and time during a task. Second, we attempt to describe this oculomotor behavior in the context of image-based neurocomputational models. These models perform the same task and “see” the same stimuli as human observers, and they output a sequence of simulated eye movements that can be compared to human behavior. These comparisons are then used to generate new hypotheses that further test the representations and processes underlying task performance.


Graduate Students

Ritik Raina, Ph.D. Student in Cognitive Science
ritik.raina@stonybrook.edu

My research interests are in advancing multimodal generative models that encompass both bottom-up and top-down processing for perceptual reasoning, scene understanding, and generating visual content grounded in human-centric tasks and behaviors.

Souradeep Chakraborty, Ph.D. Student in Computer Science
souradeep.chakraborty@stonybrook.edu

I am a Ph.D. student in the Computer Science department at Stony Brook University. My research interest lies in visual attention modeling. My current focus is on understanding how users allocate their attention while browsing webpages. I am also investigating how visual eccentricity affects human object recognition and search ability.


Abe Leite, Ph.D. Student in Cognitive Science and Computer Science
abraham.leite@stonybrook.edu

I study how neural systems encode information and how they process information over time to perform adaptive behaviors. I believe that deciding to take actions, without being compelled by stimuli from the environment, is key to what makes us alive, and I use simple computational models to study this ability on a theoretical level. While there are a number of areas my work connects with, I anticipate that one impact of my work will be in robotics. My computational work in attention and action will enable the creation of robots that can ignore distracting stimuli and prioritize objects according to their relevance to the robot’s specified goals. It will also allow engineers to understand what visual objects a robot was paying attention to when it took a certain action. This will ultimately lead to more reliable and understandable robots, and a safer world for the rest of us.


Lab Alumni

Seoyoung Ahn, Ph.D., 2023

Yupei Chen, Ph.D., 2021

Hossein Adeli, Ph.D., 2018

Justin Maxfield, Ph.D., 2017

Chen-Ping Yu, Ph.D., 2016

Robert Alexander, Ph.D., 2013

Joseph Schmidt, Ph.D., 2012

Hyejin Yang, Ph.D., 2010

Xin Chen, Ph.D., 2007

Mark Neider, Ph.D., 2006

Chris Dickinson, Ph.D., 2004


Research Assistants

Jessica Lau
James May
Alima Hossain
Alex Feldewerth