Saccade endpoints reflect attentional templates in visual search: Evidence from feature distribution learning

In visual search, our gaze is guided by mental representations of stimulus features, known as attentional templates. These templates are thought to be probabilistic and shaped by environmental regularities. For example, participants can learn to distinguish between the shapes of different distractor color distributions in visual search. The present study assessed whether such subtle differences in distractor color distributions (Gaussian vs. uniform) are reflected in saccade endpoints. We conducted two experiments, each consisting of learning trials designed to prime a specific distractor color distribution and test trials in which target color varied in its distance from the mean of the previously presented distractor distribution. Saccade endpoint deviations were measured via the global effect, whereby saccades tend to land between two nearby stimuli. The experiments differed in difficulty: test trials in Experiment 2 involved more distractors and colors. During test trials, reaction times and saccade endpoints were affected by the target's distance from the mean of the preceding distractor distribution: the farther the target color was from this mean, the less the saccade deviated from the target and the shorter the reaction times. However, saccade endpoints did not reflect the shape of the distractor color distributions; this effect was observed only in reaction times in Experiment 2. Overall, color priming affects both reaction times and saccade deviations, but distractor feature distribution learning depends on search difficulty and on the response measure, with saccade endpoints being less sensitive to subtle differences in the shape of color distributions.