Category-specific learned attentional bias to object parts.

Chua KW, Gauthier I
Atten Percept Psychophys. 2016;78(1):44-51

PMID: 26715512 · PMCID: PMC5034354 · DOI:10.3758/s13414-015-1040-0

Humans can selectively attend to information in visual scenes. Learning from previous experiences plays a role in how visual attention is subsequently deployed. For example, visual search times are faster in areas that are statistically more likely to contain a target (Jiang and Swallow in Cognition, 126(3), 378-390, 2013). Here, we examined whether similar attentional biases can be created for different locations on complex objects as a function of their category, based on a history of these locations containing a target. Subjects performed a visual search task in the context of novel objects called Greebles. The target appeared in one half (e.g., top) of the Greebles 89% of the time and in the other half (e.g., bottom) 11% of the time. We found a reaction time advantage when the target was located in a "target-rich" region, even after target location probabilities were equated. This indicates that attentional biases can be associated not only with regions of space but also with specific object features, or at least with locations in an object-based frame of reference.

MeSH Terms (11)

Adolescent, Attention, Cognition, Female, Humans, Learning, Male, Photic Stimulation, Probability, Reaction Time, Young Adult
