The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.
Face inversion effects are used as evidence that faces are processed differently from objects. Nevertheless, there is debate about whether processing differences between upright and inverted faces are qualitative or quantitative. We present two experiments comparing holistic processing of upright and inverted faces within the composite task, which requires participants to match one half of a test face while ignoring irrelevant variation in the other half of the test face. Inversion reduced overall performance but led to the same qualitative pattern of results as observed for upright faces (Experiment 1). However, longer presentation times were required to observe holistic effects for inverted compared to upright faces (Experiment 2). These results suggest that both upright and inverted faces are processed holistically, but inversion reduces overall processing efficiency.
Copyright © 2010 Elsevier Ltd. All rights reserved.
An influential theory suggests that integrated objects, rather than individual features, are the fundamental units that limit our capacity to temporarily store visual information (S. J. Luck & E. K. Vogel, 1997). Using a paradigm that independently estimates the number and precision of items stored in working memory (W. Zhang & S. J. Luck, 2008), here we show that the storage of features is not cost-free. The precision and number of objects held in working memory were estimated when observers had to remember either the color, the orientation, or both the color and orientation of simple objects. We found that while the quantity of stored objects was largely unaffected by increasing the number of features, the precision of these representations dramatically decreased. Moreover, this selective deterioration in object precision depended on the multiple features being contained within the same objects. Such fidelity costs were even observed with change-detection paradigms when those paradigms placed demands on the precision of the stored visual representations. Taken together, these findings not only demonstrate that the maintenance of integrated features is costly; they also suggest that objects and features affect visual working memory capacity differently.
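The number-versus-precision logic of this paradigm can be illustrated with the standard mixture-model analysis: responses to a remembered item follow a von Mises distribution around the target, while guesses are uniform, and the two parameters (probability in memory, precision) are fit by maximum likelihood. A minimal simulated sketch, assuming Python with NumPy/SciPy; all parameter values are illustrative and not taken from the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

rng = np.random.default_rng(0)

# Simulate continuous-report errors: with probability p_mem the item is in
# memory (von Mises noise around the target), otherwise a uniform guess.
p_mem_true, kappa_true, n = 0.8, 8.0, 2000
in_mem = rng.random(n) < p_mem_true
errors = np.where(in_mem,
                  rng.vonmises(0.0, kappa_true, n),
                  rng.uniform(-np.pi, np.pi, n))

def neg_log_lik(params):
    # Mixture likelihood: von Mises (remembered) + uniform (guessing).
    p_mem, kappa = params
    lik = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.5, 2.0],
               bounds=[(1e-3, 1 - 1e-3), (1e-2, 100.0)])
p_mem_hat, kappa_hat = fit.x
```

In this framing, the abstract's result corresponds to `kappa_hat` dropping when objects carry two features while `p_mem_hat` stays roughly constant.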
Saccade stop-signal and target-step tasks are used to investigate the mechanisms of cognitive control. Performance of these tasks can be explained as the outcome of a race between stochastic go and stop processes. Race-model analyses assume that response times (RTs) measured throughout an experimental session are independent samples from stationary stochastic processes. This article demonstrates that RTs are neither independent nor stationary for humans and monkeys performing saccade stopping and target-step tasks. We investigate the consequences this has for analyses of these data. Nonindependent and nonstationary RTs artificially flatten inhibition functions and account for some of the systematic differences in RTs following different types of trials. However, nonindependent and nonstationary RTs do not bias the estimation of the stop-signal RT. These results demonstrate the robustness of the race model to some aspects of nonindependence and nonstationarity and point to useful extensions of the model.
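The independent race model behind these analyses can be sketched in a few lines: go and stop processes race, and the unobservable stop-signal RT (SSRT) is recovered with the integration method by reading off the go-RT quantile at the observed probability of responding on stop trials. A hypothetical simulation with a single stop-signal delay (Python/NumPy; the distributions and delays are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent race: on a stop trial, the saccade escapes inhibition when the
# go process finishes before SSD + SSRT. Values below are illustrative.
go_rts = rng.normal(300.0, 50.0, 20000)   # go finishing times (ms)
ssd, ssrt_true = 200.0, 80.0              # stop-signal delay and true SSRT

# Observed outcome on stop trials: probability of responding despite the stop.
p_respond = np.mean(go_rts < ssd + ssrt_true)

# Integration method: SSRT = go-RT quantile at p(respond | stop) minus SSD.
ssrt_hat = np.quantile(go_rts, p_respond) - ssd
```

The abstract's point is that this estimator remains accurate even when the independence and stationarity assumptions built into the simulation above are violated in real RT sequences.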
Both domain-specific and expertise accounts of category specialization assume that generalization occurs within a domain but not between domains. Yet it is often difficult to define the boundaries and critical features of object domains. Differences in how categories are defined make it difficult to adjudicate between accounts of category specificity and may lead to contradictory results. For example, evidence for whether car experts recruit the fusiform face area is mixed, and this inconsistency may be due to the inclusion of antique cars in one of those previous studies (e.g., Grill-Spector, Knouf, & Kanwisher, 2004). The present study tested the generalization of expertise from modern to antique cars and found that modern-car experts showed expert discrimination and holistic processing of modern cars but not of antique cars. These findings suggest that the neural specialization underlying perceptual expertise is highly specific and may not generalize to distinct subclasses, even when they share some degree of perceptual and conceptual features.
There is no shortage of evidence to suggest that faces constitute a special category in human perception. Surprisingly little consensus exists, however, regarding the interpretation of these results. The question persists: what makes faces special? We address this issue via one hallmark of face perception, its striking sensitivity to low-level image format, and present evidence in favor of an expertise account of the specialization of face perception. In accordance with earlier work (I. Biederman & P. Kalocsai, 1997), we find that manipulating one image into two versions that are complementary in spatial frequency (SF) and orientation information disproportionately impairs face matching relative to object matching. Here, we demonstrate that this characteristic of face processing is also found for cars, with its magnitude predicted by the observers' level of expertise with cars. We argue that the bar needs to be raised for what constitutes proper evidence that face perception is special in a manner that is not related to our expertise in this domain.
Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure's orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet aftereffects did occur when adapting and test walkers differed in size or appeared on opposite sides of fixation. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated "globetrotter" was sufficient to bias the bistable walker's direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations.
Although orientation columns are less than a millimeter in width, recent neuroimaging studies indicate that viewed orientations can be decoded from cortical activity patterns sampled at relatively coarse resolutions of several millimeters. One proposal is that these differential signals arise from random spatial irregularities in the columnar map. However, direct support for this hypothesis has yet to be obtained. Here, we used high-field, high-resolution functional magnetic resonance imaging (fMRI) and multivariate pattern analysis to determine the spatial scales at which orientation-selective information can be found in the primary visual cortex (V1) of cats and humans. We applied a multiscale pattern analysis approach in which fine- and coarse-scale signals were first removed by ideal spatial lowpass and highpass filters, and the residual activity patterns were then analyzed by linear classifiers. Cat visual cortex, imaged at 0.3125 mm resolution, showed a strong orientation signal at the scale of individual columns. Nonetheless, reliable orientation bias could still be found at spatial scales of several millimeters. In the human visual cortex, imaged at 1 mm resolution, a majority of orientation information was found on scales of millimeters, with small contributions from global spatial biases exceeding approximately 1 cm. Our high-resolution imaging results demonstrate a reliable millimeter-scale orientation signal, likely emerging from irregular spatial arrangements of orientation columns and their supporting vasculature. fMRI pattern analysis methods are thus likely to be sensitive to signals originating from other irregular columnar structures elsewhere in the brain.
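The multiscale pipeline (spatially filter the activity patterns, then classify the residuals) can be sketched on synthetic data. This is only an illustration of the logic: the patterns below are simulated rather than real fMRI, Gaussian smoothing stands in for the paper's ideal spatial filters, and a nearest-centroid rule stands in for its linear classifiers (Python with NumPy/SciPy):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
size = 32

# Two orientation-specific fine-scale patterns plus a shared coarse-scale
# bias; everything here is synthetic, stand-in data.
fine_a = rng.standard_normal((size, size))
fine_b = rng.standard_normal((size, size))
coarse = 3.0 * gaussian_filter(rng.standard_normal((size, size)), 6)

def make_trials(fine, n=40):
    # Each trial: shared coarse bias + fine orientation pattern + noise.
    return coarse + fine + 0.8 * rng.standard_normal((n, size, size))

train_a, train_b = make_trials(fine_a), make_trials(fine_b)
test_a, test_b = make_trials(fine_a), make_trials(fine_b)

def highpass(x, sigma=3):
    # Remove coarse-scale structure trial by trial (sigma 0 on trial axis).
    return x - gaussian_filter(x, (0, sigma, sigma))

def accuracy(f):
    # Nearest-centroid linear classifier on (optionally filtered) patterns.
    ca = f(train_a).mean(0).ravel()
    cb = f(train_b).mean(0).ravel()
    w, thr = ca - cb, (ca - cb) @ (ca + cb) / 2.0
    ta = f(test_a).reshape(len(test_a), -1)
    tb = f(test_b).reshape(len(test_b), -1)
    return (np.sum(ta @ w > thr) + np.sum(tb @ w < thr)) / (len(ta) + len(tb))

acc_raw = accuracy(lambda x: x)   # all spatial scales
acc_hp = accuracy(highpass)       # fine scales only
```

Comparing accuracies after different filters is what localizes the decodable signal to particular spatial scales in the study.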
The concurrent maintenance of two visual working memory (VWM) arrays can lead to profound interference. It is unclear, however, whether these costs arise from limitations in VWM storage capacity (Fougnie & Marois, 2006) or from interference between the storage of one visual array and encoding or retrieval of another visual array (Cowan & Morey, 2007). Here, we show that encoding a VWM array does not interfere with maintenance of another VWM array unless the two displays exceed maintenance capacity (Experiments 1 and 2). Moreover, manipulating the extent to which encoding and maintenance can interfere with one another had no discernible effect on dual-task performance (Experiment 2). Finally, maintenance of a VWM array was not affected by retrieval of information from another VWM array (Experiment 3). Taken together, these findings demonstrate that dual-task interference between two concurrent VWM tasks is due to a capacity-limited store that is independent from encoding and retrieval processes.
Schizophrenia patients exhibit deficits in recognition and identification of facial emotional expressions, but it is unclear whether these deficits result from abnormal affective processing or an impaired ability to process complex visual stimuli such as faces. Participants comprised 16 outpatients with schizophrenia and 22 matched healthy control subjects who performed two computerized visual matching tasks (facial emotional expression and orientation). Accuracy and reaction time were recorded. Clinical symptoms were assessed in the patients using the Brief Psychiatric Rating Scale (BPRS), Scale for the Assessment of Positive Symptoms (SAPS), and Scale for the Assessment of Negative Symptoms (SANS). Social functioning as measured by the Zigler social competence scale was indexed in all participants. Patients with schizophrenia were less accurate than control participants on both facial emotion and orientation matching tasks, but there was no diagnosis-by-task interaction. Clinical symptoms of the patients were associated with deficits on emotion and orientation matching tasks. Worse social functioning was correlated with facial emotion matching errors across both groups. Patients with schizophrenia thus show a general deficit in face processing, which is in turn associated with more severe symptoms and reduced social functioning.
Compared with other objects, faces are processed more holistically and with a larger reliance on configural information. Such hallmarks of face processing can also be found for nonface objects as people develop expertise with them. Is this specifically a result of expertise individuating objects, or would any type of prolonged intensive experience with objects be sufficient? Two groups of participants were trained with artificial objects (Ziggerins). One group learned to rapidly individuate Ziggerins (i.e., subordinate-level training). The other group learned rapid, sequential categorizations at the basic level. Individuation experts showed a selective improvement at the subordinate level and an increase in holistic processing. Categorization experts improved only at the basic level, showing no changes in holistic processing. Attentive exposure to objects in a difficult training regimen is not sufficient to produce facelike expertise. Rather, qualitatively different types of expertise with objects of a given geometry can arise depending on the type of training.