The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.
If you have any questions or comments, please contact us.
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance of systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization task and a simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
In this paper, we asked to what extent the depth of interocular suppression engendered by continuous flash suppression (CFS) varies depending on spatiotemporal properties of the suppressed stimulus and CFS suppressor. An answer to this question could have implications for interpreting results showing that CFS influences the processing of different categories of stimuli to different extents. In a series of experiments, we measured the selectivity and depth of suppression (i.e., elevation in contrast detection thresholds) as a function of the visual features of the stimulus being suppressed and the stimulus evoking suppression, namely, the popular "Mondrian" CFS stimulus (N. Tsuchiya & C. Koch, 2005). First, we found that CFS differentially suppresses the spatial components of the suppressed stimulus: Observers' sensitivity for stimuli of relatively low spatial frequency or cardinally oriented features was more strongly impaired in comparison to high spatial frequency or obliquely oriented stimuli. Second, we discovered that this feature-selective bias primarily arises from the spatiotemporal structure of the CFS stimulus, particularly in the information residing in the low spatial frequency range and in the smooth rather than abrupt luminance changes over time. These results imply that this CFS stimulus operates by selectively attenuating certain classes of low-level signals while leaving others to be potentially encoded during suppression. These findings underscore the importance of considering the contribution of low-level features in stimulus-driven effects that are reported under CFS.
The error-related negativity (ERN) and positivity (Pe) are components of event-related potential (ERP) waveforms recorded from humans and are thought to reflect performance monitoring. Error-related signals have also been found in single-neuron responses and local-field potentials recorded in supplementary eye field and anterior cingulate cortex of macaque monkeys. However, the homology of these neural signals across species remains controversial. Here, we show that monkeys exhibit ERN and Pe components when they commit errors during a saccadic stop-signal task. The voltage distributions and current densities of these components were similar to those found in humans performing the same task. Subsequent analyses show that neither stimulus- nor response-related artifacts accounted for the error-ERPs. This demonstration of macaque homologues of the ERN and Pe forms a keystone in the bridge linking human and nonhuman primate studies on the neural basis of performance monitoring.
The theoretical framework of General Recognition Theory (GRT; Ashby & Townsend, Psychological Review, 93, 154-179, 1986) coupled with the empirical analysis tools of Multidimensional Signal Detection Analysis (MSDA; Kadlec & Townsend, Multidimensional models of perception and recognition, pp. 181-228, 1992) has become an important method for assessing dimensional interactions in perceptual decision-making. In this article, we critically examine MSDA and characterize cases where it is unable to discriminate between two sources of dimensional interactions: violations of perceptual separability and violations of decisional separability. We performed simulations with known instances of violations of perceptual or decisional separability, applied MSDA to the data generated by these simulations, and evaluated MSDA on its ability to accurately characterize the perceptual versus decisional source of these simulated dimensional interactions. Critical cases of violations of perceptual separability are often mischaracterized by MSDA as violations of decisional separability.
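The simulation logic described above can be illustrated with a minimal sketch. This is not the authors' actual simulation code; it is a hypothetical GRT-style setup in which each stimulus in a 2x2 design evokes a bivariate Gaussian perceptual distribution, perceptual separability is deliberately violated (the mean on one dimension shifts with the level of the other), and the decision bounds remain axis-parallel (decisional separability holds). The means, covariance, and bound location are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 GRT design: stimulus (i, j) evokes a bivariate Gaussian
# perceptual distribution. Perceptual separability is violated here because
# the mean on dimension X shifts with the level of dimension Y.
means = {
    (0, 0): (0.0, 0.0),
    (0, 1): (0.3, 1.0),   # X-mean shifts with Y level -> PS violation
    (1, 0): (1.0, 0.0),
    (1, 1): (1.3, 1.0),
}
cov = np.eye(2) * 0.5

# Axis-parallel decision bounds (decisional separability holds):
# respond level 1 on a dimension when the percept exceeds 0.5.
def classify(sample):
    return (int(sample[0] > 0.5), int(sample[1] > 0.5))

# Build a stimulus-by-response confusion matrix by Monte Carlo sampling,
# the kind of identification data MSDA would then analyze.
n = 20000
confusion = {s: {r: 0 for r in means} for s in means}
for stim, mu in means.items():
    for sample in rng.multivariate_normal(mu, cov, size=n):
        confusion[stim][classify(sample)] += 1

for stim in sorted(means):
    row = [confusion[stim][r] / n for r in sorted(means)]
    print(stim, [f"{p:.3f}" for p in row])
```

In a sketch like this, the ground truth (which form of separability was violated) is known by construction, so MSDA's inferences from the confusion matrix can be checked against it.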
The elements most vivid in our conscious awareness are the ones to which we direct our attention. Scientific study confirms the impression of a close bond between selective attention and visual awareness, yet the nature of this association remains elusive. Using visual afterimages as an index, we investigate neural processing of stimuli as they enter awareness and as they become the object of attention. We find evidence of response enhancement accompanying both attention and awareness, both in the phase-sensitive neural channels characteristic of early processing stages and in the phase-insensitive channels typical of higher cortical areas. The effects of attention and awareness on phase-insensitive responses are positively correlated, but in the same experiments, we observe no correlation between the effects on phase-sensitive responses. This indicates independent signatures of attention and awareness in early visual areas yet a convergence of their effects at more advanced processing stages.
When events occur spontaneously during the acquisition of a series of images, traditional modeling methods for detecting functional MRI activation cannot be employed. The two-dimensional temporal clustering algorithm, 2dTCA, has been shown to accurately detect random, transient activations in computer simulations without the use of known event timings. In this study we applied the 2dTCA technique to detect the timings and spatial locations of sparse, irregular, transient activations of the visual, auditory, and motor cortices in 12 normal controls. Experiments with one and two independent types of stimuli were employed. Event-related activation using known timing was compared with event-related activation using 2dTCA-detected timing in individuals and across groups. The 2dTCA algorithm detected the activation from all presented stimuli in every subject. When compared with block-design results using a measure of correlation between activation maps, no significant difference was found between the 2dTCA activation maps and the event-related maps using known timing across all subjects. Therefore, 2dTCA has the potential to be an accurate and more practical method for detection of spontaneous, transient events using fMRI.
Face recognition involves several physiological and psychological processes, including those in visual, cognitive and affective domains. Studies have found that schizophrenia patients are deficient at recognizing facial emotions, yet visual and cognitive processing of facial information in this population has not been systematically examined. In this study, we examined visual detection, perceptual discrimination and working memory of faces as well as non-face visual objects in patients. Visual detection was measured by accuracy when detecting the presence of a briefly displayed face image, which contained only the basic configural information of a face. Perceptual discrimination was measured by discriminability scores for individual facial identity images, in which the degree of similarity between images was systematically varied via morphing. Working memory was measured by the discriminability scores when two comparison face images were separated by 3 or 10 s. All measurements were acquired using a psychophysical method (two-alternative forced choice). Relative to controls, patients showed significantly reduced accuracy in visual detection of faces (p=0.003), moderately degraded performance in perceptual discrimination of faces (p=0.065), and significantly impaired performance in working memory of faces (p<0.001 for both the 3- and 10-s conditions). Patients' performance on non-face versions of these tasks, while degraded, was not correlated with performance on face recognition. This pattern of results indicates that greater signal strength is required for visual and cognitive processing of facial information in schizophrenia.
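The discriminability scores mentioned above are standard signal-detection measures. As an illustration only (not the study's code, and using hypothetical hit and false-alarm rates), d' under one common convention is the difference of the z-transformed hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration only
print(round(d_prime(0.85, 0.20), 2))  # -> 1.88
```

Note that for two-alternative forced-choice designs, some conventions divide this value by the square root of 2 to account for the two observation intervals.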
The attentional blink (AB) and repetition blindness (RB) phenomena refer to subjects' impaired ability to detect the second of two different (AB) or identical (RB) target stimuli in a rapid serial visual presentation stream if they appear within 500 msec of one another. Despite the fact that the AB reveals a failure of conscious visual perception, it is at least partly due to limitations at central stages of information processing. Do all attentional limits to conscious perception have their locus at this central bottleneck? To address this question, here we investigated whether RB is affected by online response selection, a cognitive operation that requires central processing. The results indicate that, unlike the AB, RB does not result from central resource limitations. Evidently, temporal attentional limits to conscious perception can occur at multiple stages of information processing.
BACKGROUND - Higher levels of facial processing, such as recognition of the individuality and emotional expression of faces, are abnormal in schizophrenia. It is unknown, however, whether the visual detection of a face as a face is impaired as well.
METHODS - We examined the performance of schizophrenia patients (n=29) and normal controls (n=28) in locating a line-drawn face on the left or the right side of a larger line drawing. To prevent the normal formation of general facial impressions, stimulus presentations were brief (13-104 ms). The face stimuli were either displayed upright or inverted in order to study the face inversion effect, i.e., the specific effect of stimulus inversion on face processing.
RESULTS - Schizophrenia patients showed a significantly reduced face inversion effect, resulting primarily from their significantly lower accuracy than normal controls in detecting upright faces. In tree detection, a comparison task that was also administered, the stimulus inversion effect was similarly small in both groups.
CONCLUSION - Given the primitive nature and brief duration of the stimuli, and the simplicity of the task, these results indicate that at the initial visual detection stage, facial processing is inefficient in schizophrenia. By isolating face detection from other aspects of face recognition, this study identifies a face-specific visual deficit in schizophrenia, which may ultimately contribute to impaired face-related cognitive and emotional processing and social interaction.
Self-processing is associated with distinct patterns of behavior and neural activity in healthy individuals. Self-monitoring deficits have been reported in schizophrenia in auditory and tactile modalities, but it is unknown whether they generalize to all sensory domains. We investigated self-face recognition in patients with schizophrenia, using a visual search paradigm with three types of targets: objects, famous faces and self-faces. Patients with schizophrenia showed increased reaction times (RTs) for detecting targets overall compared with normal controls, but they showed faster RTs in the Self-face than in the Famous-face condition. For healthy controls, there was no difference between the Self- and Famous-face conditions. Thus, visual search for the self-face is more efficient than for famous faces, and self-face recognition is spared in schizophrenia. These findings suggest that impaired self-processing in schizophrenia may be task-dependent rather than ubiquitous.