Functional magnetic resonance imaging (fMRI) depicts neural activity in the brain indirectly by measuring blood oxygenation level dependent (BOLD) signals. The majority of fMRI studies have focused on detecting cortical activity in gray matter (GM), but whether functional BOLD signal changes also arise in white matter (WM), and whether neural activity triggers hemodynamic changes in WM similarly to GM, remain controversial, particularly in light of the much lower vascular density in WM. However, BOLD effects in WM are readily detected under hypercapnic challenges, and the number of reports supporting reliable detection of stimulus-induced activations in WM continues to grow. Rather than assume a particular hemodynamic response function, we used a voxel-by-voxel analysis of frequency spectra in WM to detect WM activations under visual stimulation, and validated their locations with fiber tractography using diffusion tensor imaging (DTI). We demonstrate that specific WM regions are robustly activated in response to visual stimulation, and that regional distributions of WM activation are consistent with fiber pathways reconstructed using DTI. We further examined the variation in the concordance between WM activation and fiber density in groups of different sample sizes, and compared the signal profiles of BOLD time series between resting state and visual stimulation conditions in activated GM as well as activated and non-activated WM regions. Our findings confirm that BOLD signal variations in WM are modulated by neural activity and are detectable with conventional fMRI using appropriate methods, thus offering the potential of expanding functional connectivity measurements throughout the brain.
Copyright © 2018 Elsevier Inc. All rights reserved.
Avoiding distraction by conspicuous but irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured in modulation of both single neurons and event-related electrical potentials (ERPs) on the cranial surface [1, 2]. When irrelevant information is particularly conspicuous, it can distract attention and interfere with the selection of behaviorally relevant information. Such distraction can be minimized via top-down control [3-5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and debated [6-9]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and ERPs over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, when the salient distractor was successfully ignored, but not otherwise, we observed robust suppression of salient distractor representations. Like target selection, the distractor suppression was observed in prefrontal cortex before it appeared over extrastriate cortical areas. Furthermore, all prefrontal neurons that showed suppression of the task-irrelevant distractor also contributed to selecting the target. This suggests a common prefrontal mechanism is responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction, and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression.
Copyright © 2017 Elsevier Ltd. All rights reserved.
Cortical stimulation mapping (CSM) has provided important insights into the neuroanatomy of language because of its high spatial and temporal resolution, and the causal relationships that can be inferred from transient disruption of specific functions. Almost all CSM studies to date have focused on word-level processes such as naming, comprehension, and repetition. In this study, we used CSM to identify sites where stimulation interfered selectively with syntactic encoding during sentence production. Fourteen patients undergoing left-hemisphere neurosurgery participated in the study. In 7 of the 14 patients, we identified nine sites where cortical stimulation interfered with syntactic encoding but did not interfere with single word processing. All nine sites were localized to the inferior frontal gyrus, mostly to the pars triangularis and opercularis. Interference with syntactic encoding took several different forms, including misassignment of arguments to grammatical roles, misassignment of nouns to verb slots, omission of function words and inflectional morphology, and various paragrammatic constructions. Our findings suggest that the left inferior frontal gyrus plays an important role in the encoding of syntactic structure during sentence production.
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support for the conclusion that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
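The race-model test mentioned in this abstract compares the multisensory RT distribution against the bound predicted by two independent unisensory channels (Miller's inequality). A minimal sketch of that comparison — illustrative only, not the study's actual analysis code — might look like:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Evaluate Miller's race-model inequality at the times in t_grid.

    Under a race between independent auditory and visual channels,
    F_AV(t) <= min(F_A(t) + F_V(t), 1) for every t.  Positive return
    values mean the multisensory CDF exceeds that bound, implying
    genuine integration rather than mere statistical facilitation.
    """
    rt_a, rt_v, rt_av = (np.asarray(x, dtype=float) for x in (rt_a, rt_v, rt_av))
    t_grid = np.asarray(t_grid, dtype=float)

    def cdf(rts):
        # Empirical CDF: fraction of trials with RT <= t, for each t in t_grid.
        return np.mean(rts[:, None] <= t_grid[None, :], axis=0)

    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    return cdf(rt_av) - bound
```

In practice the inequality is usually evaluated at RT quantiles and tested statistically across participants; this sketch only computes the pointwise difference from the bound.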
Copyright © 2016 Elsevier Ltd. All rights reserved.
Some vertebrate species have evolved means of extending their visual sensitivity beyond the range of human vision. One mechanism of enhancing sensitivity to long-wavelength light is to replace the 11-cis retinal chromophore in photopigments with 11-cis 3,4-didehydroretinal. Despite over a century of research on this topic, the enzymatic basis of this perceptual switch remains unknown. Here, we show that a cytochrome P450 family member, Cyp27c1, mediates this switch by converting vitamin A1 (the precursor of 11-cis retinal) into vitamin A2 (the precursor of 11-cis 3,4-didehydroretinal). Knockout of cyp27c1 in zebrafish abrogates production of vitamin A2, eliminating the animal's ability to red-shift its photoreceptor spectral sensitivity and reducing its ability to see and respond to near-infrared light. Thus, the expression of a single enzyme mediates dynamic spectral tuning of the entire visual system by controlling the balance of vitamin A1 and A2 in the eye.
Copyright © 2015 Elsevier Ltd. All rights reserved.
The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge.
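The unidimensional three-parameter logistic (3PL) model referenced above expresses the probability of a correct item response as a function of latent ability and three item parameters; DIF analysis then asks whether that curve differs between groups at matched ability. A brief sketch with illustrative parameter values (not VETcar estimates):

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve.

    theta: latent ability; a: item discrimination; b: item difficulty;
    c: lower asymptote (pseudo-guessing).  Returns P(correct | theta).
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# An item shows differential item functioning (DIF) when the curves
# fitted separately in two groups (e.g., younger vs. older subjects)
# differ at the same theta -- hypothetical parameters for illustration:
p_younger = p_correct_3pl(0.0, a=1.5, b=-0.5, c=0.2)
p_older = p_correct_3pl(0.0, a=1.5, b=0.4, c=0.2)
```

Here the lower difficulty (b) in the younger group yields a higher probability of a correct response at the same ability level, which is the pattern described as items "favoring" one group.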
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequences of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g., speech vs. nonspeech), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications and remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
How much do people differ in their abilities to recognize objects, and what is the source of these differences? To address the first question, psychologists have created visual learning tests including the Cambridge Face Memory Test (Duchaine & Nakayama, 2006) and the Vanderbilt Expertise Test (VET; McGugin et al., 2012). The second question requires consideration of the influences of both innate potential and experience, but experience is difficult to measure. One solution is to measure the products of experience beyond perceptual knowledge, specifically nonvisual semantic knowledge. For instance, the relation between semantic and perceptual knowledge can help clarify the nature of object recognition deficits in brain-damaged patients (Barton, Hanif, & Ashraf, Brain, 132, 3456-3466, 2009). We present a reliable measure of nonperceptual knowledge in a format applicable across categories. The Semantic Vanderbilt Expertise Test (SVET) measures knowledge of relevant category-specific nomenclature. We present SVETs for eight categories: cars, planes, Transformers, dinosaurs, shoes, birds, leaves, and mushrooms. The SVET demonstrated good reliability and domain-specific validity. We found partial support for the idea that the only source of domain-specific shared variance between the VET and SVET is experience with a category. We also demonstrated the utility of the SVET-Bird in experts. The SVET can facilitate the study of individual differences in visual recognition.
Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically.