The publication data currently available have been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data are retrieved directly from NCBI's PubMed and automatically updated weekly to ensure accuracy and completeness.
The left ventral occipitotemporal cortex (vOT) is a critical region in reading. According to the interactive account of reading, the vOT is an interface between lower-level visual regions and higher-level language areas. One prediction of the interactive account is that orthographic activation in vOT should be automatically influenced by semantics and phonology. In the current study, we used functional magnetic resonance imaging (fMRI) and a masked priming paradigm with a relatively short prime duration (150 ms) to examine whether language information automatically influences vOT during Chinese reading. Participants performed a lexical decision task on target characters, and we separately tested the phonological and semantic influences on orthographic processing in vOT. Brain activation analyses showed that the activation of vOT was modulated by semantic information. In addition, a functional connectivity analysis showed that connectivity between vOT and the left ventral inferior frontal gyrus was modulated by semantic information. These findings provide converging evidence for an automatic semantic influence on vOT during reading, supporting the interactive account. Our study showed no phonological effect in either the activation of, or the connectivity with, vOT. Taken together, these results reflect processes unique to Chinese reading, which relies more on the mapping from orthography to semantics than on the mapping from orthography to phonology.
Copyright © 2019 Elsevier Ltd. All rights reserved.
BACKGROUND - Learning and memory are impaired in schizophrenia. Some theories have proposed that one form of memory, habituation, is particularly impaired. Preliminary evidence suggests that memory impairment is associated with failed hippocampal habituation in patients with chronic schizophrenia. We studied how abnormal habituation of the hippocampus is related to relational memory deficits in the early stage of psychosis.
METHODS - We measured hippocampal activity in 62 patients with early psychosis and 70 healthy individuals using functional magnetic resonance imaging. Habituation was defined as the slope of functional magnetic resonance imaging signal change to repeated presentations of faces and objects. Relational memory ability was measured as the slope of preferential viewing during a face-scene pair eye movement task outside the scanner.
RESULTS - Patients with early psychosis showed impaired relational memory (p < .001) and less hippocampal habituation to objects (p = .01) than healthy control subjects. In the healthy control group, better relational memory was associated with faster anterior hippocampal habituation (faces, r = -.28, p = .03). In contrast, patients with early psychosis showed no brain-behavior relationship (r = .12, p = .40).
CONCLUSIONS - We found evidence for disrupted hippocampal habituation in the early stage of psychosis along with an altered association between hippocampal habituation and relational memory ability. These results suggest that neural habituation may provide a novel target for early cognitive interventions in psychosis.
Copyright © 2019 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
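The habituation measure defined in the abstract above — the slope of the fMRI signal across repeated stimulus presentations — can be sketched as an ordinary least-squares fit. The signal values below are purely illustrative, not data from the study.

```python
import numpy as np

def habituation_slope(signal):
    """Least-squares slope of a BOLD signal across repeated presentations.

    A more negative slope indicates faster habituation
    (the response decreasing with repetition).
    """
    presentations = np.arange(len(signal))
    slope, _intercept = np.polyfit(presentations, signal, 1)
    return slope

# Illustrative values only: a response that decays with repetition
# versus one that does not habituate.
decaying = habituation_slope([1.0, 0.7, 0.5, 0.4, 0.35])
flat = habituation_slope([1.0, 0.98, 1.02, 0.99, 1.01])
print(decaying < flat)
```

In this framing, "less habituation" in patients corresponds to a slope closer to zero than that of healthy controls.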
Accurate estimates of the BOLD hemodynamic response function (HRF) are crucial for the interpretation and analysis of event-related functional MRI data. To date, however, there have been no comprehensive measurements of the HRF in white matter (WM), despite increasing evidence that BOLD signals in WM change after a stimulus. We performed an event-related cognitive task (Stroop color-word interference) to measure the HRF in selected human WM pathways. The task was chosen to produce robust, distributed centers of activity throughout the cortex. To measure the HRF in WM, fiber tracts were reconstructed between each pair of activated cortical areas. We observed clear task-specific HRFs with reduced magnitudes, delayed onsets, and prolonged initial dips in WM tracts compared with activated grey matter. These findings call for significant changes to current standard models in order to characterize HRFs in WM accurately, and for modifications to standard methods of analyzing functional imaging data.
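The qualitative contrast reported above — WM responses with reduced magnitude and delayed onset relative to grey matter — can be sketched against the standard double-gamma HRF. The WM parameter values here (scaling, delay, peak) are assumptions chosen only to illustrate the reported shape differences, not fitted values from the study.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6.0):
    """Standard double-gamma HRF (SPM-style default parameters)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

t = np.arange(0.0, 30.0, 0.1)
gm = canonical_hrf(t)

# Illustrative WM variant with the qualitative features reported:
# reduced magnitude and delayed onset (parameter values assumed).
wm = 0.5 * canonical_hrf(t - 2.0, peak=7.0)

print(t[np.argmax(gm)], t[np.argmax(wm)])  # WM peaks later
```

Standard analyses that convolve regressors with the grey-matter HRF would mis-model WM responses of this shape, which is the motivation for the modified models the authors call for.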
Visual object expertise correlates with neural selectivity in the fusiform face area (FFA). Although behavioral studies suggest that visual expertise is associated with increased use of holistic and configural information, little is known about the nature of the supporting neural representations. Using high-resolution 7-T functional magnetic resonance imaging, we recorded the multivoxel activation patterns elicited by whole cars, configurally disrupted cars, and car parts in individuals with a wide range of car expertise. A probabilistic support vector machine classifier was trained to differentiate activation patterns elicited by whole car images from activation patterns elicited by misconfigured car images. The classifier was then used to classify new combined activation patterns that were created by averaging activation patterns elicited by individually presented top and bottom car parts. In line with the idea that the configuration of parts is critical to expert visual perception, car expertise was negatively associated with the probability of a combined activation pattern being classified as a whole car in the right anterior FFA, a region critical to vision for categories of expertise. Thus, just as found for faces in normal observers, the neural representation of cars in right anterior FFA is more holistic for car experts than car novices, consistent with common mechanisms of neural selectivity for faces and other objects of expertise in this area.
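The classification logic described above — train a probabilistic classifier on whole versus misconfigured patterns, then score averaged part patterns by their probability of being classified as whole — can be sketched with scikit-learn. Everything below is a synthetic stand-in: the pattern dimensions, trial counts, and linear kernel are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for multivoxel activation patterns:
# 40 "whole car" trials and 40 "misconfigured car" trials, 50 voxels each.
whole = rng.normal(0.5, 1.0, size=(40, 50))
misconfigured = rng.normal(-0.5, 1.0, size=(40, 50))

X = np.vstack([whole, misconfigured])
y = np.array([1] * 40 + [0] * 40)  # 1 = whole, 0 = misconfigured

# Probabilistic SVM classifier (kernel choice assumed)
clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)

# Combined patterns: average of patterns evoked by separately
# presented top and bottom car parts (again synthetic here).
top_parts = rng.normal(0.2, 1.0, size=(20, 50))
bottom_parts = rng.normal(0.2, 1.0, size=(20, 50))
combined = (top_parts + bottom_parts) / 2.0

# Probability that each combined pattern is classified as a whole car
whole_col = clf.classes_.tolist().index(1)
p_whole = clf.predict_proba(combined)[:, whole_col]
print(p_whole.mean())
```

In the study's logic, a lower probability of classifying a part-average as "whole" indicates a more holistic (less part-decomposable) representation, which is the quantity reported to decrease with car expertise in right anterior FFA.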
While much research has focused on understanding how individual stimuli are encoded in episodic memory, less is known about how a series of events is bound into a coherent episode. Cognitive models of episodic memory propose that information about presented stimuli is integrated into a composite representation reflecting one's past experience, allowing events separated in time to become associated. Recent evidence suggests that neural oscillatory activity may be critically involved in this process. To examine how oscillatory activity contributes to binding of information across events, we measured scalp EEG as participants studied categorized lists of people, places, and objects. We assessed their memory for the lists using free recall, allowing us to characterize the temporal and semantic organization of the studied items in memory. Using pattern classification, we identified EEG activity during encoding at a range of frequencies and scalp locations that was sensitive to the category of presented stimuli. In the beta band (16-25 Hz) at right posterior electrodes, we observed activity that was also sensitive to the category of recently presented stimuli. This neural activity showed two characteristics consistent with a representation of the recent past: It became stronger when multiple items from the same category were presented in succession, and it contained a fading trace of the previous category after a category shift. When items were separated by an inter-item distraction task, this integrative beta-band activity was disrupted. Distraction also led to decreased semantic organization of the studied materials without affecting their temporal organization; this suggests that distraction disrupts the integration of semantic information over time, preventing encoding of items in terms of the semantic context of earlier items.
Our results provide evidence that beta-band activity is involved in maintaining information about recent events, allowing construction of a coherent representation of a temporally extended episode in memory.
Copyright © 2017 Elsevier Inc. All rights reserved.
Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address a puzzle: Why does a shared resource across different visual domains not lead to competition and an inverse correlation in abilities? We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects.
The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24, 2008].
The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to nonface objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects but rather living and nonliving objects.
Spatial resolution fundamentally limits any image representation. Although this limit has been extensively investigated for perceptual representations by assessing how neighboring flankers degrade the perception of a peripheral target with visual crowding, the corresponding limit for representations held in visual working memory (VWM) is unknown. In the present study, we evoked crowding in VWM and directly compared resolution in VWM and perception. Remarkably, the spatial resolution of VWM proved to be no worse than that of perception. However, mixture modeling of errors caused by crowding revealed the qualitatively distinct nature of these representations. Perceptual crowding errors arose from both increased imprecision in target representations and substitution of flankers for targets. By contrast, VWM crowding errors arose exclusively from substitutions, which suggests that VWM transforms analog perceptual representations into discrete items. Thus, although perception and VWM share a common resolution limit, exceeding this limit reveals distinct mechanisms for perceiving images and holding them in mind.
© The Author(s) 2015.
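The mixture-modeling logic in the abstract above distinguishes two sources of crowding error: increased imprecision of target reports versus substitution of a flanker for the target. The simulation below is a deliberately simplified, illustrative stand-in (a Gaussian two-component mixture rather than the von Mises mixture typically fitted in such analyses); all parameter values are assumptions, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_errors(n, p_substitute, sd, flanker_offset):
    """Simulate report errors from a two-component mixture:
    target reports with Gaussian imprecision (sd), plus a fraction
    p_substitute of trials where the flanker, offset from the target
    by flanker_offset, is reported instead.
    """
    substituted = rng.random(n) < p_substitute
    errors = rng.normal(0.0, sd, size=n)
    errors[substituted] += flanker_offset
    return errors

# Perception-like crowding: imprecision AND substitutions (values assumed)
perception = simulate_errors(10_000, p_substitute=0.2, sd=15.0,
                             flanker_offset=60.0)
# VWM-like crowding: substitutions only, with precision left intact
vwm = simulate_errors(10_000, p_substitute=0.2, sd=8.0,
                      flanker_offset=60.0)

# A crude readout: mean absolute error reflects both mechanisms
print(np.mean(np.abs(perception)), np.mean(np.abs(vwm)))
```

Fitting such a mixture to observed errors yields separate estimates of the substitution rate and the report precision, which is how the abstract's contrast (substitution-only errors in VWM versus both mechanisms in perception) can be quantified.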
The M1 muscarinic acetylcholine receptor (mAChR) subtype has been implicated in the underlying mechanisms of learning and memory and represents an important potential pharmacotherapeutic target for the cognitive impairments observed in neuropsychiatric disorders such as schizophrenia. Patients with schizophrenia show impairments in top-down processing involving conflict between sensory-driven and goal-oriented processes that can be modeled in preclinical studies using touchscreen-based cognition tasks. The present studies used a touchscreen visual pairwise discrimination task in which mice discriminated between a less salient and a more salient stimulus to assess the influence of the M1 mAChR on top-down processing. M1 mAChR knockout (M1 KO) mice showed a slower rate of learning, evidenced by slower increases in accuracy over 12 consecutive days, and required more days to acquire (achieve 80% accuracy) this discrimination task compared to wild-type mice. In addition, the M1 positive allosteric modulator BQCA enhanced the rate of learning this discrimination in wild-type, but not in M1 KO, mice when BQCA was administered daily prior to testing over 12 consecutive days. Importantly, in discriminations between stimuli of equal salience, M1 KO mice did not show impaired acquisition and BQCA did not affect the rate of learning or acquisition in wild-type mice. These studies are the first to demonstrate performance deficits in M1 KO mice using touchscreen cognitive assessments and enhanced rate of learning and acquisition in wild-type mice through M1 mAChR potentiation when the touchscreen discrimination task involves top-down processing. Taken together, these findings provide further support for M1 potentiation as a potential treatment for the cognitive symptoms associated with schizophrenia.
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context.
© 2015 APA, all rights reserved.