Stochastic accumulator models account for response times and errors in perceptual decision making by assuming a noisy accumulation of perceptual evidence to a threshold. Previously, we explained saccade visual search decision making by macaque monkeys with a stochastic multiaccumulator model in which accumulation was driven by a gated feed-forward integration to threshold of spike trains from visually responsive neurons in frontal eye field that signal stimulus salience. This neurally constrained model quantitatively accounted for response times and errors in visual search for a target among varying numbers of distractors and replicated the dynamics of presaccadic movement neurons hypothesized to instantiate evidence accumulation. This modeling framework suggested strategic control over gate or over threshold as two potential mechanisms to accomplish speed-accuracy tradeoff (SAT). Here, we show that our gated accumulator model framework can account for visual search performance under SAT instructions observed in a milestone neurophysiological study of frontal eye field. This framework captured key elements of saccade search performance through observed modulations of neural input, together with the flexible combinations of gate and threshold parameters necessary to explain differences in SAT strategy across monkeys. However, the trajectories of the model accumulators deviated from the dynamics of most presaccadic movement neurons. These findings demonstrate that traditional theoretical accounts of SAT are incomplete descriptions of the underlying neural adjustments that accomplish SAT, offer a novel mechanistic account of decision making during speed-accuracy tradeoff, and highlight questions regarding the identity of model and neural accumulators.

NEW & NOTEWORTHY A gated accumulator model is used to elucidate neurocomputational mechanisms of speed-accuracy tradeoff.
Whereas canonical stochastic accumulators adjust strategy only through variation of an accumulation threshold, we demonstrate that strategic adjustments are accomplished by flexible combinations of both modulation of the evidence representation and adaptation of accumulator gate and threshold. The results indicate how model-based cognitive neuroscience can translate between abstract cognitive models of performance and neural mechanisms of speed-accuracy tradeoff.
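The core mechanism described above, accumulators racing to threshold while a gate subtracts from the incoming evidence, can be illustrated with a minimal simulation. This is a hedged sketch, not the fitted model from the study: the parameter names (`gate`, `threshold`, `noise`) and their values are illustrative, and the actual model was driven by recorded frontal eye field spike trains rather than constant drift rates.

```python
import numpy as np

def gated_race(drifts, gate=0.2, threshold=1.0, noise=0.1,
               dt=0.001, max_t=2.0, rng=None):
    """Race among gated stochastic accumulators (illustrative sketch).

    Each accumulator integrates (input - gate), rectified at zero, plus
    Gaussian noise; the first to reach threshold determines the response,
    and its crossing time the response time (RT).
    """
    rng = np.random.default_rng(rng)
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)
    t = 0.0
    while t < max_t:
        dx = (np.maximum(drifts - gate, 0.0) * dt
              + noise * np.sqrt(dt) * rng.standard_normal(drifts.size))
        x = np.maximum(x + dx, 0.0)      # accumulation cannot go negative
        t += dt
        if np.any(x >= threshold):
            return int(np.argmax(x)), t  # (winning unit, RT)
    return None, max_t                    # no crossing: response omission
```

Raising `gate` or `threshold` slows responses but filters out weak (distractor-driven) inputs, which is the sense in which flexible combinations of the two can trade speed against accuracy.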
An ever-growing literature has aimed to determine how individuals with autism spectrum disorder (ASD) differ from their typically developing (TD) peers on measures of multisensory integration (MSI) and to ascertain the degree to which differences in MSI are associated with the broad range of symptoms associated with ASD. Findings, however, have been highly variable across the studies carried out to date. The present work systematically reviews and quantitatively synthesizes the large literature on audiovisual MSI in individuals with ASD to evaluate the cumulative evidence for (a) group differences between individuals with ASD and TD peers, (b) correlations between MSI and autism symptoms in individuals with ASD, and (c) study-level factors that may moderate findings (i.e., explain differential effects) observed across studies. To identify eligible studies, a comprehensive search strategy was employed using the ProQuest search engine, PubMed database, forward and backward citation searches, direct author contact, and hand-searching of select conference proceedings. A significant between-group difference in MSI was evident in the literature, with individuals with ASD demonstrating worse audiovisual integration on average across studies compared to TD controls. This effect was moderated by mean participant age, such that between-group differences were more pronounced in younger samples. The mean correlation between MSI and autism and related symptomatology was also significant, indicating that increased audiovisual integration in individuals with ASD is associated with better language/communication abilities and/or reduced autism symptom severity in the extant literature. This effect was moderated by whether the stimuli were linguistic versus non-linguistic in nature, such that correlation magnitudes tended to be significantly greater when linguistic stimuli were utilized in the measure of MSI.
Limitations and future directions for primary and meta-analytic research are discussed.
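The pooling step in a meta-analysis of this kind can be sketched with the standard DerSimonian-Laird random-effects estimator. This is an assumption on my part; the abstract does not state which model was used, and the moderator analyses described would require an extension (meta-regression) beyond this minimal version.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size (DerSimonian-Laird).

    effects: per-study effect sizes (e.g., standardized mean differences);
    variances: their sampling variances. Returns (pooled, SE, tau^2).
    """
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (y.size - 1)) / c)     # between-study variance
    w_star = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

When the studies are homogeneous, `tau2` collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) result.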
Functional magnetic resonance imaging (fMRI) depicts neural activity in the brain indirectly by measuring blood oxygenation level dependent (BOLD) signals. The majority of fMRI studies have focused on detecting cortical activity in gray matter (GM), but whether functional BOLD signal changes also arise in white matter (WM), and whether neural activities trigger hemodynamic changes in WM similarly to GM, remain controversial, particularly in light of the much lower vascular density in WM. However, BOLD effects in WM are readily detected under hypercapnic challenges, and the number of reports supporting reliable detections of stimulus-induced activations in WM continues to grow. Rather than assume a particular hemodynamic response function, we used a voxel-by-voxel analysis of frequency spectra in WM to detect WM activations under visual stimulation, whose locations were validated with fiber tractography using diffusion tensor imaging (DTI). We demonstrate that specific WM regions are robustly activated in response to visual stimulation, and that regional distributions of WM activation are consistent with fiber pathways reconstructed using DTI. We further examined the variation in the concordance between WM activation and fiber density in groups of different sample sizes, and compared the signal profiles of BOLD time series between resting state and visual stimulation conditions in activated GM as well as activated and non-activated WM regions. Our findings confirm that BOLD signal variations in WM are modulated by neural activity and are detectable with conventional fMRI using appropriate methods, thus offering the potential of expanding functional connectivity measurements throughout the brain.
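The frequency-spectrum approach described above can be illustrated with a simple per-voxel statistic: the fraction of spectral power concentrated at the stimulation frequency of a block design. This is a minimal stand-in, not the study's actual pipeline; the function name, the band half-width, and the power-fraction criterion are all assumptions for illustration.

```python
import numpy as np

def task_frequency_power(ts, tr, task_freq, band=0.01):
    """Fraction of BOLD spectral power within +/- band (Hz) of the task
    frequency. ts: 1-D voxel time series; tr: repetition time (s);
    task_freq: block-design fundamental frequency (Hz). Illustrative only.
    """
    ts = np.asarray(ts, float)
    ts = ts - ts.mean()                        # remove DC component
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    power = np.abs(np.fft.rfft(ts)) ** 2
    in_band = np.abs(freqs - task_freq) <= band
    return power[in_band].sum() / power.sum()
```

A voxel whose signal follows the stimulation cycle yields a fraction near 1, while white noise spreads power across the whole spectrum, which is what makes a spectral statistic usable without assuming a particular hemodynamic response function.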
For many cochlear implant (CI) users, visual cues are vitally important for interpreting the impoverished auditory speech information that an implant conveys. Although the temporal relationship between auditory and visual stimuli is crucial for how this information is integrated, audiovisual temporal processing in CI users is poorly understood. In this study, we tested unisensory (auditory alone, visual alone) and multisensory (audiovisual) temporal processing in postlingually deafened CI users (n = 48) and normal-hearing controls (n = 54) using simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. We varied the timing onsets between the auditory and visual components of either a syllable/viseme or a simple flash/beep pairing, and participants indicated either which stimulus appeared first (TOJ) or if the pair occurred simultaneously (SJ). Results indicate that temporal binding windows (the interval within which stimuli are likely to be perceptually 'bound') are not significantly different between groups for either speech or non-speech stimuli. However, the point of subjective simultaneity for speech was less visually leading in CI users, who, interestingly, also had improved visual-only TOJ thresholds. Further signal detection analysis suggests that this SJ shift may be due to greater visual bias within the CI group, perhaps reflecting heightened attentional allocation to visual cues.
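The two quantities central to this design, the point of subjective simultaneity (PSS) and the temporal binding window (TBW), can be estimated from SJ data. The sketch below uses a crude moment-based estimate on the response proportions; studies like this one typically fit Gaussian or sigmoid functions instead, so treat the function and its window definition as illustrative assumptions.

```python
import numpy as np

def tbw_from_sj(soas, p_simultaneous):
    """Moment-based PSS and TBW estimates from simultaneity judgments.

    soas: stimulus onset asynchronies (ms; sign convention up to the user);
    p_simultaneous: proportion of 'simultaneous' responses at each SOA.
    Treats the response proportions as a distribution over SOA: the mean
    gives the PSS and a +/- 1 SD span gives a simple TBW proxy.
    """
    soas = np.asarray(soas, float)
    p = np.asarray(p_simultaneous, float)
    w = p / p.sum()
    pss = np.sum(w * soas)
    tbw = 2.0 * np.sqrt(np.sum(w * (soas - pss) ** 2))
    return pss, tbw
```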
Avoiding distraction by conspicuous but irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured in modulation of both single neurons and event-related electrical potentials (ERPs) on the cranial surface [1, 2]. When irrelevant information is particularly conspicuous, it can distract attention and interfere with the selection of behaviorally relevant information. Such distraction can be minimized via top-down control [3-5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and debated [6-9]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and ERPs over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, when the salient distractor was successfully ignored, but not otherwise, we observed robust suppression of salient distractor representations. Like target selection, the distractor suppression was observed in prefrontal cortex before it appeared over extrastriate cortical areas. Furthermore, all prefrontal neurons that showed suppression of the task-irrelevant distractor also contributed to selecting the target. This suggests a common prefrontal mechanism is responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction, and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression.
Cortical stimulation mapping (CSM) has provided important insights into the neuroanatomy of language because of its high spatial and temporal resolution, and the causal relationships that can be inferred from transient disruption of specific functions. Almost all CSM studies to date have focused on word-level processes such as naming, comprehension, and repetition. In this study, we used CSM to identify sites where stimulation interfered selectively with syntactic encoding during sentence production. Fourteen patients undergoing left-hemisphere neurosurgery participated in the study. In 7 of the 14 patients, we identified nine sites where cortical stimulation interfered with syntactic encoding but did not interfere with single word processing. All nine sites were localized to the inferior frontal gyrus, mostly to the pars triangularis and opercularis. Interference with syntactic encoding took several different forms, including misassignment of arguments to grammatical roles, misassignment of nouns to verb slots, omission of function words and inflectional morphology, and various paragrammatic constructions. Our findings suggest that the left inferior frontal gyrus plays an important role in the encoding of syntactic structure during sentence production.
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors, spatial location and effectiveness, in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions.
These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
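The race-model test mentioned above rests on Miller's inequality: under a race with no integration, the multisensory RT distribution cannot exceed the sum of the unisensory ones, F_AV(t) <= F_A(t) + F_V(t). A minimal empirical-CDF version is sketched below; the exact correction and probe grid used in the study may differ.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, probe_ts):
    """Maximum violation of Miller's race-model inequality.

    rt_a, rt_v, rt_av: RT samples (auditory, visual, audiovisual);
    probe_ts: time points at which to evaluate the CDFs. A positive
    return value means the multisensory CDF exceeds the race-model
    bound somewhere, consistent with integration of the inputs.
    """
    def ecdf(sample, ts):
        sample = np.sort(np.asarray(sample, float))
        return np.searchsorted(sample, ts, side='right') / sample.size

    bound = np.minimum(ecdf(rt_a, probe_ts) + ecdf(rt_v, probe_ts), 1.0)
    return np.max(ecdf(rt_av, probe_ts) - bound)
```

In practice the violation is assessed at the fast tail of the RT distribution, where redundant-target responses outrun anything a probability-summation race could produce.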
Some vertebrate species have evolved means of extending their visual sensitivity beyond the range of human vision. One mechanism of enhancing sensitivity to long-wavelength light is to replace the 11-cis retinal chromophore in photopigments with 11-cis 3,4-didehydroretinal. Despite over a century of research on this topic, the enzymatic basis of this perceptual switch remains unknown. Here, we show that a cytochrome P450 family member, Cyp27c1, mediates this switch by converting vitamin A1 (the precursor of 11-cis retinal) into vitamin A2 (the precursor of 11-cis 3,4-didehydroretinal). Knockout of cyp27c1 in zebrafish abrogates production of vitamin A2, eliminating the animal's ability to red-shift its photoreceptor spectral sensitivity and reducing its ability to see and respond to near-infrared light. Thus, the expression of a single enzyme mediates dynamic spectral tuning of the entire visual system by controlling the balance of vitamin A1 and A2 in the eye.