About this data

The publication data currently available have been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data are retrieved directly from NCBI's PubMed and are updated automatically each week to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 54

Vocal Communication With Canonical Syllables Predicts Later Expressive Language Skills in Preschool-Aged Children With Autism Spectrum Disorder.
McDaniel J, Woynaroski T, Keceli-Kaysili B, Watson LR, Yoder P
(2019) J Speech Lang Hear Res 62: 3826-3833
MeSH Terms: Autism Spectrum Disorder, Child Language, Child, Preschool, Communication, Female, Humans, Language Development Disorders, Longitudinal Studies, Male, Phonetics, Regression Analysis, Speech Production Measurement
Added March 18, 2020
Purpose - We examined associations between vocal communication with canonical syllables and expressive language and then examined 2 potential alternative explanations for such associations.
Method - Specifically, we tested whether the associations remained when excluding canonical syllables in identifiable words and controlling for the number of communication acts. Participants included 68 preverbal or low verbal children with autism spectrum disorder (M = 35.26 months).
Results - Vocal communication with canonical syllables and expressive language were concurrently and longitudinally associated, with moderate to strong (rs = .13-.70) and significant (ps < .001) effect sizes. Even when excluding spoken words from the vocal predictor and controlling for the number of communication acts, vocal communication with canonical syllables predicted expressive language.
Conclusions - The findings provide increased support for measuring vocal communication with canonical syllables and for examining a causal relation between vocal communication with canonical syllables and expressive language in children with ASD who are preverbal or low verbal. In future studies, it may be unnecessary to eliminate identifiable words when measuring vocal communication in this population. Following replication, vocal communication with canonical syllables may be considered when making intervention-planning decisions.
0 Communities · 2 Members · 0 Resources · 12 MeSH Terms
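
As an editorial illustration of the control analysis described in this abstract (not code from the study), the sketch below computes a partial correlation between a vocal predictor and an expressive language score while holding the number of communication acts constant. All variable names and data are simulated.

    import numpy as np
    from scipy import stats

    def partial_corr(x, y, covariate):
        # Correlate x and y after regressing each on the covariate.
        def residualize(v, c):
            slope, intercept = np.polyfit(c, v, 1)
            return v - (slope * c + intercept)
        return stats.pearsonr(residualize(x, covariate), residualize(y, covariate))

    rng = np.random.default_rng(0)
    n = 68                                            # sample size from the abstract
    acts = rng.poisson(30, n).astype(float)           # simulated communication acts
    canonical = 0.4 * acts + rng.normal(0, 3, n)      # simulated canonical-syllable acts
    language = 0.5 * canonical + rng.normal(0, 3, n)  # simulated expressive language
    r, p = partial_corr(canonical, language, acts)
    print(f"partial r = {r:.2f}, p = {p:.4f}")
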
Multivariate Approaches to Understanding Aphasia and its Neural Substrates.
Wilson SM, Hula WD
(2019) Curr Neurol Neurosci Rep 19: 53
MeSH Terms: Aged, Aphasia, Brain, Brain Mapping, Female, Humans, Language, Language Tests, Magnetic Resonance Imaging, Male, Middle Aged, Speech, Stroke
Added March 30, 2020
PURPOSE OF REVIEW - Aphasia is often characterized in terms of subtype and severity, yet these constructs have limited explanatory power, because aphasia is inherently multifactorial both in its neural substrates and in its symptomatology. The purpose of this review is to survey current and emerging multivariate approaches to understanding aphasia.
RECENT FINDINGS - Techniques such as factor analysis and principal component analysis have been used to define latent underlying factors that can account for performance on batteries of speech and language tests, and for characteristics of spontaneous speech production. Multivariate lesion-symptom mapping has been shown to outperform univariate approaches to lesion-symptom mapping for identifying brain regions where damage is associated with specific speech and language deficits. It is increasingly clear that structural damage results in functional changes in wider neural networks, which mediate speech and language outcomes.
SUMMARY - Multivariate statistical approaches are essential for understanding the complex relationships between the neural substrates of aphasia and the resultant profiles of speech and language function.
0 Communities · 1 Member · 0 Resources · 13 MeSH Terms
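
The review above discusses factor-analytic approaches in general terms. As a hedged sketch, not drawn from any cited paper, the snippet below applies principal component analysis to a simulated battery of test scores to recover latent factors of the kind the review describes.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_patients, n_tests = 100, 12
    latent = rng.normal(size=(n_patients, 2))               # two simulated underlying factors
    loadings = rng.normal(size=(2, n_tests))
    scores = latent @ loadings + rng.normal(0, 0.5, (n_patients, n_tests))

    pca = PCA(n_components=2)
    factor_scores = pca.fit_transform(scores)               # per-patient factor scores
    print("variance explained:", pca.explained_variance_ratio_.round(2))
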
Auditory-Perceptual Rating of Connected Speech in Aphasia.
Casilio M, Rising K, Beeson PM, Bunton K, Wilson SM
(2019) Am J Speech Lang Pathol 28: 550-568
MeSH Terms: Aged, Aphasia, Feasibility Studies, Female, Humans, Judgment, Male, Middle Aged, Observer Variation, Predictive Value of Tests, Reproducibility of Results, Speech, Speech Perception, Speech Production Measurement, Speech-Language Pathology, Voice Quality
Added March 30, 2020
Purpose - Auditory-perceptual assessment, in which trained listeners rate a large number of perceptual features of speech samples, is the gold standard for the differential diagnosis of motor speech disorders. The goal of this study was to investigate the feasibility of applying a similar, formalized auditory-perceptual approach to the assessment of language deficits in connected speech samples from individuals with aphasia.
Method - Twenty-seven common features of connected speech in aphasia were defined, each of which was rated on a 5-point scale. Three experienced researchers evaluated 24 connected speech samples from the AphasiaBank database, and 12 student clinicians evaluated subsets of 8 speech samples each. We calculated interrater reliability for each group of raters and investigated the validity of the auditory-perceptual approach by comparing feature ratings to related quantitative measures derived from transcripts and clinical measures, and by examining patterns of feature co-occurrence.
Results - Most features were rated with good-to-excellent interrater reliability by researchers and student clinicians. Most features demonstrated strong concurrent validity with respect to quantitative connected speech measures computed from AphasiaBank transcripts and/or clinical aphasia battery subscores. Factor analysis showed that 4 underlying factors, which we labeled Paraphasia, Logopenia, Agrammatism, and Motor Speech, accounted for 79% of the variance in connected speech profiles. Examination of individual patients' factor scores revealed striking diversity among individuals classified with a given aphasia type.
Conclusion - Auditory-perceptual rating of connected speech in aphasia shows potential to be a comprehensive, efficient, reliable, and valid approach for characterizing connected speech in aphasia.
0 Communities · 1 Member · 0 Resources · 16 MeSH Terms
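
Interrater reliability of the kind reported above is commonly quantified with an intraclass correlation. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater; Shrout & Fleiss) on simulated ratings; it is an illustrative computation, not the study's analysis code.

    import numpy as np

    def icc_2_1(ratings):
        # ratings: (n_targets, k_raters) array of scores.
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)      # between-targets MS
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)      # between-raters MS
        sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                           # residual MS
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(2)
    severity = rng.uniform(0, 4, 24)                              # 24 samples, as above
    ratings = severity[:, None] + rng.normal(0, 0.5, (24, 3))     # 3 simulated raters
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
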
Remote Microphone System Use at Home: Impact on Child-Directed Speech.
Benítez-Barrera CR, Thompson EC, Angley GP, Woynaroski T, Tharpe AM
(2019) J Speech Lang Hear Res 62: 2002-2008
MeSH Terms: Child Language, Child, Preschool, Communication, Communication Aids for Disabled, Correction of Hearing Impairment, Female, Hearing Loss, Humans, Male, Speech
Added March 18, 2020
Purpose - The impact of home use of a remote microphone system (RMS) on the caregiver production of, and child access to, child-directed speech (CDS) in families with a young child with hearing loss was investigated.
Method - We drew upon extant data that were collected via Language ENvironment Analysis (LENA) recorders used with 9 families during 2 consecutive weekends (RMS weekend and no-RMS weekend). Audio recordings of primary caregivers and their children with hearing loss obtained while wearing and not wearing an RMS were manually coded to estimate the amount of CDS produced. The proportion of CDS that was likely accessible to children with hearing loss under both conditions was determined.
Results - Caregivers produced the same amount of CDS when using and when not using the RMS. However, it was concluded that children with hearing loss, on average, could potentially access 12% more CDS if caregivers used an RMS because of their distance from their children when talking to them.
Conclusion - Given our understanding of typical child language development, findings from this investigation suggest that children with hearing loss could receive auditory, speech, and language benefits from the use of an RMS in the home environment.
0 Communities · 1 Member · 0 Resources · 10 MeSH Terms
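
The 12% figure above reflects a simple accessibility calculation: CDS produced at a distance is assumed audible only when the caregiver wears the RMS. The toy numbers below are invented to make the arithmetic concrete; they are not the study's data.

    near_cds_words = 880   # hypothetical CDS produced near the child (audible either way)
    far_cds_words = 120    # hypothetical CDS produced at a distance (audible only with RMS)
    total = near_cds_words + far_cds_words

    accessible_without_rms = near_cds_words / total
    accessible_with_rms = (near_cds_words + far_cds_words) / total
    gain = accessible_with_rms - accessible_without_rms
    print(f"additional accessible CDS with RMS: {gain:.0%}")  # 12% in this toy example
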
Audiovisual Temporal Processing in Postlingually Deafened Adults with Cochlear Implants.
Butera IM, Stevenson RA, Mangus BD, Woynaroski TG, Gifford RH, Wallace MT
(2018) Sci Rep 8: 11345
MeSH Terms: Acoustic Stimulation, Adult, Aged, Auditory Perception, Cochlear Implants, Deafness, Female, Humans, Judgment, Language, Male, Middle Aged, Regression Analysis, Signal Processing, Computer-Assisted, Speech Perception, Task Performance and Analysis, Time Factors, Visual Perception, Young Adult
Added March 18, 2020
For many cochlear implant (CI) users, visual cues are vitally important for interpreting the impoverished auditory speech information that an implant conveys. Although the temporal relationship between auditory and visual stimuli is crucial for how this information is integrated, audiovisual temporal processing in CI users is poorly understood. In this study, we tested unisensory (auditory alone, visual alone) and multisensory (audiovisual) temporal processing in postlingually deafened CI users (n = 48) and normal-hearing controls (n = 54) using simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. We varied the timing onsets between the auditory and visual components of either a syllable/viseme or a simple flash/beep pairing, and participants indicated either which stimulus appeared first (TOJ) or whether the pair occurred simultaneously (SJ). Results indicate that temporal binding windows (the interval within which stimuli are likely to be perceptually 'bound') are not significantly different between groups for either speech or non-speech stimuli. However, the point of subjective simultaneity for speech was less visually leading in CI users, who, interestingly, also had improved visual-only TOJ thresholds. Further signal detection analysis suggests that this SJ shift may be due to greater visual bias within the CI group, perhaps reflecting heightened attentional allocation to visual cues.
0 Communities · 1 Member · 0 Resources · MeSH Terms
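
Simultaneity-judgment data of the kind described above are often summarized by fitting a Gaussian to the proportion of 'simultaneous' responses across stimulus onset asynchronies: the fitted center approximates the point of subjective simultaneity (PSS) and the fitted width indexes the temporal binding window. The sketch below uses invented response proportions, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, amp, pss, width):
        return amp * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

    soas = np.array([-300, -200, -100, 0, 100, 200, 300])           # ms; negative = audio leads
    p_simul = np.array([0.10, 0.35, 0.80, 0.95, 0.70, 0.30, 0.08])  # invented proportions

    (amp, pss, width), _ = curve_fit(gaussian, soas, p_simul, p0=[1.0, 0.0, 100.0])
    print(f"PSS = {pss:.1f} ms, binding window SD = {width:.1f} ms")
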
Neural representation of vowel formants in tonotopic auditory cortex.
Fisher JM, Dick FK, Levy DF, Wilson SM
(2018) Neuroimage 178: 574-582
MeSH Terms: Acoustic Stimulation, Adult, Auditory Cortex, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Phonetics, Speech Perception
Added March 26, 2019
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
Copyright © 2018 Elsevier Inc. All rights reserved.
0 Communities · 1 Member · 0 Resources · MeSH Terms
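
A minimal sketch of the classification analysis named in the abstract: linear discriminant analysis on mean signal change in two formant-based regions of interest, scored with leave-one-out cross-validation. The signal values are simulated; only the method follows the abstract.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(3)
    n_trials = 40
    labels = np.repeat([0, 1], n_trials // 2)       # 0 = [ɑ], 1 = [i]
    signal = rng.normal(0, 1, (n_trials, 2))        # mean signal change in 2 formant ROIs
    signal[labels == 0, 0] += 1.0                   # [ɑ] drives its own formant ROI more
    signal[labels == 1, 1] += 1.0                   # [i] drives its own formant ROI more

    acc = cross_val_score(LinearDiscriminantAnalysis(), signal, labels,
                          cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.0%}")
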
Predicting Receptive-Expressive Vocabulary Discrepancies in Preschool Children With Autism Spectrum Disorder.
McDaniel J, Yoder P, Woynaroski T, Watson LR
(2018) J Speech Lang Hear Res 61: 1426-1439
MeSH Terms: Autism Spectrum Disorder, Child Language, Child, Preschool, Female, Humans, Infant, Language Development, Language Development Disorders, Language Tests, Linguistics, Male, Speech Perception, Vocabulary
Added March 18, 2020
Purpose - Correlates of receptive-expressive vocabulary size discrepancies may provide insights into why language development in children with autism spectrum disorder (ASD) deviates from typical language development and ultimately improve intervention outcomes.
Method - We indexed receptive-expressive vocabulary size discrepancies of 65 initially preverbal children with ASD (20-48 months) to a comparison sample from the MacArthur-Bates Communicative Development Inventories Wordbank (Frank, Braginsky, Yurovsky, & Marchman, 2017) to quantify typicality. We then tested whether attention toward a speaker and oral motor performance predict typicality of the discrepancy 8 months later.
Results - Attention toward a speaker correlated positively with receptive-expressive vocabulary size discrepancy typicality. Imitative and nonimitative oral motor performance were not significant predictors of vocabulary size discrepancy typicality. Secondary analyses indicated that midpoint receptive vocabulary size mediated the association between initial attention toward a speaker and end point receptive-expressive vocabulary size discrepancy typicality.
Conclusions - Findings support the hypothesis that variation in attention toward a speaker might partially explain receptive-expressive vocabulary size discrepancy magnitude in children with ASD. Results are consistent with an input-processing deficit explanation of language impairment in this clinical population. Future studies should test whether attention toward a speaker is malleable and causally related to receptive-expressive discrepancies in children with ASD.
0 Communities · 2 Members · 0 Resources · MeSH Terms
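
The mediation result above (receptive vocabulary mediating the attention-to-typicality association) can be illustrated with a product-of-coefficients estimate and a percentile bootstrap confidence interval. The sketch below uses simulated data and invented effect sizes; it is not the study's analysis.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 65                                              # sample size from the abstract
    attention = rng.normal(size=n)                      # predictor
    receptive = 0.5 * attention + rng.normal(0, 1, n)   # mediator
    typicality = 0.6 * receptive + rng.normal(0, 1, n)  # outcome

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                      # path a: x -> m
        design = np.column_stack([m, x, np.ones(len(x))])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # path b: m -> y, controlling x
        return a * b

    boot = []
    for _ in range(2000):                               # percentile bootstrap
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(attention[idx], receptive[idx], typicality[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    est = indirect_effect(attention, receptive, typicality)
    print(f"indirect effect = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
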
Retraining speech production and fluency in non-fluent/agrammatic primary progressive aphasia.
Henry ML, Hubbard HI, Grasso SM, Mandelli ML, Wilson SM, Sathishkumar MT, Fridriksson J, Daigle W, Boxer AL, Miller BL, Gorno-Tempini ML
(2018) Brain 141: 1799-1814
MeSH Terms: Aged, Aphasia, Primary Progressive, Aphasia, Wernicke, Female, Follow-Up Studies, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Neuropsychological Tests, Speech, Speech Therapy, Treatment Outcome
Added March 26, 2019
The non-fluent/agrammatic variant of primary progressive aphasia (nfvPPA) presents with a gradual decline in grammar and motor speech resulting from selective degeneration of speech-language regions in the brain. There has been considerable progress in identifying treatment approaches to remediate language deficits in other primary progressive aphasia variants; however, interventions for the core deficits in nfvPPA have yet to be systematically investigated. Further, the neural mechanisms that support behavioural restitution in the context of neurodegeneration are not well understood. We examined the immediate and long-term benefits of video implemented script training for aphasia (VISTA) in 10 individuals with nfvPPA. The treatment approach involved repeated rehearsal of individualized scripts via structured treatment with a clinician as well as intensive home practice with an audiovisual model using 'speech entrainment'. We evaluated accuracy of script production as well as overall intelligibility and grammaticality for trained and untrained scripts. These measures and standardized test scores were collected at post-treatment and 3-, 6-, and 12-month follow-up visits. Treatment resulted in significant improvement in production of correct, intelligible scripted words for trained topics, a reduction in grammatical errors for trained topics, and an overall increase in intelligibility for trained as well as untrained topics at post-treatment. Follow-up testing revealed maintenance of gains for trained scripts up to 1 year post-treatment on the primary outcome measure. Performance on untrained scripts and standardized tests remained relatively stable during the follow-up period, indicating that treatment helped to stabilize speech and language despite disease progression. To identify neural predictors of responsiveness to intervention, we examined treatment effect sizes relative to grey matter volumes in regions of interest derived from a previously identified speech production network. Regions of significant atrophy within this network included bilateral inferior frontal cortices and supplementary motor area as well as left striatum. Volumes in a left middle/inferior temporal region of interest were significantly correlated with the magnitude of treatment effects. This region, which was relatively spared anatomically in nfvPPA patients, has been implicated in syntactic production as well as visuo-motor facilitation of speech. This is the first group study to document the benefits of behavioural intervention that targets both linguistic and motoric deficits in nfvPPA. Findings indicate that behavioural intervention may result in lasting and generalized improvement of communicative function in individuals with neurodegenerative disease and that the integrity of spared regions within the speech-language network may be an important predictor of treatment response.
0 Communities · 1 Member · 0 Resources · 14 MeSH Terms
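
The brain-behaviour analysis described above amounts to correlating per-participant treatment effect sizes with grey matter volumes in a region of interest. The sketch below shows that computation on simulated values; the numbers are not from the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n = 10                                           # participants, as in the study
    roi_volume = rng.normal(10.0, 1.0, n)            # simulated ROI grey matter volumes
    effect_size = 1.5 + 0.8 * (roi_volume - 10.0) + rng.normal(0, 0.5, n)

    r, p = stats.pearsonr(roi_volume, effect_size)
    print(f"volume-effect correlation: r = {r:.2f}, p = {p:.3f}")
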
A new measure of child vocal reciprocity in children with autism spectrum disorder.
Harbison AL, Woynaroski TG, Tapp J, Wade JW, Warlaumont AS, Yoder PJ
(2018) Autism Res 11: 903-915
MeSH Terms: Acoustic Stimulation, Adult, Autism Spectrum Disorder, Child Language, Child, Preschool, Communication, Female, Humans, Language Development Disorders, Male, Parents, Reproducibility of Results, Speech
Added March 18, 2020
Children's vocal development occurs in the context of reciprocal exchanges with a communication partner who models "speechlike" productions. We propose a new measure of child vocal reciprocity, which we define as the degree to which an adult vocal response increases the probability of an immediately following child vocal response. Vocal reciprocity is likely to be associated with the speechlikeness of vocal communication in young children with autism spectrum disorder (ASD). Two studies were conducted to test the utility of the new measure. The first used simulated vocal samples with randomly sequenced child and adult vocalizations to test the accuracy of the proposed index of child vocal reciprocity. The second was an empirical study of 21 children with ASD who were preverbal or in the early stages of language development. Daylong vocal samples collected in the natural environment were computer analyzed to derive the proposed index of child vocal reciprocity, which was highly stable when derived from two daylong vocal samples and was associated with speechlikeness of vocal communication. This association was significant even when controlling for chance probability of child vocalizations to adult vocal responses, probability of adult vocalizations, or probability of child vocalizations. A valid measure of children's vocal reciprocity might eventually improve our ability to predict which children are on track to develop useful speech and/or are most likely to respond to language intervention. A link to a free, publicly available software program to derive the new measure of child vocal reciprocity is provided.
LAY SUMMARY - Children and adults often engage in back-and-forth vocal exchanges. The extent to which they do so is believed to support children's early speech and language development. Two studies tested a new measure of child vocal reciprocity using computer-generated and real-life vocal samples of young children with autism collected in natural settings. The results provide initial evidence of accuracy, test-retest reliability, and validity of the new measure of child vocal reciprocity. A sound measure of children's vocal reciprocity might improve our ability to predict which children are on track to develop useful speech and/or are most likely to respond to language intervention. A free, publicly available software program and manuals are provided.
© 2018 International Society for Autism Research, Wiley Periodicals, Inc.
0 Communities · 2 Members · 0 Resources · MeSH Terms
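
The abstract defines the new measure as the degree to which an adult vocal response increases the probability of an immediately following child vocalization, controlling for chance. The sketch below implements that definition on a toy event sequence; the coding scheme ('C', 'A', '.') is invented for illustration, and the published measure is derived by the authors' software from daylong recordings.

    def vocal_reciprocity(events):
        # events: temporally ordered codes, 'C' = child vocalization,
        # 'A' = adult vocal response, '.' = neither.
        child_after_adult = sum(1 for a, b in zip(events, events[1:])
                                if a == 'A' and b == 'C')
        adult_count = sum(1 for e in events[:-1] if e == 'A')
        p_child_given_adult = child_after_adult / adult_count if adult_count else 0.0
        base_rate = events.count('C') / len(events)   # chance probability control
        return p_child_given_adult - base_rate

    sample = "C A C . . A C . A . C A C".split()
    print(f"reciprocity index: {vocal_reciprocity(sample):+.2f}")
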
Convergence of spoken and written language processing in the superior temporal sulcus.
Wilson SM, Bautista A, McCarron A
(2018) Neuroimage 171: 62-74
MeSH Terms: Adult, Aged, Aged, 80 and over, Brain Mapping, Comprehension, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Reading, Speech Perception, Temporal Lobe, Writing, Young Adult
Added March 26, 2019
Spoken and written language processing streams converge in the superior temporal sulcus (STS), but the functional and anatomical nature of this convergence is not clear. We used functional MRI to quantify neural responses to spoken and written language, along with unintelligible stimuli in each modality, and employed several strategies to segregate activations on the dorsal and ventral banks of the STS. We found that intelligible and unintelligible inputs in both modalities activated the dorsal bank of the STS. The posterior dorsal bank was able to discriminate between modalities based on distributed patterns of activity, pointing to a role in encoding of phonological and orthographic word forms. The anterior dorsal bank was agnostic to input modality, suggesting that this region represents abstract lexical nodes. In the ventral bank of the STS, responses to unintelligible inputs in both modalities were attenuated, while intelligible inputs continued to drive activation, indicative of higher level semantic and syntactic processing. Our results suggest that the processing of spoken and written language converges on the posterior dorsal bank of the STS, which is the first of a heterogeneous set of language regions within the STS, with distinct functions spanning a broad range of linguistic processes.
Copyright © 2017 Elsevier Inc. All rights reserved.
0 Communities · 1 Member · 0 Resources · MeSH Terms