About this data

The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 135

Publication Record

Connections

Heart Disease and Stroke Statistics-2018 Update: A Report From the American Heart Association.
Benjamin EJ, Virani SS, Callaway CW, Chamberlain AM, Chang AR, Cheng S, Chiuve SE, Cushman M, Delling FN, Deo R, de Ferranti SD, Ferguson JF, Fornage M, Gillespie C, Isasi CR, Jiménez MC, Jordan LC, Judd SE, Lackland D, Lichtman JH, Lisabeth L, Liu S, Longenecker CT, Lutsey PL, Mackey JS, Matchar DB, Matsushita K, Mussolino ME, Nasir K, O'Flaherty M, Palaniappan LP, Pandey A, Pandey DK, Reeves MJ, Ritchey MD, Rodriguez CJ, Roth GA, Rosamond WD, Sampson UKA, Satou GM, Shah SH, Spartano NL, Tirschwell DL, Tsao CW, Voeks JH, Willey JZ, Wilkins JT, Wu JH, Alger HM, Wong SS, Muntner P, American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee
(2018) Circulation 137: e67-e492
MeSH Terms: American Heart Association, Comorbidity, Data Interpretation, Statistical, Health Status, Heart Diseases, Humans, Life Style, Prognosis, Risk Assessment, Risk Factors, Stroke, United States
Added April 2, 2019
0 Communities
2 Members
0 Resources
12 MeSH Terms
Harmonization of cortical thickness measurements across scanners and sites.
Fortin JP, Cullen N, Sheline YI, Taylor WD, Aselcioglu I, Cook PA, Adams P, Cooper C, Fava M, McGrath PJ, McInnis M, Phillips ML, Trivedi MH, Weissman MM, Shinohara RT
(2018) Neuroimage 167: 104-120
MeSH Terms: Adolescent, Adult, Aged, Aged, 80 and over, Cerebral Cortex, Data Interpretation, Statistical, Datasets as Topic, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Models, Theoretical, Multicenter Studies as Topic, Young Adult
Added March 14, 2018
With the proliferation of multi-site neuroimaging studies, there is a greater need for handling non-biological variance introduced by differences in MRI scanners and acquisition protocols. Such unwanted sources of variation, which we refer to as "scanner effects", can hinder the detection of imaging features associated with clinical covariates of interest and cause spurious findings. In this paper, we investigate scanner effects in two large multi-site studies on cortical thickness measurements across a total of 11 scanners. We propose a set of tools for visualizing and identifying scanner effects that are generalizable to other modalities. We then propose to use ComBat, a technique adopted from the genomics literature and recently applied to diffusion tensor imaging data, to combine and harmonize cortical thickness values across scanners. We show that ComBat removes unwanted sources of scan variability while simultaneously increasing the power and reproducibility of subsequent statistical analyses. We also show that ComBat is useful for combining imaging data with the goal of studying life-span trajectories in the brain.
Copyright © 2017 Elsevier Inc. All rights reserved.
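As a concrete illustration of the harmonization idea, below is a minimal location/scale sketch in the spirit of ComBat. It assumes a subjects-by-features matrix with one scanner label per subject, and it omits the empirical Bayes shrinkage and covariate protection of the full method; all names are illustrative.

```python
# Simplified location/scale harmonization in the spirit of ComBat.
# Omits empirical-Bayes shrinkage and covariate protection; illustrative only.
import numpy as np

def harmonize(features, scanners):
    """Remove per-scanner additive and multiplicative effects.

    features : (n_subjects, n_features) array, e.g. cortical thickness values
    scanners : (n_subjects,) array of scanner labels
    """
    features = np.asarray(features, dtype=float)
    scanners = np.asarray(scanners)
    grand_mean = features.mean(axis=0)
    pooled_sd = features.std(axis=0, ddof=1)
    z = (features - grand_mean) / pooled_sd        # standardize each feature
    harmonized = np.empty_like(z)
    for s in np.unique(scanners):
        idx = scanners == s
        gamma = z[idx].mean(axis=0)                # scanner-specific shift
        delta = z[idx].std(axis=0, ddof=1)         # scanner-specific scale
        harmonized[idx] = (z[idx] - gamma) / delta # remove both effects
    return harmonized * pooled_sd + grand_mean     # back to original units
```

The full ComBat model additionally shrinks the per-scanner estimates toward a common prior and protects biological covariates of interest, which matters when some sites contribute only a few subjects.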
0 Communities
1 Members
0 Resources
15 MeSH Terms
Reproducibility of Differential Proteomic Technologies in CPTAC Fractionated Xenografts.
Tabb DL, Wang X, Carr SA, Clauser KR, Mertins P, Chambers MC, Holman JD, Wang J, Zhang B, Zimmerman LJ, Chen X, Gunawardena HP, Davies SR, Ellis MJ, Li S, Townsend RR, Boja ES, Ketchum KA, Kinsinger CR, Mesri M, Rodriguez H, Liu T, Kim S, McDermott JE, Payne SH, Petyuk VA, Rodland KD, Smith RD, Yang F, Chan DW, Zhang B, Zhang H, Zhang Z, Zhou JY, Liebler DC
(2016) J Proteome Res 15: 691-706
MeSH Terms: Breast Neoplasms, Chromatography, Liquid, Data Interpretation, Statistical, Female, Gene Expression Profiling, Heterografts, Humans, Metabolic Networks and Pathways, Observer Variation, Proteome, Proteomics, Quality Control, Reproducibility of Results, Tandem Mass Spectrometry
Added February 15, 2016
The NCI Clinical Proteomic Tumor Analysis Consortium (CPTAC) employed a pair of reference xenograft proteomes for initial platform validation and ongoing quality control of its data collection for The Cancer Genome Atlas (TCGA) tumors. These two xenografts, representing basal and luminal-B human breast cancer, were fractionated and analyzed on six mass spectrometers in a total of 46 replicates divided between iTRAQ and label-free technologies, spanning a total of 1095 LC-MS/MS experiments. These data represent a unique opportunity to evaluate the stability of proteomic differentiation by mass spectrometry over many months of time for individual instruments or across instruments running dissimilar workflows. We evaluated iTRAQ reporter ions, label-free spectral counts, and label-free extracted ion chromatograms as strategies for data interpretation (source code is available from http://homepages.uc.edu/~wang2x7/Research.htm ). From these assessments, we found that differential genes from a single replicate were confirmed by other replicates on the same instrument from 61 to 93% of the time. When comparing across different instruments and quantitative technologies, using multiple replicates, differential genes were reproduced by other data sets from 67 to 99% of the time. Projecting gene differences to biological pathways and networks increased the degree of similarity. These overlaps send an encouraging message about the maturity of technologies for proteomic differentiation.
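The replicate-concordance assessment described above can be pictured with a short sketch: call differential genes in each replicate, then ask what fraction of one replicate's calls another replicate confirms. The t-test on per-run quantifications is an illustrative stand-in for the paper's iTRAQ, spectral-count, and extracted-ion-chromatogram strategies.

```python
# Illustrative replicate-concordance check; the t-test stands in for the
# paper's actual differential strategies.
import numpy as np
from scipy import stats

def differential_genes(group_a, group_b, alpha=0.01):
    """Indices of genes called differential between two groups of runs.

    group_a, group_b : (n_runs, n_genes) arrays of per-run quantifications
    """
    _, p = stats.ttest_ind(group_a, group_b, axis=0)
    return set(np.flatnonzero(p < alpha))

def confirmation_rate(calls_a, calls_b):
    """Fraction of replicate A's differential genes also called in replicate B."""
    return len(calls_a & calls_b) / len(calls_a) if calls_a else 0.0
```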
0 Communities
1 Members
0 Resources
14 MeSH Terms
Deconvolution of fluorescence lifetime imaging microscopy by a library of exponentials.
Campos-Delgado DU, Navarro OG, Arce-Santana ER, Walsh AJ, Skala MC, Jo JA
(2015) Opt Express 23: 23748-67
MeSH Terms: Algorithms, Data Interpretation, Statistical, Image Enhancement, Image Interpretation, Computer-Assisted, Microscopy, Fluorescence, Molecular Imaging, Reproducibility of Results, Sensitivity and Specificity
Added February 4, 2016
Fluorescence lifetime imaging microscopy (FLIM) is an optical technique that allows a quantitative characterization of the fluorescent components of a sample. However, for an accurate interpretation of FLIM, an initial processing step is required to deconvolve the instrument response of the system from the measured fluorescence decays. In this paper, we present a novel strategy for the deconvolution of FLIM data based on a library of exponentials. Our approach searches for the scaling coefficients of the library by non-negative least squares approximations plus Tikhonov/l2 or l1 regularization terms. The parameters of the library are given by the lower and upper bounds on the characteristic lifetimes of the exponential functions and the size of the library, where we observe that this last variable is not a limiting factor in the resulting fitting accuracy. We compare our proposal to nonlinear least squares and global nonlinear least squares estimations with a multi-exponential model, and also to constrained Laguerre-based expansions, where we observe an advantage of our proposal based on Tikhonov/l2 regularization in terms of estimation accuracy, computational time, and tuning strategy. Our validation strategy considers synthetic datasets subject to both shot and Gaussian noise and samples with different lifetime maps, as well as experimental FLIM data of ex-vivo atherosclerotic plaques and human breast cancer cells.
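A minimal sketch of the library-of-exponentials approach, assuming a log-spaced lifetime grid and a known instrument response: the dictionary columns are IRF-convolved decays, and the Tikhonov/l2 penalty folds into a standard non-negative least squares solve through an augmented system. The grid bounds and regularization weight are illustrative, not the authors' settings.

```python
# Library-of-exponentials deconvolution sketch with Tikhonov-regularized NNLS.
import numpy as np
from scipy.optimize import nnls

def fit_decay(y, irf, t, tau_min=0.1, tau_max=10.0, n_tau=100, lam=1e-3):
    """Fit decay y (sampled at times t) as a non-negative combination of
    IRF-convolved exponentials with lifetimes on a log grid (illustrative)."""
    taus = np.logspace(np.log10(tau_min), np.log10(tau_max), n_tau)
    # Each library column is exp(-t/tau) convolved with the instrument response.
    library = np.stack(
        [np.convolve(irf, np.exp(-t / tau))[: len(t)] for tau in taus], axis=1
    )
    # Tikhonov term as an augmented system:
    # minimize ||A c - y||^2 + lam * ||c||^2  subject to  c >= 0
    A = np.vstack([library, np.sqrt(lam) * np.eye(n_tau)])
    b = np.concatenate([y, np.zeros(n_tau)])
    coeffs, _ = nnls(A, b)
    return taus, coeffs
```

Stacking sqrt(lam)·I under the library is the standard trick for turning a ridge penalty into an ordinary least-squares block, so the non-negativity solver needs no modification.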
0 Communities
1 Members
0 Resources
8 MeSH Terms
Simultaneous control of error rates in fMRI data analysis.
Kang H, Blume J, Ombao H, Badre D
(2015) Neuroimage 123: 102-13
MeSH Terms: Brain Mapping, Computer Simulation, Data Interpretation, Statistical, Frontal Lobe, Humans, Likelihood Functions, Magnetic Resonance Imaging, Research Design
Added February 22, 2016
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
Copyright © 2015 Elsevier Inc. All rights reserved.
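To illustrate the likelihood paradigm on a single voxel: under a normal model with known variance, the likelihood ratio for an activation of size delta versus no activation has a closed form, and evidence is trichotomized against a benchmark such as k = 8, a conventional choice in the likelihood literature. The model and names below are illustrative assumptions, not the paper's exact setup.

```python
# Voxel-wise likelihood ratio sketch: evidence for mean = delta vs mean = 0.
import numpy as np

def voxel_likelihood_ratio(samples, delta, sigma):
    """LR of H1: mean = delta vs H0: mean = 0 for iid N(mean, sigma^2) data."""
    n, xbar = len(samples), np.mean(samples)
    log_lr = (n / sigma**2) * (xbar * delta - delta**2 / 2.0)
    return np.exp(log_lr)

def classify(lr, k=8.0):
    """Trichotomize the evidence instead of thresholding a p-value."""
    if lr >= k:
        return "evidence for activation"
    if lr <= 1.0 / k:
        return "evidence against activation"
    return "weak evidence"
```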
0 Communities
2 Members
0 Resources
8 MeSH Terms
A haplotype-based framework for group-wise transmission/disequilibrium tests for rare variant association analysis.
Chen R, Wei Q, Zhan X, Zhong X, Sutcliffe JS, Cox NJ, Cook EH, Li C, Chen W, Li B
(2015) Bioinformatics 31: 1452-9
MeSH Terms: Autistic Disorder, Computer Simulation, Data Interpretation, Statistical, Exome, Genetic Association Studies, Genetic Variation, Haplotypes, Humans, Linkage Disequilibrium, Models, Genetic, Sequence Analysis, DNA
Added January 23, 2015
MOTIVATION - A major focus of current sequencing studies in human genetics is to identify rare variants associated with complex diseases. Aside from the reduced power to detect associated rare variants, controlling for population stratification is particularly challenging for rare variants. Transmission/disequilibrium tests (TDT) based on family designs are robust to population stratification and admixture, and therefore provide an effective approach to rare variant association studies that eliminates spurious associations. To increase the power of rare variant association analysis, gene-based collapsing methods have become standard approaches for analyzing rare variants. Existing methods that extend this strategy to rare variants in families usually combine TDT statistics at individual variants and therefore lack the flexibility to incorporate other genetic models.
RESULTS - In this study, we describe a haplotype-based framework for group-wise TDT (gTDT) that is flexible enough to encompass a variety of genetic models, such as additive, dominant and compound heterozygous (CH) (i.e. recessive) models, as well as other complex interactions. Unlike existing methods, gTDT constructs haplotypes by transmission when possible and inherently takes into account the linkage disequilibrium among variants. Through extensive simulations we showed that type I error was correctly controlled for rare variants under all models investigated, and this remained true in the presence of population stratification. Under a variety of genetic models, gTDT showed increased power compared with the single-marker TDT. Application of gTDT to autism exome sequencing data from 118 trios identified potentially interesting candidate genes with CH rare variants.
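For context, the single-marker TDT used as the comparison above (not the authors' haplotype-based gTDT) fits in a few lines: each heterozygous parent either transmits the minor allele or does not, the two outcomes are equally likely under the null, and the resulting McNemar-type statistic is chi-square with one degree of freedom.

```python
# Classic single-marker TDT; shown as the baseline gTDT is compared against.
from scipy.stats import chi2

def single_marker_tdt(transmitted, untransmitted):
    """TDT from counts of minor-allele transmissions (b) and
    non-transmissions (c) by heterozygous parents across all trios."""
    b, c = transmitted, untransmitted
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Hypothetical example: 60 transmissions vs 40 non-transmissions
stat, p = single_marker_tdt(60, 40)  # stat = 4.0, p ~= 0.0455
```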
AVAILABILITY AND IMPLEMENTATION - We implemented gTDT in C++ and the source code and the detailed usage are available on the authors' website (https://medschool.vanderbilt.edu/cgg).
CONTACT - bingshan.li@vanderbilt.edu or wei.chen@chp.edu
SUPPLEMENTARY INFORMATION - Supplementary data are available at Bioinformatics online.
© The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
0 Communities
3 Members
0 Resources
11 MeSH Terms
Consensus Genotyper for Exome Sequencing (CGES): improving the quality of exome variant genotypes.
Trubetskoy V, Rodriguez A, Dave U, Campbell N, Crawford EL, Cook EH, Sutcliffe JS, Foster I, Madduri R, Cox NJ, Davis LK
(2015) Bioinformatics 31: 187-93
MeSH Terms: Algorithms, Autistic Disorder, Consensus Sequence, Data Interpretation, Statistical, Exome, Genetic Testing, Genotype, High-Throughput Nucleotide Sequencing, Humans, Polymorphism, Single Nucleotide, Software
Added February 22, 2016
MOTIVATION - The development of cost-effective next-generation sequencing methods has spurred the development of high-throughput bioinformatics tools for detection of sequence variation. With many disparate variant-calling algorithms available, investigators must ask, 'Which method is best for my data?' Machine learning research has shown that so-called ensemble methods that combine the output of multiple models can dramatically improve classifier performance. Here we describe a novel variant-calling approach based on an ensemble of variant-calling algorithms, which we term the Consensus Genotyper for Exome Sequencing (CGES). CGES uses a two-stage voting scheme among four algorithm implementations. While our ensemble method can accept variants generated by any variant-calling algorithm, we used GATK2.8, SAMtools, FreeBayes and Atlas-SNP2 in building CGES because of their performance, widespread adoption and diverse but complementary algorithms.
RESULTS - We apply CGES to 132 samples sequenced at the Hudson Alpha Institute for Biotechnology (HAIB, Huntsville, AL) using the Nimblegen Exome Capture and Illumina sequencing technology. Our sample set consisted of 40 complete trios, two families of four, one parent-child duo and two unrelated individuals. CGES yielded the fewest total variant calls (N(CGES) = 139,897), the highest Ts/Tv ratio (3.02), the lowest Mendelian error rate across all genotypes (0.028%), the highest rediscovery rate from the Exome Variant Server (EVS; 89.3%) and 1000 Genomes (1KG; 84.1%) and the highest positive predictive value (PPV; 96.1%) for a random sample of previously validated de novo variants. We describe these and other quality control (QC) metrics from consensus data and explain how the CGES pipeline can be used to generate call sets of varying quality stringency, including consensus calls present across all four algorithms, calls that are consistent across any three out of four algorithms, calls that are consistent across any two out of four algorithms or a more liberal set of all calls made by any algorithm.
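The stringency tiers described above amount to an agreement vote over caller outputs. The sketch below collapses CGES's two-stage, genotype-level scheme into a simple set vote over variant keys, so it illustrates the idea rather than reproducing the pipeline; all names are hypothetical.

```python
# Consensus filtering by caller agreement (simplified vote, not CGES itself).
from collections import Counter
from itertools import chain

def consensus_calls(callsets, min_votes):
    """Variants reported by at least min_votes of the callers.

    callsets : list of sets of (chrom, pos, ref, alt) keys, one per caller
    min_votes: 4 = strict consensus, 3 or 2 = intermediate tiers, 1 = union
    """
    votes = Counter(chain.from_iterable(callsets))
    return {variant for variant, n in votes.items() if n >= min_votes}

# Hypothetical usage with four callers' outputs:
# strict = consensus_calls([gatk, samtools, freebayes, atlas], min_votes=4)
```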
AVAILABILITY AND IMPLEMENTATION - To enable accessible, efficient and reproducible analysis, we implement CGES both as a stand-alone command line tool available for download in GitHub and as a set of Galaxy tools and workflows configured to execute on parallel computers.
SUPPLEMENTARY INFORMATION - Supplementary data are available at Bioinformatics online.
© The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
0 Communities
1 Members
0 Resources
11 MeSH Terms
DupChecker: a bioconductor package for checking high-throughput genomic data redundancy in meta-analysis.
Sheng Q, Shyr Y, Chen X
(2014) BMC Bioinformatics 15: 323
MeSH Terms: Cluster Analysis, Data Interpretation, Statistical, Databases, Genetic, Gene Expression Profiling, Genomics, High-Throughput Nucleotide Sequencing, Meta-Analysis as Topic, Software
Added February 19, 2015
BACKGROUND - Meta-analysis has become a popular approach for high-throughput genomic data analysis because it can often significantly increase the power to detect biological signals or patterns in datasets. However, when using publicly available databases for meta-analysis, duplication of samples is a frequently encountered problem, especially for gene expression data. Failure to remove duplicates can lead to false positive findings, misleading clustering patterns, model over-fitting and other issues in the subsequent data analysis.
RESULTS - We developed DupChecker, a Bioconductor package that efficiently identifies duplicated samples by generating MD5 fingerprints for raw data. A real-data example demonstrates the usage and output of the package.
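A minimal sketch of the fingerprinting step, assuming the raw files already sit in a local directory (the real package also handles fetching and unpacking public archives): identical MD5 digests flag byte-identical duplicates.

```python
# Duplicate detection via MD5 fingerprints of raw data files (illustrative).
import hashlib
from collections import defaultdict
from pathlib import Path

def md5_fingerprint(path, chunk_size=1 << 20):
    """MD5 digest of a file, read in chunks so large raw files fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(directory):
    """Group files by fingerprint; any group larger than one is a duplicate set."""
    groups = defaultdict(list)
    for path in Path(directory).iterdir():
        if path.is_file():
            groups[md5_fingerprint(path)].append(path.name)
    return {md5: names for md5, names in groups.items() if len(names) > 1}
```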
CONCLUSIONS - Researchers may not pay enough attention to checking and removing duplicated samples, and the resulting data contamination can make the results or conclusions of a meta-analysis questionable. We suggest applying DupChecker to examine all gene expression datasets before any data analysis step.
0 Communities
1 Members
0 Resources
8 MeSH Terms
R PheWAS: data analysis and plotting tools for phenome-wide association studies in the R environment.
Carroll RJ, Bastarache L, Denny JC
(2014) Bioinformatics 30: 2375-6
MeSH Terms: Data Interpretation, Statistical, Genetic Association Studies, Genetic Variation, Humans, Multiple Sclerosis, Phenotype, Software
Added May 27, 2014
UNLABELLED - Phenome-wide association studies (PheWAS) have been used to replicate known genetic associations and discover new phenotype associations for genetic variants. This PheWAS implementation allows users to translate ICD-9 codes to PheWAS case and control groups, perform analyses using these and/or other phenotypes with covariate adjustments and plot the results. We demonstrate the methods by replicating a PheWAS on rs3135388 (near HLA-DRB, associated with multiple sclerosis) and performing a novel PheWAS using an individual's maximum white blood cell count (WBC) as a continuous measure. Our results for rs3135388 replicate known associations with more significant results than the original study on the same dataset. Our PheWAS of WBC found expected results, including associations with infections, myeloproliferative diseases and associated conditions, such as anemia. These results demonstrate the performance of the improved classification scheme and the flexibility of PheWAS encapsulated in this package.
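A hedged sketch of the core PheWAS loop (in Python rather than the package's R): one logistic regression per ICD-9-derived phenotype, with genotype and covariates as predictors. The column names, the statsmodels dependency, and the result layout are illustrative assumptions.

```python
# One logistic regression per phenotype: the basic PheWAS pattern (sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def phewas(df, phenotypes, genotype="rs3135388", covariates=("age", "sex")):
    """Regress each case/control phenotype on genotype plus covariates."""
    results = []
    for pheno in phenotypes:
        sub = df.dropna(subset=[pheno])  # exclusion criteria appear as NAs
        X = sm.add_constant(sub[[genotype, *covariates]].astype(float))
        fit = sm.Logit(sub[pheno].astype(float), X).fit(disp=0)
        results.append({"phenotype": pheno,
                        "odds_ratio": float(np.exp(fit.params[genotype])),
                        "p": float(fit.pvalues[genotype])})
    return pd.DataFrame(results).sort_values("p")
```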
AVAILABILITY AND IMPLEMENTATION - This R package is freely available under the GNU General Public License (GPL-3) from http://phewascatalog.org. It is implemented in native R and is platform independent.
© The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
0 Communities
1 Members
0 Resources
7 MeSH Terms
Identifying and quantifying multisensory integration: a tutorial review.
Stevenson RA, Ghose D, Fister JK, Sarko DK, Altieri NA, Nidiffer AR, Kurela LR, Siemann JK, James TW, Wallace MT
(2014) Brain Topogr 27: 707-30
MeSH Terms: Animals, Brain, Brain Mapping, Data Interpretation, Statistical, Electroencephalography, Humans, Magnetic Resonance Imaging, Neurons, Perception
Added February 11, 2015
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the measures developed to quantify multisensory integration (which were derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
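Two of the classic single-unit metrics reviewed above fit in a few lines: the enhancement index relative to the largest unisensory response, and the comparison against the additive prediction. Units and the example numbers are illustrative.

```python
# Classic multisensory integration metrics from single-unit work (sketch).
def enhancement_index(multisensory, unisensory_a, unisensory_b):
    """Percent change of the multisensory response over the best unisensory one."""
    sm_max = max(unisensory_a, unisensory_b)
    return 100.0 * (multisensory - sm_max) / sm_max

def additivity(multisensory, unisensory_a, unisensory_b):
    """Classify the interaction against the additive (sum) prediction."""
    predicted = unisensory_a + unisensory_b
    if multisensory > predicted:
        return "superadditive"
    if multisensory < predicted:
        return "subadditive"
    return "additive"

# Example: 30 spikes/trial to AV vs 18 (A) and 12 (V)
# -> enhancement_index = +66.7%, additivity = "additive"
```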
0 Communities
1 Members
0 Resources
9 MeSH Terms