
About this data

The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 41

Publication Record

Connections

Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets.
Zhou X, Shen XX, Hittinger CT, Rokas A
(2018) Mol Biol Evol 35: 486-503
MeSH Terms: Likelihood Functions, Phylogeny, Software
Added March 21, 2018
The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs' relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses.
© The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
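The comparison above hinges on running several independent tree searches per alignment and keeping the search that attains the highest log-likelihood. Below is a minimal sketch of that "best of N searches" bookkeeping; the command template and the log-likelihood pattern are hypothetical placeholders, since the actual flags and output formats differ between RAxML/ExaML, PhyML, and IQ-TREE and across versions.

```python
# Sketch: run N independent tree searches with different random seeds and keep the
# run with the best (highest) final log-likelihood. CMD_TEMPLATE and LNL_PATTERN are
# placeholders -- substitute the real invocation and log format of the program in use
# (RAxML/ExaML, PhyML, or IQ-TREE).
import re
import subprocess

CMD_TEMPLATE = "run_tree_search --alignment {aln} --seed {seed} --prefix {prefix}"  # hypothetical
LNL_PATTERN = re.compile(r"Final log-likelihood:\s*(-?\d+\.\d+)")                   # hypothetical

def best_of_n_searches(alignment, n_searches=10):
    best_lnl, best_prefix = float("-inf"), None
    for seed in range(1, n_searches + 1):
        prefix = f"search_{seed}"
        cmd = CMD_TEMPLATE.format(aln=alignment, seed=seed, prefix=prefix)
        result = subprocess.run(cmd.split(), capture_output=True, text=True, check=True)
        match = LNL_PATTERN.search(result.stdout)
        if match and float(match.group(1)) > best_lnl:
            best_lnl, best_prefix = float(match.group(1)), prefix
    return best_lnl, best_prefix  # best log-likelihood and the prefix of the winning search
```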
0 Communities
1 Members
0 Resources
3 MeSH Terms
Simultaneous control of error rates in fMRI data analysis.
Kang H, Blume J, Ombao H, Badre D
(2015) Neuroimage 123: 102-13
MeSH Terms: Brain Mapping, Computer Simulation, Data Interpretation, Statistical, Frontal Lobe, Humans, Likelihood Functions, Magnetic Resonance Imaging, Research Design
Added February 22, 2016
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
Copyright © 2015 Elsevier Inc. All rights reserved.
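For a single voxel the evidence measure described above reduces to an ordinary likelihood ratio. As a generic illustration (a Gaussian-mean sketch with known noise variance, not the authors' exact specification), with n scans, sample mean, null mean, and alternative mean for voxel v:

```latex
% Voxel-wise likelihood ratio for a Gaussian mean with known variance \sigma^2
% (illustrative sketch only):
\[
\mathrm{LR}_v
  = \frac{L(\theta_1 \mid y_v)}{L(\theta_0 \mid y_v)}
  = \exp\!\left\{ \frac{n}{2\sigma^2}\left[(\bar{y}_v-\theta_0)^2-(\bar{y}_v-\theta_1)^2\right] \right\}.
\]
% Evidence for \theta_1 is declared when LR_v exceeds a fixed benchmark k (commonly
% k = 8 or 32 in the likelihood paradigm); as n grows, both the probability of
% misleading evidence (the Type I analogue) and of weak evidence (the Type II
% analogue) shrink toward zero, which is the joint convergence the abstract describes.
```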
0 Communities
2 Members
0 Resources
8 MeSH Terms
Improving pulse oximetry pitch perception with multisensory perceptual training.
Schlesinger JJ, Stevenson RA, Shotwell MS, Wallace MT
(2014) Anesth Analg 118: 1249-53
MeSH Terms: Algorithms, Anesthesiology, Attention, Computer Simulation, Health Facility Environment, Humans, Inservice Training, Internship and Residency, Likelihood Functions, Linear Models, Noise, Operating Rooms, Oximetry, Pitch Perception, Psychomotor Performance, Reaction Time
Added February 11, 2015
The pulse oximeter is a critical monitor in anesthesia practice designed to improve patient safety. Here, we present an approach to improve the ability of anesthesiologists to monitor arterial oxygen saturation via pulse oximetry through an audiovisual training process. Fifteen residents' abilities to detect auditory changes in pulse oximetry were measured before and after perceptual training. Training resulted in a 9% (95% confidence interval, 4%-14%, P = 0.0004, t(166) = 3.60) increase in detection accuracy, and a 72-millisecond (95% confidence interval, 40-103 milliseconds, P < 0.0001, t(166) = -4.52) speeding of response times in attentionally demanding and noisy conditions that were designed to simulate an operating room. This study illustrates the benefits of multisensory training and sets the stage for further work to better define the role of perceptual training in clinical anesthesiology.
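The reported intervals are consistent with the quoted t statistics on 166 degrees of freedom; a quick check, using only the numbers in the abstract and backing the standard error out of each t value:

```python
# Sketch: recover the standard error from the reported t statistic and check that
# estimate +/- t_crit * SE reproduces the reported 95% confidence intervals (df = 166).
from scipy import stats

def ci_from_t(estimate, t_value, df=166, level=0.95):
    se = estimate / abs(t_value)                  # SE implied by the estimate and t statistic
    t_crit = stats.t.ppf(0.5 + level / 2, df)     # two-sided critical value
    return estimate - t_crit * se, estimate + t_crit * se

print(ci_from_t(9.0, 3.60))    # accuracy gain in %: roughly (4.1, 13.9), i.e. 4%-14%
print(ci_from_t(72.0, 4.52))   # response-time speedup in ms: roughly (41, 103)
```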
0 Communities
1 Members
0 Resources
16 MeSH Terms
A structure-specific nucleic acid-binding domain conserved among DNA repair proteins.
Mason AC, Rambo RP, Greer B, Pritchett M, Tainer JA, Cortez D, Eichman BF
(2014) Proc Natl Acad Sci U S A 111: 7618-23
MeSH Terms: Adenosine Triphosphate, Animals, Chromatography, Affinity, Chromatography, Agarose, Chromatography, Gel, Chromatography, Ion Exchange, Cloning, Molecular, Crystallization, DNA Helicases, DNA Repair, Hydrolysis, Likelihood Functions, Mice, Models, Molecular, Nucleic Acids, Protein Structure, Tertiary, Scattering, Small Angle, X-Ray Diffraction
Added May 19, 2014
SMARCAL1, a DNA remodeling protein fundamental to genome integrity during replication, is the only gene associated with the developmental disorder Schimke immuno-osseous dysplasia (SIOD). SMARCAL1-deficient cells show collapsed replication forks, S-phase cell cycle arrest, increased chromosomal breaks, hypersensitivity to genotoxic agents, and chromosomal instability. The SMARCAL1 catalytic domain (SMARCAL1(CD)) is composed of an SNF2-type double-stranded DNA motor ATPase fused to a HARP domain of unknown function. The mechanisms by which SMARCAL1 and other DNA translocases repair replication forks are poorly understood, in part because of a lack of structural information on the domains outside of the common ATPase motor. In the present work, we determined the crystal structure of the SMARCAL1 HARP domain and examined its conformation and assembly in solution by small angle X-ray scattering. We report that this domain is conserved with the DNA mismatch and damage recognition domains of MutS/MSH and NER helicase XPB, respectively, as well as with the putative DNA specificity motif of the T4 phage fork regression protein UvsW. Loss of UvsW fork regression activity by deletion of this domain was rescued by its replacement with HARP, establishing the importance of this domain in UvsW and demonstrating a functional complementarity between these structurally homologous domains. Mutation of predicted DNA-binding residues in HARP dramatically reduced fork binding and regression activities of SMARCAL1(CD). Thus, this work has uncovered a conserved substrate recognition domain in DNA repair enzymes that couples ATP-hydrolysis to remodeling of a variety of DNA structures, and provides insight into this domain's role in replication fork stability and genome integrity.
1 Communities
2 Members
0 Resources
18 MeSH Terms
Validation of the Confusion Assessment Method for the Intensive Care Unit in older emergency department patients.
Han JH, Wilson A, Graves AJ, Shintani A, Schnelle JF, Dittus RS, Powers JS, Vernon J, Storrow AB, Ely EW
(2014) Acad Emerg Med 21: 180-7
MeSH Terms: Aged, Aged, 80 and over, Confusion, Delirium, Emergency Service, Hospital, Female, Humans, Intensive Care Units, Likelihood Functions, Logistic Models, Male, Observer Variation, Prospective Studies, Psychiatric Status Rating Scales, Reproducibility of Results, Sensitivity and Specificity
Added May 27, 2014
OBJECTIVES - In the emergency department (ED), health care providers miss delirium approximately 75% of the time, because they do not routinely screen for this syndrome. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) is a brief (<1 minute) delirium assessment that may be feasible for use in the ED. The study objective was to determine its validity and reliability in older ED patients.
METHODS - In this prospective observational cohort study, patients aged 65 years or older were enrolled at an academic, tertiary care ED from July 2009 to February 2012. An emergency physician (EP) and research assistants (RAs) performed the CAM-ICU. The reference standard for delirium was a comprehensive (~30 minutes) psychiatrist assessment using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision criteria. All assessments were blinded to each other and were conducted within 3 hours. Sensitivities, specificities, and likelihood ratios were calculated for both the EP and the RAs using the psychiatrist's assessment as the reference standard. Kappa values between the EP and RAs were also calculated to measure reliability.
RESULTS - Of 406 patients enrolled, 50 (12.3%) had delirium. The median age was 73.5 years old (interquartile range [IQR] = 69 to 80 years), 202 (49.8%) were female, and 57 (14.0%) were nonwhite. The CAM-ICU's sensitivities were 72.0% (95% confidence interval [CI] = 58.3% to 82.5%) and 68.0% (95% CI = 54.2% to 79.2%) in the EP and RAs, respectively. The CAM-ICU's specificity was 98.6% (95% CI = 96.8% to 99.4%) for both raters. The negative likelihood ratios (LR-) were 0.28 (95% CI = 0.18 to 0.44) and 0.32 (95% CI = 0.22 to 0.49) in the EP and RAs, respectively. The positive likelihood ratios (LR+) were 51.3 (95% CI = 21.1 to 124.5) and 48.4 (95% CI = 19.9 to 118.0), respectively. The kappa between the EP and RAs was 0.92 (95% CI = 0.85 to 0.98), indicating excellent interobserver reliability.
CONCLUSIONS - In older ED patients, the CAM-ICU is highly specific, and a positive test is nearly diagnostic for delirium when used by both the EP and RAs. However, the CAM-ICU's sensitivity was modest, and a negative test decreased the likelihood of delirium by a small amount. The consequences of a false-negative CAM-ICU are unknown and deserve further study.
© 2014 by the Society for Academic Emergency Medicine.
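The likelihood ratios quoted in the results follow directly from the reported sensitivities and specificities via the standard definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity; a quick check (small differences are rounding):

```python
# Worked check of the likelihood ratios reported above from the published
# sensitivity and specificity values.
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Emergency physician ratings: sensitivity 72.0%, specificity 98.6%
print(likelihood_ratios(0.720, 0.986))   # ~ (51.4, 0.28), matching the reported 51.3 and 0.28

# Research assistant ratings: sensitivity 68.0%, specificity 98.6%
print(likelihood_ratios(0.680, 0.986))   # ~ (48.6, 0.32), matching the reported 48.4 and 0.32
```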
0 Communities
1 Members
0 Resources
16 MeSH Terms
Sample size calculation based on exact test for assessing differential expression analysis in RNA-seq data.
Li CI, Su PF, Shyr Y
(2013) BMC Bioinformatics 14: 357
MeSH Terms: Base Sequence, Computer Simulation, Gene Expression Regulation, Likelihood Functions, Models, Statistical, Poisson Distribution, RNA, Random Allocation, Research Design, Sample Size, Sequence Analysis, RNA, User-Computer Interface
Added March 10, 2014
BACKGROUND - Sample size calculation is an important issue in the experimental design of biomedical research. For RNA-seq experiments, the sample size calculation method based on the Poisson model has been proposed; however, when there are biological replicates, RNA-seq data could exhibit variation significantly greater than the mean (i.e. over-dispersion). The Poisson model cannot appropriately model the over-dispersion, and in such cases, the negative binomial model has been used as a natural extension of the Poisson model. Because the field currently lacks a sample size calculation method based on the negative binomial model for assessing differential expression analysis of RNA-seq data, we propose a method to calculate the sample size.
RESULTS - We propose a sample size calculation method based on the exact test for assessing differential expression analysis of RNA-seq data.
CONCLUSIONS - The proposed sample size calculation method is straightforward and not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size method are presented; the results indicate our method works well, with achievement of desired power.
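For intuition about why over-dispersion matters here, a generic normal-approximation sketch (not the exact-test method proposed in the paper) shows how a negative binomial dispersion phi enters a per-gene, per-group sample-size formula; mu0, fold_change, and phi are illustrative parameters:

```python
# Generic normal-approximation sample-size sketch for one gene under a negative
# binomial model with mean mu and dispersion phi (variance mu + phi * mu**2).
# NOT the paper's exact-test method; it only illustrates how dispersion inflates n
# relative to the Poisson case (phi = 0).
import math
from scipy.stats import norm

def nb_sample_size(mu0, fold_change, phi, alpha=0.05, power=0.8):
    mu1 = mu0 * fold_change
    delta = math.log(fold_change)                        # effect size on the log scale
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)        # sum of normal quantiles
    var_per_subject = 1 / mu0 + 1 / mu1 + 2 * phi        # delta-method variance of log fold change, times n
    return math.ceil(z**2 * var_per_subject / delta**2)  # subjects per group

print(nb_sample_size(mu0=10, fold_change=2, phi=0.0))    # Poisson-like case
print(nb_sample_size(mu0=10, fold_change=2, phi=0.5))    # over-dispersed case requires more samples
```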
0 Communities
1 Members
0 Resources
12 MeSH Terms
Integrated model of de novo and inherited genetic variants yields greater power to identify risk genes.
He X, Sanders SJ, Liu L, De Rubeis S, Lim ET, Sutcliffe JS, Schellenberg GD, Gibbs RA, Daly MJ, Buxbaum JD, State MW, Devlin B, Roeder K
(2013) PLoS Genet 9: e1003671
MeSH Terms: Bayes Theorem, Case-Control Studies, Child Development Disorders, Pervasive, Exome, Genetic Predisposition to Disease, Genetic Variation, Humans, Likelihood Functions, Models, Theoretical, Mutation, Risk Factors, Sequence Analysis, DNA
Added February 20, 2014
De novo mutations affect risk for many diseases and disorders, especially those with early-onset. An example is autism spectrum disorders (ASD). Four recent whole-exome sequencing (WES) studies of ASD families revealed a handful of novel risk genes, based on independent de novo loss-of-function (LoF) mutations falling in the same gene, and found that de novo LoF mutations occurred at a twofold higher rate than expected by chance. However successful these studies were, they used only a small fraction of the data, excluding other types of de novo mutations and inherited rare variants. Moreover, such analyses cannot readily incorporate data from case-control studies. An important research challenge in gene discovery, therefore, is to develop statistical methods that accommodate a broader class of rare variation. We develop methods that can incorporate WES data regarding de novo mutations, inherited variants present, and variants identified within cases and controls. TADA, for Transmission And De novo Association, integrates these data by a gene-based likelihood model involving parameters for allele frequencies and gene-specific penetrances. Inference is based on a Hierarchical Bayes strategy that borrows information across all genes to infer parameters that would be difficult to estimate for individual genes. In addition to theoretical development we validated TADA using realistic simulations mimicking rare, large-effect mutations affecting risk for ASD and show it has dramatically better power than other common methods of analysis. Thus TADA's integration of various kinds of WES data can be a highly effective means of identifying novel risk genes. Indeed, application of TADA to WES data from subjects with ASD and their families, as well as from a study of ASD subjects and controls, revealed several novel and promising ASD candidate genes with strong statistical support.
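A simplified sketch of the de novo component of a gene-based model of this kind (the notation and the Gamma prior on the relative risk are illustrative, not a restatement of TADA's full specification):

```latex
% De novo count x_g for gene g across N trios, with per-gene mutation rate \mu_g.
% Under the non-risk model the count follows its baseline Poisson rate; under the
% risk model the rate is inflated by a relative risk \gamma with a Gamma prior.
\begin{align*}
H_0:\; & x_g \sim \mathrm{Poisson}(2 N \mu_g), \\
H_1:\; & x_g \mid \gamma \sim \mathrm{Poisson}(2 N \mu_g \gamma), \qquad
         \gamma \sim \mathrm{Gamma}(\bar{\gamma}\beta,\, \beta), \\
\mathrm{BF}_g \;=\;& \frac{\int P(x_g \mid \gamma)\, p(\gamma)\, d\gamma}{P(x_g \mid H_0)}.
\end{align*}
% Evidence from the de novo, transmitted, and case-control variant classes combines
% multiplicatively per gene, and the hierarchical (empirical Bayes) step shares the
% hyperparameters \bar{\gamma} and \beta across genes.
```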
0 Communities
1 Members
0 Resources
12 MeSH Terms
Out-of-atlas likelihood estimation using multi-atlas segmentation.
Asman AJ, Chambless LB, Thompson RC, Landman BA
(2013) Med Phys 40: 043702
MeSH Terms: Algorithms, Brain, Data Interpretation, Statistical, Diffusion Tensor Imaging, Humans, Image Enhancement, Image Interpretation, Computer-Assisted, Likelihood Functions, Pattern Recognition, Automated, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
Added March 26, 2014
PURPOSE - Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to "in-atlas" applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases.
METHODS - The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image.
RESULTS - Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain, an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading).
CONCLUSIONS - The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on regions of normal anatomy to ascertain image quality and adapt to image appearance characteristics.
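A minimal sketch of the per-voxel scoring idea, using a simple Gaussian intensity-given-label stand-in for the atlas-derived model (the function name and intensity model are illustrative, not the authors' estimator):

```python
# Sketch of per-voxel "out-of-atlas" scoring: model the target intensity at each voxel
# as a mixture over labels, weighted by the multi-atlas label probabilities, with a
# Gaussian intensity model per label learned from the registered atlases. Voxels with
# low model likelihood are flagged as potentially out-of-atlas.
import numpy as np

def ooa_score(target, label_probs, label_means, label_stds, eps=1e-12):
    """target: (V,) intensities; label_probs: (V, L); label_means, label_stds: (L,)."""
    # Gaussian likelihood of each voxel's intensity under each label's model
    z = (target[:, None] - label_means[None, :]) / label_stds[None, :]
    gauss = np.exp(-0.5 * z**2) / (label_stds[None, :] * np.sqrt(2 * np.pi))
    # Marginal likelihood per voxel: sum over labels, weighted by atlas consensus
    marginal = np.sum(label_probs * gauss, axis=1)
    return -np.log(marginal + eps)   # high score = poorly explained = likely anomaly
```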
0 Communities
1 Members
0 Resources
12 MeSH Terms
Outcome vector dependent sampling with longitudinal continuous response data: stratified sampling based on summary statistics.
Schildcrout JS, Garbett SP, Heagerty PJ
(2013) Biometrics 69: 405-16
MeSH Terms: Asthma, Biometry, Child, Data Interpretation, Statistical, Forced Expiratory Volume, Humans, Likelihood Functions, Linear Models, Longitudinal Studies, Models, Statistical
Added February 18, 2013
The analysis of longitudinal trajectories usually focuses on evaluation of explanatory factors that are either associated with rates of change, or with overall mean levels of a continuous outcome variable. In this article, we introduce valid design and analysis methods that permit outcome dependent sampling of longitudinal data for scenarios where all outcome data currently exist, but a targeted substudy is being planned in order to collect additional key exposure information on a limited number of subjects. We propose a stratified sampling based on specific summaries of individual longitudinal trajectories, and we detail an ascertainment corrected maximum likelihood approach for estimation using the resulting biased sample of subjects. In addition, we demonstrate that the efficiency of an outcome-based sampling design relative to use of a simple random sample depends highly on the choice of outcome summary statistic used to direct sampling, and we show a natural link between the goals of the longitudinal regression model and corresponding desirable designs. Using data from the Childhood Asthma Management Program, where genetic information required retrospective ascertainment, we study a range of designs that examine lung function profiles over 4 years of follow-up for children classified according to their genotype for the IL-13 cytokine.
© 2013, The International Biometric Society.
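Schematically, an ascertainment-corrected likelihood for this kind of design conditions each retained subject's contribution on having been sampled; in generic notation (not the paper's exact parameterization), with sampling probability pi depending only on the stratum of the outcome summary statistic:

```latex
% Subject i is sampled with probability \pi(s_i), where s_i is the stratum of the
% subject's outcome summary statistic; the corrected likelihood conditions on
% having been sampled:
\[
L_c(\theta) \;=\; \prod_{i \in \text{sample}}
\frac{\pi(s_i)\, f(y_i \mid x_i; \theta)}
     {\int \pi\big(s(y)\big)\, f(y \mid x_i; \theta)\, dy},
\]
% so the denominator is the marginal probability that a subject with covariates x_i
% would have been ascertained under the stratified design.
```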
0 Communities
2 Members
0 Resources
10 MeSH Terms
Bayesian speckle tracking. Part I: an implementable perturbation to the likelihood function for ultrasound displacement estimation.
Byram B, Trahey GE, Palmeri M
(2013) IEEE Trans Ultrason Ferroelectr Freq Control 60: 132-43
MeSH Terms: Algorithms, Bayes Theorem, Likelihood Functions, Models, Theoretical, Scattering, Radiation, Signal-To-Noise Ratio, Ultrasonics, Ultrasonography
Added May 29, 2014
Accurate and precise displacement estimation has been a hallmark of clinical ultrasound. Displacement estimation accuracy has largely been considered to be limited by the Cramer-Rao lower bound (CRLB). However, the CRLB only describes the minimum variance obtainable from unbiased estimators. Unbiased estimators are generally implemented using Bayes' theorem, which requires a likelihood function. The classic likelihood function for the displacement estimation problem is not discriminative and is difficult to implement for clinically relevant ultrasound with diffuse scattering. Because the classic likelihood function is not effective, a perturbation is proposed. The proposed likelihood function was evaluated and compared against the classic likelihood function by converting both to posterior probability density functions (PDFs) using a noninformative prior. Example results are reported for bulk motion simulations using a 6λ tracking kernel and 30 dB SNR for 1000 data realizations. The canonical likelihood function assigned the true displacement a mean probability of only 0.070 ± 0.020, whereas the new likelihood function assigned the true displacement a much higher probability of 0.22 ± 0.16. The new likelihood function shows improvements at least for bulk motion, acoustic radiation force induced motion, and compressive motion, and at least for SNRs greater than 10 dB and kernel lengths between 1.5 and 12λ.
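As a rough illustration of turning a displacement likelihood into a posterior over candidate lags under a flat (noninformative) prior, here is a generic sum-of-squared-differences Gaussian likelihood sketch; it is a stand-in, not the perturbed likelihood proposed in the paper:

```python
# Sketch: evaluate a Gaussian (sum-of-squared-differences) likelihood over a grid of
# candidate displacements and normalize it into a posterior PDF under a flat prior.
# Generic stand-in, not the perturbed likelihood proposed in the paper.
import numpy as np

def displacement_posterior(ref, tracked, candidate_lags, noise_var):
    """ref, tracked: 1-D RF kernels of equal length; candidate_lags: integer sample shifts."""
    log_lik = np.empty(len(candidate_lags))
    for k, lag in enumerate(candidate_lags):
        shifted = np.roll(tracked, -lag)            # crude integer-sample shift
        ssd = np.sum((ref - shifted) ** 2)
        log_lik[k] = -ssd / (2.0 * noise_var)       # Gaussian log-likelihood up to a constant
    log_lik -= log_lik.max()                        # stabilize before exponentiating
    posterior = np.exp(log_lik)
    return posterior / posterior.sum()              # flat prior -> normalized posterior over lags
```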
0 Communities
1 Members
0 Resources
8 MeSH Terms