
About this data

The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated weekly to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 776

Publication Record

Connections

Multi-tissue transcriptome analyses identify genetic mechanisms underlying neuropsychiatric traits.
Gamazon ER, Zwinderman AH, Cox NJ, Denys D, Derks EM
(2019) Nat Genet 51: 933-940
MeSH Terms: Algorithms, Computational Biology, Gene Expression Profiling, Gene Expression Regulation, Gene Regulatory Networks, Genetic Association Studies, Genetic Predisposition to Disease, Genome-Wide Association Study, Humans, Mental Disorders, Organ Specificity, Polymorphism, Single Nucleotide, Quantitative Trait Loci, Quantitative Trait, Heritable, Transcriptome
Added July 17, 2019
The genetic architecture of psychiatric disorders is characterized by a large number of small-effect variants located primarily in non-coding regions, suggesting that the underlying causal effects may influence disease risk by modulating gene expression. We provide comprehensive analyses using transcriptome data from an unprecedented collection of tissues to gain pathophysiological insights into the role of the brain, neuroendocrine factors (adrenal gland) and gastrointestinal systems (colon) in psychiatric disorders. In each tissue, we perform PrediXcan analysis and identify trait-associated genes for schizophrenia (n associations = 499; n unique genes = 275), bipolar disorder (n associations = 17; n unique genes = 13), attention deficit hyperactivity disorder (n associations = 19; n unique genes = 12) and broad depression (n associations = 41; n unique genes = 31). Importantly, both PrediXcan and summary-data-based Mendelian randomization/heterogeneity in dependent instruments analyses suggest potentially causal genes in non-brain tissues, showing the utility of these tissues for mapping psychiatric disease genetic predisposition. Our analyses further highlight the importance of joint tissue approaches as 76% of the genes were detected only in difficult-to-acquire tissues.
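As context for the PrediXcan analyses described above, the sketch below illustrates the general two-step idea: impute the genetically regulated component of a gene's expression from SNP dosages using pre-trained tissue weights, then test the imputed expression for association with the trait. It uses synthetic data and hypothetical weights (numpy and statsmodels assumed) and is not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): a PrediXcan-style test first
# imputes the genetically regulated expression of a gene from SNP dosages using
# pre-trained tissue weights, then tests that prediction against the trait.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_individuals, n_snps = 1000, 50

dosages = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 allele counts
weights = rng.normal(0, 0.1, size=n_snps)  # hypothetical eQTL weights for one gene in one tissue

# Step 1: impute the genetically regulated expression component
predicted_expression = dosages @ weights

# Synthetic case/control status loosely coupled to that component (toy data only)
liability = 0.3 * predicted_expression + rng.normal(size=n_individuals)
phenotype = (liability > np.quantile(liability, 0.7)).astype(int)

# Step 2: gene-level association test of the trait on predicted expression
exog = sm.add_constant(predicted_expression)
fit = sm.Logit(phenotype, exog).fit(disp=0)
print(f"beta = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.2e}")
```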
0 Communities
1 Members
0 Resources
15 MeSH Terms
Isomeric and Conformational Analysis of Small Drug and Drug-Like Molecules by Ion Mobility-Mass Spectrometry (IM-MS).
Phillips ST, Dodds JN, May JC, McLean JA
(2019) Methods Mol Biol 1939: 161-178
MeSH Terms: Algorithms, Amino Acids, Carbohydrates, Ion Mobility Spectrometry, Isomerism, Mass Spectrometry, Molecular Conformation, Pharmaceutical Preparations, Small Molecule Libraries, Software
Added August 7, 2019
This chapter provides a broad overview of ion mobility-mass spectrometry (IM-MS) and its applications in separation science, with a focus on pharmaceutical applications. A general overview of fundamental ion mobility (IM) theory is provided, with descriptions of several contemporary instrument platforms that are commercially available (i.e., drift tube and traveling wave IM). Recent applications of IM-MS toward the evaluation of structural isomers are highlighted and placed in the context of both a separation and a characterization perspective. We conclude this chapter with a guided reference protocol for obtaining routine IM-MS spectra on a commercially available uniform-field IM-MS instrument.
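As background for the drift-tube measurements mentioned above, the sketch below applies the standard Mason-Schamp relation to convert a single-field drift-tube mobility measurement into a collision cross section (CCS). All instrument parameters and masses are illustrative placeholders, not values from the chapter's protocol.

```python
# Hedged sketch: the Mason-Schamp relation commonly used to turn a drift-tube
# ion mobility measurement into a collision cross section. Example values are
# placeholders, not taken from the chapter's reference protocol.
import math

E_CHARGE = 1.602176634e-19   # C
K_B = 1.380649e-23           # J/K
AMU = 1.66053906660e-27      # kg

def ccs_mason_schamp(drift_time_s, length_m, voltage_v, temp_k, pressure_pa,
                     ion_mass_da, gas_mass_da, charge=1):
    """Collision cross section (m^2) from a single-field drift-tube measurement."""
    mobility = length_m**2 / (voltage_v * drift_time_s)      # K = L^2 / (V * t_d)
    number_density = pressure_pa / (K_B * temp_k)             # ideal-gas N
    reduced_mass = (ion_mass_da * gas_mass_da) / (ion_mass_da + gas_mass_da) * AMU
    return (3 * charge * E_CHARGE / (16 * number_density)
            * math.sqrt(2 * math.pi / (reduced_mass * K_B * temp_k))
            / mobility)

# Placeholder example: singly charged 500 Da ion in nitrogen drift gas
ccs_m2 = ccs_mason_schamp(drift_time_s=22e-3, length_m=0.78, voltage_v=1450.0,
                          temp_k=300.0, pressure_pa=530.0,
                          ion_mass_da=500.0, gas_mass_da=28.0)
print(f"CCS ≈ {ccs_m2 * 1e20:.0f} Å²")
```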
1 Communities
0 Members
0 Resources
10 MeSH Terms
Challenges in diffusion MRI tractography - Lessons learned from international benchmark competitions.
Schilling KG, Daducci A, Maier-Hein K, Poupon C, Houde JC, Nath V, Anderson AW, Landman BA, Descoteaux M
(2019) Magn Reson Imaging 57: 194-209
MeSH Terms: Algorithms, Benchmarking, Brain, Diffusion Tensor Imaging, Humans, Internationality, Neuroimaging, Reproducibility of Results
Added March 26, 2019
Diffusion MRI (dMRI) fiber tractography has become a pillar of the neuroimaging community due to its ability to noninvasively map the structural connectivity of the brain. Despite widespread use in clinical and research domains, these methods suffer from several potential drawbacks or limitations. Thus, validating the accuracy and reproducibility of techniques is critical for sound scientific conclusions and effective clinical outcomes. Towards this end, a number of international benchmark competitions, or "challenges", have been organized by the diffusion MRI community to investigate the reliability of the tractography process by providing a platform to compare algorithms and results in a fair manner, and to evaluate common and emerging algorithms in an effort to advance the state of the field. In this paper, we summarize the lessons from a decade of challenges in tractography, and give perspective on the past, present, and future "challenges" that the field of diffusion tractography faces.
Copyright © 2018 Elsevier Inc. All rights reserved.
0 Communities
1 Members
0 Resources
8 MeSH Terms
Anatomical accuracy of standard-practice tractography algorithms in the motor system - A histological validation in the squirrel monkey brain.
Schilling KG, Gao Y, Stepniewska I, Janve V, Landman BA, Anderson AW
(2019) Magn Reson Imaging 55: 7-25
MeSH Terms: Algorithms, Animals, Brain, Brain Mapping, Diffusion Tensor Imaging, Image Processing, Computer-Assisted, Models, Anatomic, Motor Cortex, Probability, Reproducibility of Results, Saimiri, Sensitivity and Specificity, Software, White Matter
Added March 26, 2019
For two decades diffusion fiber tractography has been used to probe both the spatial extent of white matter pathways and the region to region connectivity of the brain. In both cases, anatomical accuracy of tractography is critical for sound scientific conclusions. Here we assess and validate the algorithms and tractography implementations that have been most widely used - often because of ease of use, algorithm simplicity, or availability offered in open source software. Comparing forty tractography results to a ground truth defined by histological tracers in the primary motor cortex on the same squirrel monkey brains, we assess tract fidelity on the scale of voxels as well as over larger spatial domains or regional connectivity. No algorithms are successful in all metrics, and, in fact, some implementations fail to reconstruct large portions of pathways or identify major points of connectivity. The accuracy is most dependent on reconstruction method and tracking algorithm, as well as the seed region and how this region is utilized. We also note a tremendous variability in the results, even though the same MR images act as inputs to all algorithms. In addition, anatomical accuracy is significantly decreased at increased distances from the seed. An analysis of the spatial errors in tractography reveals that many techniques have trouble properly leaving the gray matter, and many only reveal connectivity to adjacent regions of interest. These results show that the most commonly implemented algorithms have several shortcomings and limitations, and choices in implementations lead to very different results. This study should provide guidance for algorithm choices based on study requirements for sensitivity, specificity, or the need to identify particular connections, and should serve as a heuristic for future developments in tractography.
Copyright © 2018 Elsevier Inc. All rights reserved.
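A minimal sketch of the kind of voxel-wise comparison described above: scoring a binary tractography map against a tracer-defined ground truth with sensitivity, specificity, and Dice overlap. The volumes are synthetic stand-ins; this is not the paper's evaluation code.

```python
# Illustrative sketch (not the paper's code): voxel-wise agreement between a
# binary tractography map and a histological tracer map, in the
# sensitivity/specificity style of comparison the abstract describes.
import numpy as np

rng = np.random.default_rng(1)

# Toy 3D binary volumes standing in for the tracer ground truth and a tractogram
tracer = rng.random((40, 40, 40)) < 0.05
tractogram = tracer ^ (rng.random(tracer.shape) < 0.02)   # ground truth with some voxels flipped

tp = np.sum(tractogram & tracer)    # voxels correctly reconstructed
fp = np.sum(tractogram & ~tracer)   # spurious voxels
fn = np.sum(~tractogram & tracer)   # missed voxels
tn = np.sum(~tractogram & ~tracer)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
dice = 2 * tp / (2 * tp + fp + fn)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} Dice={dice:.2f}")
```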
0 Communities
1 Members
0 Resources
14 MeSH Terms
Micro-Data-Independent Acquisition for High-Throughput Proteomics and Sensitive Peptide Mass Spectrum Identification.
Heaven MR, Cobbs AL, Nei YW, Gutierrez DB, Herren AW, Gunawardena HP, Caprioli RM, Norris JL
(2018) Anal Chem 90: 8905-8911
MeSH Terms: Algorithms, Chromatography, Liquid, Databases, Protein, Escherichia coli, Escherichia coli Proteins, HeLa Cells, High-Throughput Screening Assays, Humans, Peptides, Proteome, Proteomics, Software, Tandem Mass Spectrometry, Workflow
Added August 27, 2018
State-of-the-art strategies for proteomics are not able to rapidly interrogate complex peptide mixtures in an untargeted manner with sensitive peptide and protein identification rates. We describe a data-independent acquisition (DIA) approach, microDIA (μDIA), that applies a novel tandem mass spectrometry (MS/MS) mass spectral deconvolution method to increase the specificity of tandem mass spectra acquired during proteomics experiments. Using the μDIA approach with a 10 min liquid chromatography gradient allowed detection of 3.1-fold more HeLa proteins than the results obtained from data-dependent acquisition (DDA) of the same samples. Additionally, we found the μDIA MS/MS deconvolution procedure is critical for resolving modified peptides with relatively small precursor mass shifts that cause the same peptide sequence in modified and unmodified forms to theoretically cofragment in the same raw MS/MS spectra. The μDIA workflow is implemented in the PROTALIZER software tool which fully automates tandem mass spectral deconvolution, queries every peptide with a library-free search algorithm against a user-defined protein database, and confidently identifies multiple peptides in a single tandem mass spectrum. We also benchmarked μDIA against DDA using a 90 min gradient analysis of HeLa and Escherichia coli peptides that were mixed in predefined quantitative ratios, and our results showed μDIA provided 24% more true positives at the same false positive rate.
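To illustrate the cofragmentation problem the abstract highlights, the toy calculation below checks whether a peptide and a lightly modified form (a small precursor mass shift, here deamidation) fall inside the same DIA isolation window. The peptide mass and window width are hypothetical, and this is not part of the PROTALIZER workflow.

```python
# Hedged illustration of why MS/MS deconvolution is needed in DIA: with a wide
# isolation window, an unmodified peptide and a lightly modified form can be
# co-selected and produce chimeric spectra. Masses and window width are hypothetical.
PROTON = 1.007276  # Da

def precursor_mz(neutral_mass, charge):
    return (neutral_mass + charge * PROTON) / charge

def same_window(mz_a, mz_b, window_center, window_width):
    lo, hi = window_center - window_width / 2, window_center + window_width / 2
    return lo <= mz_a <= hi and lo <= mz_b <= hi

peptide_mass = 1500.70                   # hypothetical unmodified peptide (Da)
deamidated_mass = peptide_mass + 0.984   # ~+0.984 Da deamidation shift

mz_unmod = precursor_mz(peptide_mass, charge=2)
mz_mod = precursor_mz(deamidated_mass, charge=2)
print(f"m/z unmodified = {mz_unmod:.3f}, deamidated = {mz_mod:.3f}")
print("cofragment in one 4 m/z window:",
      same_window(mz_unmod, mz_mod, window_center=round(mz_unmod), window_width=4.0))
```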
0 Communities
1 Members
0 Resources
14 MeSH Terms
Validation of an algorithm to identify heart failure hospitalisations in patients with diabetes within the veterans health administration.
Presley CA, Min JY, Chipman J, Greevy RA, Grijalva CG, Griffin MR, Roumie CL
(2018) BMJ Open 8: e020455
MeSH Terms: Adult, Aged, Algorithms, Diabetes Complications, Diabetes Mellitus, Diagnosis-Related Groups, Female, Heart Failure, Hospitalization, Humans, International Classification of Diseases, Male, Middle Aged, Predictive Value of Tests, Sensitivity and Specificity, Veterans
Added July 27, 2018
OBJECTIVES - We aimed to validate an algorithm using both primary discharge diagnosis (International Classification of Diseases Ninth Revision (ICD-9)) and diagnosis-related group (DRG) codes to identify hospitalisations due to decompensated heart failure (HF) in a population of patients with diabetes within the Veterans Health Administration (VHA) system.
DESIGN - Validation study.
SETTING - Veterans Health Administration-Tennessee Valley Healthcare System.
PARTICIPANTS - We identified and reviewed a stratified, random sample of hospitalisations between 2001 and 2012, within a single VHA healthcare system, of adults who received regular VHA care and were initiated on an antidiabetic medication between 2001 and 2008. We sampled 500 hospitalisations: 400 that fulfilled the algorithm criteria and 100 that did not. Of these, 497 had adequate information for inclusion. The mean patient age was 66.1 years (SD 11.4). The majority of patients were male (98.8%); 75% were white and 20% were black.
PRIMARY AND SECONDARY OUTCOME MEASURES - To determine if a hospitalisation was due to HF, we performed chart abstraction using Framingham criteria as the referent standard. We calculated the positive predictive value (PPV), negative predictive value (NPV), sensitivity and specificity for the overall algorithm and each component (primary diagnosis code (ICD-9), DRG code or both).
RESULTS - The algorithm had a PPV of 89.7% (95% CI 86.8 to 92.7), NPV of 93.9% (89.1 to 98.6), sensitivity of 45.1% (25.1 to 65.1) and specificity of 99.4% (99.2 to 99.6). The PPV was highest for hospitalisations that fulfilled both the ICD-9 and DRG algorithm criteria (92.1% (89.1 to 95.1)) and lowest for hospitalisations that fulfilled only DRG algorithm criteria (62.5% (28.4 to 96.6)).
CONCLUSIONS - Our algorithm, which included primary discharge diagnosis and DRG codes, demonstrated excellent PPV for identification of hospitalisations due to decompensated HF among patients with diabetes in the VHA system.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
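For reference, the validation metrics reported above follow directly from a 2x2 table of algorithm result versus the chart-review (Framingham) reference standard. The sketch below uses hypothetical counts; the study's published sensitivity and specificity additionally reflect its stratified sampling design.

```python
# Minimal sketch of the validation metrics reported above, computed from a 2x2
# table of algorithm result vs. chart-review reference. The counts are
# hypothetical and are not the study's data.
def validation_metrics(tp, fp, fn, tn):
    return {
        "PPV": tp / (tp + fp),          # P(true HF | algorithm positive)
        "NPV": tn / (tn + fn),          # P(no HF | algorithm negative)
        "sensitivity": tp / (tp + fn),  # P(algorithm positive | true HF)
        "specificity": tn / (tn + fp),  # P(algorithm negative | no HF)
    }

# Hypothetical confusion-matrix counts, for illustration only
print(validation_metrics(tp=40, fp=5, fn=10, tn=145))
```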
0 Communities
1 Members
0 Resources
16 MeSH Terms
Development of an automated phenotyping algorithm for hepatorenal syndrome.
Koola JD, Davis SE, Al-Nimri O, Parr SK, Fabbri D, Malin BA, Ho SB, Matheny ME
(2018) J Biomed Inform 80: 87-95
MeSH Terms: Acute Kidney Injury, Aged, Algorithms, Diagnosis, Computer-Assisted, Electronic Health Records, Female, Hepatorenal Syndrome, Humans, Liver Cirrhosis, Male, Middle Aged, Natural Language Processing, Odds Ratio, Phenotype, ROC Curve, Retrospective Studies, Support Vector Machine
Added April 10, 2018
OBJECTIVE - Hepatorenal Syndrome (HRS) is a devastating form of acute kidney injury (AKI) in advanced liver disease patients with high morbidity and mortality, but phenotyping algorithms have not yet been developed using large electronic health record (EHR) databases. We evaluated and compared multiple phenotyping methods to achieve an accurate algorithm for HRS identification.
MATERIALS AND METHODS - A national retrospective cohort of patients with cirrhosis and AKI admitted to 124 Veterans Affairs hospitals was assembled from electronic health record data collected from 2005 to 2013. AKI was defined by the Kidney Disease: Improving Global Outcomes criteria. Five hundred and four hospitalizations were selected for manual chart review and served as the gold standard. Electronic health record-based predictors were identified from structured data and from free-text clinical data processed with natural language processing (NLP) using the clinical Text Analysis Knowledge Extraction System. We explored several dimension reduction techniques for the NLP data, including newer high-throughput phenotyping and word embedding methods, and ascertained their effectiveness in identifying the phenotype without structured predictor variables. With the combined structured and NLP variables, we analyzed five phenotyping algorithms: penalized logistic regression, naïve Bayes, support vector machines, random forest, and gradient boosting. Calibration and discrimination metrics were calculated using 100 bootstrap iterations. In the final model, we report odds ratios and 95% confidence intervals.
RESULTS - The area under the receiver operating characteristic curve (AUC) for the different models ranged from 0.73 to 0.93, with penalized logistic regression having the best discriminatory performance. Calibration for logistic regression was modest, but gradient boosting and support vector machines were superior. NLP identified 6985 variables; a priori variable selection performed similarly to dimensionality reduction using high-throughput phenotyping and semantic similarity-informed clustering (AUC of 0.81-0.82).
CONCLUSION - This study demonstrated improved phenotyping of a challenging AKI etiology, HRS, over ICD-9 coding. We also compared performance among multiple approaches to EHR-derived phenotyping, and found similar results between methods. Lastly, we showed that automated NLP dimension reduction is viable for acute illness.
Copyright © 2018 Elsevier Inc. All rights reserved.
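A minimal sketch of one of the compared approaches, penalized logistic regression with bootstrap estimates of discrimination (AUC), using synthetic stand-ins for the structured and NLP-derived features (scikit-learn assumed). It is not the study's cohort or code.

```python
# Hedged sketch: penalized (L2) logistic regression on stand-in features with a
# bootstrapped held-out AUC, echoing the discrimination metric and the 100
# bootstrap iterations mentioned in the abstract. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 50))                      # stand-in structured + NLP features
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(penalty="l2", C=0.5, max_iter=1000).fit(X_tr, y_tr)

aucs = []
for _ in range(100):                                  # 100 bootstrap resamples of the test set
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:
        continue                                      # AUC needs both classes present
    aucs.append(roc_auc_score(y_te[idx], model.predict_proba(X_te[idx])[:, 1]))

print(f"AUC = {np.mean(aucs):.2f} "
      f"(bootstrap 2.5-97.5%: {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f})")
```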
0 Communities
1 Members
0 Resources
17 MeSH Terms
Integrated Structural Biology for α-Helical Membrane Protein Structure Determination.
Xia Y, Fischer AW, Teixeira P, Weiner B, Meiler J
(2018) Structure 26: 657-666.e2
MeSH Terms: Algorithms, Binding Sites, Electron Spin Resonance Spectroscopy, Humans, Membrane Proteins, Microscopy, Electron, Models, Molecular, Monte Carlo Method, Nuclear Magnetic Resonance, Biomolecular, Protein Binding, Protein Conformation, alpha-Helical, Protein Folding, Protein Interaction Domains and Motifs, Rhodopsin, Thermodynamics
Added March 17, 2018
While great progress has been made, only 10% of the nearly 1,000 integral, α-helical, multi-span membrane protein families are represented by at least one experimentally determined structure in the PDB. Previously, we developed the algorithm BCL::MP-Fold, which samples the large conformational space of membrane proteins de novo by assembling predicted secondary structure elements guided by knowledge-based potentials. Here, we present a case study of rhodopsin fold determination by integrating sparse and/or low-resolution restraints from multiple experimental techniques including electron microscopy, electron paramagnetic resonance spectroscopy, and nuclear magnetic resonance spectroscopy. Simultaneous incorporation of orthogonal experimental restraints not only significantly improved the sampling accuracy but also allowed identification of the correct fold, which is demonstrated by a protein size-normalized transmembrane root-mean-square deviation as low as 1.2 Å. The protocol developed in this case study can be used for the determination of unknown membrane protein folds when limited experimental restraints are available.
Copyright © 2018 Elsevier Ltd. All rights reserved.
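The accuracy metric quoted above is a size-normalized root-mean-square deviation. The sketch below computes a plain coordinate RMSD on toy Cα coordinates and applies one common size normalization (RMSD100); it is not the BCL::MP-Fold implementation, and the coordinates are synthetic.

```python
# Hedged sketch of the accuracy metric mentioned above: coordinate RMSD between
# a model and a reference structure, plus a common protein-size normalization
# (RMSD100, Carugo & Pupko). Toy coordinates only; not the BCL code.
import numpy as np

def rmsd(model_xyz, reference_xyz):
    """Plain coordinate RMSD (assumes the structures are already superimposed)."""
    diff = model_xyz - reference_xyz
    return np.sqrt((diff ** 2).sum(axis=1).mean())

def rmsd100(raw_rmsd, n_residues):
    """Size-normalized RMSD, enabling comparison across proteins of different length."""
    return raw_rmsd / (1.0 + np.log(np.sqrt(n_residues / 100.0)))

rng = np.random.default_rng(3)
reference = rng.normal(scale=10.0, size=(200, 3))          # toy Cα coordinates
model = reference + rng.normal(scale=0.8, size=(200, 3))    # slightly perturbed "model"

r = rmsd(model, reference)
print(f"RMSD = {r:.2f} Å, RMSD100 = {rmsd100(r, 200):.2f} Å")
```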
0 Communities
1 Members
0 Resources
15 MeSH Terms
Integrating linear optimization with structural modeling to increase HIV neutralization breadth.
Sevy AM, Panda S, Crowe JE, Meiler J, Vorobeychik Y
(2018) PLoS Comput Biol 14: e1005999
MeSH Terms: Algorithms, Amino Acid Motifs, Antibodies, Neutralizing, Computational Biology, Epitopes, HIV Antibodies, HIV Infections, HIV-1, Humans, Linear Models, Machine Learning, Regression Analysis, Software, Support Vector Machine
Added March 14, 2018
Computational protein design has been successful in modeling fixed backbone proteins in a single conformation. However, when modeling large ensembles of flexible proteins, current methods in protein design have been insufficient. Large barriers in the energy landscape are difficult to traverse while redesigning a protein sequence, and as a result current design methods only sample a fraction of available sequence space. We propose a new computational approach that combines traditional structure-based modeling using the Rosetta software suite with machine learning and integer linear programming to overcome limitations in the Rosetta sampling methods. We demonstrate the effectiveness of this method, which we call BROAD, by benchmarking the performance on increasing predicted breadth of anti-HIV antibodies. We use this novel method to increase predicted breadth of naturally-occurring antibody VRC23 against a panel of 180 divergent HIV viral strains and achieve 100% predicted binding against the panel. In addition, we compare the performance of this method to state-of-the-art multistate design in Rosetta and show that we can outperform the existing method significantly. We further demonstrate that sequences recovered by this method recover known binding motifs of broadly neutralizing anti-HIV antibodies. Finally, our approach is general and can be extended easily to other protein systems. Although our modeled antibodies were not tested in vitro, we predict that these variants would have greatly increased breadth compared to the wild-type antibody.
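To make the integer-linear-programming idea concrete, the sketch below (using the PuLP solver and random stand-in weights) picks one residue per designable position so that as many strains as possible clear a predicted-binding threshold. The formulation, weights, and threshold are simplified illustrations, not the published BROAD method.

```python
# Hedged sketch of a breadth-maximizing integer linear program: choose one amino
# acid per position; a strain counts as "covered" only if its predicted score
# clears a threshold (big-M indicator constraint). Weights are random stand-ins
# for a learned linear model; this is not the published BROAD formulation.
import numpy as np
import pulp

rng = np.random.default_rng(7)
positions, residues, strains = 6, ["A", "D", "K", "S", "Y"], 12
weights = rng.normal(size=(strains, positions, len(residues)))  # hypothetical per-strain scores
threshold, big_m = 1.0, 100.0

prob = pulp.LpProblem("maximize_breadth", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(positions), residues), cat="Binary")
covered = pulp.LpVariable.dicts("covered", range(strains), cat="Binary")

prob += pulp.lpSum(covered[s] for s in range(strains))          # objective: number of strains covered

for p in range(positions):                                      # exactly one residue per position
    prob += pulp.lpSum(x[p][r] for r in residues) == 1

for s in range(strains):                                        # strain s covered only if score >= threshold
    score = pulp.lpSum(float(weights[s, p, i]) * x[p][r]
                       for p in range(positions) for i, r in enumerate(residues))
    prob += score >= threshold - big_m * (1 - covered[s])

prob.solve(pulp.PULP_CBC_CMD(msg=False))

design = [max(residues, key=lambda r: x[p][r].value()) for p in range(positions)]
print("designed residues:", design, "| strains covered:", int(pulp.value(prob.objective)))
```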
0 Communities
2 Members
0 Resources
14 MeSH Terms
A modern epilepsy surgery treatment algorithm: Incorporating traditional and emerging technologies.
Englot DJ
(2018) Epilepsy Behav 80: 68-74
MeSH Terms: Algorithms, Drug Resistant Epilepsy, Electroencephalography, Epilepsy, Epilepsy, Generalized, Humans, Imaging, Three-Dimensional, Minimally Invasive Surgical Procedures, Quality of Life, Radiosurgery, Treatment Outcome
Added September 25, 2018
Epilepsy surgery has seen numerous technological advances in both diagnostic and therapeutic procedures in recent years. This has increased the number of patients who may be candidates for intervention and potential improvement in quality of life. However, the expansion of the field also necessitates a broader understanding of how to incorporate both traditional and emerging technologies into the care provided at comprehensive epilepsy centers. This review summarizes both old and new surgical procedures in epilepsy using an example algorithm. While treatment algorithms are inherently oversimplified and incomplete and reflect personal bias, they provide a general framework that can be customized to each center and each patient, incorporating differences in provider opinion, patient preference, and the institutional availability of technologies. For instance, the use of minimally invasive stereotactic electroencephalography (SEEG) has increased dramatically over the past decade, but many cases still benefit from invasive recordings using subdural grids. Furthermore, although surgical resection remains the gold-standard treatment for focal mesial temporal or neocortical epilepsy, ablative procedures such as laser interstitial thermal therapy (LITT) or stereotactic radiosurgery (SRS) may be appropriate and avoid craniotomy in many cases. Finally, while palliative surgical procedures were once limited to disconnection surgeries, several neurostimulation treatments are now available to treat eloquent cortical, bitemporal, and even multifocal or generalized epilepsy syndromes. An updated perspective in epilepsy surgery will help guide surgical decision making and lay the groundwork for data collection needed in future studies and trials.
Copyright © 2018 Elsevier Inc. All rights reserved.
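Purely as an illustration of how such a simplified decision framework could be encoded, the sketch below maps a few case features to the candidate options named in the abstract. The rules are paraphrased from the abstract only, the case features are hypothetical simplifications, and none of this is clinical guidance.

```python
# Toy sketch of encoding an (oversimplified) epilepsy-surgery decision framework,
# using only the options named in the abstract. Not clinical guidance.
def candidate_options(focal, focus_localized, eloquent_cortex, multifocal_or_generalized):
    options = []
    if focal and not focus_localized:
        options.append("invasive recording (SEEG or subdural grids) to localize the focus")
    if focal and focus_localized and not eloquent_cortex:
        options.append("surgical resection (gold standard for focal mesial temporal/neocortical epilepsy)")
        options.append("ablation (LITT) or radiosurgery (SRS) to avoid craniotomy in selected cases")
    if eloquent_cortex or multifocal_or_generalized:
        options.append("neurostimulation, or palliative disconnection surgery")
    return options

# Hypothetical example case
print(candidate_options(focal=True, focus_localized=True,
                        eloquent_cortex=False, multifocal_or_generalized=False))
```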
0 Communities
1 Members
0 Resources
11 MeSH Terms