The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.
The genetic architecture of psychiatric disorders is characterized by a large number of small-effect variants located primarily in non-coding regions, suggesting that the underlying causal effects may influence disease risk by modulating gene expression. We provide comprehensive analyses using transcriptome data from an unprecedented collection of tissues to gain pathophysiological insights into the role of the brain, neuroendocrine factors (adrenal gland) and gastrointestinal systems (colon) in psychiatric disorders. In each tissue, we perform PrediXcan analysis and identify trait-associated genes for schizophrenia (n associations = 499; n unique genes = 275), bipolar disorder (n associations = 17; n unique genes = 13), attention deficit hyperactivity disorder (n associations = 19; n unique genes = 12) and broad depression (n associations = 41; n unique genes = 31). Importantly, both PrediXcan and summary-data-based Mendelian randomization/heterogeneity in dependent instruments analyses suggest potentially causal genes in non-brain tissues, showing the utility of these tissues for mapping psychiatric disease genetic predisposition. Our analyses further highlight the importance of joint tissue approaches as 76% of the genes were detected only in difficult-to-acquire tissues.
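The PrediXcan step above imputes genetically regulated gene expression from genotypes and then tests that predicted expression for association with the trait. A minimal sketch of the idea follows; the variant IDs, eQTL weights, and cohort data are hypothetical toy values, not the study's actual models:

```python
# Sketch of the PrediXcan idea: predicted expression is a weighted sum of
# allele dosages, which is then tested for association with the phenotype.
# All variant IDs, weights, and cohort values below are hypothetical.

def predicted_expression(dosages, weights):
    # dosages: {variant_id: allele count 0/1/2}; weights: {variant_id: eQTL effect}
    return sum(dosages.get(v, 0.0) * w for v, w in weights.items())

def pearson_r(xs, ys):
    # simple association measure between predicted expression and phenotype
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

weights = {"rs1": 0.5, "rs2": -0.3}          # hypothetical tissue-specific weights
cohort = [({"rs1": 2, "rs2": 0}, 1.1),       # (genotype, phenotype) pairs
          ({"rs1": 1, "rs2": 1}, 0.4),
          ({"rs1": 0, "rs2": 2}, -0.7)]

expr = [predicted_expression(g, weights) for g, _ in cohort]
pheno = [p for _, p in cohort]
r = pearson_r(expr, pheno)
```

In practice the weights come from models trained on reference transcriptome panels per tissue, and the association test is run genome-wide with appropriate multiple-testing correction.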
This chapter provides a broad overview of ion mobility-mass spectrometry (IM-MS) and its applications in separation science, with a focus on pharmaceutical applications. A general overview of fundamental ion mobility (IM) theory is provided with descriptions of several contemporary instrument platforms which are available commercially (i.e., drift tube and traveling wave IM). Recent applications of IM-MS toward the evaluation of structural isomers are highlighted and placed in the context of both a separation and characterization perspective. We conclude this chapter with a guided reference protocol for obtaining routine IM-MS spectra on a commercially available uniform-field IM-MS.
Diffusion MRI (dMRI) fiber tractography has become a pillar of the neuroimaging community due to its ability to noninvasively map the structural connectivity of the brain. Despite widespread use in clinical and research domains, these methods suffer from several potential drawbacks or limitations. Thus, validating the accuracy and reproducibility of techniques is critical for sound scientific conclusions and effective clinical outcomes. Towards this end, a number of international benchmark competitions, or "challenges", have been organized by the diffusion MRI community to investigate the reliability of the tractography process by providing a platform to compare algorithms and results in a fair manner, and to evaluate common and emerging algorithms in an effort to advance the state of the field. In this paper, we summarize the lessons from a decade of challenges in tractography, and give perspective on the past, present, and future "challenges" that the field of diffusion tractography faces.
Copyright © 2018 Elsevier Inc. All rights reserved.
For two decades diffusion fiber tractography has been used to probe both the spatial extent of white matter pathways and the region to region connectivity of the brain. In both cases, anatomical accuracy of tractography is critical for sound scientific conclusions. Here we assess and validate the algorithms and tractography implementations that have been most widely used - often because of ease of use, algorithm simplicity, or availability offered in open source software. Comparing forty tractography results to a ground truth defined by histological tracers in the primary motor cortex on the same squirrel monkey brains, we assess tract fidelity on the scale of voxels as well as over larger spatial domains or regional connectivity. No algorithms are successful in all metrics, and, in fact, some implementations fail to reconstruct large portions of pathways or identify major points of connectivity. The accuracy is most dependent on reconstruction method and tracking algorithm, as well as the seed region and how this region is utilized. We also note a tremendous variability in the results, even though the same MR images act as inputs to all algorithms. In addition, anatomical accuracy is significantly decreased at increased distances from the seed. An analysis of the spatial errors in tractography reveals that many techniques have trouble properly leaving the gray matter, and many only reveal connectivity to adjacent regions of interest. These results show that the most commonly implemented algorithms have several shortcomings and limitations, and choices in implementations lead to very different results. This study should provide guidance for algorithm choices based on study requirements for sensitivity, specificity, or the need to identify particular connections, and should serve as a heuristic for future developments in tractography.
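Voxel-scale agreement between a tractography reconstruction and a tracer-defined ground truth is commonly summarized with an overlap score such as the Dice coefficient; the minimal sketch below uses toy voxel masks and is only illustrative of the general approach, not the study's exact scoring:

```python
def dice_coefficient(voxels_a, voxels_b):
    # voxels_*: iterables of (x, y, z) indices marking occupied voxels
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

tract = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}    # toy tractography mask
tracer = {(1, 0, 0), (2, 0, 0), (3, 0, 0)}   # toy histological tracer mask
score = dice_coefficient(tract, tracer)      # 2*2 / (3+3) = 2/3
```

Sensitivity and specificity over voxels follow the same set logic, counting tracer voxels hit versus non-tracer voxels falsely labeled.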
State-of-the-art strategies for proteomics are not able to rapidly interrogate complex peptide mixtures in an untargeted manner with sensitive peptide and protein identification rates. We describe a data-independent acquisition (DIA) approach, microDIA (μDIA), that applies a novel tandem mass spectrometry (MS/MS) mass spectral deconvolution method to increase the specificity of tandem mass spectra acquired during proteomics experiments. Using the μDIA approach with a 10 min liquid chromatography gradient allowed detection of 3.1-fold more HeLa proteins than the results obtained from data-dependent acquisition (DDA) of the same samples. Additionally, we found the μDIA MS/MS deconvolution procedure is critical for resolving modified peptides with relatively small precursor mass shifts that cause the same peptide sequence in modified and unmodified forms to theoretically cofragment in the same raw MS/MS spectra. The μDIA workflow is implemented in the PROTALIZER software tool which fully automates tandem mass spectral deconvolution, queries every peptide with a library-free search algorithm against a user-defined protein database, and confidently identifies multiple peptides in a single tandem mass spectrum. We also benchmarked μDIA against DDA using a 90 min gradient analysis of HeLa and Escherichia coli peptides that were mixed in predefined quantitative ratios, and our results showed μDIA provided 24% more true positives at the same false positive rate.
OBJECTIVES - We aimed to validate an algorithm using both primary discharge diagnosis (International Classification of Diseases Ninth Revision (ICD-9)) and diagnosis-related group (DRG) codes to identify hospitalisations due to decompensated heart failure (HF) in a population of patients with diabetes within the Veterans Health Administration (VHA) system.
DESIGN - Validation study.
SETTING - Veterans Health Administration-Tennessee Valley Healthcare System
PARTICIPANTS - We identified and reviewed a stratified, random sample of hospitalisations between 2001 and 2012 within a single VHA healthcare system of adults who received regular VHA care and were initiated on an antidiabetic medication between 2001 and 2008. We sampled 500 hospitalisations: 400 that fulfilled the algorithm criteria and 100 that did not. Of these, 497 had adequate information for inclusion. The mean patient age was 66.1 years (SD 11.4). The majority of patients were male (98.8%); 75% were white and 20% were black.
PRIMARY AND SECONDARY OUTCOME MEASURES - To determine if a hospitalisation was due to HF, we performed chart abstraction using Framingham criteria as the referent standard. We calculated the positive predictive value (PPV), negative predictive value (NPV), sensitivity and specificity for the overall algorithm and each component (primary diagnosis code (ICD-9), DRG code or both).
RESULTS - The algorithm had a PPV of 89.7% (95% CI 86.8 to 92.7), NPV of 93.9% (89.1 to 98.6), sensitivity of 45.1% (25.1 to 65.1) and specificity of 99.4% (99.2 to 99.6). The PPV was highest for hospitalisations that fulfilled both the ICD-9 and DRG algorithm criteria (92.1% (89.1 to 95.1)) and lowest for hospitalisations that fulfilled only DRG algorithm criteria (62.5% (28.4 to 96.6)).
CONCLUSIONS - Our algorithm, which included primary discharge diagnosis and DRG codes, demonstrated excellent PPV for identification of hospitalisations due to decompensated HF among patients with diabetes in the VHA system.
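The validation metrics reported above derive from a standard 2×2 confusion matrix comparing the algorithm against the chart-review referent standard. A minimal sketch follows, using hypothetical counts chosen only to illustrate the formulas (not the study's actual data):

```python
def validation_metrics(tp, fp, fn, tn):
    # tp/fp/fn/tn: true positives, false positives, false negatives, true negatives
    return {
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for illustration only
m = validation_metrics(tp=90, fp=10, fn=110, tn=1790)
```

Note how a high PPV and specificity can coexist with low sensitivity when the algorithm misses many true events, as the study's results illustrate.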
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
OBJECTIVE - Hepatorenal syndrome (HRS) is a devastating form of acute kidney injury (AKI) with high morbidity and mortality in patients with advanced liver disease, but phenotyping algorithms have not yet been developed using large electronic health record (EHR) databases. We evaluated and compared multiple phenotyping methods to achieve an accurate algorithm for HRS identification.
MATERIALS AND METHODS - A national retrospective cohort of patients with cirrhosis and AKI admitted to 124 Veterans Affairs hospitals was assembled from electronic health record data collected from 2005 to 2013. AKI was defined by the Kidney Disease: Improving Global Outcomes criteria. Five hundred and four hospitalizations were selected for manual chart review and served as the gold standard. EHR-based predictors were identified from structured data and from free-text clinical notes processed with natural language processing (NLP) using the clinical Text Analysis and Knowledge Extraction System (cTAKES). We explored several dimension reduction techniques for the NLP data, including newer high-throughput phenotyping and word embedding methods, and ascertained their effectiveness in identifying the phenotype without structured predictor variables. With the combined structured and NLP variables, we analyzed five phenotyping algorithms: penalized logistic regression, naïve Bayes, support vector machines, random forest, and gradient boosting. Calibration and discrimination metrics were calculated using 100 bootstrap iterations. In the final model, we report odds ratios and 95% confidence intervals.
RESULTS - The area under the receiver operating characteristic curve (AUC) for the different models ranged from 0.73 to 0.93, with penalized logistic regression having the best discriminatory performance. Calibration for logistic regression was modest, but gradient boosting and support vector machines were superior. NLP identified 6985 variables; a priori variable selection performed similarly to dimensionality reduction using high-throughput phenotyping and semantic similarity informed clustering (AUC 0.81-0.82).
CONCLUSION - This study demonstrated improved phenotyping of a challenging AKI etiology, HRS, over ICD-9 coding. We also compared performance among multiple approaches to EHR-derived phenotyping, and found similar results between methods. Lastly, we showed that automated NLP dimension reduction is viable for acute illness.
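Discrimination estimates like those above (an AUC with a bootstrap interval) can be computed with no ML library at all, via the rank-based (Mann-Whitney) formulation of the AUC. The sketch below uses toy labels and scores rather than the study's models:

```python
import random

def auc(labels, scores):
    # Mann-Whitney form: probability that a positive case outscores a negative
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=200, seed=0):
    # percentile bootstrap: resample cases with replacement, recompute AUC
    rng = random.Random(seed)
    n, stats = len(labels), []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        lb = [labels[i] for i in idx]
        if len(set(lb)) < 2:        # resample must contain both classes
            continue
        stats.append(auc(lb, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

labels = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]    # toy classifier outputs
point = auc(labels, scores)
lo, hi = bootstrap_auc_ci(labels, scores)
```

With only six cases the interval is very wide; the study's 100 bootstrap iterations over a 504-chart sample give far more stable estimates.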
While great progress has been made, only 10% of the nearly 1,000 integral, α-helical, multi-span membrane protein families are represented by at least one experimentally determined structure in the PDB. Previously, we developed the algorithm BCL::MP-Fold, which samples the large conformational space of membrane proteins de novo by assembling predicted secondary structure elements guided by knowledge-based potentials. Here, we present a case study of rhodopsin fold determination by integrating sparse and/or low-resolution restraints from multiple experimental techniques including electron microscopy, electron paramagnetic resonance spectroscopy, and nuclear magnetic resonance spectroscopy. Simultaneous incorporation of orthogonal experimental restraints not only significantly improved the sampling accuracy but also allowed identification of the correct fold, which is demonstrated by a protein size-normalized transmembrane root-mean-square deviation as low as 1.2 Å. The protocol developed in this case study can be used for the determination of unknown membrane protein folds when limited experimental restraints are available.
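The fold-quality measure above is built on root-mean-square deviation (RMSD) between model and native coordinates. A minimal sketch of plain RMSD for pre-superimposed coordinates follows; the paper's protein size-normalized, transmembrane-restricted variant adds a length correction that is not shown here:

```python
from math import sqrt

def rmsd(coords_a, coords_b):
    # coords_*: equal-length lists of (x, y, z) positions in Å; assumes the
    # structures are already optimally superimposed (e.g. via the Kabsch algorithm)
    assert len(coords_a) == len(coords_b) and coords_a
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return sqrt(total / len(coords_a))

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
native = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]  # model shifted 1 Å in z
d = rmsd(model, native)                      # 1.0 Å
```

Size normalization matters because raw RMSD grows with chain length, so a fixed-length reference makes scores comparable across proteins.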
Copyright © 2018 Elsevier Ltd. All rights reserved.
Computational protein design has been successful in modeling fixed backbone proteins in a single conformation. However, when modeling large ensembles of flexible proteins, current methods in protein design have been insufficient. Large barriers in the energy landscape are difficult to traverse while redesigning a protein sequence, and as a result current design methods only sample a fraction of available sequence space. We propose a new computational approach that combines traditional structure-based modeling using the Rosetta software suite with machine learning and integer linear programming to overcome limitations in the Rosetta sampling methods. We demonstrate the effectiveness of this method, which we call BROAD, by benchmarking the performance on increasing predicted breadth of anti-HIV antibodies. We use this novel method to increase predicted breadth of naturally-occurring antibody VRC23 against a panel of 180 divergent HIV viral strains and achieve 100% predicted binding against the panel. In addition, we compare the performance of this method to state-of-the-art multistate design in Rosetta and show that we can outperform the existing method significantly. We further demonstrate that sequences recovered by this method recover known binding motifs of broadly neutralizing anti-HIV antibodies. Finally, our approach is general and can be extended easily to other protein systems. Although our modeled antibodies were not tested in vitro, we predict that these variants would have greatly increased breadth compared to the wild-type antibody.
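The objective above, choosing a sequence that maximizes the fraction of viral strains predicted to be bound, can be illustrated with a brute-force stand-in for the paper's integer linear programming step. The per-strain scoring functions below are toy stand-ins, not Rosetta energies:

```python
from itertools import product

def predicted_breadth(seq, strain_scorers, threshold=0.0):
    # fraction of strains whose score for this sequence exceeds the threshold
    hits = sum(1 for score in strain_scorers if score(seq) > threshold)
    return hits / len(strain_scorers)

def best_sequence(position_choices, strain_scorers):
    # enumerate all candidate sequences; an ILP solver replaces this
    # exhaustive loop when the design space is too large to enumerate
    best_seq, best_b = None, -1.0
    for combo in product(*position_choices):
        seq = "".join(combo)
        b = predicted_breadth(seq, strain_scorers)
        if b > best_b:
            best_seq, best_b = seq, b
    return best_seq, best_b

choices = [["A", "G"], ["Y", "W"]]           # allowed residues per position
strains = [lambda s: 1.0 if s[0] == "A" else -1.0,   # toy per-strain scorers
           lambda s: 1.0 if s[1] == "Y" else -1.0]
seq, breadth = best_sequence(choices, strains)
```

The design space grows exponentially with the number of mutable positions, which is why the paper couples machine-learned scoring with an ILP formulation instead of exhaustive search.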
Epilepsy surgery has seen numerous technological advances in both diagnostic and therapeutic procedures in recent years. This has increased the number of patients who may be candidates for intervention and potential improvement in quality of life. However, the expansion of the field also necessitates a broader understanding of how to incorporate both traditional and emerging technologies into the care provided at comprehensive epilepsy centers. This review summarizes both old and new surgical procedures in epilepsy using an example algorithm. While treatment algorithms are inherently oversimplified, incomplete, and reflect personal bias, they provide a general framework that can be customized to each center and each patient, incorporating differences in provider opinion, patient preference, and the institutional availability of technologies. For instance, the use of minimally invasive stereotactic electroencephalography (SEEG) has increased dramatically over the past decade, but many cases still benefit from invasive recordings using subdural grids. Furthermore, although surgical resection remains the gold-standard treatment for focal mesial temporal or neocortical epilepsy, ablative procedures such as laser interstitial thermal therapy (LITT) or stereotactic radiosurgery (SRS) may be appropriate and avoid craniotomy in many cases. Furthermore, while palliative surgical procedures were once limited to disconnection surgeries, several neurostimulation treatments are now available to treat eloquent cortical, bitemporal, and even multifocal or generalized epilepsy syndromes. An updated perspective in epilepsy surgery will help guide surgical decision making and lay the groundwork for data collection needed in future studies and trials.