BACKGROUND - Convolutional neural networks (CNNs) are advanced artificial intelligence algorithms well suited to image classification tasks with variable features. These have been used to great effect in various real-world applications including handwriting recognition, face detection, image search, and fraud prevention. We sought to retrain a robust CNN with coronal computed tomography (CT) images to classify osteomeatal complex (OMC) occlusion and assess the performance of this technology with rhinologic data.
METHODS - The Google Inception-V3 CNN trained with 1.28 million images was used as the base model. Preoperative coronal sections through the OMC were obtained from 239 patients enrolled in 2 prospective chronic rhinosinusitis (CRS) outcomes studies, labeled according to OMC status, and mirrored to obtain a set of 956 images. Using this data, the classification layer of Inception-V3 was retrained in Python using a transfer learning method to adapt the CNN to the task of interpreting sinonasal CT images.
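To make the transfer-learning step concrete, the sketch below freezes a pretrained Inception-V3 feature extractor and retrains only a new classification layer. It is a minimal illustration using the Keras API, not the authors' exact pipeline; the input size and the single sigmoid output (occluded versus patent OMC) are assumptions.

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import InceptionV3

    # ImageNet-pretrained Inception-V3 used as a fixed feature extractor
    base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(299, 299, 3))
    base.trainable = False

    # new classification layer: occluded vs. patent OMC (binary output)
    model = models.Sequential([base, layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # model.fit(train_images, train_labels, validation_data=(val_images, val_labels))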
RESULTS - The retrained neural network achieved 85% classification accuracy for OMC occlusion, with a 95% confidence interval of 78% to 92%. Receiver operating characteristic (ROC) curve analysis on the test set confirmed good classification ability, with an area under the ROC curve (AUC) of 0.87, significantly different from both random guessing and a dominant classifier that always predicts the most common class (p < 0.0001).
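The reported confidence interval is consistent with a simple normal-approximation (Wald) interval on test-set accuracy; the snippet below shows the arithmetic with a placeholder test-set size, since the number of held-out images is not stated here.

    import numpy as np

    acc, n_test = 0.85, 100      # n_test is a placeholder, not taken from the study
    half_width = 1.96 * np.sqrt(acc * (1 - acc) / n_test)
    print(f"95% CI: {acc - half_width:.2f} to {acc + half_width:.2f}")  # roughly 0.78 to 0.92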
CONCLUSION - Current state-of-the-art CNNs may be able to learn clinically relevant information from 2-dimensional sinonasal CT images with minimal supervision. Future work will extend this approach to 3-dimensional images in order to further refine this technology for possible clinical applications.
Acute coronary syndrome (ACS) accounts for 1.36 million hospitalizations and billions of dollars in costs in the United States alone. A major challenge in diagnosing and treating patients with suspected ACS is the substantial symptom overlap between patients with and without ACS, and both over- and under-treatment carry high costs. Guidelines recommend early risk stratification of patients, but many tools lack sufficient accuracy for use in clinical practice, and prognostic indices often misrepresent clinical populations and rely on curated data. We used random forest and elastic net models on 20,078 deidentified records containing substantial missing and noisy values to develop models that outperform existing ACS risk prediction tools. The random forest (AUC = 0.848) significantly outperformed elastic net (AUC = 0.818), ridge regression (AUC = 0.810), and the TIMI (AUC = 0.745) and GRACE (AUC = 0.623) scores. These findings show that a random forest applied to noisy and sparse data can perform on par with, or better than, previously developed scoring metrics.
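A hedged sketch of this kind of comparison is shown below: missing values are imputed inside a cross-validation pipeline and each model is scored by AUC. The synthetic data, feature count, and hyperparameters are placeholders, not the study's.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # stand-in for the deidentified records: features with ~20% missingness
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    X[rng.random(X.shape) < 0.2] = np.nan
    y = rng.integers(0, 2, 500)          # suspected-ACS outcome label

    models = {
        "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
        "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                          l1_ratio=0.5, max_iter=5000),
        "ridge": LogisticRegression(penalty="l2", solver="saga", max_iter=5000),
    }
    for name, clf in models.items():
        pipe = make_pipeline(SimpleImputer(strategy="median"), clf)
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC = {auc:.3f}")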
Competence is essential for health care professionals, yet current methods for assessing competency do not efficiently capture medical students' experience. In this preliminary study, we used machine learning and natural language processing (NLP) to identify geriatric competency exposures in students' clinical notes. The system applied NLP to generate concepts and related features from the notes, from which we extracted a refined list of concepts associated with the corresponding competencies. The system was evaluated through 10-fold cross-validation for six geriatric competency domains, each an Association of American Medical Colleges competency for medical students: medication management (MedMgmt); cognitive and behavioral disorders (CBD); falls, balance, and gait disorders (Falls); self-care capacity (SCC); palliative care (PC); and hospital care for elders (HCE). The system accurately assessed the MedMgmt, SCC, HCE, and Falls competencies with F-measures of 0.94, 0.86, 0.85, and 0.84, respectively, but did not perform as well for PC and CBD (F-measures of 0.69 and 0.62, respectively).
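As an illustration of how such a per-domain classifier could be evaluated, the sketch below pairs a bag-of-words text representation with a 10-fold cross-validated F-measure. The features and model are assumptions for illustration; the system described above derived concepts with NLP tooling rather than raw n-grams.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # hypothetical per-domain data: clinical-note text and a binary competency label
    # notes = [...]; labels = [...]
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
    # with an annotated corpus for one domain:
    # f1 = cross_val_score(pipe, notes, labels, cv=10, scoring="f1").mean()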
In clinical notes, physicians commonly describe reasons why certain treatments are given. However, this information is not typically available in a computable form. We describe a supervised learning system that is able to predict whether or not a treatment relation exists between any two medical concepts mentioned in clinical notes. To train our prediction model, we manually annotated 958 treatment relations in sentences selected from 6,864 discharge summaries. The features used to indicate the existence of a treatment relation between two medical concepts consisted of lexical and semantic information associated with the two concepts as well as information derived from the MEDication Indication (MEDI) resource and SemRep. The best F1-measure results of our supervised learning system (84.90) were significantly better than the F1-measure results achieved by SemRep (72.34).
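One way to picture the feature-based setup is sketched below: each candidate concept pair becomes a feature dictionary mixing lexical, semantic, MEDI-derived, and SemRep-derived signals, which a standard classifier then labels. The specific feature names and the classifier are illustrative, not the authors' configuration.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # illustrative feature dictionaries for two candidate concept pairs
    pairs = [
        {"c1_type": "drug", "c2_type": "problem", "tokens_between": 2,
         "medi_indication": True, "semrep_relation": "TREATS"},
        {"c1_type": "test", "c2_type": "problem", "tokens_between": 7,
         "medi_indication": False, "semrep_relation": "NONE"},
    ]
    labels = [1, 0]                      # 1 = treatment relation present

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(pairs, labels)
    # with the annotated corpus, evaluate with sklearn.metrics.f1_score on held-out pairs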
Selective and iterative method for performance level estimation (SIMPLE) is a multi-atlas segmentation technique that integrates atlas selection and label fusion and has proven effective for radiotherapy planning. Herein, we revisit atlas selection and fusion techniques in the context of segmenting the spleen in metastatic liver cancer patients with possible splenomegaly using clinically acquired computed tomography (CT). We re-derive the SIMPLE algorithm in the context of the statistical literature and show that its atlas selection criteria rest on newly presented, principled likelihood models. We show that SIMPLE performance can be improved by accounting for exogenous information through Bayesian priors (so-called context learning). These innovations are integrated with the joint label fusion approach to reduce the impact of correlated errors among selected atlases. In a study of 65 subjects, the spleen was segmented with a median Dice similarity coefficient of 0.93 and a mean surface distance error of 2.2 mm.
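The core idea, iteratively re-estimating a consensus segmentation while discarding atlases that agree poorly with it, can be sketched in a few lines. The selection rule, iteration count, and keep fraction below are simplifications of the SIMPLE formulation, not the derivation presented in the paper.

    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def simple_like_fusion(atlas_labels, n_iter=5, keep_fraction=0.8):
        # atlas_labels: registered binary label maps, one per atlas
        selected = list(range(len(atlas_labels)))
        fused = np.mean([atlas_labels[i] for i in selected], axis=0) >= 0.5
        for _ in range(n_iter):
            scores = [dice(atlas_labels[i], fused) for i in selected]
            cutoff = np.quantile(scores, 1.0 - keep_fraction)
            selected = [i for i, s in zip(selected, scores) if s >= cutoff]
            fused = np.mean([atlas_labels[i] for i in selected], axis=0) >= 0.5
        return fused

    rng = np.random.default_rng(0)
    consensus = simple_like_fusion(rng.random((10, 64, 64)) > 0.5)  # toy label maps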
Cochlear Implants (CI) are surgically implanted neural prosthetic devices used to treat severe-to-profound hearing loss. Recent studies have suggested that hearing outcomes with CIs are correlated with the location where individual electrodes in the implanted electrode array are placed, but techniques proposed for determining electrode location have been too coarse and labor intensive to permit detailed analysis on large numbers of datasets. In this paper, we present a fully automatic snake-based method for accurately localizing CI electrodes in clinical post-implantation CTs. Our results show that average electrode localization errors with the method are 0.21 millimeters. These results indicate that our method could be used in future large scale studies to analyze the relationship between electrode position and hearing outcome, which potentially could lead to technological advances that improve hearing outcomes with CIs.
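For context on the reported error metric, a mean localization error in millimeters can be computed from matched automatic and manual electrode positions as below; the voxel spacing shown is a hypothetical value, not that of the clinical CTs used in the study.

    import numpy as np

    def mean_localization_error(auto_pts, manual_pts, voxel_size_mm=(0.2, 0.2, 0.3)):
        # auto_pts, manual_pts: (n_electrodes, 3) voxel coordinates in matched order
        diff = (np.asarray(auto_pts) - np.asarray(manual_pts)) * np.asarray(voxel_size_mm)
        return np.linalg.norm(diff, axis=1).mean()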
Gene-regulatory enhancers have been identified using various approaches, including evolutionary conservation, regulatory protein binding, chromatin modifications, and DNA sequence motifs. To integrate these different approaches, we developed EnhancerFinder, a two-step method for distinguishing developmental enhancers from the genomic background and then predicting their tissue specificity. EnhancerFinder uses a multiple kernel learning approach to integrate DNA sequence motifs, evolutionary patterns, and diverse functional genomics datasets from a variety of cell types. In contrast with prediction approaches that define enhancers based on histone marks or p300 sites from a single cell line, we trained EnhancerFinder on hundreds of experimentally verified human developmental enhancers from the VISTA Enhancer Browser. We comprehensively evaluated EnhancerFinder using cross validation and found that our integrative method improves the identification of enhancers over approaches that consider a single type of data, such as sequence motifs, evolutionary conservation, or the binding of enhancer-associated proteins. We find that VISTA enhancers active in embryonic heart are easier to identify than enhancers active in several other embryonic tissues, likely due to their uniquely high GC content. We applied EnhancerFinder to the entire human genome and predicted 84,301 developmental enhancers and their tissue specificity. These predictions provide specific functional annotations for large amounts of human non-coding DNA, and are significantly enriched near genes with annotated roles in their predicted tissues and near lead SNPs from genome-wide association studies. We demonstrate the utility of EnhancerFinder predictions through in vivo validation of novel embryonic gene regulatory enhancers from three developmental transcription factor loci. Our genome-wide developmental enhancer predictions are freely available as a UCSC Genome Browser track, which we hope will enable researchers to further investigate questions in developmental biology.
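A bare-bones version of the multiple kernel learning step is sketched below: per-data-type kernels (sequence motifs, conservation, functional genomics) are combined, here by an unweighted sum, and passed to a kernel classifier. EnhancerFinder learns the kernel weights, so the equal-weight combination and the random placeholder data are simplifying assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    n = 200
    # placeholder feature blocks for each data type
    X_motif, X_cons, X_func = rng.random((n, 50)), rng.random((n, 10)), rng.random((n, 30))
    y = rng.integers(0, 2, n)            # 1 = verified enhancer, 0 = genomic background

    # combine per-data-type kernels (equal weights here; MKL would learn the weights)
    K = sum(rbf_kernel(X) for X in (X_motif, X_cons, X_func))
    clf = SVC(kernel="precomputed").fit(K, y)

    # scoring new regions requires the kernel between new and training examples:
    # K_new = sum(rbf_kernel(X_new_block, X_train_block) for each data type)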
β-Cell mass is a parameter commonly measured in studies of islet biology and diabetes. However, the rigorous quantification of pancreatic β-cell mass using conventional histological methods is a time-consuming process. Rapidly evolving virtual slide technology with high-resolution slide scanners and newly developed image analysis tools has the potential to transform β-cell mass measurement. To test the effectiveness and accuracy of this new approach, we assessed pancreata from normal C57Bl/6J mice and from mouse models of β-cell ablation (streptozotocin-treated mice) and β-cell hyperplasia (leptin-deficient mice), using a standardized systematic sampling of pancreatic specimens. Our data indicate that automated analysis of virtual pancreatic slides is highly reliable and yields results consistent with those obtained by conventional morphometric analysis. This new methodology will allow investigators to dramatically reduce the time required for β-cell mass measurement by automating high-resolution image capture and analysis of entire pancreatic sections.
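The quantity being automated reduces to a simple ratio, as in the sketch below: the insulin-positive area fraction of the scanned sections scaled by pancreas weight. The masks and weight are placeholders for the outputs of the slide-analysis pipeline, not code from the study.

    import numpy as np

    def beta_cell_mass_mg(insulin_mask, tissue_mask, pancreas_weight_mg):
        # insulin_mask, tissue_mask: boolean pixel masks from the analyzed virtual slides
        beta_fraction = insulin_mask.sum() / tissue_mask.sum()
        return beta_fraction * pancreas_weight_mg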
A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve with an electrode array implanted in the cochlea. In CI surgery, the surgeon accesses the cochlea and makes an opening through which the electrode array is inserted blind to the internal structures of the cochlea, so the final position of the electrode array relative to intra-cochlear anatomy is generally unknown. We have recently developed an approach for determining electrode array position relative to intra-cochlear anatomy using a pre- and a post-implantation CT: the intra-cochlear anatomy is segmented in the pre-implantation CT, the electrodes are localized in the post-implantation CT, and the two CTs are registered to determine relative electrode array position. We are currently using this approach to develop a CI programming technique that uses patient-specific spatial information to create patient-customized sound processing strategies. However, this technique cannot be used for many CI users because it requires a pre-implantation CT, which is not always acquired before implantation. In this study, we propose a method for automatic segmentation of intra-cochlear anatomy in post-implantation CTs of unilateral recipients, eliminating the need for pre-implantation CTs in this population. The method segments the intra-cochlear anatomy of the implanted ear using information extracted from the normal contralateral ear, exploiting the intra-subject symmetry of cochlear anatomy across ears. To validate our method, we performed experiments on 30 ears for which both a pre- and a post-implantation CT were available. The mean and maximum segmentation errors are 0.224 and 0.734 mm, respectively. These results indicate that our automatic segmentation method is accurate enough to support patient-customized CI sound processing strategies for unilateral CI recipients using a post-implantation CT alone.
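The reported mean and maximum segmentation errors are surface-distance measures; a generic way to compute them from two surface point clouds is sketched below, as an assumption about the metric rather than the validation code used in the study.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_errors_mm(auto_pts_mm, ref_pts_mm):
        # symmetric closest-point distances between automatic and reference surfaces
        d_auto_to_ref = cKDTree(ref_pts_mm).query(auto_pts_mm)[0]
        d_ref_to_auto = cKDTree(auto_pts_mm).query(ref_pts_mm)[0]
        d = np.concatenate([d_auto_to_ref, d_ref_to_auto])
        return d.mean(), d.max()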
OBJECTIVE - Drug-drug interactions (DDIs) are an important consideration in both drug development and clinical application, especially for co-administered medications. While it is necessary to identify all possible DDIs during clinical trials, DDIs are frequently reported after the drugs are approved for clinical use, and they are a common cause of adverse drug reactions (ADR) and increasing healthcare costs. Computational prediction may assist in identifying potential DDIs during clinical trials.
METHODS - Here we propose a heterogeneous network-assisted inference (HNAI) framework to assist with the prediction of DDIs. First, we constructed a comprehensive DDI network that contained 6946 unique DDI pairs connecting 721 approved drugs based on DrugBank data. Next, we calculated drug-drug pair similarities using four features: phenotypic similarity based on a comprehensive drug-ADR network, therapeutic similarity based on the drug Anatomical Therapeutic Chemical classification system, chemical structural similarity from SMILES data, and genomic similarity based on a large drug-target interaction network built from DrugBank and the Therapeutic Target Database. Finally, we applied five predictive models in the HNAI framework: naive Bayes, decision tree, k-nearest neighbor, logistic regression, and support vector machine.
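A schematic version of this modeling step is shown below: each drug pair is a four-feature vector (phenotypic, therapeutic, chemical-structural, and genomic similarity), and the five classifiers are compared by cross-validated AUC. The random placeholder features and default hyperparameters are assumptions, not the HNAI configuration.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((1000, 4))            # one row per drug pair: four similarity features
    y = rng.integers(0, 2, 1000)         # 1 = known DDI pair, 0 = non-interacting pair

    models = {
        "naive Bayes": GaussianNB(),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "k-nearest neighbor": KNeighborsClassifier(),
        "logistic regression": LogisticRegression(),
        "support vector machine": SVC(),
    }
    for name, clf in models.items():
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC = {auc:.2f}")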
RESULTS - The area under the receiver operating characteristic curve of the HNAI models is 0.67 as evaluated using fivefold cross-validation. Using antipsychotic drugs as an example, several HNAI-predicted DDIs that involve weight gain and cytochrome P450 inhibition were supported by literature resources.
CONCLUSIONS - Through machine learning-based integration of drug phenotypic, therapeutic, structural, and genomic similarities, we demonstrated that HNAI is promising for uncovering DDIs in drug development and postmarketing surveillance.