
About this data

The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated weekly to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 59

Publication Record

Connections

Automated classification of osteomeatal complex inflammation on computed tomography using convolutional neural networks.
Chowdhury NI, Smith TL, Chandra RK, Turner JH
(2019) Int Forum Allergy Rhinol 9: 46-52
MeSH Terms: Adult, Artificial Intelligence, Automation, Laboratory, Chronic Disease, Female, Humans, Inflammation, Male, Nasal Obstruction, Neural Networks, Computer, Paranasal Sinuses, Prospective Studies, Rhinitis, Sinusitis, Tomography, X-Ray Computed
Added July 23, 2020
BACKGROUND - Convolutional neural networks (CNNs) are advanced artificial intelligence algorithms well suited to image classification tasks with variable features. These have been used to great effect in various real-world applications including handwriting recognition, face detection, image search, and fraud prevention. We sought to retrain a robust CNN with coronal computed tomography (CT) images to classify osteomeatal complex (OMC) occlusion and assess the performance of this technology with rhinologic data.
METHODS - The Google Inception-V3 CNN trained with 1.28 million images was used as the base model. Preoperative coronal sections through the OMC were obtained from 239 patients enrolled in 2 prospective chronic rhinosinusitis (CRS) outcomes studies, labeled according to OMC status, and mirrored to obtain a set of 956 images. Using this data, the classification layer of Inception-V3 was retrained in Python using a transfer learning method to adapt the CNN to the task of interpreting sinonasal CT images.
RESULTS - The retrained neural network achieved 85% classification accuracy for OMC occlusion, with a 95% confidence interval for algorithm accuracy of 78% to 92%. Receiver operating characteristic (ROC) curve analysis on the test set confirmed good classification ability of the CNN with an area under the ROC curve (AUC) of 0.87, significantly different from both random guessing and a dominant classifier that predicts the most common class (p < 0.0001).
CONCLUSION - Current state-of-the-art CNNs may be able to learn clinically relevant information from 2-dimensional sinonasal CT images with minimal supervision. Future work will extend this approach to 3-dimensional images in order to further refine this technology for possible clinical applications.
© 2018 ARS-AAOA, LLC.
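The retraining step described above is a standard transfer-learning recipe: freeze the pretrained Inception-V3 convolutional layers and fit a new binary classification head on the labeled coronal slices. Below is a minimal sketch assuming a Keras/TensorFlow implementation (the abstract states only that retraining was done in Python); the directory layout, batch size, and epoch count are illustrative rather than the authors' settings.

    # Transfer-learning sketch: Inception-V3 as a frozen feature extractor plus a new
    # binary head for osteomeatal complex occlusion. Framework choice and all
    # hyperparameters are assumptions, not taken from the paper.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False  # keep the pretrained convolutional weights fixed

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1.0, input_shape=(299, 299, 3)),  # Inception-V3 expects [-1, 1]
        base,
        layers.Dense(1, activation="sigmoid"),  # occluded vs. patent OMC
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Hypothetical folder of labeled coronal CT slices: omc_ct_slices/train/{occluded,patent}/
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "omc_ct_slices/train", image_size=(299, 299), batch_size=32, label_mode="binary")
    model.fit(train_ds, epochs=10)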
0 Communities
1 Members
0 Resources
15 MeSH Terms
Machine learning for risk prediction of acute coronary syndrome.
VanHouten JP, Starmer JM, Lorenzi NM, Maron DJ, Lasko TA
(2014) AMIA Annu Symp Proc 2014: 1940-9
MeSH Terms: Acute Coronary Syndrome, Algorithms, Area Under Curve, Artificial Intelligence, Diagnostic Errors, Humans, Logistic Models, Prognosis, ROC Curve, Risk Assessment
Added April 7, 2017
Acute coronary syndrome (ACS) accounts for 1.36 million hospitalizations and billions of dollars in costs in the United States alone. A major challenge to diagnosing and treating patients with suspected ACS is the significant symptom overlap between patients with and without ACS. There is a high cost to over- and under-treatment. Guidelines recommend early risk stratification of patients, but many tools lack sufficient accuracy for use in clinical practice. Prognostic indices often misrepresent clinical populations and rely on curated data. We used random forest and elastic net on 20,078 deidentified records with significant missing and noisy values to develop models that outperform existing ACS risk prediction tools. We found that the random forest (AUC = 0.848) significantly outperformed elastic net (AUC = 0.818), ridge regression (AUC = 0.810), and the TIMI (AUC = 0.745) and GRACE (AUC = 0.623) scores. Our findings show that random forest applied to noisy and sparse data can perform on par with previously developed scoring metrics.
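The model comparison reported above is easy to reproduce in outline: impute the sparse, noisy features, then compare a tree ensemble against a penalized regression under cross-validated AUC. A hedged scikit-learn sketch follows (the paper does not name its implementation); the file name, label column, and hyperparameters are hypothetical, and the elastic net is approximated by an elastic-net-penalized logistic regression.

    # Sketch of comparing random forest and elastic net by cross-validated AUC.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("acs_cohort.csv")                      # hypothetical deidentified cohort
    X, y = df.drop(columns=["acs_label"]), df["acs_label"]  # acs_label = chart-confirmed ACS

    models = {
        "elastic net": make_pipeline(
            SimpleImputer(), StandardScaler(),
            LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000)),
        "random forest": make_pipeline(
            SimpleImputer(), RandomForestClassifier(n_estimators=500, n_jobs=-1)),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
        print(f"{name}: mean cross-validated AUC = {auc:.3f}")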
0 Communities
1 Members
0 Resources
10 MeSH Terms
Automated Assessment of Medical Students' Clinical Exposures according to AAMC Geriatric Competencies.
Chen Y, Wrenn J, Xu H, Spickard A, Habermann R, Powers J, Denny JC
(2014) AMIA Annu Symp Proc 2014: 375-84
MeSH Terms: Area Under Curve, Artificial Intelligence, Clinical Competence, Education, Medical, Undergraduate, Educational Measurement, Geriatrics, Humans, Natural Language Processing, Students, Medical, Tennessee
Added March 14, 2018
Competence is essential for health care professionals. Current methods to assess competency, however, do not efficiently capture medical students' experience. In this preliminary study, we used machine learning and natural language processing (NLP) to identify geriatric competency exposures from students' clinical notes. The system applied NLP to generate the concepts and related features from notes. We extracted a refined list of concepts associated with corresponding competencies. This system was evaluated through 10-fold cross validation for six geriatric competency domains: "medication management (MedMgmt)", "cognitive and behavioral disorders (CBD)", "falls, balance, gait disorders (Falls)", "self-care capacity (SCC)", "palliative care (PC)", "hospital care for elders (HCE)" - each an Association of American Medical Colleges competency for medical students. The system could accurately assess MedMgmt, SCC, HCE, and Falls competencies with F-measures of 0.94, 0.86, 0.85, and 0.84, respectively, but did not attain good performance for PC and CBD (0.69 and 0.62 in F-measure, respectively).
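As a rough illustration of the train-and-evaluate loop, the sketch below classifies notes for a single competency with 10-fold cross-validated F-measure. It substitutes TF-IDF n-grams and a linear SVM for the paper's actual pipeline, which extracted clinical concepts with an NLP system; the input file and column names are hypothetical.

    # Toy per-competency note classifier evaluated with 10-fold cross-validation.
    # TF-IDF + LinearSVC stand in for the concept-based features used in the study.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    notes = pd.read_csv("student_notes_falls.csv")     # hypothetical de-identified export
    X, y = notes["note_text"], notes["falls_label"]    # 1 = note documents a Falls exposure

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    f1 = cross_val_score(clf, X, y, cv=10, scoring="f1").mean()
    print(f"10-fold cross-validated F-measure: {f1:.2f}")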
0 Communities
1 Members
0 Resources
10 MeSH Terms
Learning to identify treatment relations in clinical text.
Bejan CA, Denny JC
(2014) AMIA Annu Symp Proc 2014: 282-8
MeSH Terms: Artificial Intelligence, Databases as Topic, Electronic Health Records, Humans, Information Storage and Retrieval, Natural Language Processing, Semantics, Therapeutics
Added March 14, 2018
In clinical notes, physicians commonly describe reasons why certain treatments are given. However, this information is not typically available in a computable form. We describe a supervised learning system that is able to predict whether or not a treatment relation exists between any two medical concepts mentioned in clinical notes. To train our prediction model, we manually annotated 958 treatment relations in sentences selected from 6,864 discharge summaries. The features used to indicate the existence of a treatment relation between two medical concepts consisted of lexical and semantic information associated with the two concepts as well as information derived from the MEDication Indication (MEDI) resource and SemRep. The best F1-measure results of our supervised learning system (84.90) were significantly better than the F1-measure results achieved by SemRep (72.34).
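The core setup is pairwise relation classification: every pair of medical concepts in a sentence becomes one training example described by lexical and semantic features. A minimal sketch of that framing follows, with a logistic regression classifier and illustrative features (the study's richer MEDI- and SemRep-derived features are omitted, and the classifier choice is an assumption).

    # Each candidate concept pair is turned into a feature dictionary; the label says
    # whether a treatment relation was annotated between the two concepts.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pairs = [
        {"concept1": "lisinopril", "concept2": "hypertension",
         "sem_type1": "Pharmacologic Substance", "sem_type2": "Disease or Syndrome",
         "token_distance": 3, "in_medi": 1},
        {"concept1": "aspirin", "concept2": "nausea",
         "sem_type1": "Pharmacologic Substance", "sem_type2": "Sign or Symptom",
         "token_distance": 12, "in_medi": 0},
    ]
    labels = [1, 0]  # 1 = treatment relation present (toy annotations)

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(pairs, labels)
    print(model.predict(pairs))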
0 Communities
1 Members
0 Resources
8 MeSH Terms
SIMPLE is a good idea (and better with context learning).
Xu Z, Asman AJ, Shanahan PL, Abramson RG, Landman BA
(2014) Med Image Comput Comput Assist Interv 17: 364-71
MeSH Terms: Algorithms, Artificial Intelligence, Humans, Image Enhancement, Liver Neoplasms, Pattern Recognition, Automated, Radiographic Image Interpretation, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity, Splenic Neoplasms, Tomography, X-Ray Computed
Added February 13, 2015
Selective and iterative method for performance level estimation (SIMPLE) is a multi-atlas segmentation technique integrating atlas selection and label fusion that has proven effective for radiotherapy planning. Herein, we revisit atlas selection and fusion techniques in the context of segmenting the spleen in metastatic liver cancer patients with possible splenomegaly using clinically acquired computed tomography (CT). We re-derive the SIMPLE algorithm in the context of the statistical literature, and show that the atlas selection criteria rest on newly presented principled likelihood models. We show that SIMPLE performance can be improved by accounting for exogenous information through Bayesian priors (so-called context learning). These innovations are integrated with the joint label fusion approach to reduce the impact of correlated errors among selected atlases. In a study of 65 subjects, the spleen was segmented with a median Dice similarity coefficient of 0.93 and a mean surface distance error of 2.2 mm.
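For readers unfamiliar with SIMPLE, its core loop is easy to state: fuse the currently selected atlas labels, score each atlas against the fused estimate with the Dice coefficient, drop under-performing atlases, and repeat. The NumPy sketch below implements that heuristic loop with majority-vote fusion and an assumed rejection threshold; the paper's contribution is precisely to replace these heuristics with principled likelihood models and joint label fusion.

    # Heuristic SIMPLE-style iterative atlas selection on binary masks (illustrative only).
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def simple_fusion(atlas_labels, iterations=5, drop_sd=1.0):
        """atlas_labels: (n_atlases, *volume_shape) array of binary masks."""
        selected = np.ones(len(atlas_labels), dtype=bool)
        for _ in range(iterations):
            estimate = atlas_labels[selected].mean(axis=0) >= 0.5      # majority-vote fusion
            scores = np.array([dice(m, estimate) for m in atlas_labels])
            threshold = scores[selected].mean() - drop_sd * scores[selected].std()
            selected &= scores >= threshold                            # discard poor atlases
            if selected.sum() <= 1:
                break
        return atlas_labels[selected].mean(axis=0) >= 0.5

    # Toy example: five noisy "atlas" masks of a square object, one of them very poor.
    rng = np.random.default_rng(0)
    truth = np.zeros((32, 32), dtype=bool)
    truth[8:24, 8:24] = True
    atlases = np.array([np.logical_xor(truth, rng.random((32, 32)) < p)
                        for p in (0.02, 0.05, 0.05, 0.10, 0.45)])
    print("Dice of fused estimate vs. truth:", round(dice(simple_fusion(atlases), truth), 3))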
0 Communities
1 Members
0 Resources
11 MeSH Terms
Automatic localization of cochlear implant electrodes in CT.
Zhao Y, Dawant BM, Labadie RF, Noble JH
(2014) Med Image Comput Comput Assist Interv 17: 331-8
MeSH Terms: Algorithms, Artificial Intelligence, Cochlear Implantation, Cochlear Implants, Humans, Pattern Recognition, Automated, Prosthesis Fitting, Radiographic Image Enhancement, Radiographic Image Interpretation, Computer-Assisted, Radiography, Interventional, Reproducibility of Results, Sensitivity and Specificity, Tomography, X-Ray Computed, Treatment Outcome
Added February 19, 2015
Cochlear Implants (CI) are surgically implanted neural prosthetic devices used to treat severe-to-profound hearing loss. Recent studies have suggested that hearing outcomes with CIs are correlated with the location where individual electrodes in the implanted electrode array are placed, but techniques proposed for determining electrode location have been too coarse and labor intensive to permit detailed analysis on large numbers of datasets. In this paper, we present a fully automatic snake-based method for accurately localizing CI electrodes in clinical post-implantation CTs. Our results show that average electrode localization errors with the method are 0.21 millimeters. These results indicate that our method could be used in future large scale studies to analyze the relationship between electrode position and hearing outcome, which potentially could lead to technological advances that improve hearing outcomes with CIs.
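The 0.21 mm figure reported above is an average point-to-point distance between automatically localized and manually identified electrode centers, converted from image coordinates to millimetres. A small NumPy sketch of that evaluation step, with hypothetical coordinates and voxel spacing:

    # Mean electrode localization error: Euclidean distance (in mm) between automatic
    # and manual electrode centers. Coordinates and spacing below are made up.
    import numpy as np

    voxel_spacing = np.array([0.2, 0.2, 0.3])                           # mm per voxel (assumed)
    auto_vox = np.array([[101.2, 88.4, 40.1], [104.9, 90.2, 41.0]])     # automatic localization
    manual_vox = np.array([[101.0, 88.9, 40.3], [105.3, 90.0, 41.2]])   # expert ground truth

    errors_mm = np.linalg.norm((auto_vox - manual_vox) * voxel_spacing, axis=1)
    print("per-electrode error (mm):", np.round(errors_mm, 3))
    print("mean localization error (mm):", round(errors_mm.mean(), 3))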
0 Communities
1 Members
0 Resources
14 MeSH Terms
Integrating diverse datasets improves developmental enhancer prediction.
Erwin GD, Oksenberg N, Truty RM, Kostka D, Murphy KK, Ahituv N, Pollard KS, Capra JA
(2014) PLoS Comput Biol 10: e1003677
MeSH Terms: Animals, Artificial Intelligence, Databases, Genetic, Enhancer Elements, Genetic, Genome-Wide Association Study, Genomics, Histones, Humans, Mice, Mice, Transgenic, Models, Statistical, Organ Specificity
Added April 18, 2017
Gene-regulatory enhancers have been identified using various approaches, including evolutionary conservation, regulatory protein binding, chromatin modifications, and DNA sequence motifs. To integrate these different approaches, we developed EnhancerFinder, a two-step method for distinguishing developmental enhancers from the genomic background and then predicting their tissue specificity. EnhancerFinder uses a multiple kernel learning approach to integrate DNA sequence motifs, evolutionary patterns, and diverse functional genomics datasets from a variety of cell types. In contrast with prediction approaches that define enhancers based on histone marks or p300 sites from a single cell line, we trained EnhancerFinder on hundreds of experimentally verified human developmental enhancers from the VISTA Enhancer Browser. We comprehensively evaluated EnhancerFinder using cross validation and found that our integrative method improves the identification of enhancers over approaches that consider a single type of data, such as sequence motifs, evolutionary conservation, or the binding of enhancer-associated proteins. We find that VISTA enhancers active in embryonic heart are easier to identify than enhancers active in several other embryonic tissues, likely due to their uniquely high GC content. We applied EnhancerFinder to the entire human genome and predicted 84,301 developmental enhancers and their tissue specificity. These predictions provide specific functional annotations for large amounts of human non-coding DNA, and are significantly enriched near genes with annotated roles in their predicted tissues and lead SNPs from genome-wide association studies. We demonstrate the utility of EnhancerFinder predictions through in vivo validation of novel embryonic gene regulatory enhancers from three developmental transcription factor loci. Our genome-wide developmental enhancer predictions are freely available as a UCSC Genome Browser track, which we hope will enable researchers to further investigate questions in developmental biology.
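The integration step can be pictured as building one kernel per data type and combining them before training an SVM. The sketch below uses fixed, equal kernel weights on random placeholder features; EnhancerFinder's multiple kernel learning instead learns the weight of each data source, so this is only a simplified illustration.

    # Combine a sequence-composition kernel and a functional-genomics kernel with
    # equal weights, then train an SVM on the precomputed kernel (toy data throughout).
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    X_seq = rng.random((n, 64))     # e.g. k-mer composition of each candidate region
    X_func = rng.random((n, 30))    # e.g. histone-mark / TF-binding signal across cell types
    y = rng.integers(0, 2, n)       # 1 = validated developmental enhancer (toy labels)

    K = 0.5 * rbf_kernel(X_seq) + 0.5 * rbf_kernel(X_func)   # fixed-weight kernel combination
    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy on toy data:", clf.score(K, y))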
0 Communities
1 Members
0 Resources
12 MeSH Terms
Automated quantification of pancreatic β-cell mass.
Golson ML, Bush WS, Brissova M
(2014) Am J Physiol Endocrinol Metab 306: E1460-7
MeSH Terms: Animals, Artificial Intelligence, Automation, Laboratory, Cell Size, Computational Biology, Diabetes Mellitus, Experimental, Expert Systems, Hyperplasia, Image Processing, Computer-Assisted, Insulin-Secreting Cells, Mice, Mice, Inbred C57BL, Mice, Inbred Strains, Mice, Obese, Microtomy, Models, Biological, Obesity, Pancreas, Reproducibility of Results, Software
Added July 15, 2015
β-Cell mass is a parameter commonly measured in studies of islet biology and diabetes. However, the rigorous quantification of pancreatic β-cell mass using conventional histological methods is a time-consuming process. Rapidly evolving virtual slide technology with high-resolution slide scanners and newly developed image analysis tools has the potential to transform β-cell mass measurement. To test the effectiveness and accuracy of this new approach, we assessed pancreata from normal C57Bl/6J mice and from mouse models of β-cell ablation (streptozotocin-treated mice) and β-cell hyperplasia (leptin-deficient mice), using a standardized systematic sampling of pancreatic specimens. Our data indicate that automated analysis of virtual pancreatic slides is highly reliable and yields results consistent with those obtained by conventional morphometric analysis. This new methodology will allow investigators to dramatically reduce the time required for β-cell mass measurement by automating high-resolution image capture and analysis of entire pancreatic sections.
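Once insulin-positive and total tissue areas have been measured on each sampled section, the β-cell mass calculation itself is simple arithmetic: the insulin-positive area fraction multiplied by pancreas weight. A short sketch with hypothetical measurements (the paper's contribution is automating the area measurements on virtual slides, not this final step):

    # Beta-cell mass from morphometric measurements (all numbers hypothetical).
    insulin_positive_area = [0.52, 0.61, 0.48, 0.55]   # mm^2 per sampled section
    total_tissue_area = [41.0, 44.5, 39.8, 42.2]       # mm^2 per sampled section
    pancreas_weight_mg = 210.0                         # wet weight of the whole pancreas

    beta_cell_fraction = sum(insulin_positive_area) / sum(total_tissue_area)
    beta_cell_mass_mg = beta_cell_fraction * pancreas_weight_mg
    print(f"beta-cell area fraction: {beta_cell_fraction:.4f}")
    print(f"estimated beta-cell mass: {beta_cell_mass_mg:.2f} mg")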
1 Communities
1 Members
1 Resources
20 MeSH Terms
Automatic segmentation of intra-cochlear anatomy in post-implantation CT of unilateral cochlear implant recipients.
Reda FA, McRackan TR, Labadie RF, Dawant BM, Noble JH
(2014) Med Image Anal 18: 605-15
MeSH Terms: Algorithms, Artificial Intelligence, Cochlea, Cochlear Implantation, Humans, Pattern Recognition, Automated, Radiographic Image Enhancement, Radiographic Image Interpretation, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity, Surgery, Computer-Assisted, Tomography, X-Ray Computed, Treatment Outcome
Added February 19, 2015
A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve using an electrode array that is implanted in the cochlea. In CI surgery, the surgeon accesses the cochlea and makes an opening where he/she inserts the electrode array blind to internal structures of the cochlea. Because of this, the final position of the electrode array relative to intra-cochlear anatomy is generally unknown. We have recently developed an approach for determining electrode array position relative to intra-cochlear anatomy using a pre- and a post-implantation CT. The approach is to segment the intra-cochlear anatomy in the pre-implantation CT, localize the electrodes in the post-implantation CT, and register the two CTs to determine relative electrode array position information. Currently, we are using this approach to develop a CI programming technique that uses patient-specific spatial information to create patient-customized sound processing strategies. However, this technique cannot be used for many CI users because it requires a pre-implantation CT that is not always acquired prior to implantation. In this study, we propose a method for automatic segmentation of intra-cochlear anatomy in post-implantation CT of unilateral recipients, thus eliminating the need for pre-implantation CTs in this population. The method is to segment the intra-cochlear anatomy in the implanted ear using information extracted from the normal contralateral ear and to exploit the intra-subject symmetry in cochlear anatomy across ears. To validate our method, we performed experiments on 30 ears for which both a pre- and a post-implantation CT are available. The mean and the maximum segmentation errors are 0.224 and 0.734 mm, respectively. These results indicate that our automatic segmentation method is accurate enough for developing patient-customized CI sound processing strategies for unilateral CI recipients using a post-implantation CT alone.
Copyright © 2014 Elsevier B.V. All rights reserved.
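The symmetry idea at the heart of the method can be sketched in a few lines: the intra-cochlear segmentation of the normal contralateral ear is mirrored across the midsagittal plane to initialize the segmentation of the implanted ear. In the actual method this mirrored estimate is refined by registration and shape-model fitting, which are omitted here; the array layout is an assumption.

    # Mirror a contralateral-ear label volume as an initial estimate for the implanted ear.
    import numpy as np

    def mirror_contralateral_labels(label_volume: np.ndarray) -> np.ndarray:
        """Flip a label volume across the midsagittal plane (assumed to lie along axis 0)."""
        return label_volume[::-1, :, :].copy()

    # Toy label volume: 0 = background, 1 = scala tympani, 2 = scala vestibuli.
    labels = np.zeros((64, 32, 32), dtype=np.uint8)
    labels[40:44, 10:14, 10:20] = 1
    labels[44:48, 10:14, 10:20] = 2

    initial_estimate = mirror_contralateral_labels(labels)
    counts = dict(zip(*np.unique(initial_estimate, return_counts=True)))
    print("voxels per label in the mirrored estimate:", counts)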
0 Communities
1 Members
0 Resources
13 MeSH Terms
Machine learning-based prediction of drug-drug interactions by integrating drug phenotypic, therapeutic, chemical, and genomic properties.
Cheng F, Zhao Z
(2014) J Am Med Inform Assoc 21: e278-86
MeSH Terms: Artificial Intelligence, Bayes Theorem, Decision Trees, Drug Interactions, Humans, Logistic Models, Models, Theoretical, Molecular Structure, Pharmaceutical Preparations, Pharmacokinetics, ROC Curve, Support Vector Machine
Added May 27, 2014
OBJECTIVE - Drug-drug interactions (DDIs) are an important consideration in both drug development and clinical application, especially for co-administered medications. While it is necessary to identify all possible DDIs during clinical trials, DDIs are frequently reported after the drugs are approved for clinical use, and they are a common cause of adverse drug reactions (ADR) and increasing healthcare costs. Computational prediction may assist in identifying potential DDIs during clinical trials.
METHODS - Here we propose a heterogeneous network-assisted inference (HNAI) framework to assist with the prediction of DDIs. First, we constructed a comprehensive DDI network that contained 6946 unique DDI pairs connecting 721 approved drugs based on DrugBank data. Next, we calculated drug-drug pair similarities using four features: phenotypic similarity based on a comprehensive drug-ADR network, therapeutic similarity based on the drug Anatomical Therapeutic Chemical classification system, chemical structural similarity from SMILES data, and genomic similarity based on a large drug-target interaction network built using the DrugBank and Therapeutic Target Database. Finally, we applied five predictive models in the HNAI framework: naive Bayes, decision tree, k-nearest neighbor, logistic regression, and support vector machine.
RESULTS - The area under the receiver operating characteristic curve of the HNAI models is 0.67 as evaluated using fivefold cross-validation. Using antipsychotic drugs as an example, several HNAI-predicted DDIs that involve weight gain and cytochrome P450 inhibition were supported by literature resources.
CONCLUSIONS - Through machine learning-based integration of drug phenotypic, therapeutic, structural, and genomic similarities, we demonstrated that HNAI is promising for uncovering DDIs in drug development and postmarketing surveillance.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
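The evaluation described above maps naturally onto a small scikit-learn script: each drug pair is represented by its four similarity scores, and the five classifiers are compared by fivefold cross-validated ROC AUC. The feature file and column names below are hypothetical stand-ins for features derived from DrugBank, ATC codes, SMILES strings, and drug-target data.

    # Compare the five HNAI classifiers on drug-pair similarity features (hypothetical file).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    pairs = pd.read_csv("drug_pair_similarities.csv")
    X = pairs[["phenotypic_sim", "therapeutic_sim", "chemical_sim", "genomic_sim"]]
    y = pairs["is_ddi"]   # 1 if the pair is a known drug-drug interaction

    models = {
        "naive Bayes": GaussianNB(),
        "decision tree": DecisionTreeClassifier(),
        "k-nearest neighbor": KNeighborsClassifier(),
        "logistic regression": LogisticRegression(max_iter=1000),
        "support vector machine": SVC(),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
        print(f"{name}: mean fivefold AUC = {auc:.2f}")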
0 Communities
1 Members
0 Resources
12 MeSH Terms