PURPOSE - Expert manual labeling is the gold standard for image segmentation, but the process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, they have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm offering a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not previously been studied quantitatively. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion.
METHODS - The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes.
RESULTS - Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately.
CONCLUSIONS - The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion.
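The contrast drawn above between simple majority voting and confidence-weighted voting can be illustrated with a minimal sketch. The snippet below fuses binary slice labels from several raters two ways: by per-voxel majority, and by a vote weighted by each rater's self-assessed confidence. The weighting scheme and function names are illustrative assumptions, not the authors' fusion algorithm (which builds self-assessment into a statistical fusion framework).

```python
import numpy as np

def majority_vote(labels):
    """Fuse binary labels by simple per-voxel majority vote.
    `labels` has shape (raters, voxels)."""
    labels = np.asarray(labels, dtype=float)
    return (labels.mean(axis=0) > 0.5).astype(int)

def confidence_weighted_vote(labels, confidences):
    """Fuse binary labels, weighting each rater's vote by a
    self-assessed confidence in [0, 1] (a hypothetical scheme)."""
    labels = np.asarray(labels, dtype=float)
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    fused = (w[:, None] * labels).sum(axis=0)
    return (fused > 0.5).astype(int)
```

With three raters of very unequal confidence, the two schemes can disagree: a single highly confident rater can outvote two low-confidence dissenters under weighting, but not under simple majority.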
Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open problem, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions, or blobs, in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated training examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled retinal color fundus images and a large number of unlabeled ones. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to fundus image analysis.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
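Blob detection with automatic local-scale selection is commonly done in the spirit of scale-normalized Laplacian-of-Gaussian (LoG) filtering: the characteristic scale of a blob is the scale at which the normalized LoG response at its centre is strongest. The 1-D sketch below is an illustration of that general principle, not the paper's detector; for an amplitude-one Gaussian blob of width σ₀, this particular normalization peaks near σ = √2·σ₀.

```python
import numpy as np

def log_kernel_1d(sigma, radius=None):
    """Scale-normalized 1-D Laplacian-of-Gaussian kernel (gamma = 1)."""
    if radius is None:
        radius = int(4 * sigma)          # truncate tails at 4 sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    d2g = (x**2 / sigma**2 - 1) / sigma**2 * g   # second derivative of Gaussian
    return sigma**2 * d2g                         # scale normalization

def best_scale(signal, sigmas):
    """Return the sigma whose normalized |LoG| response at the signal
    centre is strongest -- the automatic scale-selection principle."""
    centre = len(signal) // 2
    responses = [abs(np.convolve(signal, log_kernel_1d(s), mode="same")[centre])
                 for s in sigmas]
    return sigmas[int(np.argmax(responses))]
```

For a Gaussian bump of width σ₀ = 4, the response among candidate scales {2, 3, 4, 6, 8} is maximized at 6, the candidate closest to √2·4 ≈ 5.66.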
BACKGROUND - Anesthesiology residencies are developing trainee assessment tools to evaluate 25 milestones that map to the six core competencies. The effort will be facilitated by development of automated methods to capture, assess, and report trainee performance to program directors, the Accreditation Council for Graduate Medical Education and the trainees themselves.
METHODS - The authors leveraged a perioperative information management system to develop an automated, near-real-time performance capture and feedback tool that provides objective data on clinical performance and requires minimal administrative effort. Before development, the authors surveyed trainees about satisfaction with clinical performance feedback and about preferences for future feedback.
RESULTS - Resident performance on 24,154 completed cases has been incorporated into the authors' automated dashboard, and trainees now have access to their own performance data. Eighty percent (48 of 60) of the residents responded to the feedback survey. Overall, residents "agreed/strongly agreed" that they desired frequent updates on their clinical performance on defined quality metrics and that they desired to see how they compared with the residency as a whole. Before deployment of the new tool, they "disagreed" that they were receiving feedback in a timely manner. Survey results were used to guide the format of the feedback tool that has been implemented.
CONCLUSION - The authors demonstrate the implementation of a system that provides near-real-time feedback on resident performance across an extensible series of quality metrics, with a reporting format responsive to residents' stated feedback preferences.
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work, we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.
Depletion of interstitial cells of Cajal (ICC) networks is known to occur in various gastrointestinal (GI) motility disorders. Although techniques for quantifying the structure of ICC networks are available, ICC network structure-function relationships are yet to be well elucidated. Existing methods of relating ICC structure to function are computationally expensive and difficult to scale up to larger multiscale simulations. A new cellular automaton model for simulating tissue-specific slow wave propagation was developed, and in preliminary studies it was applied to jejunal ICC network structures from wild-type and 5-HT2B receptor knockout (ICC-depleted) mice. Two metrics were also developed to quantify the simulated propagation patterns: 1) an ICC activation lag and 2) a non-ICC activation lag, each measuring the average delay, relative to the theoretical fastest propagation, for the slow wave to traverse the corresponding domain of the network. Slow wave propagation was successfully simulated across the ICC networks with greatly reduced computational time compared to previous methods, and the propagation pattern metrics quantitatively revealed impaired propagation during ICC depletion. In conclusion, the developed slow wave propagation model and propagation pattern metrics offer a computationally efficient framework for relating ICC structure to function. These tools can now be applied to define ICC structure-function relationships across various spatial and temporal scales.
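The idea of grid-based activation spread with an activation-lag metric can be sketched as follows. This toy model is an illustration only (hypothetical conduction delays, 4-neighbour square grid), not the authors' implementation; a priority queue computes the same arrival times a synchronous automaton update would produce, just more compactly. The lag metric compares each cell's arrival time against the theoretical fastest (all-ICC) propagation from the pacemaker site.

```python
import heapq

def simulate_activation(is_icc, source, icc_delay=1, non_icc_delay=3):
    """Spread activation across a grid of ICC / non-ICC cells.
    Entering an ICC cell costs icc_delay time steps, a non-ICC cell
    non_icc_delay steps (hypothetical delays). Returns arrival times."""
    rows, cols = len(is_icc), len(is_icc[0])
    times = {source: 0}
    heap = [(0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + (icc_delay if is_icc[nr][nc] else non_icc_delay)
                if nt < times.get((nr, nc), float("inf")):
                    times[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times

def activation_lag(times, source, fastest_delay=1):
    """Mean delay relative to the theoretical fastest (all-ICC)
    propagation, which on this grid is Manhattan distance * delay."""
    lags = [t - fastest_delay * (abs(r - source[0]) + abs(c - source[1]))
            for (r, c), t in times.items()]
    return sum(lags) / len(lags)
```

An intact (all-ICC) network yields zero lag; replacing cells with non-ICC tissue raises the metric, mirroring the impaired propagation reported for the depleted networks.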
OBJECTIVE - Pediatric dose rounding is a unique and complex process that is rarely supported by e-prescribing systems, though it is amenable to automation and deployment from a central service provider. The goal of this project was to validate an automated algorithm for pediatric dose rounding.
METHODS - We developed a dose-rounding algorithm, STEPSTools, based on expert consensus about the rounding process and knowledge about the therapeutic/toxic window for each medication. We then used a 60% subsample of electronically-generated prescriptions from one academic medical center to further refine the web services. Once all issues were resolved, we used the remaining 40% of the prescriptions as a test sample and assessed the degree of concordance between automatically calculated optimal doses and the doses in the test sample. Cases with discrepant doses were compiled in a survey and assessed by pediatricians from two academic centers. The response rate for the survey was 25%.
RESULTS - Seventy-nine test cases were evaluated for concordance. For 20 cases, STEPSTools was unable to provide a recommended dose. The dose recommendation provided by STEPSTools was identical to that of the test prescription for 31 cases. For 14 out of the 24 discrepant cases included in the survey, respondents significantly preferred STEPSTools recommendations (p<0.05, binomial test). Overall, when combined with the data from all test cases, STEPSTools either matched or exceeded the performance of the test cases in 45/59 (76%) of the cases. The majority of other cases were challenged by the need to provide an extremely small dose. We estimated that with the addition of two dose-selection rules, STEPSTools would achieve an overall performance of 82% or higher.
CONCLUSIONS - Results of this pilot study suggest that automated dose rounding is a feasible mechanism for providing guidance to e-prescribing systems. These results also demonstrate the need for validating decision-support systems to support targeted and iterative improvement in performance.
Copyright © 2013 Elsevier Inc. All rights reserved.
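The core of dose rounding, as described above, is reconciling a weight-based calculated dose with measurable increments while respecting a therapeutic/toxic window. The sketch below is a deliberately simplified, hypothetical rule, not STEPSTools' actual knowledge base: round to the nearest increment, accept the result only if it stays inside the window, otherwise try the other rounding direction, and flag for manual review if no safe rounded dose exists.

```python
import math

def round_dose(calculated_dose, increment, window_low, window_high):
    """Round a calculated dose to the nearest measurable increment,
    accepting it only inside the therapeutic window [window_low,
    window_high]. A hypothetical rule; STEPSTools' rules are richer."""
    rounded = round(calculated_dose / increment) * increment
    if window_low <= rounded <= window_high:
        return rounded
    # Nearest increment fell outside the window: try both directions.
    for candidate in (math.floor(calculated_dose / increment) * increment,
                      math.ceil(calculated_dose / increment) * increment):
        if window_low <= candidate <= window_high:
            return candidate
    return None   # no safe rounded dose; escalate to a human
```

Returning `None` rather than an out-of-window dose mirrors the observed behavior that a dosing service should decline to recommend (as STEPSTools did in 20 cases) rather than guess, particularly for the very small doses that challenged the algorithm.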
BACKGROUND - Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. Processing the data from raw sequence reads to variant calls is a daunting task, and doing so manually can significantly delay downstream analysis and increase the possibility of human error. The research community has produced tools to properly prepare sequence data for analysis and has established guidelines on how to apply those tools to achieve the best results; however, existing pipeline programs that automate the process in its entirety are either inaccessible to investigators or web-based and require a certain amount of administrative expertise to set up.
FINDINGS - Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput.
CONCLUSIONS - ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.
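One feature highlighted above, resuming failed jobs, comes down to tracking which pipeline steps have already completed and skipping them on a re-run. The sketch below is a minimal illustration of that pattern with invented step names; it is not ASAP's API, which additionally handles cluster submission, job tracking, and quality checks.

```python
def run_pipeline(steps, completed=None):
    """Run named pipeline steps in order, skipping any already marked
    complete -- a minimal sketch of resumable job tracking.
    `steps` is a list of (name, callable) pairs; `completed` is the
    set of step names finished on a previous run (mutated in place).
    Returns the names of the steps executed this run."""
    completed = set() if completed is None else completed
    executed = []
    for name, fn in steps:
        if name in completed:
            continue                 # done on a previous run; resume past it
        fn()                         # run the step (e.g., alignment, calling)
        completed.add(name)          # persist this set to disk in practice
        executed.append(name)
    return executed
```

In a real pipeline the `completed` set would be persisted (for example, as marker files next to each step's output) so that a crash mid-pipeline costs only the unfinished steps.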
Current microscopy-based approaches for immunofluorescence detection of viral infectivity are time-consuming and labor-intensive and can yield variable results subject to observer bias. To circumvent these problems, we developed a rapid and automated infrared immunofluorescence imager-based infectivity assay for both rotavirus and reovirus that can be used to quantify viral infectivity and infectivity inhibition. For rotavirus, monolayers of MA104 cells were infected with simian strain SA-11 or SA-11 preincubated with rotavirus-specific human IgA. For reovirus, monolayers of either HeLa S3 cells or L929 cells were infected with strains type 1 Lang (T1L), type 3 Dearing (T3D), or either virus preincubated with a serotype-specific neutralizing monoclonal antibody (mAb). Infected cells were fixed and incubated with virus-specific polyclonal antiserum, followed by an infrared fluorescence-conjugated secondary antibody. Well-to-well variation in cell number was normalized using fluorescent reagents that stain fixed cells. Virus-infected cells were detected by scanning plates using an infrared imager, and results were obtained as a percent response of fluorescence intensity relative to a virus-specific standard. An expected dose-dependent inhibition of both SA-11 infectivity with rotavirus-specific human IgA and reovirus infectivity with T1L-specific mAb 5C6 and T3D-specific mAb 9BG5 was observed, confirming the utility of this assay for quantification of viral infectivity and infectivity blockade. The imager-based viral infectivity assay fully automates data collection and provides an important advance in technology for applications such as screening for novel modulators of viral infectivity. This basic platform can be adapted for use with multiple viruses and cell types.
Copyright © 2012 Elsevier B.V. All rights reserved.
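The two-stage quantification described above, normalizing virus-specific fluorescence by a cell-stain signal and then expressing it as a percent of a virus-specific standard, reduces to a short calculation. The sketch below illustrates that arithmetic under the stated normalization scheme; the function name and per-well inputs are illustrative, not the authors' software.

```python
def percent_response(virus_signal, cell_stain_signal,
                     standard_signal, standard_stain_signal):
    """Percent response of a well relative to a virus-specific standard.
    Each well's virus fluorescence is first divided by its cell-stain
    fluorescence, correcting for well-to-well variation in cell number,
    and the ratio is then expressed as a percentage of the similarly
    normalized standard well."""
    well_ratio = virus_signal / cell_stain_signal
    standard_ratio = standard_signal / standard_stain_signal
    return 100.0 * well_ratio / standard_ratio
```

Because both the sample and the standard are normalized by their own cell-stain signals, a well with twice as many cells but the same per-cell infection reports the same percent response.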