Objective - Biomedical science is driven by datasets that are being accumulated at an unprecedented rate, with ever-growing volume and richness. Various initiatives aim to make these datasets more widely available to recipients who sign Data Use Certificate agreements, under which penalties are levied for violations. A particularly popular penalty is the temporary revocation, often for several months, of the recipient's data usage rights. This policy rests on the assumption that the value of biomedical research data depreciates significantly over time; however, no studies have been performed to substantiate this belief. This study investigates whether the assumption holds and considers its implications for data science policy.
Methods - This study tests the hypothesis that the value of data for scientific investigators, in terms of the impact of the publications based on the data, decreases over time. The hypothesis is tested formally through a mixed linear effects model using approximately 1200 publications between 2007 and 2013 that used datasets from the Database of Genotypes and Phenotypes, a data-sharing initiative of the National Institutes of Health.
Results - The analysis shows that the impact factors for publications based on Database of Genotypes and Phenotypes datasets depreciate in a statistically significant manner. However, we further discover that the depreciation rate is slow, only ∼10% per year, on average.
Conclusion - The enduring value of data for subsequent studies implies that revoking usage for short periods of time may not sufficiently deter those who would violate Data Use Certificate agreements and that alternative penalty mechanisms may need to be invoked.
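The reported average depreciation of ~10% per year implies a geometric decay of publication impact. As a minimal illustrative sketch (not the paper's mixed-effects model; the function names are ours), the following Python shows what a constant 10% annual rate implies for residual value and half-life:

```python
import math

def residual_value(rate, years):
    """Fraction of initial impact remaining after `years` of constant
    annual depreciation at `rate` (e.g. 0.10 for ~10% per year)."""
    return (1.0 - rate) ** years

def half_life(rate):
    """Years until impact falls to half its initial value."""
    return math.log(2) / -math.log(1.0 - rate)

rate = 0.10  # the average rate reported in the abstract
print(residual_value(rate, 5))  # ~0.59: about 59% of value remains after 5 years
print(half_life(rate))          # ~6.6 years to lose half the value
```

Under this assumed constant rate, a dataset retains most of its value well beyond the several-month revocation windows the abstract discusses, which is the quantitative core of the policy argument.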
© The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: firstname.lastname@example.org
Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community-driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. It is anticipated that access to these experimental data sets will support the community's shift toward a more dynamic body of continuously improving data analysis.
OBJECT - Various bibliometric indices based on the citations accumulated by scholarly articles, including the h-index, g-index, e-index, and Google's i10-index, may be used to evaluate academic productivity in neurological surgery. The present article provides a comprehensive assessment of recent academic publishing output from 103 US neurosurgical residency programs and investigates intradepartmental publishing equality among faculty members.
METHODS - Each institution was considered a single entity, with the 5-year academic yield of every neurosurgical faculty member compiled to compute the following indices: ih(5), cumulative h, ig(5), ie(5), and i10(5) (based on publications and citations from 2009 through 2013). Intradepartmental comparison of productivity among faculty members yielded Gini coefficients for publications and citations. National and regional comparisons, institutional rankings, and intradepartmental publishing equality measures are presented.
RESULTS - The median numbers of departmental faculty, total publications and citations, ih(5), summed h, ig(5), ie(5), i10(5), and Gini coefficients for publications and citations were 13, 82, 716, 12, 144, 23, 16, 17, 0.57, and 0.71, respectively. The top 5 most academically productive neurosurgical programs based on ih(5)-index were University of California, San Francisco, University of California, Los Angeles, University of Pittsburgh, Brigham & Women's Hospital, and Johns Hopkins University. The Western US region was most academically productive and displayed greater intradepartmental publishing equality (median ih-index = 18, median Ginipub = 0.56). In all regions, large departments with relative intradepartmental publishing equality tend to be the most academically productive. Multivariable logistic regression analysis identified the ih(5)-index as the only independent predictor of intradepartmental publishing equality (Ginipub ≤ 0.5 [OR 1.20, 95% CI 1.20-1.40, p = 0.03]).
CONCLUSIONS - The ih(5)-index is a novel, simple, and intuitive metric capable of accurately comparing the recent scholarly efforts of neurosurgical programs and accurately predicting intradepartmental publication equality. The ih(5)-index is relatively insensitive to factors such as isolated highly productive and/or no longer academically active senior faculty, which tend to distort other bibliometric indices and mask the accurate identification of currently productive academic environments. Institutional ranking by ih(5)-index may provide information of use to faculty and trainee applicants, research funding institutions, program leaders, and other stakeholders.
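The two building blocks of the analysis above, the h-type index and the Gini coefficient, can be sketched in a few lines of Python. This is an illustrative implementation of the standard definitions, not the authors' code; for instance, ih(5) corresponds to applying `h_index` to a department's pooled 2009-2013 per-publication citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

def gini(values):
    """Gini coefficient of inequality: 0 = perfect equality, -> 1 as one
    member accounts for all output (here: per-faculty publication counts)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # mean-absolute-difference formulation over the sorted values
    cum = sum((2 * rank - n - 1) * x for rank, x in enumerate(xs, start=1))
    return cum / (n * total)

# hypothetical example: 5 papers with these citation counts give h = 4
print(h_index([10, 8, 5, 4, 3]))
# equal per-faculty output gives Gini = 0; one dominant member gives 0.75
print(gini([1, 1, 1, 1]), gini([0, 0, 0, 10]))
```

The abstract's finding that high ih(5) predicts Ginipub <= 0.5 corresponds, in these terms, to productive departments having citation output spread across many faculty rather than concentrated in one member.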
Peer-reviewed publications are one measure of scientific productivity. From a project, program, or institutional perspective, publication tracking provides the quantitative data necessary to guide the prudent stewardship of federal, foundation, and institutional investments by identifying the scientific return for the types of support provided. In this article, the authors describe the Vanderbilt Institute for Clinical and Translational Research's (VICTR's) development and implementation of a semiautomated process through which publications are automatically detected in PubMed and adjudicated using a "just-in-time" workflow by a known pool of researchers (from Vanderbilt University School of Medicine and Meharry Medical College) who receive support from Vanderbilt's Clinical and Translational Science Award. Since implementation, the authors have (1) seen a marked increase in the number of publications citing VICTR support, (2) captured at a more granular level the relationship between specific resources/services and scientific output, (3) increased awareness of VICTR's scientific portfolio, and (4) increased efficiency in complying with annual National Institutes of Health progress reports. They present the methodological framework and workflow, measures of impact for the first 30 months, and a set of practical lessons learned to inform others considering a systems-based approach for resource and publication tracking. They learned that contacting multiple authors from a single publication can increase the accuracy of the resource attribution process in the case of multidisciplinary scientific projects. They also found that combining positive (e.g., congratulatory e-mails) and negative (e.g., not allowing future resource requests until adjudication is complete) triggers can increase compliance with publication attribution requests.
PURPOSE - To assess the extent and types of publication misrepresentation among medical students applying to the urology residency program at the University of Washington. Research experience and publications are among the selection criteria used to judge and rank urology residency applicants.
METHODS - Electronic Residency Application Service (ERAS) applications submitted for the incoming class of 2011 for urology residency at the University of Washington were reviewed. All listed publications were verified against PubMed and Google search engines. Misrepresentation was defined as non-authorship of an existing article, authorship claimed of a nonexistent article, or first-authorship listed incorrectly.
RESULTS - Of the 198 total applications, 124 (63 %) applicants reported 541 publications, including 112 abstracts and 429 journal articles. 347 (65 %) articles and abstracts were verifiable. Misrepresentation of 12 (3.5 %) published articles was found in 9 applicants (7 %), which included self-promotion to first-authorship (6), followed by non-existent articles (5), and a repeated publication listing (1). On univariate analysis, higher age (p = 0.008), higher number of total publications reported (p < 0.001), additional graduate degree (p < 0.001), and foreign medical graduate (FMG) status (p < 0.001) were associated with misrepresentation. Due to the low incidence, the study was not adequately powered to perform a multivariate analysis.
CONCLUSIONS - Misrepresentation of publications listed in ERAS applications among urology applicants remains significant. Residency program directors should require applicants to submit copies of all of their publications, whether in print, in press, or submitted, as part of their application file.
OBJECTIVES - Results from the European Study Group for Pancreatic Cancer (ESPAC)-1 trial, first published in 2001, suggested that postoperative radiation therapy (PORT) was detrimental in patients with pancreatic cancer. The potential association between the publication of ESPAC-1 and the use of PORT in the United States is examined in this study.
METHODS - Data from the Surveillance, Epidemiology, and End Results program were used to identify patients with pancreatic cancer treated with surgical resection followed by PORT. The use of PORT was examined in the 5-year periods preceding and following the publication of ESPAC-1.
RESULTS - Univariable analysis found significantly less use of PORT in the postpublication period [odds ratio (OR) for the use of PORT in the prepublication period = 1.19; 95% confidence interval (CI), 1.04-1.35]. A multivariable analysis, performed to account for imbalances in clinical and demographic variables between the 2 time periods, found similar results (OR = 1.18; 95% CI, 1.03-1.35). When other types of radiation, such as preoperative radiation, were included, no significant difference between the time periods was found (OR = 0.99; 95% CI, 0.76-1.30).
CONCLUSIONS - Although PORT continues to be used frequently in the United States, the publication of ESPAC-1 seems to be associated with a small but significant change in its use. However, it is important to note that further analyses suggest that a small shift toward more preoperative radiation may also account for the decrease in PORT.
Progress in biomedical research depends in part on being able to build on the findings of other researchers - and thereby on being able to apply others' methods to your own research. However, most of us have struggled to understand how to repeat or adapt another researcher's study because of minimal or missing details in the Methods section of a published paper. In expensive and complex experiments involving animal models, clear descriptions of the methods are particularly important. In this and the accompanying Editorial in this issue, we discuss how crucial the Methods section is to the integrity and utility of a biomedical research paper, and encourage researchers working with animal models to follow the recently released ARRIVE guidelines when preparing their studies for publication.
Epidemiologic studies contribute greatly to evidence-based medicine by identifying risk factors for diseases and determining optimal treatments for clinical practice. However, there has been very limited effort to automatically extract knowledge from epidemiologic articles, such as exposures, outcomes, and their relations. In this initial study, we developed a system consisting of a natural language processing (NLP) engine and a rule-based classifier to automatically extract exposure-related terms from the titles of epidemiologic articles. An evaluation using 450 titles annotated by an epidemiologist showed a best F-measure of 0.646 (precision 0.610, recall 0.688) under inexact matching, which indicated the feasibility of automated methods for mining the epidemiologic literature. Further analysis of terms related to epidemiologic exposures suggested that although the UMLS would have reasonable coverage, more appropriate semantic classifications of epidemiologic exposures would be required.
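The F-measure reported above is the harmonic mean of precision and recall; a one-line check reproduces the reported value to within the rounding of the inputs:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# with the abstract's precision 0.610 and recall 0.688 this gives ~0.647;
# the reported 0.646 reflects rounding of the underlying counts
print(f_measure(0.610, 0.688))
```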
AIMS - To conduct an internet-based study using virtual slides (VS) of stereotactic core biopsy specimens of non-palpable breast lesions in order to evaluate interobserver reproducibility between pathologists.
METHODS AND RESULTS - A total of 18 breast lesions, determined to be histologically complex by two pathologists, were selected. Digitized VSs were then created using QuickTime Virtual Reality technology (Apple, Cupertino, CA, USA) and posted on the world-wide web. In all, 10 pathologists completed the evaluations of 18 VSs using the five diagnostic categories (B1-B5) from the European guidelines for quality assurance in breast cancer screening and diagnosis. Their results were compared with those of every other participating pathologist, and were then individually compared with the results of a highly experienced breast pathologist (referee). Of the 18 cases, 10 (56%) were classified by the referee as borderline (B3 and B4). Comparisons with reference values showed a less than satisfactory level of reproducibility (median kappa(w) = 0.60). As regards interobserver reproducibility, results showed that, in general, the level of agreement was not satisfactory (median kappa(w) = 0.53).
CONCLUSIONS - Overall, the findings are comparable to those of quality control studies using circulating slides when the analysis is restricted to borderline cases.
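The agreement statistic reported above, the weighted kappa, can be sketched in pure Python for two raters over the ordinal B1-B5 scale. This is an illustrative implementation of the standard linearly weighted definition, not the study's software, and any ratings passed to it here would be hypothetical:

```python
from collections import Counter

CATEGORIES = ["B1", "B2", "B3", "B4", "B5"]

def weighted_kappa(rater_a, rater_b, categories=CATEGORIES):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale:
    1 = perfect agreement, 0 = chance-level, negative = worse than chance."""
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    n = len(rater_a)
    weight = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
    # observed mean disagreement
    pairs = Counter(zip(rater_a, rater_b))
    d_obs = sum(weight(idx[a], idx[b]) * c for (a, b), c in pairs.items()) / n
    # chance-expected disagreement from the two raters' marginals
    pa, pb = Counter(rater_a), Counter(rater_b)
    d_exp = sum(weight(idx[a], idx[b]) * pa[a] * pb[b]
                for a in categories for b in categories) / (n * n)
    return 1.0 - d_obs / d_exp
```

With linear weights, a B3-vs-B4 disagreement is penalized far less than B1-vs-B5, which is why this statistic suits the borderline-heavy case mix described in the abstract.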