
About this data

The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data itself is retrieved directly from NCBI's PubMed and is updated automatically each week to ensure accuracy and completeness.

If you have any questions or comments, please contact us.

Results: 1 to 10 of 46

Publication Record


Recommendations towards standards for quantitative MRI (qMRI) and outstanding needs.
Keenan KE, Biller JR, Delfino JG, Boss MA, Does MD, Evelhoch JL, Griswold MA, Gunter JL, Hinks RS, Hoffman SW, Kim G, Lattanzi R, Li X, Marinelli L, Metzger GJ, Mukherjee P, Nordstrom RJ, Peskin AP, Perez E, Russek SE, Sahiner B, Serkova N, Shukla-Dave A, Steckner M, Stupic KF, Wilmes LJ, Wu HH, Zhang H, Jackson EF, Sullivan DC
(2019) J Magn Reson Imaging 49: e26-e39
MeSH Terms: Anthropometry, Breast, Decision Making, Deep Learning, Equipment Design, Female, Humans, Image Interpretation, Computer-Assisted, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Phantoms, Imaging, Precision Medicine, Radiology, Interventional, Reference Standards, Reference Values, Reproducibility of Results, Robotics, Software
Added March 5, 2020
LEVEL OF EVIDENCE - 5. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2019.
© 2019 International Society for Magnetic Resonance in Medicine.
0 Communities
1 Members
0 Resources
19 MeSH Terms
Redefining pulmonary hypertension.
Maron BA, Brittain EL, Choudhary G, Gladwin MT
(2018) Lancet Respir Med 6: 168-170
MeSH Terms: Arterial Pressure, Diagnostic Errors, Humans, Hypertension, Pulmonary, Pulmonary Artery, Reference Standards, Respiratory Function Tests
Added June 7, 2018
0 Communities
1 Members
0 Resources
7 MeSH Terms
The public health dimension of chronic kidney disease: what we have learnt over the past decade.
Hu JR, Coresh J
(2017) Nephrol Dial Transplant 32: ii113-ii120
MeSH Terms: Albuminuria, Biomarkers, Creatinine, Genome-Wide Association Study, Glomerular Filtration Rate, Humans, Prevalence, Prognosis, Public Health, Reference Standards, Renal Insufficiency, Chronic
Added April 6, 2019
Much progress has been made in chronic kidney disease (CKD) epidemiology in the last decade to establish CKD as a condition that is common, harmful and treatable. The introduction of the new equations for estimating glomerular filtration rate (GFR) and the publication of international reference standards for creatinine and cystatin measurement paved the way for improved global estimates of CKD prevalence. The addition of albuminuria categories to the staging of CKD paved the way for research linking albuminuria and GFR to a wide range of renal and cardiovascular adverse outcomes. The advent of genome-wide association studies ushered in insights into genetic polymorphisms underpinning some types of CKD. Finally, a number of new randomized clinical trials and meta-analyses have informed evidence-based guidelines for the treatment and prevention of CKD. In this review, we discuss the lessons learnt from epidemiological investigations of the staging, etiology, prevalence and prognosis of CKD between 2007 and 2016.
© The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
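The "new equations for estimating glomerular filtration rate" this abstract refers to include the widely used 2009 CKD-EPI creatinine equation. A minimal sketch of that calculation follows (non-Black coefficients only, with the race term omitted; the function name is ours, not from the paper):

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 2009 CKD-EPI creatinine
    equation; kappa and alpha are the published sex-specific constants."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

# e.g. a 60-year-old man with serum creatinine 1.2 mg/dL
print(round(ckd_epi_2009(1.2, 60, female=False), 1))  # ≈ 65.3
```

Values above stage-3 thresholds such as this one illustrate why staging now combines GFR with albuminuria categories rather than relying on GFR alone.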
0 Communities
1 Members
0 Resources
11 MeSH Terms
Reference tissue normalization in longitudinal (18)F-florbetapir positron emission tomography of late mild cognitive impairment.
Shokouhi S, Mckay JW, Baker SL, Kang H, Brill AB, Gwirtsman HE, Riddle WR, Claassen DO, Rogers BP, Alzheimer’s Disease Neuroimaging Initiative
(2016) Alzheimers Res Ther 8: 2
MeSH Terms: Aged, Amyloid beta-Peptides, Aniline Compounds, Brain, Cognitive Dysfunction, Ethylene Glycols, Female, Humans, Longitudinal Studies, Male, Peptide Fragments, Positron-Emission Tomography, Reference Standards, Reproducibility of Results
Added April 6, 2017
BACKGROUND - Semiquantitative methods such as the standardized uptake value ratio (SUVR) require normalization of the radiotracer activity to a reference tissue to monitor changes in the accumulation of amyloid-β (Aβ) plaques measured with positron emission tomography (PET). The objective of this study was to evaluate the effect of reference tissue normalization in a test-retest (18)F-florbetapir SUVR study using cerebellar gray matter, white matter (two different segmentation masks), brainstem, and corpus callosum as reference regions.
METHODS - We calculated the correlation between (18)F-florbetapir PET and concurrent cerebrospinal fluid (CSF) Aβ1-42 levels in a late mild cognitive impairment cohort with longitudinal PET and CSF data over the course of 2 years. In addition to conventional SUVR analysis using mean and median values of normalized brain radiotracer activity, we investigated a new image analysis technique-the weighted two-point correlation function (wS2)-to capture potentially more subtle changes in Aβ-PET data.
RESULTS - Compared with the SUVRs normalized to cerebellar gray matter, all cerebral-to-white matter normalization schemes resulted in a higher inverse correlation between PET and CSF Aβ1-42, while the brainstem normalization gave the best results (high and most stable correlation). Compared with the SUVR mean and median values, the wS2 values were associated with the lowest coefficient of variation and highest inverse correlation to CSF Aβ1-42 levels across all time points and reference regions, including the cerebellar gray matter.
CONCLUSIONS - The selection of reference tissue for normalization and the choice of image analysis method can affect changes in cortical (18)F-florbetapir uptake in longitudinal studies.
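The SUVR at the center of this comparison is just mean target uptake normalized to mean reference-region uptake. A generic sketch (array names and the toy data are illustrative, not the authors' pipeline):

```python
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Standardized uptake value ratio: mean uptake in the target region
    divided by mean uptake in a reference region (e.g. cerebellar gray
    matter, white matter, or brainstem)."""
    return float(pet[target_mask].mean() / pet[ref_mask].mean())

# toy 4-voxel image: target region twice as hot as the reference region
pet = np.array([2.0, 2.0, 1.0, 1.0])
target = np.array([True, True, False, False])
ref = np.array([False, False, True, True])
print(suvr(pet, target, ref))  # 2.0
```

Because the ratio depends on the denominator, a noisy or biologically changing reference region shifts every longitudinal SUVR, which is why the choice of normalization region matters here.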
0 Communities
1 Members
0 Resources
14 MeSH Terms
Establishment of CYP2D6 reference samples by multiple validated genotyping platforms.
Fang H, Liu X, Ramírez J, Choudhury N, Kubo M, Im HK, Konkashbaev A, Cox NJ, Ratain MJ, Nakamura Y, O'Donnell PH
(2014) Pharmacogenomics J 14: 564-72
MeSH Terms: Alleles, Cytochrome P-450 CYP2D6, Genetic Variation, Genotype, Genotyping Techniques, Humans, Liver, Microsomes, Liver, Reference Standards, Reproducibility of Results
Added February 22, 2016
Cytochrome P450 2D6 (cytochrome P450, family 2, subfamily D, polypeptide 6 (CYP2D6)), a highly polymorphic drug-metabolizing enzyme, is involved in the metabolism of one-quarter of the most commonly prescribed medications. Here we have applied multiple genotyping methods and Sanger sequencing to assign precise and reproducible CYP2D6 genotypes, including copy numbers, for 48 HapMap samples. Furthermore, by analyzing a set of 50 human liver microsomes using endoxifen formation from N-desmethyl-tamoxifen as the phenotype of interest, we observed a significant positive correlation between CYP2D6 genotype-assigned activity score and endoxifen formation rate (rs = 0.68 by rank correlation test, P = 5.3 × 10(-8)), which corroborated the genotype-phenotype prediction derived from our genotyping methodologies. In the future, these 48 publicly available HapMap samples characterized by multiple substantiated CYP2D6 genotyping platforms could serve as a reference resource for assay development, validation, quality control and proficiency testing for other CYP2D6 genotyping projects and for programs pursuing clinical pharmacogenomic testing implementation.
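The genotype-assigned activity score mentioned here follows the standard additive scheme: each allele carries a defined activity value and the diplotype score is their sum, scaled by copy number for duplicated alleles. A sketch with a few commonly cited allele values (this subset and the helper function are illustrative, not the study's full assignment table):

```python
# Illustrative CYP2D6 allele activity values: normal-function alleles
# score 1.0, decreased-function alleles 0.5 or 0.25, no-function alleles 0.
ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0, "*41": 0.5, "*10": 0.25,
                   "*4": 0.0, "*5": 0.0}

def activity_score(allele_a: str, allele_b: str, copies_b: int = 1) -> float:
    """Diplotype activity score; a duplicated allele contributes once per copy."""
    return ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b] * copies_b

print(activity_score("*1", "*4"))     # 1.0 (reduced activity)
print(activity_score("*1", "*1", 2))  # 3.0 (increased activity from duplication)
```

The positive rank correlation the authors report is between this score and measured endoxifen formation rate, i.e. higher-scoring diplotypes metabolize faster.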
0 Communities
1 Members
0 Resources
10 MeSH Terms
Standardizing accelerometer-based activity monitor calibration and output reporting.
Coolbaugh CL, Hawkins DA
(2014) J Appl Biomech 30: 594-7
MeSH Terms: Acceleration, Accelerometry, Calibration, Equipment Failure Analysis, Reference Standards, Reproducibility of Results, Sensitivity and Specificity, Transducers, United States
Added September 15, 2014
Wearable accelerometer-based activity monitors (AMs) are used to estimate energy expenditure and ground reaction forces in free-living environments, but a lack of standardized calibration and data reporting methods limits their utility. The objectives of this study were to (1) design an inexpensive and easily reproducible AM testing system, (2) develop a standardized calibration method for accelerometer-based AMs, and (3) evaluate the utility of the system and accuracy of the calibration method. A centrifuge-type device was constructed to apply known accelerations (0-8g) to each sensitive axis of 30 custom and two commercial AMs. Accelerometer data were recorded, and matrix algebra with a least squares solution was then used to determine a calibration matrix for the custom AMs to convert raw accelerometer output to units of g's. Accuracy was tested by comparing applied and calculated accelerations for custom and commercial AMs. AMs were accurate to within 4% of applied accelerations. The relatively inexpensive AM testing system (< $100) and calibration method have the potential to improve the sharing of AM data, the ability to compare data from different studies, and the accuracy of AM-based models to estimate various physiological and biomechanical quantities of interest in field-based assessments of physical activity.
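The calibration step this abstract describes — solving for a matrix that maps raw accelerometer counts onto known applied accelerations — is a standard least-squares fit. A minimal sketch with synthetic data (the gain and offset values are invented, not the authors' 0-8g centrifuge measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" sensor model: a 3x3 gain matrix plus a per-axis offset.
true_gain = np.array([[0.010, 0.0005, 0.0],
                      [0.0, 0.011, 0.0003],
                      [0.0002, 0.0, 0.009]])
true_offset = np.array([1.2, -0.8, 0.5])

raw = rng.uniform(0, 1000, size=(60, 3))        # raw sensor counts
applied_g = raw @ true_gain.T + true_offset     # known applied accelerations (g)

# Augment raw counts with a column of ones so one calibration matrix
# absorbs both gain and offset, then solve by least squares.
A = np.hstack([raw, np.ones((60, 1))])
C, *_ = np.linalg.lstsq(A, applied_g, rcond=None)

recovered = A @ C
print(np.allclose(recovered, applied_g))  # True: noiseless data fits exactly
```

With real sensor noise the fit is no longer exact, and the residuals give the kind of accuracy figure (within 4% here) the study reports.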
0 Communities
1 Members
0 Resources
9 MeSH Terms
Targeted peptide measurements in biology and medicine: best practices for mass spectrometry-based assay development using a fit-for-purpose approach.
Carr SA, Abbatiello SE, Ackermann BL, Borchers C, Domon B, Deutsch EW, Grant RP, Hoofnagle AN, Hüttenhain R, Koomen JM, Liebler DC, Liu T, MacLean B, Mani DR, Mansfield E, Neubert H, Paulovich AG, Reiter L, Vitek O, Aebersold R, Anderson L, Bethem R, Blonder J, Boja E, Botelho J, Boyne M, Bradshaw RA, Burlingame AL, Chan D, Keshishian H, Kuhn E, Kinsinger C, Lee JS, Lee SW, Moritz R, Oses-Prieto J, Rifai N, Ritchie J, Rodriguez H, Srinivas PR, Townsend RR, Van Eyk J, Whiteley G, Wiita A, Weintraub S
(2014) Mol Cell Proteomics 13: 907-17
MeSH Terms: Animals, Biological Assay, Biology, Guidelines as Topic, Humans, Isotope Labeling, Mass Spectrometry, Medicine, Peptides, Proteomics, Reference Standards, Software
Added March 20, 2014
Adoption of targeted mass spectrometry (MS) approaches such as multiple reaction monitoring (MRM) to study biological and biomedical questions is well underway in the proteomics community. Successful application depends on the ability to generate reliable assays that uniquely and confidently identify target peptides in a sample. Unfortunately, there is a wide range of criteria being applied to say that an assay has been successfully developed. There is no consensus on what criteria are acceptable and little understanding of the impact of variable criteria on the quality of the results generated. Publications describing targeted MS assays for peptides frequently do not contain sufficient information for readers to establish confidence that the tests work as intended or to be able to apply the tests described in their own labs. Guidance must be developed so that targeted MS assays with established performance can be widely distributed and applied by many labs worldwide. To begin to address the problems and their solutions, a workshop was held at the National Institutes of Health with representatives from the multiple communities developing and employing targeted MS assays. Participants discussed the analytical goals of their experiments and the experimental evidence needed to establish that the assays they develop work as intended and are achieving the required levels of performance. Using this "fit-for-purpose" approach, the group defined three tiers of assays distinguished by their performance and extent of analytical characterization. Computational and statistical tools useful for the analysis of targeted MS results were described. Participants also detailed the information that authors need to provide in their manuscripts to enable reviewers and readers to clearly understand what procedures were performed and to evaluate the reliability of the peptide or protein quantification measurements reported. This paper presents a summary of the meeting and recommendations.
0 Communities
1 Members
0 Resources
12 MeSH Terms
Large scale comparison of gene expression levels by microarrays and RNAseq using TCGA data.
Guo Y, Sheng Q, Li J, Ye F, Samuels DC, Shyr Y
(2013) PLoS One 8: e71462
MeSH Terms: Exons, Gene Expression Profiling, Gene Expression Regulation, Neoplastic, Genes, Neoplasm, Genome, Human, Humans, Neoplasms, Oligonucleotide Array Sequence Analysis, Reference Standards, Sequence Analysis, RNA, Statistics, Nonparametric
Added December 12, 2013
RNAseq and microarray methods are frequently used to measure gene expression level. While similar in purpose, there are fundamental differences between the two technologies. Here, we present the largest comparative study between microarray and RNAseq methods to date using The Cancer Genome Atlas (TCGA) data. We found high correlations between expression data obtained from the Affymetrix one-channel microarray and RNAseq (Spearman correlation coefficients of ∼0.8). We also observed that the low abundance genes had poorer correlations between microarray and RNAseq data than high abundance genes. As expected, due to measurement and normalization differences, Agilent two-channel microarray and RNAseq data were poorly correlated (Spearman correlation coefficients of only ∼0.2). By examining the differentially expressed genes between tumor and normal samples we observed reasonable concordance in directionality between Agilent two-channel microarray and RNAseq data, although a small group of genes were found to have expression changes reported in opposite directions using these two technologies. Overall, RNAseq produces comparable results to microarray technologies in terms of expression profiling. The RNAseq normalization methods RPKM and RSEM produce similar results on the gene level and reasonably concordant results on the exon level. Longer exons tended to have better concordance between the two normalization methods than shorter exons.
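The Spearman correlations reported in this abstract are rank-based, so they tolerate the different scales and normalizations of the two platforms. A minimal sketch of such a cross-platform comparison on toy data (the simulated noise model is ours, not TCGA):

```python
import numpy as np

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation (no tie correction; fine for continuous data)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

rng = np.random.default_rng(1)
true_expr = rng.lognormal(mean=4, sigma=2, size=500)      # latent expression levels
microarray = true_expr * rng.lognormal(0, 0.4, size=500)  # platform-specific noise
rnaseq = true_expr * rng.lognormal(0, 0.4, size=500)

rho = spearman(microarray, rnaseq)
print(rho > 0.7)  # strongly rank-correlated, as in the one-channel comparison
```

Shrinking the latent expression range relative to the noise in this toy model reproduces the paper's observation that low-abundance genes correlate more poorly across platforms.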
0 Communities
2 Members
0 Resources
11 MeSH Terms
IDPQuantify: combining precursor intensity with spectral counts for protein and peptide quantification.
Chen YY, Chambers MC, Li M, Ham AJ, Turner JL, Zhang B, Tabb DL
(2013) J Proteome Res 12: 4111-21
MeSH Terms: Fungal Proteins, Humans, Peptide Mapping, Principal Component Analysis, Proteome, Proteomics, Reference Standards, Sensitivity and Specificity, Software, Tandem Mass Spectrometry, Yeasts
Added March 7, 2014
Differentiating and quantifying protein differences in complex samples produces significant challenges in sensitivity and specificity. Label-free quantification can draw from two different information sources: precursor intensities and spectral counts. Intensities are accurate for calculating protein relative abundance, but values are often missing due to peptides that are identified sporadically. Spectral counting can reliably reproduce difference lists, but differentiating peptides or quantifying all but the most concentrated protein changes is usually beyond its abilities. Here we developed new software, IDPQuantify, to align multiple replicates using principal component analysis, extract accurate precursor intensities from MS data, and combine intensities with spectral counts for significant gains in differentiation and quantification. We have applied IDPQuantify to three comparative proteomic data sets featuring gold standard protein differences spiked into complicated backgrounds. The software is able to associate peptides with peaks that are otherwise left unidentified to increase the efficiency of protein quantification, especially for low-abundance proteins. By combining intensities with spectral counts from IDPicker, it gains an average of 30% more true positive differences among top differential proteins. IDPQuantify quantifies protein relative abundance accurately in these test data sets to produce good correlations between known and measured concentrations.
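As a generic illustration of fusing the two label-free signals the abstract contrasts, here is a toy weighting scheme (a z-score average we made up for illustration; it is not IDPQuantify's actual algorithm):

```python
import numpy as np

def combined_score(log_intensity_ratio: np.ndarray,
                   counts_a: np.ndarray, counts_b: np.ndarray) -> np.ndarray:
    """Toy fusion of label-free signals: average a z-scored log intensity
    ratio with a z-scored log spectral-count ratio per protein."""
    count_ratio = np.log2((counts_a + 1) / (counts_b + 1))  # +1 guards zero counts
    z = lambda v: (v - v.mean()) / v.std()
    return (z(log_intensity_ratio) + z(count_ratio)) / 2

# three proteins: up-regulated, unchanged, down-regulated between A and B
intensity = np.array([2.0, 0.0, -2.0])   # log2 precursor intensity ratios
counts_a = np.array([40, 10, 3])         # spectral counts, condition A
counts_b = np.array([10, 10, 12])        # spectral counts, condition B
score = combined_score(intensity, counts_a, counts_b)
print(score.argmax(), score.argmin())    # 0 2: both signals agree on direction
```

The point of any such combination is the one the abstract makes: counts are robust when intensities are missing, intensities are precise when counts saturate, and a joint score can recover differences either source alone would miss.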
0 Communities
3 Members
0 Resources
11 MeSH Terms
Validating retinal fundus image analysis algorithms: issues and a proposal.
Trucco E, Ruggeri A, Karnowski T, Giancardo L, Chaum E, Hubschman JP, Al-Diri B, Cheung CY, Wong D, Abràmoff M, Lim G, Kumar D, Burlina P, Bressler NM, Jelinek HF, Meriaudeau F, Quellec G, Macgillivray T, Dhillon B
(2013) Invest Ophthalmol Vis Sci 54: 3546-59
MeSH Terms: Algorithms, Fundus Oculi, Humans, Image Processing, Computer-Assisted, Ophthalmoscopy, Reference Standards, Reproducibility of Results, Retinal Diseases, Software
Added June 11, 2018
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.
0 Communities
1 Members
0 Resources
9 MeSH Terms