Much progress has been made in chronic kidney disease (CKD) epidemiology in the last decade to establish CKD as a condition that is common, harmful and treatable. The introduction of the new equations for estimating glomerular filtration rate (GFR) and the publication of international reference standards for creatinine and cystatin measurement paved the way for improved global estimates of CKD prevalence. The addition of albuminuria categories to the staging of CKD paved the way for research linking albuminuria and GFR to a wide range of renal and cardiovascular adverse outcomes. The advent of genome-wide association studies ushered in insights into genetic polymorphisms underpinning some types of CKD. Finally, a number of new randomized clinical trials and meta-analyses have informed evidence-based guidelines for the treatment and prevention of CKD. In this review, we discuss the lessons learnt from epidemiological investigations of the staging, etiology, prevalence and prognosis of CKD between 2007 and 2016.
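As a concrete illustration of the GFR-estimating equations referenced above, here is a minimal sketch of the 2009 CKD-EPI creatinine equation in Python. The constants are quoted from memory and should be verified against the original publication; this is illustrative only, not clinical software:

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) via the 2009 CKD-EPI creatinine
    equation. Constants quoted from memory; verify before any real use."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 50-year-old woman with serum creatinine 0.7 mg/dL: eGFR comes out
# around 101, i.e. in the normal range (>= 90).
print(round(egfr_ckd_epi_2009(0.7, 50, female=True), 1))
```

Staging CKD combines an eGFR category computed this way with the albuminuria categories discussed above.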
BACKGROUND - Semiquantitative methods such as the standardized uptake value ratio (SUVR) require normalization of the radiotracer activity to a reference tissue to monitor changes in the accumulation of amyloid-β (Aβ) plaques measured with positron emission tomography (PET). The objective of this study was to evaluate the effect of reference tissue normalization in a test-retest ¹⁸F-florbetapir SUVR study using cerebellar gray matter, white matter (two different segmentation masks), brainstem, and corpus callosum as reference regions.
METHODS - We calculated the correlation between ¹⁸F-florbetapir PET and concurrent cerebrospinal fluid (CSF) Aβ1-42 levels in a late mild cognitive impairment cohort with longitudinal PET and CSF data over the course of 2 years. In addition to conventional SUVR analysis using mean and median values of normalized brain radiotracer activity, we investigated a new image analysis technique, the weighted two-point correlation function (wS2), to capture potentially more subtle changes in Aβ-PET data.
RESULTS - Compared with the SUVRs normalized to cerebellar gray matter, all cerebral-to-white matter normalization schemes yielded a stronger inverse correlation between PET and CSF Aβ1-42, with brainstem normalization giving the best results (the highest and most stable correlation). Compared with the SUVR mean and median values, the wS2 values had the lowest coefficient of variation and the strongest inverse correlation with CSF Aβ1-42 levels across all time points and reference regions, including the cerebellar gray matter.
CONCLUSIONS - The selection of reference tissue for normalization and the choice of image analysis method can affect the longitudinal changes measured in cortical ¹⁸F-florbetapir uptake.
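The SUVR itself is a simple ratio, which makes its dependence on the reference region easy to demonstrate. A minimal sketch with made-up voxel values (not data from this study):

```python
def suvr(target_uptake, reference_uptake):
    """Standardized uptake value ratio: mean radiotracer activity in the
    target (e.g. cortical) region divided by the mean in a reference region."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(target_uptake) / mean(reference_uptake)

# Illustrative (made-up) regional means: the same cortical signal yields a
# different SUVR depending on the reference region chosen, which is why
# reference-tissue selection matters in longitudinal comparisons.
cortex = [1.8, 2.0, 2.2]
cerebellar_gm = [1.0, 1.1, 0.9]
white_matter = [1.5, 1.6, 1.4]
print(suvr(cortex, cerebellar_gm))  # 2.0
print(suvr(cortex, white_matter))   # ~1.33
```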
Cytochrome P450 2D6 (cytochrome P450, family 2, subfamily D, polypeptide 6 (CYP2D6)), a highly polymorphic drug-metabolizing enzyme, is involved in the metabolism of one-quarter of the most commonly prescribed medications. Here we have applied multiple genotyping methods and Sanger sequencing to assign precise and reproducible CYP2D6 genotypes, including copy numbers, for 48 HapMap samples. Furthermore, by analyzing a set of 50 human liver microsomes using endoxifen formation from N-desmethyl-tamoxifen as the phenotype of interest, we observed a significant positive correlation between CYP2D6 genotype-assigned activity score and endoxifen formation rate (rs = 0.68 by rank correlation test, P = 5.3 × 10⁻⁸), which corroborated the genotype-phenotype prediction derived from our genotyping methodologies. In the future, these 48 publicly available HapMap samples characterized by multiple substantiated CYP2D6 genotyping platforms could serve as a reference resource for assay development, validation, quality control and proficiency testing for other CYP2D6 genotyping projects and for programs pursuing clinical pharmacogenomic testing implementation.
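The reported rs = 0.68 is a Spearman rank correlation, i.e. the Pearson correlation of the ranked data. A self-contained sketch with hypothetical activity scores and endoxifen formation rates (not the study's data):

```python
def spearman_rs(x, y):
    """Spearman's rank correlation: Pearson correlation computed on ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical activity scores vs. endoxifen formation rates: a mostly
# monotone relationship gives rs close to 1.
scores = [0, 0.5, 1, 1, 1.5, 2, 2]
rates = [2, 5, 4, 9, 11, 15, 14]
print(round(spearman_rs(scores, rates), 3))
```

In practice `scipy.stats.spearmanr` does the same computation with a significance test included.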
Wearable accelerometer-based activity monitors (AMs) are used to estimate energy expenditure and ground reaction forces in free-living environments, but a lack of standardized calibration and data reporting methods limits their utility. The objectives of this study were to (1) design an inexpensive and easily reproducible AM testing system, (2) develop a standardized calibration method for accelerometer-based AMs, and (3) evaluate the utility of the system and accuracy of the calibration method. A centrifuge-type device was constructed to apply known accelerations (0-8 g) to each sensitive axis of 30 custom and two commercial AMs. Accelerometer data were recorded, and matrix algebra with a least squares solution was then used to determine a calibration matrix for the custom AMs that converts raw accelerometer output to units of g. Accuracy was tested by comparing applied and calculated accelerations for custom and commercial AMs. AMs were accurate to within 4% of applied accelerations. The relatively inexpensive AM testing system (< $100) and calibration method have the potential to improve the sharing of AM data, the ability to compare data from different studies, and the accuracy of AM-based models to estimate various physiological and biomechanical quantities of interest in field-based assessments of physical activity.
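The least-squares step can be sketched as an ordinary linear fit of raw sensor output against the known applied accelerations. The spin-test numbers below are hypothetical, and this single-axis gain/offset fit is a simplification of the full calibration matrix across all axes:

```python
def fit_gain_offset(applied, raw):
    """Ordinary least-squares fit of raw = gain * applied + offset for one
    accelerometer axis; returns (gain, offset)."""
    n = len(applied)
    mx = sum(applied) / n
    my = sum(raw) / n
    gain = (sum((a - mx) * (r - my) for a, r in zip(applied, raw))
            / sum((a - mx) ** 2 for a in applied))
    offset = my - gain * mx
    return gain, offset

def calibrate(raw_value, gain, offset):
    """Invert the fit to convert a raw sensor reading to units of g."""
    return (raw_value - offset) / gain

# Hypothetical spin-test data: known accelerations (g) vs. raw ADC counts.
applied = [0, 2, 4, 6, 8]
raw = [512, 716, 921, 1125, 1330]
gain, offset = fit_gain_offset(applied, raw)
print(round(calibrate(1024, gain, offset), 2))  # raw 1024 -> ~5.0 g
```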
Adoption of targeted mass spectrometry (MS) approaches such as multiple reaction monitoring (MRM) to study biological and biomedical questions is well underway in the proteomics community. Successful application depends on the ability to generate reliable assays that uniquely and confidently identify target peptides in a sample. Unfortunately, the criteria used to declare an assay successfully developed vary widely; there is no consensus on which criteria are acceptable and little understanding of how this variability affects the quality of the results generated. Publications describing targeted MS assays for peptides frequently do not contain sufficient information for readers to establish confidence that the tests work as intended or to be able to apply the tests described in their own labs. Guidance must be developed so that targeted MS assays with established performance can be widely distributed and applied by many labs worldwide. To begin to address the problems and their solutions, a workshop was held at the National Institutes of Health with representatives from the multiple communities developing and employing targeted MS assays. Participants discussed the analytical goals of their experiments and the experimental evidence needed to establish that the assays they develop work as intended and are achieving the required levels of performance. Using this "fit-for-purpose" approach, the group defined three tiers of assays distinguished by their performance and extent of analytical characterization. Computational and statistical tools useful for the analysis of targeted MS results were described. Participants also detailed the information that authors need to provide in their manuscripts to enable reviewers and readers to clearly understand what procedures were performed and to evaluate the reliability of the peptide or protein quantification measurements reported.
This paper presents a summary of the meeting and recommendations.
RNAseq and microarray methods are frequently used to measure gene expression level. While similar in purpose, there are fundamental differences between the two technologies. Here, we present the largest comparative study between microarray and RNAseq methods to date using The Cancer Genome Atlas (TCGA) data. We found high correlations between expression data obtained from the Affymetrix one-channel microarray and RNAseq (Spearman correlation coefficients of ∼0.8). We also observed that low-abundance genes had poorer correlations between microarray and RNAseq data than high-abundance genes. As expected, due to measurement and normalization differences, Agilent two-channel microarray and RNAseq data were poorly correlated (Spearman correlation coefficients of only ∼0.2). By examining the differentially expressed genes between tumor and normal samples we observed reasonable concordance in directionality between Agilent two-channel microarray and RNAseq data, although a small group of genes were found to have expression changes reported in opposite directions by these two technologies. Overall, RNAseq produces results comparable to microarray technologies in terms of expression profiling. The RNAseq normalization methods RPKM and RSEM produce similar results at the gene level and reasonably concordant results at the exon level. Longer exons tended to have better concordance between the two normalization methods than shorter exons.
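As a reminder of what the RPKM normalization mentioned above computes, here is a one-line sketch (the example values are illustrative):

```python
def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads:
    raw count normalized for both gene length and library size."""
    return read_count / ((gene_length_bp / 1e3) * (total_mapped_reads / 1e6))

# A gene with 500 reads, 2 kb long, in a library of 25 million mapped reads:
print(rpkm(500, 2000, 25_000_000))  # 500 / (2 * 25) = 10.0
```

RSEM differs in that it estimates expression with a statistical model that resolves multi-mapping reads rather than applying a closed-form ratio like this.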
Differentiating and quantifying protein differences in complex samples poses significant challenges in sensitivity and specificity. Label-free quantification can draw from two different information sources: precursor intensities and spectral counts. Intensities are accurate for calculating protein relative abundance, but values are often missing due to peptides that are identified sporadically. Spectral counting can reliably reproduce difference lists, but differentiating peptides or quantifying all but the most concentrated protein changes is usually beyond its abilities. Here we developed new software, IDPQuantify, to align multiple replicates using principal component analysis, extract accurate precursor intensities from MS data, and combine intensities with spectral counts for significant gains in differentiation and quantification. We have applied IDPQuantify to three comparative proteomic data sets featuring gold standard protein differences spiked into complex backgrounds. The software is able to associate peptides with peaks that are otherwise left unidentified to increase the efficiency of protein quantification, especially for low-abundance proteins. By combining intensities with spectral counts from IDPicker, it gains an average of 30% more true positive differences among top differential proteins. IDPQuantify quantifies protein relative abundance accurately in these test data sets, producing good correlations between known and measured concentrations.
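IDPQuantify's actual algorithm is more involved, but the idea of combining two evidence streams can be illustrated with a toy sketch that standardizes each stream and averages them. All names and numbers here are hypothetical, not IDPQuantify's method:

```python
def zscores(values):
    """Standardize a list of measurements to mean 0, unit variance."""
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / n) ** 0.5
    return [(v - m) / sd for v in values]

def combined_score(intensities, spectral_counts):
    """Average the standardized intensity and spectral-count evidence per
    protein; proteins supported by both streams score highest."""
    zi, zc = zscores(intensities), zscores(spectral_counts)
    return [(a + b) / 2 for a, b in zip(zi, zc)]

# Hypothetical per-protein intensity fold-changes and spectral-count
# differences for four proteins; protein 0 is supported by both streams.
intensity_fc = [2.5, 0.1, -0.2, 0.3]
count_diff = [12, 1, -2, 0]
scores = combined_score(intensity_fc, count_diff)
print(scores.index(max(scores)))  # protein 0 ranks first
```

The appeal of combining streams is that intensity evidence rescues proteins with sporadic identifications, while spectral counts stabilize the ranking of consistently observed ones.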
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.