The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators, and trainees. The data are retrieved directly from NCBI's PubMed and automatically updated weekly to ensure accuracy and completeness.
Importance - Polygenic risk scores comprising millions of single-nucleotide polymorphisms (SNPs) could be useful for population-wide coronary heart disease (CHD) screening.
Objective - To determine whether a polygenic risk score improves prediction of CHD compared with a guideline-recommended clinical risk equation.
Design, Setting, and Participants - Retrospective cohort study assessing the predictive accuracy of a previously validated polygenic risk score among 4847 adults of white European ancestry, aged 45 through 79 years, participating in the Atherosclerosis Risk in Communities (ARIC) study and 2390 participating in the Multi-Ethnic Study of Atherosclerosis (MESA) from 1996 through December 31, 2015, the final day of follow-up. The performance of the polygenic risk score was compared with that of the 2013 American College of Cardiology and American Heart Association pooled cohort equations.
Exposures - Genetic risk was computed for each participant by summing the product of the weights and allele dosage across 6 630 149 SNPs. Weights were based on an international genome-wide association study.
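The weighted-sum construction described in the Exposures section can be sketched as follows. This is a minimal toy illustration: the weights and allele dosages below are made-up values, not the study's 6 630 149 GWAS-derived weights.

```python
import numpy as np

# Hypothetical toy data: per-SNP weights (e.g. log-odds per effect allele)
# and allele dosages (0-2 copies of the effect allele) for 4 SNPs.
weights = np.array([0.02, -0.01, 0.05, 0.003])
dosages = np.array([
    [2, 1, 0, 1],   # participant 1
    [0, 2, 1, 2],   # participant 2
])

# Polygenic risk score: sum over SNPs of weight * allele dosage.
prs = dosages @ weights

# Standardize so associations can be reported per-SD increment,
# matching the hazard ratios per SD quoted in the Results.
prs_z = (prs - prs.mean()) / prs.std()
```

The matrix product is exactly the "sum of the product of the weights and allele dosage" across SNPs; at genome-wide scale the same computation is typically done per chromosome over dosage files rather than in one dense array.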
Main Outcomes and Measures - Prediction of 10-year first CHD events (including myocardial infarctions, fatal coronary events, silent infarctions, revascularization procedures, or resuscitated cardiac arrest) assessed using measures of model discrimination, calibration, and net reclassification improvement (NRI).
Results - The study population included 4847 adults from the ARIC study (mean [SD] age, 62.9 [5.6] years; 56.4% women) and 2390 adults from the MESA cohort (mean [SD] age, 61.8 [9.6] years; 52.2% women). Incident CHD events occurred in 696 participants (14.4%) and 227 participants (9.5%), respectively, over median follow-up of 15.5 years (interquartile range [IQR], 6.3 years) and 14.2 years (IQR, 2.5 years). The polygenic risk score was significantly associated with 10-year CHD incidence in ARIC with hazard ratios per SD increment of 1.24 (95% CI, 1.15 to 1.34) and in MESA, 1.38 (95% CI, 1.21 to 1.58). Addition of the polygenic risk score to the pooled cohort equations did not significantly increase the C statistic in either cohort (ARIC, change in C statistic, -0.001; 95% CI, -0.009 to 0.006; MESA, 0.021; 95% CI, -0.0004 to 0.043). At the 10-year risk threshold of 7.5%, the addition of the polygenic risk score to the pooled cohort equations did not provide significant improvement in reclassification in either ARIC (NRI, 0.018, 95% CI, -0.012 to 0.036) or MESA (NRI, 0.001, 95% CI, -0.038 to 0.076). The polygenic risk score did not significantly improve calibration in either cohort.
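The reclassification result rests on the two-category net reclassification improvement at the 7.5% risk threshold. A minimal sketch of that computation, using hypothetical risks and outcomes rather than the study's data:

```python
def categorical_nri(old_risk, new_risk, event, threshold=0.075):
    """Two-category NRI at a fixed risk threshold.

    old_risk/new_risk: predicted 10-year risks from a base model (e.g. the
    pooled cohort equations) and an augmented model (base + polygenic score).
    event: 1 if a CHD event occurred, else 0. All inputs are illustrative.
    """
    # Who crossed the threshold upward or downward after augmentation?
    up = [n > threshold >= o for o, n in zip(old_risk, new_risk)]
    down = [o > threshold >= n for o, n in zip(old_risk, new_risk)]

    n_ev = sum(event)
    n_non = len(event) - n_ev

    # Events should move up; non-events should move down.
    nri_events = (sum(u for u, e in zip(up, event) if e)
                  - sum(d for d, e in zip(down, event) if e)) / n_ev
    nri_nonevents = (sum(d for d, e in zip(down, event) if not e)
                     - sum(u for u, e in zip(up, event) if not e)) / n_non
    return nri_events + nri_nonevents
```

An NRI near zero, as reported for both cohorts, means the polygenic score moved about as many people in the wrong direction across the threshold as in the right one.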
Conclusions and Relevance - In this analysis of 2 cohorts of US adults, the polygenic risk score was associated with incident coronary heart disease events but did not significantly improve discrimination, calibration, or risk reclassification compared with conventional predictors. These findings suggest that a polygenic risk score may not enhance risk prediction in a general, white middle-aged population.
OBJECTIVES - Proton pump inhibitors (PPIs) are often used in pediatrics to treat common gastrointestinal disorders, and there are growing concerns about infectious adverse events. Because CYP2C19 inactivates PPIs, genetic variants that increase CYP2C19 function may decrease PPI exposure and infections. We tested the hypothesis that CYP2C19 metabolizer phenotypes are associated with infection event rates in children exposed to PPIs.
METHODS - This retrospective biorepository cohort study included individuals aged 0 to 36 months at the time of PPI exposure. Respiratory tract and gastrointestinal tract infection events were identified by using codes in the year after the first PPI mention. Variants defining the relevant CYP2C19 alleles were genotyped, and all individuals were classified as CYP2C19 poor or intermediate metabolizers, normal metabolizers (NMs), or rapid or ultrarapid metabolizers (RM/UMs). Infection rates were compared by using univariate and multivariate analyses.
RESULTS - In all, 670 individuals were included (median age 7 months; 44% girls). CYP2C19 NMs (n = 267; 40%) had a higher infection rate than RM/UMs (n = 220; 33%; median 2 vs 1 infections per person per year; P = .03). There was no difference between poor or intermediate metabolizers (n = 183; 27%) and NMs. In multivariable analysis of NMs and RM/UMs adjusting for age, sex, PPI dose, and comorbidities, CYP2C19 metabolizer status remained a significant risk factor for infection events (odds ratio 0.70 [95% confidence interval 0.50-0.97] for RM/UMs versus NMs).
CONCLUSIONS - PPI therapy is associated with higher infection rates in children with normal CYP2C19 function than in those with increased CYP2C19 function, highlighting this adverse effect of PPI therapy and the relevance of genotypes to PPI therapeutic decision-making.
Copyright © 2019 by the American Academy of Pediatrics.
BACKGROUND - Abiraterone and enzalutamide are recently approved androgen deprivation therapies (ADTs) for metastatic prostate cancer with unknown cardiac safety profiles. Abiraterone induces hypermineralocorticism on top of androgen deprivation, so it might carry an additional risk of atrial tachyarrhythmia (AT) and heart failure (HF) compared with other ADTs.
AIM - To determine if abiraterone was associated with an increased proportion of AT and HF reports among all suspected adverse drug reactions (ADRs) reported in several pharmacovigilance databases compared with enzalutamide, other ADTs and all other drugs.
METHODS - In this observational retrospective pharmacovigilance study, we performed a disproportionality analysis of reports of suspected ADRs in men in the French pharmacovigilance database, the European pharmacovigilance database and the international pharmacovigilance database VigiBase, to evaluate the reporting odds ratios (RORs) of AT and HF for abiraterone compared with enzalutamide, other ADTs and all other drugs.
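The reporting odds ratio used in this disproportionality analysis comes from the standard 2x2 pharmacovigilance table. A minimal sketch, with illustrative counts rather than the databases' actual report totals:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with a 95% Wald confidence interval on the log scale.

    a: reports of the ADR of interest with the drug of interest
    b: all other ADR reports with that drug
    c: reports of the ADR with the comparator (e.g. other ADTs)
    d: all other ADR reports with the comparator
    """
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log)
    upper = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, lower, upper

# Toy example: 40 AT reports among 1000 abiraterone reports vs
# 10 among 1000 comparator reports (hypothetical numbers).
ror, lo, hi = reporting_odds_ratio(40, 960, 10, 990)
```

A lower confidence bound above 1 flags disproportionate reporting, analogous to the RORs of 3-4 for AT reported in the Results.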
RESULTS - In the 5,759,781 ADR reports in men in VigiBase, 55,070 pertained to ADTs. The RORs for AT for abiraterone versus enzalutamide, other ADTs and all other drugs were 4.1 (95% confidence interval 3.1-5.3), 3.7 (3-4.5) and 3.2 (2.7-3.7), respectively (P<0.0001 for all). The corresponding RORs for HF were 2.5 (2-3), 1.5 (1.3-1.7) and 2 (1.7-2.3), respectively (P<0.0001 for all). These results were concordant with the French and European pharmacovigilance databases. Mean times to AT and HF onset were shorter with abiraterone (5.2±0.8 and 4.5±0.6 months, respectively) versus other ADTs (13.3±3.2 and 9.2±1.1 months, respectively) (both P<0.05). Cases on abiraterone versus other ADTs were more frequently associated with at least two ADR terms, including AT, HF, hypokalaemia, hypertension and oedema (13.6% vs 6%; P<0.0001). For abiraterone, age, but not dose, was associated with reporting of AT and HF versus any other ADR.
CONCLUSIONS - Compared with other ADTs, abiraterone was associated with higher reporting of AT and HF, associated with hypokalaemia, hypertension and oedema. These findings are consistent with the hypermineralocorticism induced by abiraterone, but not by other ADTs.
Copyright © 2019 Elsevier Masson SAS. All rights reserved.
BACKGROUND - Ibrutinib has revolutionized treatment for several B-cell malignancies. However, a recent clinical trial where ibrutinib was used in a front-line setting showed increased mortality during treatment compared with conventional chemotherapy. Cardiovascular toxicities were suspected as the culprit but not directly assessed in the study.
OBJECTIVES - The purpose of this study was to identify and characterize cardiovascular adverse drug reactions (CV-ADR) associated with ibrutinib.
METHODS - This study utilized VigiBase (International pharmacovigilance database) and performed a disproportionality analysis using reporting odds ratios (ROR) and information component (IC) to determine whether CV-ADR and CV-ADR deaths were associated with ibrutinib. IC compares observed and expected values to find associations between drugs and adverse drug reactions using disproportionate Bayesian-reporting; IC (lower end of the IC 95% credibility interval) >0 is significant.
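The information component described above can be sketched as follows. This assumes the commonly used VigiBase shrinkage form with the Norén et al. closed-form approximation for the lower credibility bound; the counts are illustrative, not from the study.

```python
import math

def information_component(n_obs, n_drug, n_ae, n_total):
    """Shrinkage IC for a drug-ADR pair in a spontaneous-report database.

    n_obs:   reports with both the drug and the ADR
    n_drug:  all reports with the drug
    n_ae:    all reports with the ADR
    n_total: all reports in the database
    Returns (IC, IC025); IC025 > 0 is treated as a signal.
    """
    expected = n_drug * n_ae / n_total
    ic = math.log2((n_obs + 0.5) / (expected + 0.5))
    # Closed-form approximation to the lower end of the 95% credibility interval.
    ic025 = ic - 3.3 * (n_obs + 0.5) ** -0.5 - 2 * (n_obs + 0.5) ** -1.5
    return ic, ic025

# Toy example: 200 joint reports where ~2 would be expected by chance.
ic, ic025 = information_component(200, 1000, 2000, 1_000_000)
```

Because IC compares observed with expected counts under disproportionate Bayesian reporting, a large positive IC (as for SVAs, IC 3.97) indicates many more joint reports than the database's overall reporting pattern predicts.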
RESULTS - This study identified 303 ibrutinib-associated cardiovascular deaths. Ibrutinib was associated with higher reporting of supraventricular arrhythmias (SVAs) (ROR: 23.1; 95% confidence interval: 21.6 to 24.7; p < 0.0001; IC: 3.97), central nervous system (CNS) hemorrhagic events (ROR: 3.7; 95% confidence interval: 3.4 to 4.1; p < 0.0001; IC: 1.63), heart failure (ROR: 3.5; 95% confidence interval: 3.1 to 3.8; p < 0.0001; IC: 1.46), ventricular arrhythmias (ROR: 4.7; 95% confidence interval: 3.7 to 5.9; p < 0.0001; IC: 0.96), conduction disorders (ROR: 3.5; 95% confidence interval: 2.7 to 4.6; p < 0.0001; IC: 0.76), CNS ischemic events (ROR: 2.2; 95% confidence interval: 2.0 to 2.5; p < 0.0001; IC: 0.73), and hypertension (ROR: 1.7; 95% confidence interval: 1.5 to 1.9; p < 0.0001; IC: 0.4). CV-ADR often occurred early after ibrutinib administration. Importantly, CV-ADR were associated with fatalities that ranged from ∼10% (SVAs and ventricular arrhythmias) to ∼20% (CNS events, heart failure, and conduction disorders). Ibrutinib-associated SVA portends poor prognosis when CNS events occur concomitantly, with 28.8% deaths (15 of 52 cases).
CONCLUSIONS - Severe and occasionally fatal cardiac events occur in patients exposed to ibrutinib. These events should be considered in patient care and in clinical trial designs. (Evaluation of Reporting of Cardio-vascular Adverse Events With Antineoplastic and Immunomodulating Agents [EROCA]; NCT03530215).
Copyright © 2019 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
PURPOSE - Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data.
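The spline side of the comparison can be sketched with an off-the-shelf thin-plate spline interpolant. The feature locations and displacements below are simulated stand-ins for the automatically matched iUS feature points that drive both methods; only the interpolation scheme itself is the technique named in the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical sparse data: 3D locations of matched iUS feature points (mm)
# and the measured displacement (mm) at each point.
rng = np.random.default_rng(0)
feature_pts = rng.uniform(0.0, 50.0, size=(20, 3))
displacements = 0.05 * feature_pts + rng.normal(0.0, 0.1, size=(20, 3))

# Thin-plate spline deformation model: no mesh, preprocessing, or physical
# model needed; it simply extends the sparse displacements smoothly.
tps = RBFInterpolator(feature_pts, displacements, kernel="thin_plate_spline")

# Evaluate the estimated brain-shift field at arbitrary query points.
query = np.array([[25.0, 25.0, 25.0]])
shift = tps(query)
```

This illustrates the trade-off the paper explores: the spline honors the driving displacements exactly but knows nothing about tissue mechanics, whereas the FEM propagates the same sparse data through patient-specific geometric and biomechanical constraints.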
METHODS - Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation.
RESULTS - The initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods.
CONCLUSIONS - In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be due to the fact that we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements or it may be due to modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
OBJECTIVES - Studies suggest that mitochondrial dysfunction underlies some forms of sepsis-induced organ failure. We sought to test the hypothesis that variations in mitochondrial DNA haplogroup affect susceptibility to sepsis-associated delirium, a common manifestation of acute brain dysfunction during sepsis.
DESIGN - Retrospective cohort study.
SETTING - Medical and surgical ICUs at a large tertiary care center.
PATIENTS - Caucasian and African American adults with sepsis.
MEASUREMENTS AND MAIN RESULTS - We determined each patient's mitochondrial DNA haplogroup using single-nucleotide polymorphism genotyping data in a DNA databank and extracted outcomes from linked electronic medical records. We then used zero-inflated negative binomial regression to analyze age-adjusted associations between mitochondrial DNA haplogroups and duration of delirium, identified using the Confusion Assessment Method for the ICU. Eight hundred ten patients accounted for 958 sepsis admissions, with 802 (84%) by Caucasians and 156 (16%) by African Americans. In total, 795 patient admissions (83%) involved one or more days of delirium. The 7% of Caucasians belonging to mitochondrial DNA haplogroup clade IWX experienced more delirium than the 49% in haplogroup H, the most common Caucasian haplogroup (age-adjusted rate ratio for delirium 1.36; 95% CI, 1.13-1.64; p = 0.001). Conversely, among African Americans the 24% in haplogroup L2 experienced less delirium than those in haplogroup L3, the most common African haplogroup (adjusted rate ratio for delirium 0.60; 95% CI, 0.38-0.94; p = 0.03).
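Zero-inflated negative binomial regression is suited to delirium-day counts because many admissions have no delirium at all. A sketch of the underlying mixture distribution, with illustrative parameter values rather than the study's fitted estimates:

```python
from scipy.stats import nbinom

def zinb_pmf(k, pi, n, p):
    """Zero-inflated negative binomial probability mass function.

    With probability pi the count is a structural zero (e.g. an admission
    with no delirium process at all); otherwise the count follows a
    negative binomial NB(n, p). Parameters here are illustrative.
    """
    base = nbinom.pmf(k, n, p)
    return pi * (k == 0) + (1 - pi) * base
```

The regression fits pi and the negative binomial mean jointly from covariates, so a haplogroup can shift the delirium-day rate (the reported rate ratios) separately from the chance of having no delirium at all.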
CONCLUSIONS - Variations in mitochondrial DNA are associated with development of and protection from delirium in Caucasians and African Americans during sepsis. Future studies are now required to determine whether mitochondrial DNA and mitochondrial dysfunction contribute to the pathogenesis of delirium during sepsis so that targeted treatments can be developed.
BACKGROUND AND PURPOSE - The purpose of the study is to characterize diffusion tensor imaging (DTI) indices in the developing spinal cord, evaluating differences based on age and cord region. Describing the progression of DTI indices in the pediatric cord increases our understanding of spinal cord development.
MATERIALS AND METHODS - A retrospective analysis was performed on DTI acquired in 121 pediatric patients (mean age, 8.6 years; range, 0.3-18.0 years) at Monroe Carell Jr. Children's Hospital at Vanderbilt from 2017 to 2018. Diffusion-weighted images (15 directions; b = 750 s/mm²; slice thickness, 5 mm; in-plane resolution, 1.0 × 1.0 mm) were acquired on a 3T scanner in the cervicothoracic and/or thoracolumbar cord. Manual whole-cord segmentation was performed. Images were masked and further segmented into cervical, upper thoracic, thoracolumbar, and conus regions. Analyses of covariance were performed for each DTI-derived index to investigate how age affects diffusion across cord regions, and 95% confidence intervals were calculated across age for each derived index and region. Post hoc testing was performed to analyze regional differences.
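The four DTI-derived indices compared across regions are standard functions of the diffusion tensor's eigenvalues. A minimal sketch, with eigenvalues in the usual units of 10⁻³ mm²/s (the specific values below are illustrative):

```python
import numpy as np

def dti_indices(evals):
    """Axial/radial/mean diffusivity and fractional anisotropy
    from the three diffusion-tensor eigenvalues."""
    l1, l2, l3 = np.sort(np.asarray(evals, dtype=float))[::-1]
    ad = l1                        # axial diffusivity: principal eigenvalue
    rd = (l2 + l3) / 2             # radial diffusivity: mean of the other two
    md = (l1 + l2 + l3) / 3        # mean diffusivity
    # FA: normalized standard deviation of the eigenvalues, in [0, 1].
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return ad, rd, md, fa
```

Isotropic diffusion gives FA = 0, while strongly directional diffusion along the cord's white-matter tracts pushes FA toward 1, which is why these indices track maturation differently in different cord regions.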
RESULTS - Analyses of covariance revealed significant correlations of age with axial diffusivity, mean diffusivity, and fractional anisotropy (all, P < .001). There were also significant differences among cord regions for axial diffusivity, radial diffusivity, mean diffusivity, and fractional anisotropy (all, P < .001).
CONCLUSIONS - This research demonstrates that diffusion evolves in the pediatric spinal cord during development, dependent on both cord region and the diffusion index of interest. Future research could investigate how diffusion may be affected by common pediatric spinal pathologies.
© 2019 by American Journal of Neuroradiology.
OBJECTIVES - In the USA, certain races and ethnicities have a disproportionately higher gastric cancer burden. Selective screening might allow for earlier detection and curative resection. Among a USA-based multiracial and ethnic cohort diagnosed with non-cardia gastric cancer (NCGC), we aimed to identify factors associated with curable stage disease at diagnosis.
METHODS - We retrospectively identified endoscopically diagnosed and histologically confirmed cases of NCGC at Mount Sinai Hospital in New York City. Demographic, clinical, endoscopic and histologic factors, as well as grade/stage of NCGC at diagnosis were documented. The primary outcome was the frequency of curable-stage NCGC (stage 0-Ia) at diagnosis in patients with versus without an endoscopy negative for malignancy prior to their index exam diagnosing NCGC. Additional factors associated with curable-stage disease at diagnosis were determined.
RESULTS - A total of 103 racially and ethnically diverse patients were included. Nearly 38% of NCGC were stage 0-Ia, 34% stage Ib-III, and 20.3% stage IV at diagnosis. A significantly higher frequency of NCGC was diagnosed in curable stages among patients who had undergone an endoscopy that was negative for malignancy prior to their index endoscopy that diagnosed NCGC, compared to patients without a negative endoscopy prior to their index exam (69.6% vs. 28.6%, p=0.003). A prior negative endoscopy was associated with 94.0% higher likelihood of diagnosing curable-stage NCGC (p=0.003). No other factors analyzed were associated with curable-stage NCGC at diagnosis.
CONCLUSIONS - Endoscopic screening and surveillance in select high-risk populations might increase diagnoses of curable-stage NCGC. These findings warrant confirmation in larger, prospective studies.
BACKGROUND - Immune checkpoint inhibitors (ICI) produce durable antitumor responses but provoke autoimmune toxicities, including uncommon but potentially devastating neurologic toxicities. The clinical features, including the spectrum, timing, and outcomes, of ICI-induced neurologic toxicities are not well characterized.
METHODS - We performed disproportionality analysis using VigiBase, the World Health Organization pharmacovigilance database, comparing neurologic adverse event (AE) reporting in patients receiving ICIs vs. the full database. Neurologic AEs were classified by group queries using the Medical Dictionary for Regulatory Activities, from database inception to September 28, 2018. Associations between ICIs and neurologic AEs were assessed using reporting odds ratios (ROR) and information component (IC). IC compares observed and expected values to find associations between drugs and AEs using disproportionate Bayesian reporting; IC (lower end of the IC 95% credibility interval) > 0 is considered statistically significant.
RESULTS - Among the full database, 18,518,994 AEs were reported, including 48,653 with ICIs. ICIs were associated with higher incidence of myasthenia gravis (0.47% of ICI reports vs. 0.04% of the full database, ROR 16.5 [95% CI 14.5-18.9]; IC 3.31), encephalitis (0.51% vs. 0.05%, ROR 10.4 [95% CI 9.2-11.8]; IC 3.15), peripheral neuropathy (1.16% vs. 0.67%, IC 0.68), and meningitis (0.15% vs. 0.06%, ROR 3.1 [95% CI 2.5-3.9]; IC 1.01). Myasthenia gravis and encephalitis were associated with anti-PD-1 whereas other neurologic AEs were associated with anti-CTLA-4. Myasthenia gravis was characterized by high fatality rates (~ 20%), early onset (median 29 days), and frequent concurrent myocarditis and myositis; whereas other neurologic AEs had lower fatality rates (6-12%), later onset (median 61-80 days), and were non-overlapping.
CONCLUSIONS - ICIs produce a spectrum of distinct classes of neurologic AEs that can cause significant morbidity and mortality and tend to occur early and with class-specific associations.