OBJECTIVES - To investigate the association between green tea intake and incident kidney stones in two large prospective cohorts.
METHODS - We examined self-reported incident kidney stone risk in the Shanghai Men's Health Study (n = 58 054; baseline age 40-74 years) and the Shanghai Women's Health Study (n = 69 166; baseline age 40-70 years). Information on the stone history and tea intake was collected by in-person surveys. Multivariable Cox proportional hazards models were adjusted for baseline demographic variables, medical history and dietary intakes including non-tea oxalate from a validated food frequency questionnaire.
RESULTS - During 319 211 and 696 950 person-years of follow-up, respectively, 1202 men and 1451 women reported incident stones. Approximately two-thirds of men and one-quarter of women were tea drinkers at baseline, and green tea was the primary type consumed (by 95% of male and 88% of female tea drinkers). Tea drinkers (men: hazard ratio 0.78, 95% confidence interval 0.69-0.88; women: hazard ratio 0.87, 95% confidence interval 0.77-0.98) and specifically green tea drinkers (men: hazard ratio 0.78, 95% confidence interval 0.69-0.88; women: hazard ratio 0.84, 95% confidence interval 0.74-0.95) had a lower incident risk than never/former drinkers. Compared with never/former drinkers, a stronger dose-response trend for the amount of dried tea leaf consumed per month was observed in men (hazard ratio 0.67, 95% confidence interval 0.56-0.80, P < 0.001) than in women (hazard ratio 0.87, 95% confidence interval 0.70-1.08, P = 0.041).
CONCLUSIONS - Green tea intake is associated with a lower risk of incident kidney stones, and the benefit is observed more strongly among men.
© 2018 The Japanese Urological Association.
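The crude absolute event rates implied by the counts above can be derived directly from the reported person-years. A minimal sketch in plain Python, using only numbers taken from the abstract (note the hazard ratios in the abstract are multivariable-adjusted, so these crude rates are context only):

```python
def incidence_per_1000(events: int, person_years: float) -> float:
    """Crude incidence rate per 1000 person-years of follow-up."""
    return events / person_years * 1000

# From the abstract: 1202 stones over 319,211 person-years (men),
# 1451 stones over 696,950 person-years (women).
men_rate = incidence_per_1000(1202, 319_211)    # ~3.8 per 1000 person-years
women_rate = incidence_per_1000(1451, 696_950)  # ~2.1 per 1000 person-years
```

The higher crude rate in men is consistent with the well-known male predominance of kidney stones, though the adjusted hazard ratios are what the study's conclusions rest on.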
BACKGROUND & AIMS - Previous studies reported an association of the bacteria Helicobacter pylori, the primary cause of gastric cancer, and risk of colorectal cancer (CRC). However, these findings have been inconsistent, appear to vary with population characteristics, and may be specific for virulence factor VacA. To more thoroughly evaluate the potential association of H pylori antibodies with CRC risk, we assembled a large consortium of cohorts representing diverse populations in the United States.
METHODS - We used H pylori multiplex serologic assays to analyze serum samples from 4063 incident cases of CRC, collected before diagnosis, and 4063 matched individuals without CRC (controls) from 10 prospective cohorts for antibody responses to 13 H pylori proteins, including virulence factors VacA and CagA. The association of seropositivity to H pylori proteins, as well as protein-specific antibody level, with odds of CRC was determined by conditional logistic regression.
RESULTS - Overall, 40% of controls and 41% of cases were H pylori-seropositive (odds ratio [OR], 1.09; 95% CI, 0.99-1.20). H pylori VacA-specific seropositivity was associated with an 11% increased odds of CRC (OR, 1.11; 95% CI, 1.01-1.22), and this association was particularly strong among African Americans (OR, 1.45; 95% CI, 1.08-1.95). Additionally, odds of CRC increased with level of VacA antibody in the overall cohort (P = .008) and specifically among African Americans (P = .007).
CONCLUSIONS - In an analysis of a large consortium of cohorts representing diverse populations, we found serologic responses to H pylori VacA to be associated with increased risk of CRC, particularly among African Americans. Future studies should seek to understand whether this marker is related to virulent H pylori strains carried in these populations.
Copyright © 2019 AGA Institute. Published by Elsevier Inc. All rights reserved.
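The study's odds ratios come from conditional logistic regression on matched case-control sets, but the overall seropositivity proportions in the abstract allow a crude (unmatched) odds ratio for intuition. This simplified sketch ignores the matching and covariates, so it will not reproduce the adjusted OR of 1.09 exactly:

```python
def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1 - p)

def crude_odds_ratio(p_cases: float, p_controls: float) -> float:
    """Unmatched odds ratio from exposure proportions in cases vs. controls."""
    return odds(p_cases) / odds(p_controls)

# From the abstract: 41% of cases and 40% of controls were H pylori-seropositive.
or_crude = crude_odds_ratio(0.41, 0.40)  # ~1.04, below the matched estimate of 1.09
```

The gap between the crude value and the reported conditional estimate illustrates why matched designs are analyzed with conditional, not ordinary, logistic regression.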
AIMS - Loop diuretics are standard therapy for patients with acute heart failure (AHF). A subset of patients hospitalized for AHF develop a blunted natriuretic response to loop diuretics, termed diuretic resistance, which often leads to worsening HF (WHF). Early detection of diuretic resistance could facilitate escalation of therapy and prevention of WHF. We therefore conducted a prospective study of emergency department (ED) patients with AHF to determine whether WHF could be predicted from urinary electrolytes during the first 1-2 h of ED care.
METHODS AND RESULTS - Patients were eligible if they had an ED AHF diagnosis, had not yet received intravenous diuretics, had a systolic blood pressure > 90 mmHg, and were not on dialysis. Urine electrolytes and urine output were collected at 1, 2, 4, and 6 h after diuretic administration. Worsening HF was defined as clinically persistent or worsening HF requiring escalation of diuretics or administration of intravenous vasoactives after the ED stay. Of the 61 patients who qualified in this pilot study, 10 (16.4%) fulfilled our definition of WHF. At 1 h after diuretic administration, patients who developed WHF had lower total urinary sodium excretion (9.5 vs. 43.0 mmol; P < 0.001) and lower urine sodium concentration (48 vs. 80 mmol/L; P = 0.004) than patients without WHF. All patients with WHF had a total urine sodium of <35.4 mmol at 1 h (100% sensitivity and 60% specificity).
CONCLUSIONS - One hour after diuretic administration, a urine sodium excretion of <35.4 mmol was highly suggestive of the development of WHF. These relationships require further testing to determine if early intervention with alternative agents can prevent WHF.
© 2018 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
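The 100% sensitivity and 60% specificity reported for the 35.4 mmol cutoff follow the standard definitions for a threshold test. A sketch with hypothetical patient data (the values below are illustrative only, not from the study):

```python
def sensitivity_specificity(values, has_whf, threshold):
    """Treat urine sodium below `threshold` as a positive test for WHF;
    return (sensitivity, specificity)."""
    tp = sum(1 for v, y in zip(values, has_whf) if y and v < threshold)
    fn = sum(1 for v, y in zip(values, has_whf) if y and v >= threshold)
    tn = sum(1 for v, y in zip(values, has_whf) if not y and v >= threshold)
    fp = sum(1 for v, y in zip(values, has_whf) if not y and v < threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 1-h urine sodium totals (mmol) and WHF outcomes:
na = [9.5, 12.0, 20.0, 40.0, 50.0, 70.0, 33.0, 30.0]
whf = [True, True, True, False, False, False, False, False]
sens, spec = sensitivity_specificity(na, whf, threshold=35.4)
```

With every WHF patient below the cutoff, sensitivity is 1.0; specificity depends on how many non-WHF patients also fall below it.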
RATIONALE - Intensive care unit (ICU) delirium is highly prevalent and a potentially avoidable hospital complication. The current cost of ICU delirium is unknown.
OBJECTIVES - To specify the association between the daily occurrence of delirium in the ICU with costs of ICU care accounting for time-varying illness severity and death.
RESEARCH DESIGN - We performed a prospective cohort study within medical and surgical ICUs in a large academic medical center.
SUBJECTS - We analyzed critically ill patients (N=479) with respiratory failure and/or shock.
MEASURES - Covariates included baseline factors (age, insurance, cognitive impairment, comorbidities, Acute Physiology and Chronic Health Evaluation II score) and time-varying factors (Sequential Organ Failure Assessment score, mechanical ventilation, and severe sepsis). The primary analysis used a novel 3-stage regression method: the cumulative cost of delirium over 30 ICU days was first estimated, and these costs were then separated into those attributable to increased resource utilization among survivors and those avoided on account of delirium's association with early mortality in the ICU.
RESULTS - The patient-level 30-day cumulative cost of ICU delirium attributable to increased resource utilization was $17,838 (95% confidence interval, $11,132-$23,497). A combination of professional, dialysis, and bed costs accounted for the largest share of the incremental costs associated with ICU delirium. The 30-day cumulative incremental costs of ICU delirium that were avoided due to delirium-associated early mortality were $4654 (95% confidence interval, $2056-$7869).
CONCLUSIONS - Delirium is associated with substantial costs after accounting for time-varying illness severity; these costs would be approximately 20% higher (∼$22,500) were it not for delirium's association with early ICU mortality.
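The ∼$22,500 figure is the sum of the two reported cost components, and the ∼20% is the share of that total avoided through delirium-associated early mortality. The arithmetic, using only numbers from the abstract:

```python
attributable = 17_838  # 30-day cost from increased resource utilization ($)
avoided = 4_654        # cost avoided via delirium-associated early mortality ($)

# Total cost had early mortality not truncated resource use: ~$22,500
total_if_no_early_mortality = attributable + avoided

# Share of the total that was avoided: roughly 20%
avoided_share = avoided / total_if_no_early_mortality
```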
Chronic end-organ complications result in morbidity and mortality in adults with sickle cell disease (SCD). In a retrospective-prospective cohort of 150 adults with SCD who received standard care screening for pulmonary function abnormalities, cardiac disease, and renal assessment from January 2003 to 2016, we tested the hypothesis that clustering of end-organ disease is common and that multiple-organ impairment predicts mortality. Any end-organ disease occurred in 59.3% of individuals, and 24.0% developed multiple (>1) end-organ disease. The number of end-organs affected was associated with mortality (P ≤ .001); 8.2% (5 of 61) of individuals with no affected end-organ, 9.4% (5 of 53) of those with 1 affected organ, 20.7% (6 of 29) of those with 2 affected end-organs, and 85.7% (6 of 7) of those with 3 affected end-organs died over a median follow-up period of 8.7 (interquartile range 3.5-11.4) years. Of the 22 individuals who died, 77.3% had evidence of SCD-related end-organ impairment, and this was the primary or secondary cause of death in 45.0%. SCD-related chronic impairment in multiple organs, and its association with mortality, highlights the need to understand the common mechanisms underlying chronic end-organ damage in SCD and the urgent need to develop interventions to prevent irreversible end-organ complications.
© 2018 Wiley Periodicals, Inc.
PURPOSE - Standardized care via a unified surgeon preference card for pediatric appendectomy can result in significant cost reduction. The purpose of this study was to evaluate the impact of cost and outcome feedback to surgeons on value of care in an environment reluctant to adopt a standardized surgeon preference card.
METHODS - Prospective observational study comparing operating room (OR) supply costs and patient outcomes for appendectomy in children with 6-month observation periods both before and after intervention. The intervention was real-time feedback of OR supply cost data to individual surgeons via automated dashboards and monthly reports.
RESULTS - Two hundred sixteen children underwent laparoscopic appendectomy for non-perforated appendicitis (110 pre-intervention and 106 post-intervention). Median supply cost decreased significantly after the intervention, from $884 (IQR $705-$1025) to $388 (IQR $182-$776), p<0.001. No significant change was detected in median OR duration (47 min [IQR 36-63] to 50 min [IQR 38-64], p=0.520) or adverse events (1 [0.9%] to 6 [4.7%], p=0.062). OR supply costs for individual surgeons decreased significantly during the intervention period for 6 of 8 surgeons (75%).
CONCLUSION - Approaching value measurement with a surgeon-specific (rather than group-wide) approach can reduce OR supply costs while maintaining excellent clinical outcomes.
LEVEL OF EVIDENCE - Level II.
Copyright © 2018 Elsevier Inc. All rights reserved.
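The headline effect of the feedback intervention can be framed as a relative reduction in median OR supply cost, computed from the two medians in the abstract:

```python
pre_median, post_median = 884, 388  # OR supply cost ($) before/after feedback

# Relative reduction in the median supply cost: ~56%
relative_reduction = (pre_median - post_median) / pre_median
```

Medians are not additive across patients, so this is a summary of the shift in the typical case, not of total spending.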
PURPOSE - Neurologic and endothelial injury biomarkers are associated with prolonged delirium during critical illness and may reflect injury pathways that lead to poor long-term outcomes. We hypothesized that blood-brain barrier (BBB), neuronal, and endothelial injury biomarkers measured during critical illness are associated with cognitive impairment and disability after discharge.
METHODS - We enrolled adults with respiratory failure and/or shock and measured plasma concentrations of BBB (S100B), neuronal (UCHL1, BDNF), and endothelial (E-selectin, PAI-1) injury markers within 72 h of ICU admission. At 3 and 12 months post-discharge, we assessed participants' global cognition, executive function, and activities of daily living (ADL). We used multivariable regression to determine whether biomarkers were associated with outcomes after adjusting for relevant demographic and acute illness covariates.
RESULTS - Our study included 419 survivors of critical illness with a median age of 59 years and a median APACHE II score of 25. Higher S100B was associated with worse global cognition at 3 and 12 months (P = 0.008; P = 0.01). UCHL1 was nonlinearly associated with global cognition at 3 months (P = 0.02). Higher E-selectin was associated with worse global cognition (P = 0.006 at 3 months; P = 0.06 at 12 months). BDNF and PAI-1 were not associated with global cognition. No biomarkers were associated with executive function. Higher S100B (P = 0.05) and E-selectin (P = 0.02) were associated with increased disability in ADLs at 3 months.
CONCLUSIONS - S100B, a marker of BBB and/or astrocyte injury, and E-selectin, an adhesion molecule and marker of endothelial injury, are associated with long-term cognitive impairment after critical illness, findings that may reflect mechanisms of critical illness brain injury.
BACKGROUND - Delirium during critical illness results from numerous insults, which might be interconnected and yet individually contribute to long-term cognitive impairment. We sought to describe the prevalence and duration of clinical phenotypes of delirium (ie, phenotypes defined by clinical risk factors) and to understand associations between these clinical phenotypes and severity of subsequent long-term cognitive impairment.
METHODS - In this multicentre, prospective cohort study, we included adult (≥18 years) medical or surgical ICU patients with respiratory failure, shock, or both as part of two parallel studies: the Bringing to Light the Risk Factors and Incidence of Neuropsychological Dysfunction in ICU Survivors (BRAIN-ICU) study, and the Delirium and Dementia in Veterans Surviving ICU Care (MIND-ICU) study. We assessed patients at least once a day for delirium using the Confusion Assessment Method-ICU and identified a priori-defined, non-mutually exclusive phenotypes of delirium per the presence of hypoxia, sepsis, sedative exposure, or metabolic (eg, renal or hepatic) dysfunction. We considered delirium in the absence of hypoxia, sepsis, sedation, and metabolic dysfunction to be unclassified. At 3 and 12 months after discharge, we assessed cognition with the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). We used multiple linear regression to separately analyse associations between the duration of each phenotype of delirium and RBANS global cognition scores at 3-month and 12-month follow-up, adjusting for potential confounders.
FINDINGS - Between March 14, 2007, and May 27, 2010, 1048 participants were enrolled, eight of whom could not be analysed. Of 1040 participants, 708 survived to 3 months of follow-up and 628 to 12 months. Delirium was common, affecting 740 (71%) of 1040 participants at some point during the study and occurring on 4187 (31%) of all 13 434 participant-days. A single delirium phenotype was present on only 1355 (32%) of all 4187 participant-delirium days, whereas two or more phenotypes were present during 2832 (68%) delirium days. Sedative-associated delirium was most common (present during 2634 [63%] delirium days), and a longer duration of sedative-associated delirium predicted a worse RBANS global cognition score 12 months later, after adjusting for covariates (difference in score comparing 3 days vs 0 days: -4·03, 95% CI -7·80 to -0·26). Similarly, longer durations of hypoxic delirium (-3·76, 95% CI -7·16 to -0·37), septic delirium (-3·67, -7·13 to -0·22), and unclassified delirium (-4·70, -7·16 to -2·25) also predicted worse cognitive function at 12 months, whereas duration of metabolic delirium did not (1·14, -0·12 to 3·01).
INTERPRETATION - Our findings suggest that clinicians should consider sedative-associated, hypoxic, and septic delirium, which often co-occur, as distinct indicators of acute brain injury and seek to identify all potential risk factors that may affect long-term cognitive impairment, especially those that are iatrogenic and potentially modifiable, such as sedation.
FUNDING - National Institutes of Health and the Department of Veterans Affairs.
Copyright © 2018 Elsevier Ltd. All rights reserved.
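The 95% confidence intervals reported for the adjusted score differences let a reader back out the approximate standard error and two-sided p value, assuming normal-theory intervals. A sketch using the sedative-associated estimate (-4.03, 95% CI -7.80 to -0.26) from the abstract:

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Standard error implied by a normal-theory 95% confidence interval."""
    return (upper - lower) / (2 * z)

def two_sided_p(estimate: float, se: float) -> float:
    """Two-sided p value under a normal approximation, via the error function."""
    z = abs(estimate) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

se = se_from_ci(-7.80, -0.26)  # ~1.92
p = two_sided_p(-4.03, se)     # ~0.04, consistent with the CI excluding zero
```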
OBJECTIVES - To examine the effect of late-life body mass index (BMI) and rapid weight loss on incident mild cognitive impairment (MCI) and Alzheimer's disease (AD).
DESIGN - Prospective longitudinal cohort study.
SETTING - National Alzheimer's Coordinating Center (NACC) Uniform Data Set, including 34 past and current National Institute on Aging-funded AD Centers across the United States.
PARTICIPANTS - 6940 older adults (n=5061 normal cognition [NC]; n=1879 MCI).
MEASUREMENTS - BMI (kg/m2) and modified Framingham Stroke Risk Profile (FSRP) score (sex, age, systolic blood pressure, anti-hypertension medication, diabetes mellitus, cigarette smoking, prevalent cardiovascular disease, atrial fibrillation) were assessed at baseline. Cognition and weight were assessed annually.
RESULTS - Multivariable binary logistic regression, adjusting for age, sex, race, education, length of follow-up, and modified FSRP, related late-life BMI to risk of diagnostic conversion from NC to MCI or AD and from MCI to AD. Secondary analyses related late-life BMI to diagnostic conversion in the presence of rapid weight loss (>5% decrease in 12 months) and apolipoprotein E (APOE) ε4. During a mean 3.8-year follow-up period, 12% of NC participants converted to MCI or AD and 49% of MCI participants converted to AD. Higher baseline BMI was associated with a reduced probability of diagnostic conversion, such that each one-unit increase in baseline BMI was associated with reduced odds of conversion for both NC (OR=0.977, 95%CI 0.958-0.996, p=0.015) and MCI participants (OR=0.962, 95%CI 0.942-0.983, p<0.001). The protective effect of higher baseline BMI did not persist in the setting of rapid weight loss but did persist when adjusting for APOE ε4.
CONCLUSIONS - Higher late-life BMI is associated with a lower risk of incident MCI and AD but is not protective in the presence of rapid weight loss.
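Per-unit odds ratios from a logistic model compound multiplicatively, so the implied effect of a larger BMI difference is the per-unit OR raised to that number of units. This is an illustration of the model's linearity assumption, not an additional study result:

```python
def compound_or(per_unit_or: float, units: float) -> float:
    """Odds ratio across `units` of a covariate, given a per-unit OR
    from a logistic model (effects are multiplicative on the odds scale)."""
    return per_unit_or ** units

# Per-unit ORs from the abstract: 0.977 (NC cohort), 0.962 (MCI cohort).
# Implied OR for a hypothetical 5-unit BMI difference:
or_nc_5units = compound_or(0.977, 5)   # ~0.89
or_mci_5units = compound_or(0.962, 5)  # ~0.82
```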