The publication data currently available has been vetted by Vanderbilt faculty, staff, administrators and trainees. The data itself is retrieved directly from NCBI's PubMed and is automatically updated on a weekly basis to ensure accuracy and completeness.
OBJECTIVES - We aimed to validate an algorithm using both primary discharge diagnosis (International Classification of Diseases, Ninth Revision (ICD-9)) and diagnosis-related group (DRG) codes to identify hospitalisations due to decompensated heart failure (HF) in a population of patients with diabetes within the Veterans Health Administration (VHA) system.
DESIGN - Validation study.
SETTING - Veterans Health Administration-Tennessee Valley Healthcare System.
PARTICIPANTS - We identified and reviewed a stratified, random sample of hospitalisations between 2001 and 2012 within a single VHA healthcare system of adults who received regular VHA care and were initiated on an antidiabetic medication between 2001 and 2008. We sampled 500 hospitalisations: 400 that fulfilled algorithm criteria and 100 that did not. Of these, 497 had adequate information for inclusion. The mean patient age was 66.1 years (SD 11.4). Most patients were male (98.8%); 75% were white and 20% were black.
PRIMARY AND SECONDARY OUTCOME MEASURES - To determine if a hospitalisation was due to HF, we performed chart abstraction using Framingham criteria as the referent standard. We calculated the positive predictive value (PPV), negative predictive value (NPV), sensitivity and specificity for the overall algorithm and each component (primary diagnosis code (ICD-9), DRG code or both).
RESULTS - The algorithm had a PPV of 89.7% (95% CI 86.8 to 92.7), NPV of 93.9% (89.1 to 98.6), sensitivity of 45.1% (25.1 to 65.1) and specificity of 99.4% (99.2 to 99.6). The PPV was highest for hospitalisations that fulfilled both the ICD-9 and DRG algorithm criteria (92.1% (89.1 to 95.1)) and lowest for hospitalisations that fulfilled only DRG algorithm criteria (62.5% (28.4 to 96.6)).
CONCLUSIONS - Our algorithm, which included primary discharge diagnosis and DRG codes, demonstrated excellent PPV for identification of hospitalisations due to decompensated HF among patients with diabetes in the VHA system.
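The four metrics reported above follow directly from a 2×2 table of algorithm result against the Framingham referent standard. A minimal sketch of the relationship (the counts shown are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard validation metrics from a 2x2 table, where tp/fp/fn/tn
    are true/false positives/negatives against the referent standard.
    Each metric is returned as a fraction in [0, 1]."""
    return {
        "ppv": tp / (tp + fp),          # P(true HF | algorithm positive)
        "npv": tn / (tn + fn),          # P(no HF | algorithm negative)
        "sensitivity": tp / (tp + fn),  # P(algorithm positive | true HF)
        "specificity": tn / (tn + fp),  # P(algorithm negative | no HF)
    }

# Hypothetical counts for illustration only:
metrics = diagnostic_metrics(tp=90, fp=10, fn=5, tn=95)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common true HF hospitalisations are in the sampled population.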
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
BACKGROUND - Rural/urban variations in admissions for heart failure may be influenced by severity at hospital presentation and local practice patterns. Laboratory data reflect clinical severity and guide hospital admission decisions and treatment for heart failure, a costly chronic illness and a leading cause of hospitalization among the elderly. Our main objective was to examine the role of laboratory test results in measuring disease severity at the time of admission for inpatients who reside in rural and urban areas.
METHODS - We retrospectively analyzed discharge data on 13,998 hospital discharges for heart failure from three states, Hawai'i, Minnesota, and Virginia. Hospital discharge records from 2008 to 2012 were derived from the State Inpatient Databases of the Healthcare Cost and Utilization Project, and were merged with results of laboratory tests performed on the admission day or up to two days before admission. Regression models evaluated the relationship between clinical severity at admission and patient urban/rural residence. Models were estimated with and without use of laboratory data.
RESULTS - Patients residing in rural areas were more likely to have missing laboratory data on admission and less likely to have abnormal or severely abnormal tests. Rural patients were also less likely to be admitted with high levels of severity as measured by the All Patient Refined Diagnosis Related Groups (APR-DRG) severity subclass, derivable from discharge data. Adding laboratory data to discharge data improved model fit. Also, in models without laboratory data, the association between urban compared to rural residence and APR-DRG severity subclass was significant for major and extreme levels of severity (OR 1.22, 95% CI 1.03-1.43 and 1.55, 95% CI 1.26-1.92, respectively). After adding laboratory data, this association became non-significant for major severity and was attenuated for extreme severity (OR 1.12, 95% CI 0.94-1.32 and 1.43, 95% CI 1.15-1.78, respectively).
CONCLUSION - Heart failure patients from rural areas are hospitalized at lower severity levels than their urban counterparts. Laboratory test data provide insight on clinical severity and practice patterns beyond what is available in administrative discharge data.
OBJECTIVE - The mortality observed-to-expected (O:E) ratio is rapidly becoming the most important measured quality metric, as it allows quantification and comparison of survival outcomes across providers and institutions. Although the O:E ratio is monitored by external observers, it remains unfamiliar to individuals within most institutions.
STUDY DESIGN - Retrospective chart review.
SETTING - Vanderbilt University Medical Center.
SUBJECTS AND METHODS - Twenty-eight patients cared for by the Department of Otolaryngology died while in the hospital between January 2001 and December 2010. All patient charts were reviewed for indicators related to mortality. From January 2006 to December 2010, a standardized mortality O:E ratio was available using the All Patient Refined-Diagnosis Related Group (APR-DRG) grouper from the University HealthSystem Consortium (UHC). The O:E ratio can be monitored over time to measure and quantify the effect of various interventions.
RESULTS - The quarterly otolaryngology O:E ratio has varied from 1.1 to 0.29, against a standard of 1.0. Internally, results have been driven primarily by mortalities among patients on the Head and Neck Service. Attention to common postoperative complications, accurate coding of comorbidities, and the compassionate use of palliative care consults have led to a significant decrease in the O:E ratio. Conversely, transfers from other hospitals have increased the ratio.
CONCLUSION - The Department of Otolaryngology has reduced the O:E ratio by focusing attention on factors that have been shown to reduce mortality and to enhance compassionate terminal care.
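The O:E ratio described above is, at its core, a simple quotient: observed in-hospital deaths divided by the risk-adjusted expected deaths, where the expected count is typically the sum of per-patient predicted mortality probabilities from the risk-adjustment model. A minimal sketch with hypothetical numbers:

```python
def oe_ratio(observed_deaths, expected_deaths):
    """Observed-to-expected mortality ratio. A value of 1.0 means deaths
    match the risk-adjusted expectation; values below 1.0 mean fewer
    deaths than expected for the case mix."""
    if expected_deaths <= 0:
        raise ValueError("expected deaths must be positive")
    return observed_deaths / expected_deaths

# e.g. 3 observed deaths in a quarter against 10.2 expected from the
# risk model (hypothetical values):
quarterly = oe_ratio(3, 10.2)
```

This is why accurate coding of comorbidities lowers the ratio: better-documented comorbidity raises each patient's predicted mortality, increasing the denominator.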
BACKGROUND AND OBJECTIVES - Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m² (eGFR 75).
DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS - From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among 13,003 patients with known BCr to simulate a "missing" data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI were compared with that of eGFR 75.
RESULTS - All multiple-imputation methods except the basic one approximated actual BCr more closely than did eGFR 75. Total AKI misclassification was lower with full multiple imputation plus serum creatinine (9.0%) than with eGFR 75 (12.3%; P<0.001). Improvements in misclassification were greater in patients with impaired kidney function: 15.3% with full multiple imputation plus serum creatinine versus 40.5% with eGFR 75 (P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreased sensitivity relative to eGFR 75.
CONCLUSIONS - Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods.
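The eGFR 75 surrogate discussed above is conventionally implemented by inverting the 4-variable MDRD equation at an assumed eGFR of 75 ml/min per 1.73 m². A sketch of that back-calculation, using the commonly cited MDRD coefficients (the study's exact equation may differ, so treat the constants as an assumption):

```python
def surrogate_baseline_scr(age, female=False, black=False, egfr=75.0):
    """Back-calculate a surrogate baseline serum creatinine (mg/dL) by
    inverting the 4-variable MDRD equation at an assumed eGFR:
      eGFR = 186 * SCr^-1.154 * age^-0.203 * 0.742(if female) * 1.210(if black)
    Solving for SCr gives SCr = (k / eGFR)^(1/1.154), where k collects
    all the non-creatinine factors."""
    k = 186.0 * age ** -0.203
    if female:
        k *= 0.742
    if black:
        k *= 1.210
    return (k / egfr) ** (1.0 / 1.154)
```

Because a single assumed eGFR collapses real between-patient variation, estimates like this systematically misclassify AKI in patients whose true baseline kidney function is far from the population average, which is the gap multiple imputation addresses.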
CONTEXT - Currently most automated methods to identify patient safety occurrences rely on administrative data codes; however, free-text searches of electronic medical records could represent an additional surveillance approach.
OBJECTIVE - To evaluate a natural language processing search-approach to identify postoperative surgical complications within a comprehensive electronic medical record.
DESIGN, SETTING, AND PATIENTS - Cross-sectional study involving 2974 patients undergoing inpatient surgical procedures at 6 Veterans Health Administration (VHA) medical centers from 1999 to 2006.
MAIN OUTCOME MEASURES - Postoperative occurrences of acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, or myocardial infarction identified through medical record review as part of the VA Surgical Quality Improvement Program. We determined the sensitivity and specificity of the natural language processing approach to identify these complications and compared its performance with patient safety indicators that use discharge coding information.
RESULTS - The proportion of postoperative events for each sample was 2% (39 of 1924) for acute renal failure requiring dialysis, 0.7% (18 of 2327) for pulmonary embolism, 1% (29 of 2327) for deep vein thrombosis, 7% (61 of 866) for sepsis, 16% (222 of 1405) for pneumonia, and 2% (35 of 1822) for myocardial infarction. Natural language processing correctly identified 82% (95% confidence interval [CI], 67%-91%) of acute renal failure cases compared with 38% (95% CI, 25%-54%) for patient safety indicators. Similar results were obtained for venous thromboembolism (59%, 95% CI, 44%-72% vs 46%, 95% CI, 32%-60%), pneumonia (64%, 95% CI, 58%-70% vs 5%, 95% CI, 3%-9%), sepsis (89%, 95% CI, 78%-94% vs 34%, 95% CI, 24%-47%), and postoperative myocardial infarction (91%, 95% CI, 78%-97% vs 89%, 95% CI, 74%-96%). Both natural language processing and patient safety indicators were highly specific for these diagnoses.
CONCLUSION - Among patients undergoing inpatient surgical procedures at VA medical centers, natural language processing analysis of electronic medical records to identify postoperative complications had higher sensitivity and lower specificity compared with patient safety indicators based on discharge coding.
OBJECTIVE - Patients with hospital-acquired acute kidney injury (AKI) are at risk for increased mortality and further medical complications. Evaluating these patients with a prediction tool easily implemented within an electronic health record (EHR) would identify high-risk patients prior to the development of AKI and could prevent iatrogenically induced episodes of AKI and improve clinical management.
METHODS - The authors used structured clinical data acquired from an EHR to identify patients with normal kidney function for admissions from 1 August 1999 to 31 July 2003. Using administrative, computerized provider order entry and laboratory test data, they developed a 3-level risk stratification model to predict each of 2 severity levels of in-hospital AKI as defined by RIFLE criteria. The severity levels were defined as 150% or 200% of baseline serum creatinine. Model discrimination and calibration were evaluated using 10-fold cross-validation.
RESULTS - Cross-validation of the models resulted in areas under the receiver operating characteristic curve (AUC) of 0.75 (150% elevation) and 0.78 (200% elevation). Both models were adequately calibrated, with Hosmer-Lemeshow goodness-of-fit chi-squared values of 9.7 (P = 0.29) and 12.7 (P = 0.12), respectively.
CONCLUSIONS - The authors generated risk prediction models for hospital-acquired AKI using only commonly available electronic data. The models identify patients at high risk for AKI who might benefit from early intervention or increased monitoring.
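The two RIFLE-derived severity levels used as model endpoints above reduce to a ratio of observed to baseline serum creatinine. A minimal sketch of that classification (a simplified reading of the 150%/200% thresholds, ignoring the urine-output and eGFR criteria that full RIFLE staging also uses):

```python
def aki_severity(peak_scr, baseline_scr):
    """Classify in-hospital AKI from peak serum creatinine relative to
    baseline, using the 150% and 200% thresholds described above."""
    ratio = peak_scr / baseline_scr
    if ratio >= 2.0:
        return "200% of baseline"  # the more severe model endpoint
    if ratio >= 1.5:
        return "150% of baseline"
    return "no AKI"
```

The dependence on `baseline_scr` is what ties this abstract to the preceding one: a misestimated baseline shifts the ratio and can flip the classification.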
BACKGROUND - Incremental achievement of quality indicator goals has been associated with progressive improvement in mortality and hospitalization risk in hemodialysis (HD) patients.
STUDY DESIGN - Descriptive cross-sectional study.
SETTING & PARTICIPANTS - All 33,879 HD patients treated at Fresenius Medical Care North America facilities for >90 days with scorable 36-Item Short Form Health Survey responses from January 1, 2006, to December 31, 2006.
PREDICTOR - We hypothesized that achieving up to 5 HD goals before the survey (albumin ≥4.0 g/dL, hemoglobin of 11-12 g/dL, equilibrated Kt/V ≥1.2, phosphorus of 3.5-5.5 mg/dL, and absence of an HD catheter) results in better self-reported quality of life (QoL).
OUTCOMES & MEASUREMENTS - Distributions of Physical and Mental Component Summary (PCS/MCS) scores within and across quality indicator categories determined during the prior 90 days from survey date (compared using analysis of covariance and linear regression models, with adjustment for case-mix and each of the quality indicators).
RESULTS - Incremental achievement of up to 5 goals was associated with progressively higher PCS and MCS scores (both P for trend < 0.001). Compared with patients meeting all 5 goals (n = 4,208; reference group), case-mix-adjusted PCS score was lower by 1.8 points with only 4 goals met (n = 11,785), 3.4 points for 3 goals (n = 10,906), 4.9 points for 2 goals (n = 5,119), 5.9 points for 1 goal (n = 1,592), and 7.8 points in the 269 patients who failed to meet any goal (each P < 0.001 vs the reference group). The corresponding decreases in case-mix-adjusted MCS scores were 1.0 point for 4 goals met, 1.7 points for 3 goals, 2.3 points for 2 goals, 3.0 points for 1 goal, and 4.7 points with no goal met, with each P < 0.001 compared with the MCS score from patients who achieved all 5 goals.
LIMITATIONS - Potential residual confounding from unmeasured covariates.
CONCLUSION - Patients progressively meeting more quality goals report incrementally better QoL. Further studies are needed to determine whether prospective achievement of quality goals will result in improved QoL for HD patients.
CONTEXT - Hospitals are under pressure to increase revenue and lower costs, and at the same time, they face dramatic variation in clinical demand.
OBJECTIVE - We sought to determine the relationship between peak hospital workload and rates of adverse events (AEs).
METHODS - A random sample of 24,676 adult patients discharged from the medical/surgical services at 4 US hospitals (2 urban and 2 suburban teaching hospitals) from October 2000 to September 2001 were screened using administrative data, leaving 6841 cases to be reviewed for the presence of AEs. Daily workload for each hospital was characterized by volume, throughput (admissions and discharges), intensity (aggregate DRG weight), and staffing (patient-to-nurse ratios). For volume, we calculated an "enhanced" occupancy rate that accounted for same-day bed occupancy by more than 1 patient. We used Poisson regressions to predict the likelihood of an AE, with control for workload and individual patient complexity, and the effects of clustering.
RESULTS - One urban teaching hospital had enhanced occupancy rates above 100% for much of the year. At that hospital, admissions and patients per nurse were significantly related to the likelihood of an AE (P < 0.05); occupancy rate, discharges, and DRG-weighted census were significant at P < 0.10. For example, a 0.1% increase in the patient-to-nurse ratio led to a 28% increase in the AE rate. Results at the other 3 hospitals varied and were mainly nonsignificant.
CONCLUSIONS - Hospitals that operate at or over capacity may experience heightened rates of patient safety events and might consider re-engineering the structures of care to respond better during periods of high stress.
STUDY OBJECTIVE - We seek to determine whether cardiac risk factor burden (defined as the number of conventional cardiac risk factors present) is useful for the diagnosis of acute coronary syndromes in the emergency department (ED) setting.
METHODS - This was a post hoc analysis of the Internet Tracking Registry of Acute Coronary Syndromes (i*trACS) registry, which included 17,713 ED visits for suspected acute coronary syndromes. The first visit was included for US patients who were not cocaine or amphetamine users, did not leave against medical advice, and had complete ECG and demographic data. Acute coronary syndrome was defined by 30-day revascularization, diagnosis-related group codes, or death within 30 days, with positive cardiac biomarkers at index hospitalization. Cardiac risk factors were diabetes, hypertension, smoking, hypercholesterolemia, and family history of coronary artery disease. Cardiac risk factor burden was defined as the number of risk factors present. Because multiple logistic regression analysis revealed that age modified the relationship between cardiac risk factor burden and acute coronary syndromes, a stratified analysis was performed for 3 age categories: younger than 40, 40 to 65, and older than 65 years. Positive and negative likelihood ratios with their 95% confidence intervals (CIs) were calculated for each total risk factor cutoff.
RESULTS - Of 10,806 eligible patients, 871 (8.1%) had acute coronary syndromes. In patients younger than 40 years, having no risk factors had a negative likelihood ratio of 0.17 (95% CI 0.04 to 0.66), and having 4 or more risk factors had a positive likelihood ratio of 7.39 (95% CI 3.09 to 17.67). In patients between 40 and 65 years of age, having no risk factors had a negative likelihood ratio of 0.53 (95% CI 0.40 to 0.71), and having 4 or more risk factors had a positive likelihood ratio of 2.13 (95% CI 1.66 to 2.73). In patients older than 65 years, having no risk factors had a negative likelihood ratio of 0.96 (95% CI 0.74 to 1.23), and having 4 or more risk factors had a positive likelihood ratio of 1.09 (95% CI 0.64 to 1.62).
CONCLUSION - Cardiac risk factor burden has limited clinical value in diagnosing acute coronary syndromes in the ED setting, especially in patients older than 40 years.
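The likelihood ratios reported above derive from the sensitivity and specificity of each risk-factor-count cutoff. A minimal sketch of the relationship (input values are illustrative, not the study's):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a binary test:
      LR+ = sens / (1 - spec)  -- how much a positive result raises the odds
      LR- = (1 - sens) / spec  -- how much a negative result lowers the odds
    LR+ near 1 (as in the over-65 group above) means the finding barely
    changes the pretest odds of disease."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# e.g. a hypothetical cutoff with 80% sensitivity and 90% specificity:
lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)
```

Posttest odds are pretest odds multiplied by the likelihood ratio, which is why LR+ values close to 1.0 in older patients imply the risk-factor count adds little diagnostic information there.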
BACKGROUND - Utilization risk assessment is potentially useful for allocation of health care resources, but precise measurement is difficult.
OBJECTIVE - Test the hypotheses that health-related quality of life (HRQOL), severity of illness, and diagnoses at a single primary care visit are comparable case-mix predictors of future 1-year charges in all clinical settings within a large health system, and that these predictors are more accurate in combination than alone.
RESEARCH DESIGN - Longitudinal observational study in which subjects' characteristics were measured at baseline, and their outpatient clinic visits and charges and their inpatient hospital days and charges were tracked for 1 year.
SUBJECTS - Adult primary care patients.
MEASURES - Duke Health Profile for HRQOL, Duke Severity of Illness Checklist for severity of illness, and Johns Hopkins Ambulatory Care Groups for diagnostic groups classification.
RESULTS - Of 1,202 patients, 84.4% had follow-up in the primary care clinic, 63.2% in subspecialty clinics, 14.8% in the emergency room, and 9.6% in the hospital. Of $6,290,775 in total charges, $779,037 (12.2%) was for follow-up primary care. The highest accuracy was found for predicting primary care charges, where R2 for predictors ranged from 0.083 for medical record auditor-reported severity of illness to 0.107 for HRQOL. When predictors were combined, the highest R2 of 0.125 was found for the combination of HRQOL and diagnostic groups.
CONCLUSIONS - Baseline HRQOL, severity of illness, and diagnoses were comparable predictors of 1-year health services charges in all clinical sites but most predictive for primary care charges, and were more accurate in combination than alone.