Hospital Report Cards™ Maternity Care and Women's Health Methodology 2006
© Copyright 2006 Health Grades, Inc. All rights reserved.
May not be reprinted or reproduced without permission from Health Grades, Inc.
Statistical Models for Predicting Mortality
1. Unique statistical models were developed for each patient cohort using logistic regression.
2. Comorbid diagnoses (e.g., hypertension, chronic renal failure, anemia, diabetes), demographic
characteristics (e.g., age), and specific procedures (for procedure-based cohorts) were classified as
possible risk factors. HealthGrades used logistic regression to determine which of these were actually
risk factors and to what extent they were correlated with mortality. A risk factor stayed in the model if it
had an odds ratio greater than one and was also statistically significant in explaining variation in
mortality. Potential risk factors with odds ratios less than one were removed from the model, except in a
few cases where the risk had been previously documented in the medical literature. Complications were not counted as
risk factors as they were considered a result of care received during the admission.
3. The statistical models were checked for validity and finalized. All of the models were highly significant,
with p-values of 0.0001 or less. These cohort-specific models were then used to estimate the
probability of death for each patient in the cohort.
4. Patients were then aggregated for each hospital to obtain the predicted outcome for each hospital.
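Steps 2 through 4 above can be sketched in code. The coefficient values and patient fields below are purely illustrative assumptions, not HealthGrades' actual model; the sketch only shows the mechanics of applying a fitted logistic model to each patient and summing the predicted probabilities by hospital.

```python
import math
from collections import defaultdict

# Hypothetical fitted coefficients for illustration only (intercept plus
# weights for age and two comorbidity indicators); not the actual model.
COEFS = {"intercept": -5.0, "age_decades": 0.35, "diabetes": 0.40, "renal_failure": 0.85}

def predicted_mortality(patient):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = (COEFS["intercept"]
         + COEFS["age_decades"] * patient["age"] / 10.0
         + COEFS["diabetes"] * patient["diabetes"]
         + COEFS["renal_failure"] * patient["renal_failure"])
    return 1.0 / (1.0 + math.exp(-z))

def hospital_predicted_deaths(patients):
    """Step 4: aggregate each patient's predicted probability of death
    by hospital to obtain the hospital's predicted mortality."""
    totals = defaultdict(float)
    for p in patients:
        totals[p["hospital"]] += predicted_mortality(p)
    return dict(totals)
```

Summing per-patient probabilities this way yields the expected number of deaths at each hospital given its case mix, which is what the actual death count is later compared against.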
Assignment of Ratings for Cardiac/Stroke Services for Women
For each hospital, actual mortality was summed across all six patient cohorts, and predicted (risk-adjusted)
mortality was summed across the same six cohorts. The predicted mortality rate was then compared to the
actual mortality rate for each hospital and tested for statistical significance at the 90 percent confidence
level (using a z-score and a two-tailed test). Percentile scores were calculated from the z-score.
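The document does not give the exact z-score formula, so the sketch below assumes one plausible formulation: independent Bernoulli outcomes per patient, with the variance of the predicted death count given by the sum of p(1-p) over patients.

```python
import math

def mortality_z_score(probs, observed_deaths):
    """z-score comparing observed deaths with the deaths predicted by the
    risk model. Assumes independent Bernoulli outcomes with per-patient
    predicted probabilities `probs` (an assumed formulation)."""
    expected = sum(probs)                       # predicted number of deaths
    variance = sum(p * (1 - p) for p in probs)  # Bernoulli-sum variance
    return (observed_deaths - expected) / math.sqrt(variance)
```

For a two-tailed test at the 90 percent level, the difference is statistically significant when the absolute z-score exceeds roughly 1.645.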
The following rating system was applied to the comparison between actual and predicted mortality across
all six patient cohorts.
· Better than expected: Actual performance was better than predicted and the difference was
statistically significant.
· As expected: Actual performance was not significantly different from what was predicted.
· Worse than expected: Actual performance was worse than predicted and the difference was
statistically significant.
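The three-tier rating above can be expressed as a simple threshold rule on the z-score. The 1.645 critical value is the standard two-tailed cutoff at the 90 percent level; mapping negative z (fewer deaths than predicted) to the better rating is an assumption consistent with the definitions above.

```python
def assign_rating(z, critical=1.645):
    """Map a z-score from the two-tailed test at the 90 percent level to
    one of the three ratings. Negative z means fewer observed deaths than
    predicted, i.e. better-than-expected performance."""
    if z <= -critical:
        return "Better than expected"
    if z >= critical:
        return "Worse than expected"
    return "As expected"
```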
To be included in the study, a hospital must have had at least 30 cases in five of the six possible cohorts,
and at least five cases during 2004 in five of the six cohorts. Hospitals that transferred more than
14.3 percent of their stroke patients were also excluded.
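The inclusion criteria amount to a three-part screen, sketched below. The function and parameter names are illustrative; each argument carries one of the thresholds stated above.

```python
def is_eligible(cohort_counts, counts_2004, stroke_transfer_rate):
    """Inclusion screen per the criteria above (names are illustrative).
    cohort_counts: total case counts for the six cohorts.
    counts_2004: case counts for the six cohorts during 2004.
    stroke_transfer_rate: fraction of stroke patients transferred out."""
    enough_cases = sum(1 for c in cohort_counts if c >= 30) >= 5
    enough_2004 = sum(1 for c in counts_2004 if c >= 5) >= 5
    low_transfer = stroke_transfer_rate <= 0.143
    return enough_cases and enough_2004 and low_transfer
```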