Registries that follow three benchmarking best practices generate insights that clinicians, researchers, and others can trust.
Our industry-leading registry product, RegistryX, hits this mark. Not only does RegistryX seamlessly acquire, validate, and transform your data, but it also allows you to explore benchmarking insights and include benchmarks in nearly every type of report.
Let’s dive into the specifics around these practices and explore how you can build benchmarks that are trusted and actionable.
One key part of getting benchmarks right is clearly defining both what metrics you are comparing and what organizations you are using as comparisons.
Comparison metrics for healthcare benchmarking include mortality rates, length of stay, readmissions, and complications, as well as process adherence, patient satisfaction, and other outcome measures.
In our blog post on the basics of healthcare benchmarking, we describe that “comparison metrics are most accurate with a proper comparison group, also called a peer group.”
A peer group is a set of individuals or entities that share similar characteristics. Importantly, peer groups allow organizations to compare themselves to other, similar organizations. For healthcare organizations and practices, relevant characteristics include practice setting and type, geography, provider and procedure volume, and several others.
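To make this concrete, here is a minimal sketch of peer-group selection. The site records and field names (`setting`, `region`, `annual_volume`) are illustrative, not RegistryX's actual data model:

```python
# Illustrative site records; a real registry would hold many more attributes.
sites = [
    {"id": "A", "setting": "academic", "region": "urban", "annual_volume": 1200},
    {"id": "B", "setting": "academic", "region": "urban", "annual_volume": 950},
    {"id": "C", "setting": "community", "region": "rural", "annual_volume": 300},
    {"id": "D", "setting": "academic", "region": "rural", "annual_volume": 1100},
]

def peer_group(sites, setting, region, min_volume):
    """Return sites that share the same setting and region with comparable volume."""
    return [s for s in sites
            if s["setting"] == setting
            and s["region"] == region
            and s["annual_volume"] >= min_volume]

peers = peer_group(sites, "academic", "urban", 500)
print([s["id"] for s in peers])  # → ['A', 'B']
```

The idea is simply that every filter you add makes the comparison fairer, at the cost of a smaller group, which is where sample size (discussed next) comes in.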
Another important consideration when implementing benchmarking is the sample size of your peer groups. When a peer group's sample size is too small, it cannot serve as a meaningful comparison tool.
Our RegistryX reporting suite alerts you when your selected combination of peer groups yields too small a sample size. Notice in the example below that two of the selected years have insufficient sample sizes, so their data is masked to prevent misinterpretation.
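The masking logic behind this kind of safeguard can be sketched in a few lines. The minimum sample size of 10 below is a hypothetical threshold, not RegistryX's actual rule:

```python
MIN_N = 10  # illustrative threshold; each registry sets its own

def masked_rate(events, n, min_n=MIN_N):
    """Return the event rate, or None (masked) when the sample is too small."""
    if n < min_n:
        return None  # suppress the value to avoid misleading comparisons
    return events / n

# Hypothetical (events, cases) counts per year
by_year = {2021: (3, 8), 2022: (14, 120), 2023: (2, 6)}
for year, (events, n) in by_year.items():
    rate = masked_rate(events, n)
    print(year, "masked" if rate is None else f"{rate:.1%}")
# → 2021 masked / 2022 11.7% / 2023 masked
```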
To deliver the most accurate insights, benchmarking reports should be both risk- and reliability-adjusted.
Risk adjustment is a process that corrects for the severity of a patient’s illness, which ensures that comparisons of hospitals and clinicians are fair and accurate.
For example, it would be misleading to compare the postoperative results of an 85-year-old female with underlying medical conditions to those of an otherwise healthy 50-year-old male undergoing the same procedure. The two patients don’t have the same level of risk.
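One common way to operationalize this is an observed-to-expected (O/E) ratio, sketched below. In practice the predicted risks would come from a regression model fit on the full registry; the numbers here are made up for illustration:

```python
# Each patient: whether the outcome occurred, and their model-predicted risk.
patients = [
    {"observed": 1, "predicted_risk": 0.30},  # e.g., 85-year-old with comorbidities
    {"observed": 0, "predicted_risk": 0.05},  # e.g., healthy 50-year-old
    {"observed": 0, "predicted_risk": 0.10},
]

observed = sum(p["observed"] for p in patients)
expected = sum(p["predicted_risk"] for p in patients)
oe_ratio = observed / expected  # > 1 means worse than expected for this case mix

overall_rate = 0.12  # hypothetical registry-wide event rate
adjusted_rate = oe_ratio * overall_rate
print(f"O/E = {oe_ratio:.2f}, risk-adjusted rate = {adjusted_rate:.1%}")
```

Because the denominator is the *expected* count for this particular case mix, a site treating sicker patients is no longer penalized for its raw outcome rate.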
In addition to risk adjustment, reliability adjustment also enhances the accuracy of benchmarking reports.
Putting this in a more specific context: when a healthcare organization’s sample sizes are small, observed rates of rare outcomes may be due to chance and should be considered less precise than rates based on larger samples. Reliability adjustment corrects for this random error, making your benchmarking results fairer and more accurate.
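A standard way to implement reliability adjustment is empirical-Bayes shrinkage: small-sample rates are pulled toward the peer-group mean, while large-sample rates stand mostly on their own. The shrinkage constant `k` below is illustrative; in practice it is estimated from the data:

```python
def reliability_adjust(observed_rate, n, group_mean, k=50):
    """Shrink the observed rate toward the group mean by reliability w = n / (n + k)."""
    w = n / (n + k)
    return w * observed_rate + (1 - w) * group_mean

group_mean = 0.10  # hypothetical peer-group average
print(reliability_adjust(0.40, 5, group_mean))     # tiny site: pulled close to 0.10
print(reliability_adjust(0.40, 5000, group_mean))  # large site: stays close to 0.40
```

The weight `w` approaches 1 as the sample grows, so the adjustment fades exactly when the observed rate becomes trustworthy on its own.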
Together, risk adjustment and reliability adjustment help ensure your benchmarking insights are trustworthy and valuable.
With your comparison metrics and peer groups defined, and statistical models in place, the next step is bringing benchmarks to life through best-in-class visualizations and reports.
RegistryX includes embedded Tableau for creating powerful visualizations. We use color, a variety of reporting types, and tooltip explanations to enable benchmarking in engaging and informative ways.
Tableau is a user-friendly, powerful data visualization tool that transforms raw data into easy-to-interpret graphs, charts, and other visuals.
Having Tableau at your fingertips within an ArborMetrix registry allows you to more easily grasp insights, share information with others, and construct evidence-based reports.
With our unique use of Tableau in our registries, benchmarking is made even easier. You can visually compare your organization’s performance against others, understand how quality improvement initiatives are progressing, and identify future areas for improvement.
For organizations that want to protect the privacy of other sites and clinicians, we offer optional blinding. Blinding allows users to view their performance against their peers without revealing which sites the comparison data comes from.
Within the ArborMetrix platform, you can experiment with user-defined benchmarks. The example below shows a report where you can set the benchmarks (in this case, start date, comparison date, and end date). This is an example of a user-defined benchmark that allows for temporal analysis of a measure, and it is useful for understanding the effects of a new quality improvement initiative or process implementation.
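The before-and-after comparison such a report performs can be sketched as follows. The case data and dates are invented for illustration:

```python
from datetime import date

# Hypothetical cases: (date, 1 if the measured event occurred, else 0)
cases = [
    (date(2023, 2, 1), 1), (date(2023, 3, 10), 1), (date(2023, 4, 5), 0),
    (date(2023, 7, 2), 0), (date(2023, 8, 15), 1), (date(2023, 9, 9), 0),
]

def window_rate(cases, start, end):
    """Event rate for cases whose date falls in [start, end)."""
    in_window = [flag for d, flag in cases if start <= d < end]
    return sum(in_window) / len(in_window)

# User-defined benchmark dates: start, comparison (e.g., when the new
# process launched), and end.
start, comparison, end = date(2023, 1, 1), date(2023, 6, 1), date(2024, 1, 1)
before = window_rate(cases, start, comparison)
after = window_rate(cases, comparison, end)
print(f"before: {before:.0%}, after: {after:.0%}")  # → before: 67%, after: 33%
```

Splitting the measure at a user-chosen comparison date is what makes this useful for judging whether a quality improvement initiative moved the needle.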
With the right comparison metrics and peer group, statistical models, and visualizations, you can provide powerful, trusted benchmarking in your registry.
Check out the following articles for more like this.