HDG #035: The non-mathematical guide to using risk scores in healthcare

 

Exciting times ahead!

I’ve been promising some announcements and here they are.

  1. Words I never thought would come out of my mouth: I’m now also on TikTok, so follow me there for bite-sized health data tips, mostly recorded in my car, as any proper influencer does.

  2. After spending the last two years (which is, like, 14 in startup years) straying from healthcare data analytics but diving deep into data and software engineering/tech, I'm thrilled to recommit to empowering healthcare organizations and innovators to drive meaningful, impactful, and equitable change in healthcare.

    The time wasn’t lost. It reaffirmed my advocacy.

    Data and technology can save lives, prevent people from falling through the cracks like I almost did, and connect people across silos to collaborate, which is going to be absolutely key for any future healthcare innovation to address disparities in any meaningful way.

  3. That is where my newest work comes in: HealthTech Rx is a health innovation hub dedicated to health equity and technology. We'll be providing advisory consulting, innovation programs, education + engagement + activism, and investment to help organizations eliminate health disparities through innovative new solutions. While I hinted at it last month, we announced it on LinkedIn last week, so it's official!!

    I'll be spearheading our data-driven innovation programs, community engagement, and advisory services. Aside from my existing consulting on analytics strategy, we will also leverage our team’s vast experience as startup operators and healthcare SMEs to help health organizations, healthtech founders/startups, and other healthcare stakeholders to better source, vet, test, and develop scalable new ideas that can create a significant impact on health outcomes in New Mexico and beyond, while also contributing to healthcare innovation and economic development.

  4. You’ll also see a slow convergence of my HDG branding and topic themes with my HTRx efforts, starting with the new header image style you see today!

So stay tuned: we're launching a podcast, creating more content, and even exploring new platforms like TikTok and YouTube.

After all, it takes a village—and I consider each and every one of you as part of this mission tribe!

P.S. What’s an Innovation Hub?

 
 

. . .

Now… about those Risk Scores in healthcare.

Risk modeling and risk scoring in healthcare can be complex, but their use and development don't have to be. Anyone can understand their fundamentals and feel more confident with them, even without a mathematical background.

This guide is for those who aren’t deeply versed in actuarial science or statistics. It aims to make the concept of risk models more approachable and help you understand how to effectively use or even contribute to their development within your healthcare organization.

In a previous issue of Health Data Guru, I wrote about the many (many, many) predictive risk models in healthcare. We covered their types, uses, and limitations. If you’re unfamiliar with risk models and risk scoring, that’s a good starting point.

Building on that, this article aims to further your understanding, covering:

  • Terminology Basics: Clarifying terms like risk, risk model, risk score, and risk stratification

  • Real-world Barriers to Adoption: Why these models aren't always instantly and unanimously embraced by case managers, care teams, operations or finance teams, and analysts

  • Examples and Ideas: Concrete examples of how risk models might be better used for population health management

First — let’s clarify some terminology basics:

Risk

In the sense that we’re using it in this article, this typically refers to the likelihood of some adverse event or outcome. It could be financial (the likelihood or “risk” that a health plan enrollee will end up being a “high cost” member), clinical (the likelihood or “risk” that a patient with a recent admission will be readmitted), or representative of disease burden or complexity (for instance, being considered “higher risk” than other patients due to your unique combination of factors or certain diagnostic conditions). 

In healthcare, risk could also refer to other concepts related to the financial responsibility for a person’s healthcare (for instance, providers taking on “risk” for their patient populations), risk adjustment of payments to health plans and providers based on the relative health of their population, and other operational risks (compliance, etc.).

Risk Model

A risk model in healthcare refers to the statistical tool or framework used to estimate the likelihood of specific outcomes or events based on various inputs, factors, or features. In other words, it is the actual process, tool, calculation, or mathematical methodology, typically statistical or otherwise computational, that derives whatever output it seeks to compute.

There are many different types of models out there:

  1. Predictive Models: Models like the LACE index for Readmission (Length of stay, Acuity of the admission, Comorbidities, and Emergency department visits) are used to “predict” the likelihood of a patient being readmitted within 30 days after discharge.

  2. Clinical Risk Models: These models assess the risk of specific clinical outcomes, such as the development of a particular disease or complication. For example, the Framingham Heart Study risk model predicts the risk of developing cardiovascular disease based on factors like age, cholesterol levels, blood pressure, and smoking status.

  3. Actuarial & Risk Adjustment Models: In health insurance, these models are used to estimate the future healthcare costs of individuals or populations. Models like Milliman Advanced Risk Adjusters (MARA) or DxCG (now by Cotiviti) help in setting premiums and understanding financial risk by combining diagnosis and drug data to predict costs. These types of models consider factors like age, gender, medical history, and lifestyle.

  4. Quality Risk Models: These models assess risks related to healthcare quality and patient safety. They might predict the likelihood of medical errors, hospital-acquired infections, or patient falls, helping institutions to implement preventive measures.

  5. Population Health Risk Models: These models are used to identify and manage risks at the population level, focusing on preventive care and chronic disease management. They help in segmenting populations based on risk factors and designing targeted interventions. For example, the Johns Hopkins Adjusted Clinical Groups (ACG) System uses patients’ diagnoses and demographic data to predict healthcare utilization.

  6. Resource Utilization Models: These models predict the healthcare resources (like hospital beds, staff, and equipment) that will be needed in specific scenarios. They are crucial for capacity planning and management, especially in situations like pandemics or natural disasters.

  7. Social Risk or Vulnerability Models: A bit newer, these seek to understand how vulnerable a patient/member might be based on social determinants of health at the individual and/or community level. I've also seen an individual Social Risk score that identifies how at-risk someone may be based on their individual factors: credit score, housing stability, etc. I think this is a perfect example of something that could/should be combined with other risk factors for a more holistic view of the patient/member. More about that below.

  8. Impact or Impactability Models: Similar to some of the above, but these models are specifically designed to identify exactly how likely a member is to be impacted by, that is to say, benefit from, an intervention. I've seen this most in case management: the model shows how likely a patient/member is to benefit from case management.

A data scientist may describe models in healthcare a little differently, which may also be appropriate. In the data science context, one might identify the model more by the method used, such as linear or logistic regression, optimization, survival analysis, random forests, and so on. Since this is the non-mathematical guide, I'll leave that for someone more well-versed in the numbers and computational methods to expound upon.

Risk Score & Risk Stratification

A risk score is the numerical value produced by the risk model. For instance, consider the LACE index for Readmission model mentioned above. The model produces a numerical score from 0 to 19. That score is then used to categorize, or risk stratify, patients into a discrete risk category (stratum) of Low, Moderate, or High.

You can kind of think of this distinction between the score and the stratification like the grading scale from school. While the risk score could be akin to the numerical score/percentage you get on a test (0–100), the risk stratification/category is more akin to the corresponding letter grade (A, B, C, D, or F).

I like seeing both the score and the final stratification, rather than just one or the other. The score helps us sort, prioritize, and distribute the population results so we can understand what we're working with overall and how each individual person compares to others in the population. But the stratification allows us to group people and apply certain interventions, resources, or workflows that correspond to their needs.

The only tricky thing here to note is that the risk score itself could be on any scale depending on the model used and how they developed it. For instance, with the LACE index, the score could range from 0 to 19, which is then stratified into one of three distinct risk strata. Other models might produce scores ranging from -10 to 10, or 0 to 400, each with their own stratification criteria. 

It all depends on the specific model, how it computes the score, and what that score means in terms of the risk stratification or classification. 
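To make the distinction between the model, the score, and the stratification concrete, here is a minimal sketch in Python of how a LACE-style calculation might look. The point values follow the commonly published LACE scheme, and the Low/Moderate/High cut-points are illustrative; treat this as a sketch of the idea, not as any particular vendor's or hospital's implementation.

```python
# A minimal, illustrative LACE-style scorer. Point values follow the commonly
# published LACE scheme; the stratification cut-points are illustrative only.

def lace_score(length_of_stay_days: int,
               acute_admission: bool,
               charlson_comorbidity_index: int,
               ed_visits_last_6_months: int) -> int:
    """Return a LACE-style risk score between 0 and 19."""
    # L: length of stay
    if length_of_stay_days <= 0:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = length_of_stay_days      # 1, 2, or 3 points
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7

    # A: acuity of the admission (admitted urgently / through the ED)
    a_points = 3 if acute_admission else 0

    # C: comorbidities (Charlson index, capped at 5 points)
    c_points = 5 if charlson_comorbidity_index >= 4 else charlson_comorbidity_index

    # E: ED visits in the prior six months (capped at 4 points)
    e_points = min(ed_visits_last_6_months, 4)

    return l_points + a_points + c_points + e_points


def stratify(score: int) -> str:
    """Map the numeric score to a discrete risk stratum (illustrative cut-points)."""
    if score >= 10:
        return "High"
    if score >= 5:
        return "Moderate"
    return "Low"


# Example: a 5-day acute admission, Charlson index of 2, one recent ED visit
score = lace_score(5, True, 2, 1)   # 4 + 3 + 2 + 1 = 10
print(score, stratify(score))       # -> 10 High
```

Notice how, in this scheme, a long length of stay (up to 7 points) or a heavy comorbidity burden (up to 5 points) can contribute more than an acute admission (3 points), which is exactly the kind of "which factors drive the score" insight discussed below.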

Despite their potential, risk models often still face barriers to adoption in healthcare settings.

I’ve observed some of the reluctance stemming from a lack of understanding, perceived complexity, and doubts about practical applicability. Addressing these concerns is crucial for broader understanding and adoption.

Some of the most common challenges I see are:

1. The lack of transparency for many risk models gets us wrapped around the axle when it comes to operationalizing them.

We often get lost in trying to understand every detail of these models. With the rise of more and more proprietary models, the lack of transparency and fuzzy understanding has also grown. Without time for a training roll-out, or perhaps due to turnover, the end user often ends up seeing the number or a category assigned to their patient/member in a report or dashboard somewhere, but doesn't know what it means or how they should act upon it. Thus it goes unused.

Some of the common ones, like the Johns Hopkins Adjusted Clinical Groups (ACG) System, DxCG (now by Cotiviti), and Milliman Advanced Risk Adjusters (MARA), are considered by some as the "gold standard," but let's not forget all the proprietary models by other vendors. Many, many other vendors offer proprietary risk adjustment models, each with their unique methodologies and focus areas. For instance, 3M's Clinical Risk Groups (CRGs) lean towards chronic disease management, while Optum's Impact Pro model zeroes in on avoidable costs. HBI Solutions and other population health vendors have predictive models for all kinds of conditions and events using a combination of claims, enrollment, clinical, lab data, etc.

Consider the sheer number of models out there. It would be practically impossible to understand them all at the deepest level.

A basic understanding of the risk model should be adequate for comfortable operational use.


Understanding every single one of the models available, in excruciating detail, is often not feasible. You can avoid getting lost in the details because the vendors have done all that work exactly so that you don't have to. And they usually have Technical Specs or Methodology documents they can share to help shed light on what their model entails.

I don't think this is a dealbreaker, though, especially for models that are more population-based and carry less clinical/diagnostic weight or financial gravity. Understanding the basic elements, like the factors considered in the model and how to interpret the final score, is often sufficient.

At a minimum, I recommend you do have a high-level understanding of:

  • What inputs (data, fields, factors, features) a model is using. For instance, knowing that a certain model considers overall cost, utilization, demographic risks, community social risks, and mortality risk helps you understand, when it identifies someone as higher risk, which factors might have contributed to that and to what extent. In the case of the LACE index, I understand that it considers length of stay, the acuteness of the admit, the patient's comorbidities, and recent emergency department (ED) visits.

  • Which factors contribute more substantially to the final result. Do certain factors drive a higher score than others based on how the model is designed? Does the model tend to favor certain conditions or scenarios? Would the model potentially miss certain scenarios? Understanding which factors might contribute more substantially to the final result helps you make more informed decisions based on the scoring/stratification. In the LACE index example, we can tell from the point values that the model tends to assign higher risk to longer lengths of stay and greater comorbidities than it assigns to acute admits.

  • How to interpret the final results. Interpretation matters. While this sounds straightforward, it often isn't. Because there are so many different kinds of models, aimed at different kinds of outcomes, it's very important to know how to interpret whatever final result the model produces. Many models come with documentation about how their results should be interpreted. For instance, if the model is more of an "impact" score, certain scores or strata might mean someone is more likely to benefit from (be impacted by) case management or care coordination. Other models' risk categories might represent an estimated average total cost for any one health plan member in a year. Others might be clinical risk as compared to peers. Others might be clinical risk in general.

The bigger challenge, and where your time and effort are better spent, is determining what to do with the risk information.

For instance, what do you do (clinically or operationally) when a patient comes back as "Moderate" versus "High"?

2. Fixation on actuarial/statistical and other details leads to analysis paralysis and underuse

I recognize and absolutely respect the level of actuarial and statistical rigor that goes into creating many of these commercially sold models. After all, if many organizations will be using them in some fashion, we need to know they're statistically sound, accurate at doing what they say, and reliable. Statistical validation is especially critical for models that directly impact clinical decisions.

This is such a hot topic that the Society of Actuaries (SOA) validates these and other models on occasion. For example, here is an SOA report from 2016 evaluating claims-based risk scoring models. The report seeks to evaluate the "predictive accuracy of the current set of commercial risk scoring models available in the marketplace." The actuarial view almost always focuses on predicting the cost of a covered life, but that is just one type of model. Other risk models attempt to predict clinical outcomes, such as a patient's risk of readmission or mortality (death).

The key to encouraging the use of risk models lies in simplifying their presentation and focusing on their practical applications. Making these models more accessible and understandable can help healthcare professionals see their value and applicability.

Practical Recommendations: Enhanced Use of Risk Models for Population Health

I've had the best success using risk models as "directional pointers" rather than explicit predictors to take literally.

Out-of-the-box, they can easily be used for:

  • Population-level comparison of two or more groups

  • An individual's comparison to peers in the population

  • Identifying changes in coding, or the relative risk of a population, over time

  • As one input to a broader prioritization or enhanced risk model

For the latter, consider a process that starts with a vetted risk score, then “adds” or “layers” in additional factors, like cues, that are important.

This is particularly useful for identifying patients who might benefit from targeted intervention programs, care/case management, and outreach.

Example:

Patient's Risk Score +
If they’ve had a recent ED visit +
If they are eligible for a certain program +
If they answered certain questions in a certain way on an assessment +
If they have certain conditions of interest


Because the basic score alone often doesn’t tell us much about the specifics of a patient/member, the added factors can help to layer in additional “weighting” to the basic score, allowing us to push patients with certain characteristics higher on the ensuing list. 

This helps us to systematically surface or prioritize patients based on triggers, events, or characteristics that are of particular importance to the business or a program. I like this method for prioritization and sub-stratification. It is not scientific, but it is very operationally actionable.
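Here's a minimal sketch of what that layering might look like in practice. Every weight, flag, and field name below is hypothetical; in a real setting you would tune them to your own programs, triggers, and data.

```python
# A minimal sketch of layering operational "cues" on top of a vetted base risk score.
# All weights, thresholds, and field names here are hypothetical; tune them to the
# programs, triggers, and data that matter to your organization.

from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    base_risk_score: float        # output of the vetted risk model
    recent_ed_visit: bool         # ED visit within a defined lookback window
    program_eligible: bool        # eligible for the targeted program
    flagged_assessment: bool      # answered key assessment questions a certain way
    conditions_of_interest: int   # count of diagnoses the program cares about

def priority_score(m: Member) -> float:
    """Base risk score plus hypothetical weights for each operational cue."""
    score = m.base_risk_score
    if m.recent_ed_visit:
        score += 2.0
    if m.program_eligible:
        score += 1.0
    if m.flagged_assessment:
        score += 1.5
    score += 0.5 * m.conditions_of_interest
    return score

members = [
    Member("A123", base_risk_score=6.0, recent_ed_visit=True,
           program_eligible=True, flagged_assessment=False, conditions_of_interest=2),
    Member("B456", base_risk_score=8.0, recent_ed_visit=False,
           program_eligible=False, flagged_assessment=False, conditions_of_interest=0),
]

# Rank the population: the layered cues can push a member with a lower base score
# above someone with a higher base score but none of the triggers we care about.
for m in sorted(members, key=priority_score, reverse=True):
    print(m.member_id, round(priority_score(m), 1))
```

The same additive pattern can back the custom flags and rising-risk indicators listed next: swap the weights for boolean triggers or threshold checks and surface anyone who crosses them.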

I have successfully used this hybrid method for:

  • Prioritization & Ranking of patients/members

  • Creating custom flags to identify patients/members for certain targeted interventions

  • Early indicators of rising risk or red flags for case/care management

Risk models should be used as one tool in your toolbox to build upon, not the end-all-be-all.

Risk models in healthcare are invaluable tools, but their utility is maximized when we approach them with practicality and clarity. Understanding the basics of these models — such as key factors and how to interpret scores — is often enough for practical application. By understanding their basic concepts and focusing on their directional value, we can make better, informed decisions in healthcare management.

I'd love to see these models get broader use, because they are so valuable and rigorous. At the same time, keeping their limitations in mind and figuring out how you might operationalize them is the biggest challenge, and it's what could make them even more useful.

No model is perfect, and no model is perfectly accurate every time. The sooner we understand that, without letting it hold us back from using risk models where we can, the sooner we can start driving more impact instead of letting them become a forgotten number on a report or dashboard somewhere, paid for but never utilized.

. . .

Let's move away from the intimidation of complexity and towards a more accessible and effective use of risk models in healthcare.

. . .

Until next time,

-Stefany


P.S. If you have any ideas, suggestions, feedback, or requests for specific topics, I’m always open. Hit reply and let me know!!

 
 

Like my content?

If you want to learn more about health data quickly so you can market yourself, your company, or just plain level up your health data game, I recommend subscribing to this newsletter and checking out my free Guides. Courses and more resources are coming soon, so check back often!

Want to work together?

I work with healthtech startups, investors, and health organizations who want to transform healthcare and achieve more tangible, equitable outcomes by using data in new ways. Book some time with me to talk health data, healthtech startup and investment advice, team training + workshops, event speaking, or fractional support + analytics advisory.

Learn more about the health innovation hub.

And follow me on LinkedIn, TikTok, and Medium to stay up-to-date on resources and announcements!

 
 