Hospital leaders face a numbers game

Trustee talking points

  • An ever-increasing amount of data is available for health care leaders to use in making decisions about drugs, devices and procedures.
  • The flood of data can lead to poor results if it's not interpreted correctly.
  • Board members need to be alert to randomness, significance and bias when reviewing data.
  • When used correctly, evidence-based medicine can drive clinical and strategic decisions that improve quality of care, patient outcomes and safety.

Data, big and small, is everywhere, and it will change society and health care in profound ways, driving more and more decisions in the years ahead. Health care boardrooms will be no different: Trustees will likely see even more data than they do now as they work to make health care more productive and safer.

How do trustees make sense of data without being steered in the wrong direction? How will they judge whether or not the data support offering a new technology? What basic rules of data should they keep in mind when they are inevitably shown the "evidence" supporting (or not) a particular drug, device or procedure?

These questions made me think about an excellent book by Leonard Mlodinow, The Drunkard's Walk: How Randomness Rules Our Lives. It’s a fascinating read about how our lives are profoundly shaped by chance, and about the psychological illusions that cause us to misjudge the world around us.

While reflecting on Mlodinow’s thesis, I began pondering evidence-based medicine and the data-driven decisions that have become hallmarks of a rapidly changing health care system. Without question, this quest for data should propel better and safer care. For example, should mammography be done annually after age 40 or 50? Should proton beam radiotherapy be preferred over conventional radiotherapy for children?

Facing tough questions such as these, leaders are finding that the flood of data and evidence that can provide answers has reached nearly every corner of health care. We now must sort through so much data that, in some cases, the danger is not making decisions based on inadequate information but making them based on flawed data or flawed interpretations. In short, we have data, but what does it really tell us?

Fuzzy math

What do we need to know to understand what the data say? Not all correlations matter, and we must not succumb to the tendency to read a causal relationship into what is actually nothing more than randomness. But in many cases, we can use data to understand the true value of a drug, device or procedure, and often we can assess data to make predictions about the future.

For example, what is the likelihood of a new drug working better than an older one? Or, what is the likelihood that conducting mammography screening earlier will lead to longer lifespans? Unfortunately, these questions become challenging — even when there are data available to review — because we must first and foremost rule out the effects of randomness. Humans generally are not wired to do this intuitively, so we must use the mathematics of probability to help us.

Here’s why. If you flip a coin five times and it comes up heads each time, the chance of it coming up heads the next time is still 50 percent. Yet it “feels” as if the sixth toss should be more likely to come up tails. It is not: In random coin flipping the probability is always 50 percent, and there will always be streaks of outcomes that look nonrandom but are, in fact, pure chance. Even if you flip 50 heads in a row, the next flip is still 50:50.
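For readers who want to see this concretely, here is a minimal simulation sketch in Python; the flip count, seed and streak-counting helper are illustrative assumptions, not part of any study:

    import random

    def longest_streak(flips):
        """Length of the longest run of identical outcomes."""
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    random.seed(42)  # arbitrary seed so the sketch is repeatable
    flips = [random.choice("HT") for _ in range(200)]
    print("Longest streak in 200 fair flips:", longest_streak(flips))
    # Runs of six or seven identical flips are routine in 200 fair tosses,
    # even though they "feel" nonrandom.

Run it a few times without the seed, and long streaks of heads or tails appear again and again, with no bias in the coin at all.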

But in looking at data, this is not always obvious, because we humans often rely on feeling or intuition. So, if your board is comparing outcomes in a data set that is too small, it is easy to “see” an effect that is not real. Small samples contain random streaks that can cause you to conclude, mistakenly, that one thing is related to another or that some outcome is more or less probable.

To help avoid this error and have more confidence in your results, check whether the sample is large enough. If anyone on your board were to flip a coin only four times, the chance of getting an even 50:50 split, two heads and two tails, is just 37.5 percent, even though heads and tails are equally likely. But flip the coin 10,000 times, and the fraction of heads will almost certainly land very close to 50 percent.
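A few lines of Python make the point; the sample sizes below are arbitrary illustrative choices:

    import random

    random.seed(0)  # arbitrary seed for repeatability

    def heads_fraction(n):
        """Fraction of heads in n simulated fair coin flips."""
        return sum(random.random() < 0.5 for _ in range(n)) / n

    # Small samples swing wildly; large ones settle near the true 50 percent.
    for n in (4, 40, 400, 10_000):
        print(f"{n:>6} flips -> {heads_fraction(n):.1%} heads")

The four-flip result can easily come out 25 percent or 75 percent heads; the 10,000-flip result rarely strays far from 50 percent.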

Three rules

When a board reviews data from, say, a clinical research study, a test that looks at 20 patients may not be significant. A study of 10,000 patients, all else being equal, has much more value, because random noise is far less likely to masquerade as a real effect and trick the board members.

So, Rule No. 1: Don’t let small numbers fool you. Look for large trials with true statistical significance. And, when someone says, “The limitation of this research is the relatively small number of patients enrolled,” be wary.
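To illustrate the difference sample size makes, here is a sketch using Python's scipy library; the response rates and trial sizes are invented for illustration:

    from scipy.stats import chi2_contingency

    # Identical 60%-vs-40% response rates at two very different trial sizes.
    # Rows: new drug, old drug; columns: responders, non-responders.
    small = [[12, 8], [8, 12]]            # 20 patients per arm
    large = [[6000, 4000], [4000, 6000]]  # 10,000 patients per arm

    for label, table in (("20 per arm", small), ("10,000 per arm", large)):
        _, p, _, _ = chi2_contingency(table)
        print(f"{label}: p = {p:.2g}")
    # The same 60%-vs-40% gap is easily explained by chance in the small
    # trial (p well above 0.05) but is decisive in the large one.

The observed effect is identical in both cases; only the sample size, and therefore the confidence that the effect is real, differs.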

Rule No. 2: Statistical significance does not equal clinical significance. For example, imagine that researchers conduct a large randomized controlled trial and show that a new implanted technology causes an average weight loss of 2.5 pounds. While losing 2.5 pounds is better than nothing, it may not be enough to make a difference in the overall health of most people.

If there is risk in the implant procedure, providers must balance the benefit with the potential harm. The benefit may be too small to take the risk of harming the patient. Even though providers have statistical confidence in the improvement, the effect may still be too minor to provide a real benefit. In short, just because something does what it is supposed to does not always mean it is the right thing to do.
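A short sketch shows how a large trial can make a clinically trivial effect look statistically ironclad; the means, spread and sample size here are assumptions invented for this example:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)       # arbitrary seed for repeatability
    n = 5_000                            # patients per arm (illustrative)
    implant = rng.normal(-2.5, 20, n)    # average 2.5-lb loss, 20-lb spread
    control = rng.normal(0.0, 20, n)

    _, p = ttest_ind(implant, control)
    d = (implant.mean() - control.mean()) / 20  # effect size in SD units
    print(f"p = {p:.2g}, effect size = {d:.2f} standard deviations")
    # The p-value is minuscule (statistically significant), yet the effect
    # is only about an eighth of a standard deviation, a difference few
    # patients would ever notice.

With thousands of patients per arm, even a small true effect produces an overwhelming p-value; the effect size, not the p-value, tells you whether the benefit is worth the risk.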

The next rule is a consequence of the big-data revolution itself: Large data sets multiply the opportunities for the illusions that human cognitive bias can create.

Rule No. 3: Keep confirmation bias at bay. The human mind has enormous capacity to learn to see trends, but it can also convince itself of patterns that do not truly exist. Our pattern-seeking ability may sometimes fool us, especially when we hold a preconceived belief.

For example, if we believe we see a pattern, such as people in a particular neighborhood developing cancer, we may conclude that the environment is causing the disease. We then do a study to prove our theory, except that when we plot cancer cases against location, we unwittingly draw the geographic boundaries in a way that groups the cases together, so an obvious relationship appears.

What we forget is that the tendency to look for evidence confirming our beliefs affects our judgment. In reality, cancer clusters will often appear simply because disease is dispersed unevenly, and randomly, across a population. In our example, it is our desire to believe in an environmental cause that makes us miss the point: The pattern is nothing more than what one would expect when looking at enough data. Sooner or later, researchers will find a cancer cluster somewhere, so the existence of a cluster here is entirely consistent with randomness, not proof of a cancer-causing environment.
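A small simulation makes the point. The sketch below scatters imaginary cases uniformly at random across a map of neighborhoods; the grid size and case count are arbitrary assumptions:

    import random
    from collections import Counter

    random.seed(7)   # arbitrary seed for repeatability
    GRID = 10        # a 10 x 10 map: 100 "neighborhoods"
    CASES = 200      # random cases: 2 per neighborhood on average

    counts = Counter(
        (random.randrange(GRID), random.randrange(GRID)) for _ in range(CASES)
    )
    hot_cell, hot_count = max(counts.items(), key=lambda kv: kv[1])
    print(f"Average cases per neighborhood: {CASES / GRID ** 2:.1f}")
    print(f"Busiest neighborhood {hot_cell} has {hot_count} cases")
    # With 100 neighborhoods, pure chance routinely produces one holding
    # three or four times the average number of cases. The "cluster" needs
    # no environmental cause at all.

Even though every case here is placed completely at random, some neighborhood always ends up looking like a hot spot.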

Confirmation bias, of course, affects all of us, so the best guard against it is not just to look at the evidence supporting a theory but to seek out evidence that would disconfirm it.

The board’s role

As health care becomes more data driven, the opportunity to make better health decisions increases. Providers are able not only to review the art of medicine but also to see the results of medicine in databases of patients and outcomes. If boards look at data with the three key rules in mind, they will help their organizations improve quality, improve outcomes and make patients safer.

As a field, we will learn what works well and what does not. We will answer tough questions such as, "What benefit truly comes from prostate cancer screening?" and, “Which types of patients will benefit from which specific treatment and which will not?” If we treat only those patients likely to receive a real benefit — and help them avoid the pain, suffering and cost of unnecessary treatments — we will all be better off.

Anthony J. Montagnolo, M.S. (amontagnolo@ecri.org), is executive vice president and chief operating officer of ECRI Institute, Plymouth Meeting, Pa.