The science of safety
Keeping patients safe is a prerequisite for delivering high-quality health care. In response to the Institute of Medicine’s 1999 report To Err Is Human, the health care field has dramatically increased efforts to address adverse safety events. Those efforts could be accelerated by drawing on the solid body of knowledge, including safety science, that has effectively guided other high-risk industries.
The integration of safety science principles into health care delivery points to a new focus for enhancing the safety of care. To accomplish this goal, trustees must help their organizations foster a safety culture that views every event as an opportunity to learn and improve.
This article provides a layperson’s overview of safety science and suggests ways to learn from successes in other sectors to enhance our safety efforts. It also details cultural elements that will enable safety science to take hold. In doing so, we introduce best-practice models for responding to near misses or adverse events — models that facilitate better learning, transparency for patients and a safety culture for staff. Finally, we offer suggestions for the role of hospital and health system trustees in creating a culture of safety.
People and systems
The application of safety science in any complex, high-risk industry rests on a simple premise: Human beings cannot perform perfectly all the time. A safety approach that relies on eliminating human error is therefore fragile and vulnerable at best, impossible at worst. To ensure safety in large-scale enterprises, we must deliberately design systems, procedures and culture both to minimize the opportunity for human error and to mitigate the impact when mistakes do occur.
One of the best-known examples of safety science in action comes from the aviation industry over the past 50 years. Major-carrier commercial aviation is now ultrasafe because of proactive safety management that began in the early 1970s. Before then, accidents were routinely attributed to “pilot error” and addressed through discipline (if the pilot survived) and education.
The change in approach opened new questions: Could this error happen again? What other potential errors could occur? How do we make the system safer? This fostered a new culture focused on equipment design, procedures, training and company policies, all with one goal: to minimize risk. Consider this: The commercial aviation system in the United States handled more than 26,000 flights and nearly 2.6 million passengers per day in 2016 without a single fatality — and yet, according to a recent study, commercial airline pilots and air traffic controllers commit approximately two communication errors per hour.
System safety engineering uses engineering and management principles to identify and analyze hazards, and then to eliminate, control or manage them; a brief illustrative sketch follows the list below. The keys to system safety engineering are:
- Focusing on hazards, not human error; i.e., how does the system put us at risk?
- Analyzing the entire system, the performance of the people in the system, each individual system component, the environment and technology, and the way these components interact.
- Using human-factors engineering principles to understand human performance abilities, constraints and limitations.
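As a concrete illustration of this hazard-focused mindset, the short sketch below models a simple hazard log scored with a severity-times-likelihood risk matrix, a common system-safety convention. The entries, scales and names are hypothetical, invented for the example rather than drawn from any real program.

```python
# Illustrative hazard log, a core artifact of system safety engineering.
# The severity/likelihood scales and example entries are hypothetical;
# real programs define their own risk matrix.
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str   # how the system puts us at risk, not who erred
    severity: int      # 1 (minor) through 5 (catastrophic)
    likelihood: int    # 1 (rare) through 5 (frequent)
    mitigation: str = "none identified yet"

    @property
    def risk_score(self) -> int:
        return self.severity * self.likelihood

hazards = [
    Hazard("Look-alike medication vials stored side by side", 4, 3,
           "separate storage bins; barcode check at administration"),
    Hazard("Defibrillator 'off' control adjacent to 'shock'", 5, 2,
           "confirmation prompt before powering off"),
]

# Review the highest-risk hazards first, whether or not an error has
# actually occurred yet: the proactive stance described above.
for h in sorted(hazards, key=lambda h: h.risk_score, reverse=True):
    print(f"risk={h.risk_score:2d}  {h.description}  ->  {h.mitigation}")
```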
Understanding the limitations of human performance does not mean excusing chronic underperformers. Instead, an emphasis on the accountability of managers and leaders ensures that the right staff are hired, properly brought on board, trained and monitored. Even with training and experience, however, human error can still occur.
Safety engineers are particularly concerned about slips and lapses, the kinds of errors that occur when you are performing a task you have done countless times and your mind is running in “automatic” mode. A slip occurs when you have the right intention but execute it incorrectly, like turning your car toward work when you intend to go to the grocery store. A lapse occurs when you have the right intention but fail to execute it, like forgetting the trip to the grocery store altogether.
Defibrillator design offers a useful example. Defibrillators shock the heart back into a normal rhythm after the heart stops. Time matters: the patient’s chance of being resuscitated drops by 10 percentage points with each passing minute. In this high-stakes scenario, simulation studies have demonstrated a recurring error: if the clinician slips and activates “off” instead of “shock,” the unit takes two to three minutes to power back up, a delay the patient cannot afford.
It is a fact of human nature that even well-trained, high-performing clinicians will sometimes press the wrong button. The human-factors response asks why and redesigns the device to prompt confirmation when the user selects “off.”
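To make the redesign concrete, here is a minimal sketch of the confirmation-guard pattern, assuming a hypothetical device interface; the class and method names are illustrative and are not actual defibrillator firmware.

```python
# Minimal sketch of a confirmation guard on a slip-prone, destructive
# control. Everything here is a hypothetical illustration, not real firmware.

class Defibrillator:
    def __init__(self):
        self.powered_on = True

    def press(self, button: str, confirm=lambda msg: False) -> str:
        if button == "shock":
            return "shock delivered"  # time-critical action stays one press
        if button == "off":
            # Guard the slip-prone action: powering off mid-resuscitation
            # costs two to three minutes while the unit reboots.
            if confirm("Turn off? The patient may still need a shock."):
                self.powered_on = False
                return "powered off"
            return "off canceled"
        return "unknown button"

unit = Defibrillator()
print(unit.press("off"))    # a slip: the guard cancels it by default
print(unit.press("shock"))  # the intended action still takes one press
```

Note the asymmetry in the design: the time-critical “shock” action remains a single press, while only the slip-prone “off” action asks for confirmation.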
When slips or lapses result in near misses or adverse events, health systems now have two best-practice models that are available online.
One, developed by the National Patient Safety Foundation, is called RCA2, for root cause analysis and action. RCA2 reviews focus on discovering system vulnerabilities and mitigating them through sustainable, effective systems-based improvements. The RCA2 guide also provides tools for evaluating individual event reviews to determine whether the ultimate objectives have been met. These reviews deliberately concentrate on system factors, not individual performance; as the NPSF stated in a 2015 report on the process, “individual performance is a symptom of larger systems-based issues.”
The other model is named the CANDOR (Communication and Optimal Resolution) Event Review process, authored by MedStar Health for the federal Agency for Healthcare Research and Quality in 2016. The process outlines a systematic approach to learning based upon the principles of safety science, with these recommendations:
- Ensure rapid response to adverse events by immediately deploying a CANDOR response team.
- Engage staff, providers, patients and family members as soon as possible to capture their unique and valuable information about what occurred.
- Allow immediate, anonymous and confidential reporting by front-line staff and providers.
- Ensure that the health care system’s policy on event reviews meets protections allowed by state and federal laws to encourage the highest degree of transparency.
- Provide feedback, both in the immediate aftermath of an event and during the investigation, to those who reported it.
- Take steps to protect patients from future events through a continuous learning process.
Science and culture together
The event review process proposed in both of these models is an essential part of a safety culture, but the models are reactive and therefore are not enough by themselves. Health systems would do well to focus more time and resources on proactive approaches: preventing adverse events from occurring in the first place.
Work done in Michigan to prevent ventilator-associated pneumonia, led by Peter Pronovost, M.D., of Johns Hopkins Hospital in Baltimore, is an example of a proactive approach to improving safety. Pronovost and his team developed a five-step checklist that nurses and doctors completed together during daily rounds, which reduced the VAP rate from 11 percent to zero.
Pronovost’s checklist is rightly celebrated for its impact, but the change that accompanied it — creating a cultural expectation that checklists would be completed each day on each patient — was just as crucial.
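For illustration, here is a minimal sketch of how daily checklist completion might be recorded and gaps flagged. The five items are placeholders adapted from common ventilator-bundle elements; the article does not list the actual steps, so treat them as hypothetical.

```python
# Sketch of a daily-rounds checklist tracker. The checklist items are
# hypothetical placeholders, not the actual steps from the Michigan work.
from datetime import date

CHECKLIST = [
    "Head of bed elevated",
    "Sedation interruption assessed",
    "Readiness to extubate assessed",
    "Peptic ulcer prophylaxis reviewed",
    "DVT prophylaxis reviewed",
]

def record_rounds(patient_id: str, completed: set) -> dict:
    """Record which checklist items were completed on today's rounds."""
    missed = [item for item in CHECKLIST if item not in completed]
    return {"patient": patient_id, "date": date.today().isoformat(),
            "complete": not missed, "missed": missed}

# The cultural expectation: every item, every patient, every day.
print(record_rounds("ICU-07", set(CHECKLIST)))            # fully complete
print(record_rounds("ICU-12", {"Head of bed elevated"}))  # flags the gaps
```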
Weaving the patient-safety net
Science and culture working in tandem can prevent slips and lapses from causing adverse events. At MedStar Health, experts in human-factors engineering and safety science have teamed up with operational safety leaders to adapt a safety management approach that has three main categories (two proactive, one reactive):
- Primary safety interventions that involve a deliberate and informed focus on safety during the initial design and implementation of the system and its parts (which might include selection and training of people, cultivation of the culture, task allocation, and design or selection of processes, devices, technology and environments) to prevent hazards in the first place.
- Secondary safety interventions that focus on identifying existing hazards, threats and vulnerabilities — and on designing and implementing mitigations that are sustainable and effective — to either eliminate or lessen the threat.
- Tertiary interventions that occur after the adverse event but still offer the opportunity for learning, improvement, doing the right thing for the patient and his or her family, and taking care of the staff member.
Secondary prevention is a largely untapped area. Health care delivery is exceptionally complex, and complex systems in any industry are full of hazards. The resilience, real-time adaptability and vigilance of health care workers help to keep patients safe, and those same workers are acutely aware of the daily hazards that could lead to a safety problem. Leveraging this knowledge is the driver of secondary prevention.
Near misses are signals of a potential adverse event. Human nature, however, is to treat a near miss as a crisis averted and to move on. And, in fact, most health care organizations focus almost all of their safety-management resources on reacting to events. To reach the next level in health care safety, we need to shift our focus to the proactive side, where the real opportunity lies.
Cultures that strengthen the net
Changing culture is the key to learning where the hazards lurk. Our hospitals and health systems must develop cultures that empower clinicians to raise the alarm when they experience a hazard or near miss. Hospital trustees and C-suite executives must drive this culture change.
Consider hospitals’ efforts to increase hand-washing compliance. Many surveillance systems exist to track who is washing their hands and how often. But what if those tracking data were analyzed to focus on when and where staff are washing — or not washing — their hands? Maybe certain rooms, units or times of day have lower compliance rates. When we use the data to find the underlying cause — the location of a sink, the reminders given in a staff huddle and so on — we send a strong signal that the culture is focused on preventing adverse events, not assigning blame.
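As a concrete sketch of that kind of analysis, the example below groups a hypothetical hand-hygiene observation log by unit and hour to surface the low-compliance pockets; the records and field names are invented for the illustration.

```python
# Sketch: find where and when hand-hygiene compliance dips, so the response
# can target system causes (sink placement, workflow, staffing) rather than
# individuals. The observation records are hypothetical.
from collections import defaultdict

events = [  # (unit, hour_of_day, hands_washed)
    ("ICU", 9, True), ("ICU", 9, True), ("ICU", 21, False),
    ("Med/Surg", 9, True), ("Med/Surg", 21, False), ("Med/Surg", 21, False),
]

totals = defaultdict(lambda: [0, 0])  # (unit, hour) -> [washed, observed]
for unit, hour, washed in events:
    totals[(unit, hour)][0] += int(washed)
    totals[(unit, hour)][1] += 1

# List the lowest-compliance pockets first: these point to underlying
# causes worth investigating, not staff to blame.
for (unit, hour), (washed, seen) in sorted(
        totals.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{unit:9s} {hour:02d}:00  {washed}/{seen} = {washed / seen:.0%}")
```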
Another driver is how a hospital reacts to near misses and adverse safety events. Studies suggest that about 600 near misses occur for every adverse safety event. Clinicians are much more likely to report near misses when their hospital responds to adverse events by focusing on how to fix processes, procedures and equipment to avoid a safety problem. Unless the case involves intentional injury or reckless behavior, removing the individual stigma and consequences from adverse safety events will bring near misses to light.
The role of trustees
Trustees can play a crucial role in this shift to a more proactive safety approach. If the issues sound complex, they are. But oversight should not require more than a layperson’s understanding of medicine, safety science or human-factors engineering. There are professionals — safety engineers and human-factors engineers — specifically trained in helping organizations to make this shift.
Health care organizations must partner with these safety professionals to make these changes happen. Health care finds itself now where other ultrasafe industries were years ago. We must dedicate ourselves to building upon knowledge from these other industries to keep our patients safe.
Rollin J. (Terry) Fairbanks, M.D., M.S. (Terry.Fairbanks@MedStar.net), is the founding director of the National Center for Human Factors in Healthcare, co-director of the MedStar Telehealth Innovation Center, assistant vice president for ambulatory quality and safety at MedStar Health in Columbia, Md., and associate professor of emergency medicine at Georgetown University in Washington, D.C. Seth A. Krevat, M.D. (Seth.Krevat@MedStar.net), is assistant vice president of safety at MedStar Health, faculty associate at MedStar Health’s National Center for Human Factors in Healthcare and assistant professor of clinical medicine at Georgetown University.
Trustee takeaways: Setting the direction
Trustees have an essential role to play in safety in their hospital or health system, always remembering that there are scores of professionals available to help them and their organization. Here's what trustees can do to set the direction on safety:
- Review near misses as part of every quality committee meeting. Ask the leaders responsible for quality and patient safety if they are looking at near misses and mitigating hazards.
- Insist that event reviews focus on the processes, procedures, equipment and culture that allowed the hazard to occur instead of the individuals involved.
- Consider safety early in the development of every health care design project, e.g., the architectural design of clinical settings and the clinical design of new evidence-based protocols.
- Visibly and vigorously support cultural change that promotes an open, transparent culture of safety.