Sponsored by Accenture


Innovation

Taking a Wider Lens to Responsible AI in Health Care

A leadership approach in the contemporary age of innovation

By Lauren H. Logan

The global state of artificial intelligence (AI) is hinting at reinvention, not just technology innovation. At the same time, health care writ large finds itself in a moment of unprecedented strain, eager for a redeemer. This momentum — complicated by the fact that AI in health care is relatively unregulated — creates tremendous opportunity and risk. Enterprises are excited to experiment and transform, but they face a serious risk of taking action without discipline or a focus on measurable value.

Traditional AI such as robotic process automation (RPA) and predictive modeling has already penetrated health systems deeply, but never before has our generation experienced the speed of innovation driven by generative AI. It is poised to transform health care operations, diagnostics, access, engagement and outcomes. For example, generative AI has already begun to reinvent health care workers’ tasks, most notably through the integration of GPT-4 into electronic health record software to generate patient chart summaries. Although implementation of generative AI is still gaining speed, democratized access is heralding new ways to reshape health care. For the first time, a wide range of users can generate notes, code, analyses, creative concepts, images and even videos.

As providers and health systems adopt AI — from traditional to generative — leaders will not just be called upon to deploy responsible AI (RAI) as the siloed technical approach it once was. AI is becoming an undercurrent that increasingly touches every part of the enterprise. RAI is now a mechanism to drive value — for patients, providers and operators.

Redefining Responsible AI

The National Institute of Standards and Technology (NIST) “identifies and quantifies trustworthy and responsible AI in technical terms.” Traditionally, RAI has centered on the design, development, deployment and monitoring of AI solutions within analytics, informatics and information technology (IT) departments. Some RAI programs and governance models have been more expansive than others. But they have largely been rooted in technical themes like fairness-accuracy tradeoffs and data and model drift.

These traditional RAI protocols, along with recent efforts to improve their application in health care, should remain part of a larger RAI framework. They ensure fairness and the mitigation of data, algorithmic and human biases. They also enable explainability, transparency, validity and reliability, among other best practices.

Contemporary RAI, however, calls for a broader definition that reflects the paradigm of responsible business in health care. RAI in health care is evolving into a discipline that homes in on value, compliance and trust within the context of AI. It requires a return to basics, enabling leaders to steer health systems with purpose and action. It does not stop at principles, guidelines, policies or technical protocols.

The Path Forward

Before health systems embark on this journey, they should assess the level of data literacy among business leaders. If decision-makers lack the data literacy to fully understand the potential value, capabilities, limitations, risks and outcomes of AI, the organization should sponsor education to close that gap. Data literacy is essential for success in today’s business environment.

Anchor on value. The most common barrier to AI adoption for provider systems today is a lack of perceived or realized value. This is avoidable. Health system leaders need to establish their strategic posture — and value targets — before experimenting with AI technologies. This includes defining enterprise objectives, acknowledging hurdles (e.g., current technical infrastructure, budgetary constraints, decision-makers) and knowing who they want to be as AI adopters. Do they want to be leaders, leapfroggers or laggards (also known as reinventors, transformers or optimizers)?

Innovation for the sake of innovation can be costly and even harmful. After setting a baseline strategy, this expanded approach to RAI requires (a) identifying value domains (e.g., cost of care) to impact, (b) assessing the feasibility and expected impact of bringing AI to the identified areas and (c) anchoring investment decisions in a clear understanding of expected return on investment and metrics for success.

Disciplined leaders who look before they leap are less likely to fall victim to the “Target shopper fallacy”: overfilling the cart with tools and capabilities that do not align to value. They are also intentional in their selection of pilots and proofs of concept, avoiding the patchwork of initiatives in which other organizations get lost. They understand that if a proof of concept fails to meet value requirements, they should capture lessons learned and move forward without overcommitting or over-indexing. Consistently grounding decisions in a clear strategic posture, one anchored by value, helps organizations weigh risk tolerance, scalability, durability and build/buy/rent efficiencies.

Stay clear-eyed on compliance, but do not frame it as a constraint. As with all other regulatory aspects of health care, it is imperative that organizations inform their decision-making by monitoring regulatory guidance and requirements. Compliance is an important mechanism, necessary to ensure that quality standards are maintained and steep penalties are avoided. This, of course, is not novel to health care leaders.

Today, the regulatory landscape for AI in health care is still in development. In the meantime, health systems are best served by observing strong business, data engineering and data science hygiene — readying for increased rigor and new requirements that are bound to come. Negative consequences can be avoided by abiding by existing industry-wide and jurisdictional requirements, respecting copyright terms of vendor agreements, safeguarding in-house intellectual property, and making sound investment decisions that are unlikely to be recalled (e.g., including clinical validation where applicable).

Beyond adherence to external requirements, the opportunities that come with AI innovation challenge organizations to reflect on their own appetite for risk. What is an acceptable risk profile? What is the tradeoff between reward and financial or reputational hazard?

At the end of the day, compliance and associated risk management should not be a barrier to innovation. Practicality is a must. Too many organizations delay time to value by pulling together a series of uncoordinated parties — legal, IT, operations and more — to evaluate every potential risk that could be associated with an AI solution. In certain instances, this is undoubtedly necessary. However, each value domain has unique requirements; the risks in marketing, for example, vary tremendously from those in diagnostics. Compliance protocols and journeys should be adapted accordingly.

Trust is the currency of health care. For AI adoption to be successful, trust in the data, technology and outputs is non-negotiable. If technology or insights are not trusted — and therefore go unused — value is absent. Leaders need an honest approach to the current state: within health care, the incumbent trust ecosystem is already complex. Technology has historically reduced clinician productivity and increased administrative burdens. At present, clinicians are stretched to manage patient loads and may have less tolerance for potential “distractions.” Meanwhile, there is a patient trust deficit and heightened awareness of equity, security, privacy and consent risks.

When selecting, designing and deploying AI, it is critical that leaders identify areas of friction versus fuel. Where is trust going to be challenged (e.g., changed workflows; personal data collection)? Where can trust be built (e.g., when ambient listening technology enables clinicians to increase eye contact and reduce time typing; personalization results in better health outcomes)? What additional programs or protocols can be instituted to build trust (e.g., requiring data scientists to take ethics training; data collection opt-out options)?

Every step of this journey is filled with precarious moments where trust will be built or eroded. Again, RAI carries that opportunity and risk. During selection or development of AI applications, stakeholders must be included. This increases efficacy, usage and acceptance, and it is part of the human-centered adoption journey. Driven by behavioral science principles, this approach accomplishes the following: (a) it asks leaders to set visible and approachable messages; (b) it focuses on shifting mindsets and relevant behaviors through engagement and reinforcement in social networks; (c) it elevates skills and understanding (e.g., data literacy around prompt engineering) through formal and informal support; and (d) it takes steps to reward adoption that results in value for impacted parties.

RAI calls for human-centricity and a constant commitment to learning. RAI is not a “set-it-and-forget-it” exercise. To the contrary, it requires constant monitoring of the technologies and behaviors that are impacted. No matter how anthropomorphic AI becomes, trust will endure at the core of health care.

Other Considerations

As organizations expand AI footprints that demand massive datasets, and as bad actors use generative AI to create malware, security will go from critical to sacrosanct. Strong data provenance, storage, access controls and vendor management will become even more important.

Enterprises should also consider the sustainability concerns associated with generative AI’s energy usage. Per Accenture Research in 2023, “if current trends continue, machine learning systems could consume nearly all of the world’s energy production by 2040.” For that reason, leaders should make environmentally sustainable choices.

Trustee Takeaways

Collaboration is more important than ever. To shape the next horizon of health care, and to avoid getting left out in the cold, organizations must foster relationships — internally, externally and across levels. As leaders embrace RAI, they are encouraged to recall:

  • The combination of rapid innovation, health system strain and the relative regulatory immaturity of AI in health care creates both heightened opportunity and risk.
  • RAI in the modern era is more expansive than ever before because AI will increasingly impact every part of health systems.
  • A strategic RAI posture must champion value, compliance and trust. Directional principles and guidelines are not enough.
  • Anchor on value.
  • Regulatory requirements will evolve. Meanwhile, health systems need to lay a foundation that enables durability and readiness for the future.
  • Compliance cannot be a barrier. Maintain rigor and practicality.
  • Trust is the currency of health care. It should be built, nurtured and maintained at every opportunity.

Lauren Logan (Lauren.H.Logan@accenture.com) is a managing director in Accenture’s Global Health Practice based in Chicago.

Please note that the views of authors do not always reflect the views of AHA.