Artificial intelligence requires real leadership
Interest in artificial intelligence and machine learning in health care is accelerating and diversifying. The promise of AI in health care, though still somewhat speculative, is profound and envisions capabilities that were the stuff of science fiction just a few years ago.
The challenge of AI in health care is equally profound and requires answering difficult questions and addressing thorny issues. What are the realistic promises that AI can make? How does AI intersect with other emerging health care capabilities such as genomic medicine? How can public and systemic expectations be managed and concerns allayed? And what can health systems considering AI opportunities do now to maximize their chances of success in gaining efficiencies, advancing research discovery and integrating AI into clinical workflows?
Well-informed and thoughtful leadership is essential to navigating these and other questions.
New and not so new
AI is an evolving term that generally describes technology at the cutting edge of translating information into knowledge. Modern AI solutions are the “natural” descendants of other technologies that use the latest computer science advances to solve problems, gain insights and automate in ways that would otherwise be out of reach. Freed from human-dictated logic, modern AI systems use multi-layered neural networks to store and categorize information in their own ways, “organically” generalizing from examples, drawing relationships, categorizing data and detecting patterns.
The organic nature of these AI systems creates an underlying issue. Unlike other systems, it is often difficult to determine “why” an AI system reached a particular conclusion. This is the so-called “black box” feature of AI, which can complicate efforts to quickly identify and correct a performance problem when one arises. AI systems are also not static — they are not simply turned on and off; rather, they are put through a learning process that continues to be accretive after the technology solution is launched. Accordingly, the results of an AI system’s operations will only be as good as the information used to train the system and the training itself. Poor data quality or training can produce biased outcomes — essentially, a poorly educated computer that will not be a good problem solver going forward.
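The dependence of output quality on training data can be illustrated with a deliberately oversimplified sketch. The data, labels and “model” below are hypothetical and far cruder than a real neural network, but they show the core point: a system trained on data that under-represents a group will reproduce that skew in its predictions.

```python
from collections import Counter

def train_majority_model(examples):
    """A toy 'model': predict whatever label was most common in training.

    Real AI systems are vastly more sophisticated, but they share this
    property — their behavior is shaped entirely by the examples they see.
    """
    labels = Counter(label for _, label in examples)
    return labels.most_common(1)[0][0]

# Hypothetical skewed training set: "group_b" is under-represented,
# and its true label differs from the majority's.
training_data = [
    ("group_a", "low_risk"), ("group_a", "low_risk"),
    ("group_a", "low_risk"), ("group_b", "high_risk"),
]

prediction = train_majority_model(training_data)
print(prediction)  # prints "low_risk" — the minority group's label is drowned out
```

However contrived, the sketch mirrors the oversight point above: no one wrote a biased rule, yet the skewed data produced a biased result, which is why data quality and training deserve careful scrutiny.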
AI in health care
As is the case with digital health tools more generally, health care AI tools are currently being used in many settings to address a variety of issues, including:
- Precision medicine.
- Health and wellness (for example, nutrition and exercise recommendations).
- Predictive analytics of a patient or patient populations.
- Diagnostic assistance and image analysis (“clinical decision support”).
- Patient engagement.
- Clinical research protocol design.
- Revenue cycle management.
- Providing provisional health care services.
- Health insurance modeling and intervention.
Two other uses of AI systems in health care bear mentioning. First, AI tools are being utilized to identify health indicators from data not traditionally subject to health practitioner review, such as media posts and shopping habits. Second, consumers are directly engaging with AI systems as well as the “internet of things,” creating the first stepping stone to a diffuse but integrated health “system” with significantly more patient/consumer control and engagement than currently exists.
The AI challenge
The integration of new technology into any enterprise is a challenge, and to a certain degree AI is no different from any other new technology. The technology must be:
- Matched to a particular concern or opportunity.
- Vetted and evaluated in the context of the identified concern or opportunity.
- Considered in the light of existing systems and strategic goals.
- Subjected to a financial analysis.
- Assessed under, and deployed consistent with, existing legal frameworks (which may lag behind the technology).
AI systems, however, will present unique challenges requiring additional considerations and strategies for their integration. These additional challenges may include the following:
Workflow: If the AI system interrupts a workflow, and particularly if it actually or apparently supplants an individual with respect to a professional task, then a significant amount of socialization of the technology may be required. This may mean extensive education and training, and institutional culture changes. Recruiting trusted colleagues as champions can help in this process.
Patient hesitation: Whether the AI system assists with patient engagement/communication or provides high-level clinically related services, expect a degree of patient pushback against being “forced” to engage with a “machine.” The effective adoption of AI systems will need to address consumer concerns, suspicions and fears — realistic or otherwise — just as the health system addresses those issues with its own staff. Education and consumer engagement will be critical to consumer acceptance and adoption.
Address the black box: The black box nature of AI systems is not simply an interesting feature; rather, it creates a set of novel issues in terms of risk allocation. These issues need to be addressed in the contractual relationships with vendors, professional liability insurance terms and workflows.
Watch for bias: Bias in AI systems has been well documented and has been the subject of lawsuits. It is important to recognize that bias is usually inadvertent. Given the many examples of bias in AI systems where there is little or no evidence of intentionality, the quality of the data used to train an AI system, and of the training itself, will be critical to avoiding bias; this requires careful oversight. Further, bias in health care data may result in an adverse event or misdiagnosis. Accordingly, protecting against bias, and addressing its consequences in contractual relationships, is critical to avoiding significant problems.
Consumer-generated data and “diagnosis”: As noted above, health systems should be aware of the likely utilization by patients of consumer-oriented AI systems. These tools, combined with the increasing number of wearable and remote monitoring devices (including through the internet of things) and easily accessible medical information, are creating an environment in which consumers not only have access to much more information about themselves and their condition than in the past but also have the ability to self-diagnose and transmit information to their health care providers. Over time, this will likely result in an evolution of the patient-practitioner relationship and produce different types of consumer demands of practitioners and systems. Health systems must monitor this evolution and, when possible, guide it to an appropriate outcome.
Expect regulatory delay: Not addressed in this article are the very interesting and difficult policy and legal implications of the utilization of advanced AI systems in health care. These issues span a wide range and include matters related to standards for professional services, product liability, intellectual property rights and Food and Drug Administration oversight. These issues are only now being tentatively considered from a policy and regulatory perspective, and they must be monitored.
Manage expectations: AI solutions have great potential, but they are easy to confuse with their science fiction counterparts, leading to unrealistic practitioner, patient and consumer expectations. Clearly describing what the system can and cannot do is part of the public education and socialization process.
Privacy and security: Part of the appeal of AI is that it can produce insights, identify patterns and provide answers that we might not otherwise be able to produce ourselves. This also means that it may deduce information that would otherwise remain private, even from the individual herself. Thus, AI highlights the privacy challenge of a “right not to know”: to protect such a right, the machine may be permitted to learn something that the individual elects to keep secret from herself. In addition, modern AI systems may create insights that present acute sensitivity concerns, and AI functionalities may create new relationships among data owners. Separately and together, these AI twists on privacy and security highlight the need to have a comprehensive privacy and security system and to recognize that bad actors are also adopting technology — including AI technology — to achieve their own goals.
Strategy, strategy, strategy: With the advent of digital health tools that present very real solutions to health issues and offer improvements to processes related to clinical research, health system operations and patient engagement — as well as with the adoption of value-based payment systems — the integration of information technology solutions into the DNA of health systems is a necessary and critical step. While advanced AI systems may represent just another aspect of this progress, their remarkable functionality should be seen as a signal that health systems must realign their strategic vision.
The common elements across all of these challenges are preparation and leadership. Anticipating these challenges and meeting them head on is critical. Doing so requires effective leadership from the board and the C-suite to consider the implications of the technology, set a system strategy, adopt processes that ensure appropriate evaluation and implement solutions.
Jennifer S. Geetter and Dale C. Van Demark are partners in the health law practice of McDermott Will & Emery.