Why Honesty is the Best Policy as AI Continues Shaping Healthcare: Q&A with Lucia Savage, Chief Privacy & Regulatory Officer
With artificial intelligence (AI) development continuing to expand in healthcare, concerns among patients and providers alike are taking hold. Six in ten U.S. adults are uncomfortable with the idea of their healthcare provider relying on AI to do things such as diagnose disease and recommend treatments, while 41% of physicians are worried about AI's impact on areas such as patient privacy. To better understand the past, present, and future of AI's adoption in healthcare, we enlisted the help of Lucia Savage, Omada's Chief Privacy and Regulatory Officer and an expert on patient privacy and digital health public policy.
Recognized by SC Magazine as a leading figure in health tech, Savage is an experienced executive and board member who has advised CEOs, cabinet secretaries, and elected officials on leveraging technology to transform healthcare. Prior to joining Omada, she served the Obama Administration as Chief Privacy Officer at the U.S. Department of Health and Human Services Office of the National Coordinator for Health IT.
We sat down with her to talk through AI's potential in healthcare, best practices for its safe development and deployment, where she sees AI regulation headed, and more.
OMADA HEALTH: In healthcare, we appear to be at a critical moment, when AI innovations, with their vast potential, present both opportunities and challenges for traditional healthcare. Can you talk about how we got here?
LUCIA SAVAGE: It all goes back to the original HIPAA legislation in 1996, a primary purpose of which was to force claims data to be digitized so they could be analyzed mathematically. And then in 2009, we had the HITECH Act, which created a program to incentivize physicians and hospitals to adopt electronic health records. In adopting electronic health records, we also had to digitize the actual clinical information underlying those transactions. With the combination of HIPAA and HITECH, the government very intentionally created an environment in which there’s a lot more digital clinical information now than we had 25 years ago. This was done deliberately to move us along this path where we can then take advantage of advances in computing in our healthcare system. It also allows us to really step back and look at the totality of the ecosystem, at the population level, using these complicated mathematical techniques, which of course have become more sophisticated since 1996, as well.
OH: From your perspective, how can AI tools help care teams and patients alike? In other words, what’s the best case scenario?
LS: At the ecosystem level, the complicated math underlying AI lets us look at that data and recognize patterns. So when we think about looking at healthcare data at the ecosystem level, we're considering what patterns exist. Perhaps we want to amplify a pattern because it's beneficial, and sometimes we want to disrupt a pattern because it's bad for the system as a whole. We've been looking at patterns in healthcare since we digitized the claims data way back when, but now we can look at more patterns. We can use this technology to understand if there's too much orthopedic care, or too little care at all, in a particular community. We can use it to help understand what causes the pattern we want to amplify or disrupt.
And of course another area of low-hanging fruit is that computers are really good at doing repetitive administrative tasks and calculations. We know this because we let them do that in our own lives, in managing our email inboxes or in online bill pay.
OH: How does the application of AI tools differ in healthcare settings versus, for example, consumer applications?
LS: In healthcare, we have well-established, longstanding standards for quality of care. Many types of providers are licensed. They have to meet minimum standards. That is not as pervasive or well developed in many consumer products. Yes, of course, we don't want cars to blow up, and we all can think of stories about that kind of product safety. But for software products, safety standards are much less well established. So that's really important. You should be able to go to your healthcare provider and, even if they're using AI, know that they still have to adhere to safety and quality standards.
Another difference that's really important is privacy. In healthcare settings, we have very strict rules about privacy, and HIPAA is a very important one of those rules. But privacy rules also come from the state level and apply to particular providers. For consumer products that are not squarely in the healthcare system, that's a lot less clear. It can appear, from the outside, like there isn’t a very precise approach to the process of maintaining privacy for some consumer products. That's one reason why we see a lot of state laws emerging to better regulate consumer privacy.
Then of course, if you have software that operates in a manner regulated as a device by the FDA, you have additional federal regulations, including standards for what safety looks like, and the device must be proven safe to sell. If you make certain claims about those devices, you have to prove that those claims are true. Many consumer-purchased tools with AI features don't have these same ground rules for quality, safety, or privacy.
OH: In your view, what are some of the top privacy concerns in relation to AI's adoption in healthcare? And how can industry stakeholders better address them?
LS: Speaking again to some of the largest opportunities, like population health, I think it's important to highlight that some healthcare initiatives address broad populations, while other needs are patient-specific. When we're looking at big population patterns, we don't really care very much about specific individuals within the pattern or their identities, and so we approach privacy in a different way.

When we're looking at total population levels, the identity of each person in a population is not necessarily relevant because we're looking at patterns within the population. But if we want to address something we see in that data for a particular person, then we need to identify the particular person, and that's when a host of important privacy issues come into play.

I also think that, in healthcare, we haven't been very good about explaining this distinction to people. It's geeky, and sometimes people's eyes glaze over. Most people focus on what they need for their own care when they're sick. But it's important for people to understand the basics of how privacy works and how information about people like them can be relevant for all of us - and then for responsible care providers to be thoughtful about protecting patient privacy. The concern is the same whether it's population health work we're doing with AI or a researcher reading a spreadsheet.
OH: How can healthcare companies get out in front of rising privacy or security threats as more AI tools become part of the healthcare ecosystem?
LS: Many of the same concerns about running a tight ship from a security perspective in general also apply when you include AI. You need to have all of the right levels of credentials and authorized access in your systems. You need to constantly monitor your systems for nefarious actors and tend to all of the normal security engineering. We can double down on these things because they still apply. And then there are areas that relate to the unique vulnerabilities AI might present. For example, your large language model may be based on a certain quantity of data, and if somebody (nefariously or accidentally) injects malicious data into that learning data, then the whole thing can run off the rails and be skewed. It might even take you a little while to figure that out. So you not only have to protect against people accessing any identifiable information in your data set or misappropriating the intellectual property in that data set, but you also must protect the learning data and model methodology from being polluted.
On the privacy side, I'll go back to what I said about consumer data versus healthcare data and the importance of adhering to healthcare privacy standards. For example, in the U.S., healthcare providers have the ability to use data at a population level to learn more about their population and to do these kinds of really broad analyses. This was one of the things that, when I was in the government, my European colleagues kind of envied. But in enabling this, we have a standard, which is: use only the data you need. If you don't need people's personal identifiers to understand what's happening at a population level, then don't use them. When you're looking at the population level, identifiable information is rarely necessary. We don't have these legal standards on the consumer side in all cases in the U.S., although there are new state-level consumer privacy laws each year. So while consumer organizations can adopt the concept of minimum necessary data as a matter of principle, they are not always required to by law.
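As a purely illustrative aside, the "use only the data you need" standard Savage describes can be expressed very simply in an analytics pipeline: drop direct identifiers before any population-level work begins. The short Python sketch below is hypothetical; the record fields, threshold, and helper name are invented for illustration and are not drawn from the interview or any real system.

```python
# A minimal sketch of the "minimum necessary" idea: strip direct
# identifiers before doing population-level analysis.
# All field names and values here are hypothetical examples.

from collections import Counter

# Hypothetical patient records as they might sit in a clinical system.
records = [
    {"name": "A. Jones", "email": "aj@example.com", "zip3": "941", "a1c": 8.1},
    {"name": "B. Smith", "email": "bs@example.com", "zip3": "941", "a1c": 6.4},
    {"name": "C. Lee",   "email": "cl@example.com", "zip3": "303", "a1c": 9.0},
]

# Fields that identify a specific person and are not needed for a
# population-level question ("where is A1c poorly controlled?").
DIRECT_IDENTIFIERS = {"name", "email"}

def minimum_necessary(record: dict) -> dict:
    """Keep only the fields the population-level analysis actually needs."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

# Population-level view: count poorly controlled A1c (>= 8.0) by region,
# without ever touching names or emails.
deidentified = [minimum_necessary(r) for r in records]
poor_control_by_region = Counter(
    r["zip3"] for r in deidentified if r["a1c"] >= 8.0
)

print(poor_control_by_region)  # e.g. Counter({'941': 1, '303': 1})
```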
OH: As healthcare companies continue to develop AI tools, how can care teams help ensure their safe deployment?
LS: This has to be accomplished through intense and ongoing collaboration between the healthcare professionals and the AI developers. I think healthcare professionals must point out that an LLM shouldn't do something that would be unsafe. It's their job to raise issues and make sure they get fixed. And there are some legal guardrails in place. For example, under FDA rules, you can develop certain types of software as long as you stay within certain swim lanes, like educational content based on documented, existing healthcare protocols. So if you wanted to provide general education on how to take care of oneself after chemotherapy, you might license the American Cancer Society's educational materials and teach that to your LLM. But then you have to monitor what your LLM is doing to make sure it stays in bounds. Somebody has to pay attention, and I think there needs to be an ongoing dialogue about the best way to do that. I know the Consumer Technology Association is slowly working through some baseline AI and healthcare standards. So standards are developing.
OH: Transparency around AI development is a hot-button issue. Realistically, how can healthcare companies practice transparency in the current climate?
LS: When you have an AI tool interacting with a patient or their care partner, it's certainly a best practice to tell them it's an AI tool (and, in some cases, it's already legally required). It's also something the FTC might question if a company puts an AI feature in front of someone but presents it as a human being.
At one end of the spectrum, there are people who think that the algorithms themselves should be published and tested in a peer-reviewed setting. Others disagree with that since algorithms can be intellectual property. There are people who are less concerned and just want convenience. And there’s a huge spectrum in between. There’s also some AI that makes predictions and logical connections at very, very high speeds, and so at some point, even if you ask the developer, they may not be able to adequately explain how it does what it does. So it's a complicated space, but honesty is the best policy.
OH: How should healthcare companies make patients aware that they may be interacting with AI tools or with care teams supported by AI?
LS: For direct interaction with AI tools, again I think honesty is the best policy. Then it becomes a decision of how to reflect that in the design, right? Is this a text-to-text situation, or is the AI agent speaking to you? Maybe the AI introduces itself by saying, "I'm an AI agent." Or maybe you want a pop-up. There are a lot of different ways to communicate it, and I don't think there's any sort of gold standard. I think the same is probably true as you make AI available to your healthcare professionals. Again, honesty is the best policy, particularly if you want a situation where care teams can confidently take advantage of AI that makes them more efficient and more accurate while also being able to reject AI information when it's not what they need or is unsafe or inaccurate. For AI-driven clinical decision support systems, the FDA requires that the professional user know it's AI and be able to escape it or determine that the AI is not giving the right answer.
OH: Conversations around equity and bias in training AI tools are another area of widespread concern. How should healthcare companies be approaching that?
LS: For decades, healthcare companies have been required to provide fair and equitable care and not to discriminate on the basis of many different protected classes. That all remains true. AI hasn't changed any of that, but I think it’s giving us a new domain in which we have to be thoughtful about equity and bias.
For example, we have to consider whether training data was fair. Did it fail to account for a particular segment of the population? We've been dealing with questions like this for decades. There are medications that were approved based on randomized controlled trials that included only white men, and it turned out there was a contraindication for certain other segments of the population. Sometimes, when you do research, the research data - whether powering AI or not - is not fully representative of the population you're trying to treat. We have to think about all of this.
At the same time, AI can help us rapidly and efficiently look at patterns, which also means that we can take new looks at the data we have and find the hot spots where we need to supply more care. For example, in what counties are more Black and Latina women dying because they don't have the right prenatal care compared to their white counterparts? AI has this incredible ability to help us better understand the situation we find ourselves in and find bad patterns that we want to disrupt.
OH: As you see it, where is the future of regulation around AI in healthcare headed?
LS: I think we're going to be in the mode of people finding their way for a while. Some states have already taken action. Utah, Colorado, and California, for example, have new laws on the books. Virginia just had one vetoed by the governor. Those are mostly focused on AI in the consumer environment and the use of consumer data to build AI, but they do have some healthcare flavors, and I think you will see states take action in this way. I think there will be developments with regard to regulation, litigation, and the way malpractice works that will really mature over the next few years, as AI developers and their health system purchasers sort out who's financially responsible when something goes wrong and causes patient harm. So that's an area for legal geeks to watch. There's a lot of desire to take advantage of these innovations, and so those who want to build these innovations and sell them into commerce, healthcare or otherwise, are going to have to be mindful. We'll be finding our way for a while, but I think the good news in healthcare is that we've had this body of law for a while, and it tells us a lot. It may not answer every single question, but it definitely gives us a lot of road signs about how we might do it right.
*This interview has been edited for length and clarity.