Digital Health Talks - Changemakers Focused on Fixing Healthcare

Navigating Healthcare Automation: Regulatory Outlook and Legal Considerations for Streamlining Processes and Eliminating Inefficiencies

Episode Notes

See what is happening now at www.healthimpactlive.com 

Originally Published: Jul 21, 2023

YouTube Video:  https://www.youtube.com/watch?v=4V8Tx9Krzlw


In order to keep pace with increasing demands on healthcare technology teams, healthcare organizations must foster a culture of continuous improvement that prioritizes efficiency, effectiveness, and innovation. This means getting rid of the "stupid stuff" that slows down progress and undermines the quality of care. In this session, we explore the importance of continuous improvement in healthcare technology, offer practical guidance on fostering a culture of continuous improvement within your organization, and discuss the regulatory and legal issues organizations need to consider as they attempt to streamline and automate processes.

 

Kyle Y. Faget, Co-Chair Health Care Practice Group, Foley & Lardner 

Megan Antonelli, Chief Executive Officer, HealthIMPACT (Moderator)

Episode Transcription

Navigating Healthcare Automation - Regulatory Outlook and Legal Considerations for Streamlining Processes and Eliminating Inefficiencies

[00:00:00]

Megan Antonelli: Hi, welcome back everybody. I am here today with Kyle Y. Faget of Foley & Lardner. She was with us in June in New York on our smoky, smoky Wednesday, and the panel discussion she participated in was "Getting Rid of the Stupid Stuff," which we've talked a lot about at HealthIMPACT.

The Hawaii Pacific program; we had Santosh from Moffitt, who was with Partners when they launched that program. And during the panel they talked a lot about, you know, how to do continuous, you know, process improvement and what that really means in [00:01:00] healthcare. So Kyle, thanks so much for joining us again, and I'd love to hear, you know, your thoughts on the panel and kind of what your key takeaways were.

Kyle Y. Faget: Sure. Thank you so much for having me. So I think the panel was great. I think that we could delve into this topic really extensively, and of course in any panel situation you're a bit time-limited. But I think the key takeaways are basically this: there is a lot of interest in automating particularly administrative functions in healthcare practices and facilities.

I think that clinicians really wanna be focused on clinical work instead of administrative work, and for any movement toward pulling that administrative work away from them and repositioning that work somewhere else, particularly if it can be automated, there's just a huge push to do that.

Megan Antonelli: For sure.

I mean, you know, I think one of the things we've been hearing a lot, [00:02:00] you know, pre-pandemic, even pre, you know, pre-conversational-AI, pre-ChatGPT explosion, yeah, was a lot of hesitancy around AI in healthcare. Right. And of course there's still some hesitancy and there's some risk. For sure.

And there's lots of things that we need to think about in the guardrails. But as the staffing challenges and the resource challenges have become so prevalent, there's a bit of an acceptance, or like a desperation: we need this automation. Right? So talk to me a little bit about what you think in terms of the guardrails, in terms of the regulatory space, you know, areas where things are coming down the pike or actually coming into play that organizations should be thinking about.

Kyle Y. Faget: So I think a couple things. I mean, I think the regulatory environment is very ripe for developing tools that will automate particularly administrative functions. So, FDA had to, under the 21st Century Cures Act, actually address these [00:03:00] issues, and they issued a set of guidance documents. And those guidance documents are pretty clear that FDA has no interest in regulating software that is really about automating processes.

So I think there's a very open playing field for innovators in that space. You know, on the other hand, I think FDA is still trying to get its arms around, like, AI for example. So it issued, maybe a few months ago at this point, a framework for regulating AI, and several months ago at this point it issued its second iteration of the clinical decision support software guidance.

And I think that there is a real appetite for innovation in this space, particularly by innovators. I think the hard part, I was just talking on a panel at the American Telemedicine Association the other day about AI [00:04:00] and why aren't we seeing more adoption of AI?

And I think some of it is that people are still uncomfortable with the idea that clinical decision making, for example, would be made by software, even if you see that the software actually does the work, whatever that work is, often better than clinicians. So even if there's a 1% error rate, and we know there's a 1% error rate embedded in AI technology, we think of that and we know what it is so we can react to it, whether it's, like I said, whether that's rational or not.

And our thinking is that, well, clinicians, we know clinicians make mistakes, right? There's an entire world of tort actions for clinicians making mistakes, but we don't have perfect numbers on those percentages, and nobody wants to believe that their clinician is the clinician that makes a mistake.

Right? So, [00:05:00] you know, you're going to see Dr. Smith, and he, she, or they, you have full confidence that they're not in whatever percentage there is for error rates. And I think it's gonna take time till we get comfortable with the idea that there are some things that technology can do better than human beings.

And it may actually end up being that some of what can be automated frees up clinicians even more to do even more sophisticated healthcare work. But I think that's a paradigm shift from where we are. I also think FDA is still getting, like I said, its arms around how to regulate. So the traditional medical device space is very iterative: you have product A, so let's just take a pacemaker, for example, and then one of the big manufacturing companies makes an improvement on that pacemaker and pushes out a new iteration of that pacemaker.

That's really the regulatory framework that [00:06:00] FDA's used to, whereas AI is constantly learning and evolving. So the software feature that is in existence on Friday afternoon, when FDA looks at it and says, yes, this has been validated, this actually does what it's supposed to do, is not gonna be the same software Monday morning.

And that's right, by design. Right? And hopefully what happens Monday morning is better than what was there Friday, but what if it's not? And so, mm-hmm, I think that that's the regulatory hurdle. And you know, FDA very recently issued a warning letter to a manufacturer that made some software tweaks and didn't timely go back to FDA.

So I think as far as assessing the products that are out there that can help automate, I think it's important to know, you know, has this product gone through regulatory approval and/or clearance, if that's necessary. I think to the [00:07:00] extent that you can understand how it works, and this is in the CDS guidance from FDA: understand the basis for the decision making or the learning mechanism, which doesn't mean that clinicians necessarily need to become engineers, but they do need to understand what the inputs are and how it is that any particular software is coming to the conclusion that it's coming to.

And you know, that's important stuff and I think unfortunately clinicians are gonna have to get more savvy at understanding how this software works so that they can use it as a tool in their practice to, like I say, free up their time to do even more sophisticated clinical work. Right.

Megan Antonelli: That is, yeah, I mean, that's so interesting.

I mean, if you think about it, and the cultural shift, just in thinking about how to shift from a, you know, clinical trial, prescription-medicine-based regulatory approval process to, yep, the technology, which obviously, I mean, to some degree of course science evolves as well. But there's, you know, [00:08:00] there's opportunity in the medical field when the, you know, if it's different enough, it's a new drug and it gets a new patent, yeah, that's it.

And you know, you don't wanna necessarily apply that to technology, because then it's like, you know, it's taking stupid stuff that slows down science and innovation and applying it to an area where we don't need it. But how do you, you know, to some extent put in place the systems, the people to evaluate the technology and the new systems on such a quick cycle? The resources to do that, it's hard to imagine, you know, one company can do it when they, you know, they iterate and they evolve their technology.

But when you go beyond that, so it'll be interesting to see how that happens. Are there conversations within the FDA about, you know, how they would then, I guess, partner with innovators to really be able to have that appropriate feedback loop with the organizations?

Kyle Y. Faget: Yeah, so I think a couple things there.

[00:09:00] One, I don't sit in the FDA, so I don't know, but I do know that, like I say, FDA issued that framework and is still accepting comments. And FDA is always open to working with innovators and conversing with innovators. So there's a whole process for meeting with FDA, and I always say to my clients, meet with FDA early and often so that everyone's clear about what the design pathway is.

You know, the other thing, just thinking about efficiencies and building efficiencies into clinical practice and the stupid stuff, like the marriage of those two things, this didn't come up on the panel. But one incredible efficiency that we've seen get really adopted during Covid is use of telemedicine in clinical trials, in the decentralized clinical trial model.

And I think that the barrier to entry there was absolutely enormous pre-Covid, because if you're a clinician engaged in clinical [00:10:00] practice across state lines, you need to be licensed in the state in which the patient is located. And there are certain states that actually consider clinical research the practice of medicine, statutorily speaking.

And so the barrier to entry for licensure and practice standards, like, different states have different approaches to how one can practice telemedicine compliantly, and that's all regulated at the state level. But I think, you know, if I say the stupid stuff, medical boards will hate me forever, but this, you know, the fact that there is state-by-state licensure and that there's not, other than the Interstate Medical Licensure Compact, which not every state is a member of.

Short of that, there really isn't a great, efficient way to get licensed up. And so that's just a barrier to entry for use of a tool that is so obviously the right tool for clinical trials.

Having to have clinical trial [00:11:00] participants come to a brick-and-mortar academic medical center, typically in an urban location, is a barrier to getting participation in trials and adherence to protocol. You get dropouts. It's a huge problem in clinical trials, and this is a really easy solve: if you don't have to conduct a procedure that requires onsite presence, then utilizing telemedicine makes a ton of sense. But there are a whole host of state-level regulatory and legal hurdles to actually being able to utilize that model effectively.

Right. I think during Covid, you know, there were all the licensure waivers at the state level that allowed this practice to occur, and it had to happen so that clinical trials could keep going. Even though many of them ground to a halt during that time, the only way they could keep going was through the use of telemedicine.

So I think we'll see this, you know, we're seeing decentralized clinical trials really [00:12:00] take hold, but I think people are surprised about how many legal and regulatory hurdles there still are to truly adopting and operationalizing an effective clinical trial. So, FDA addressed that too, but it said right in the guidance: you have to follow state laws and regulations that are applicable to clinical practice, and of course you do. So, you know, that's an interesting one.

Megan Antonelli: Yeah, it's one, I mean, it is one of the ones that, you know, it was the issue with, you know, just telehealth in general, and we've gotten rid of some of the hurdles there in terms of reimbursement, in terms of how people get paid.

But that state-line issue for clinical trials is, you know, it's a big deal. And, you know, I mean, I think people, in healthcare I guess they do, but people don't realize how much of the time spent in pharmaceutical research, in any kind of clinical research, is spent just recruiting patients.

It's like, oh yeah, such a barrier to a successful trial. [00:13:00] So, you know, and yet the regulations remain, and, you know, in a world where we say, you know, oh, there are no bounds, there are no boundaries, we are still working on these state lines, which is pretty crazy. But yeah.

No, it's interesting. But so there's been a lot of talk lately about sort of the federal government stepping in, providing some, you know, guardrails around AI. Where do you think that is headed, you know, in terms of the directions there? Do you think we have a long way to go?

Do you think health systems are in a position where they've gotta make sure they don't put themselves at a lot of risk using it? And I think even what you were talking about before, you know, when a physician makes a mistake, there is, you know, there's someone who you can blame, right?

When, you know, AI makes the mistake, where does the blame begin to fall? And how do we manage those things? So tell me a little bit about your thinking there, just to, you know, kind of [00:14:00] as we wrap up, looking to the future and what's gonna come down the pike.

Kyle Y. Faget: Yeah. No. So, I mean, I think, as far as legislation goes, I think we have seen Congress take the position of fostering innovation. And I think there's always a scale, right? Right now we're in the world of: let's deregulate and allow innovation to occur unimpeded. And that's fantastic until somebody is really injured, and then it will get called back, and everybody's like, well, where were the regulators on that one?

You know? Right. So there's always an ebb and flow there, and what the perfect balance is, I don't know, but I think right now we're in a space of fostering innovation. That's just, you know, taking the temperature of legislation; that's where I think we are. Then, as far as liability goes, right, if a manufacturer or engineer or software developer develops something, and I guess this is the nice [00:15:00] part about going through the regulatory process, to the extent that people don't like having to go through the 510(k) process and say it's burdensome.

It is a stamp to say, listen, you know, people who are experts looked at this and said it was safe and effective, and that the benefits outweigh the risks, which doesn't mean that any product is ever gonna be a hundred percent perfect. So obviously you're gonna have labeling, you know, disclaimers and so on and so forth, that this is imperfect.

You wanna make sure that you actually have the software validated, for AI. But it will be necessarily imperfect. So then the question is: was it used within the standard of care? Right? So did the clinician reasonably use the product? Did the clinician use the product as directed? And if the answers are yes, and the clinician explained what the risks were associated with it, and there was no product failure, then I think that's probably the best you can do.

But will you see, [00:16:00] we're a pretty litigious society, so will you see everybody being sued when AI fails? Absolutely. And of course it's gonna be the software developers that have the deeper pockets in all of it, so, I imagine that they will be squarely in the crosshairs of that litigation at some point or another, which is why

Megan Antonelli: they're pretty much out there begging for those guard rails, right?

Yeah. Right, right. You know, they want the regulations so that they can exist within them and be protected. That's what laws are for, right? To protect. All right.

Kyle Y. Faget: Yeah. Yeah. So that will be interesting. It's here; it's not going anywhere. I mean, this is the conversation at Foley & Lardner.

We had a huge get together called brainwaves in Boston, and AI was the focus of the conversation. And again, the thought being how do you utilize AI in responsible ways so that you can free up clinicians to do what they do best, which is treating patients. Mm-hmm.

Megan Antonelli: Well, I think that's a great, great note to end [00:17:00] on and you know, thank you so much again for joining us.

Yeah. Looking forward to it. And hopefully you'll join us for the round table later. Absolutely.