Digital Health Talks - Changemakers Focused on Fixing Healthcare

Navigating the Intersection of Equity and Innovation in AI-driven Healthcare

Episode Notes

See what is happening now at www.healthimpactlive.com

 

YouTube Video: https://www.youtube.com/watch?v=a4a3xDvGQFU

Originally Published: Nov 28, 2023

 

Dive into the dynamic intersection of equity and innovation in healthcare with Chris Gibbons at HealthIMPACT Live. As an Equity and Innovation Adviser, Chris navigates this critical juncture daily. Join the conversation on AI ethics, patient welfare, and the evolving landscape, shaping the future of patient-centric digital medicine.

 

Chris Gibbons, MD, MPH, Founder & CEO, The Greystone Group

Janae Sharp, Founder, The Sharp Index


 

Episode Transcription

Navigating the Intersection of Equity and Innovation in AI-driven Healthcare

[00:00:00]

Janae Sharp: Welcome to Health Impact Live. I'm Janae Sharp, the founder of the Sharp Index, and I am thrilled to sit down today with Dr. Chris Gibbons. And Dr. Gibbons is [00:00:36] passionate about digital health, health innovation, and health equity. And these are all things that I care about. Dr. Gibbons is also the Equity and Innovation Advisor to the American Medical Association and the Chief Health Innovation Advisor to the Federal Communications Commission.

He works at the intersection of equity and innovation every day, and I am looking forward to discussing your work with you and talking about your participation in the upcoming Node Health digital medicine conference. So first, I'd love to thank you for coming today, [00:01:12] and I would love it if you could share with the audience: tell us about yourself, tell us about your work, and what you're most excited about.

Chris Gibbons: Well, thanks for having me today. It's really an honor for me to be with you. I'm a physician originally by training. I trained in surgery and preventive medicine many years ago, and I had a long career at Johns Hopkins. I was the associate director of the Johns Hopkins Urban Health Institute for almost 15 years.

And this was, you know, over 20 years ago, when I began to have an interest in technology. I remember when I started that [00:01:48] position, you know, Johns Hopkins, like many tertiary academic medical centers, is located essentially in an urban ghetto, and Johns Hopkins is really no different. They're trying to change that now, like other places.

But I remember looking out of the window of my office at Johns Hopkins, seeing, you know, urban blight. And I said, you know, there are a lot of people who have gone before me on this road trying to fix this and haven't been able to. And at some point, I'm going to leave, and I want to be successful. And I'm not going to think that I could do the same thing they did and come up with a different [00:02:24] result.

And so I wanted to find something different, something that perhaps had better potential to succeed. About that same time, many years ago, I heard about this new thing called eHealth, as they called it at the time. I had never heard about it before. I learned more about it and decided that was going to be at the center of my career.

I was a young doctor at the time, and I decided my career would be at the intersection of health, eHealth technology, and underserved populations, and that's what I began to work on there at Hopkins. And [00:03:00] now, many years later, I run an equity-focused digital health innovation and transformation firm that focuses largely on the same kinds of things.

Janae Sharp: I think that's critical, and I love that. Like, we need more focus on historically under-resourced or underserved individuals, especially when it relates to things that are currently big topics in health care, you know, AI and data-driven worlds. And I'd love to hear more. We're going to talk a lot about that, but I'd love to hear more about your journey to that. Let's [00:03:36] level set for the audience: what is that like right now? Like, there are AI-driven health care systems, and then there are all these needs with public health. And there are also outcomes we're hearing about that aren't great. What does the landscape look like, in your view?

Chris Gibbons: Great question. Well, I just told you a little bit of how I got started. Back then, technology wasn't a thing in health care, dating myself a little bit. But when I first came to Johns Hopkins, there was no EMR there. We were still writing orders by hand. And it was while I was there that the first [00:04:12] EMRs were installed.

So it was a whole different kind of world. But because of that, I came to this whole idea very differently than many of my colleagues. Most of my colleagues in health care began thinking seriously about some forms of technology either when EHRs were initially pushed out during the Obama administration or, most recently, with the pandemic, when we had to rely on technology essentially to survive as a nation

as well as a health care system. But back then, you know, decades before that, [00:04:48] when I started having this interest, I had no blinders on. Most of my colleagues were saying, well, it has to be technology in the hands of doctors, or it has to be technology that touches the EMR. I was just thinking technology in the hands of doctors, patients,

anybody that could potentially help improve the care process or the care outcomes. And so I came with a much broader perspective. Over the years, it's led me to many different things. And now, most recently, one of the biggest areas on the radar screen is AI, and we spend a lot of time [00:05:24] thinking about that in this context.

AI is a number of different things. And I believe AI in general, technology in general, but most recently generative AI in particular, has a lot of potential to enable us to do things we never did before and accomplish things that were not even possible. And I do think actually closing the gaps, the inequities, removing that problem, is potentially one thing that new forms of technology, including [00:06:00] AI, may well enable us to do, if we do it right.

Right? That's the key question here. If we don't do it right, it could make things worse, a lot worse, a lot faster.

Janae Sharp: Yeah, it could also, you know, use a data structure to solidify unhealthy systems. So let's talk more about that, like the potential of AI and generative AI. There are a couple of things that I thought of as, you know, potential issues: one, the transparency of a system; two, you know, the data sets we're working with; or three, like you mentioned, [00:06:36] it does have the potential to make things worse.

So maybe we could go through some of those. Like, why is data transparency important, and why is that part of it? You know, you're talking about how it doesn't matter which sources the data comes from. And I think healthcare is seeing that with this expanded vision, data can and will come from everywhere. And what does that mean? So I'd love to hear about data.

Chris Gibbons: Data is foundational to everything, right? At least if you're making evidence-based decisions. Now, if you're not doing that, then maybe you make decisions by some other mechanism. But data is [00:07:12] foundational. On the provider side of the coin, you know, data is how we understand the world. That's how we understand the tools that we use. And so it's important to understand the information and data: how these models are created, how they work, and how they generate their answers, as one way of building confidence in them, or a lack of confidence; knowing how to tweak them, knowing when they're not performing the way that we want them to, or knowing when they are.

You know, on the patient side it's very similar. I mean, if we don't know what's happening, if patients don't know [00:07:48] what's happening with the data that they are ostensibly putting in, or that's being collected from them, and furthermore, if they distrust what potentially might happen, that it gets into the wrong hands, that it's turned against them, that it's used against them, they're likely to mistrust it and tend not to use it whenever they can, or just tell you what they think you want to know rather than what's really true, which defeats the whole purpose.

You know, particularly with generative AI, we talk about data sets a lot, and it's well known that it's almost [00:08:24] impossible to have a truly, totally, 100 percent unbiased data set.

And in some cases, bias is good. You want to have biases in your data sets to overrepresent some populations that are not large, let's say, when you want to be able to make evidence-based decisions about them. But I think it's important to note that the biased data set is just one place in which bias can be introduced into AI,

into generative AI models. At every stage of development [00:09:00] and testing of the models, these biases can be introduced, based on the decisions or the assumptions that are made about the problem, about the setting in which the problems occur, about the target populations you're trying to influence.

So we could theoretically fix the biased data set problem and still not fix the bias-in-AI problem. It's a much bigger, much more complicated problem. But having said that, I do believe we can make good progress on it. Look, we've put people on the moon; hard is not a reason not to [00:09:36] do something. But we have to go in with our eyes wide open, and not try to act like it's not a real problem and ignore it.

Hit it head on, and we'll make progress, and we'll eventually overcome.

Janae Sharp: Yeah, I like that. Well, we're going to make progress. I'm also interested in what you were talking about with, you know, the steps to making sure it's clear and how we have transparency there. Have you seen any examples of that, or what does that actually look like in practice?

Chris Gibbons: Well, there are steps being taken. As you [00:10:12] know, there are people who generate these models, there are people who develop the algorithms, and then there are people who use them. And those two groups are not 100 percent the same. And so sometimes, at least to the general public, and sometimes even to providers, the information, the data, the understanding of how these models were created or how they work is not clear.

And so that has in the past been a barrier for some to understanding and wanting to use them. In fact, they've often been referred to as black box AI, because you put it in and it just [00:10:48] works. The challenge with that, though, is

Janae Sharp: it's like restarting your computer. You just turn it off, turn it back on. We're good.

Chris Gibbons: Exactly. Exactly right. But the problem, obviously, is that in those kinds of systems you tend to value them, or evaluate them, based on whether they output what you expect to be output. It sounds logical, right? But the reality is that's risky. Yeah, if what you expect to be output is biased in the first place, and you get that, you're only reinforcing biases; you're not actually doing what you think you ought to do. [00:11:24]

So that's the problem. That's one of the problems, from an equity perspective, with those kinds of models, and there are plenty of them out there. There is a large push now for transparent models, explainable AI models. These are all terms that are being used for developing AI and AI models in ways that are more transparent to the users, to the providers who use them.

And then comes generative AI, which is at a whole different level altogether. But I think the principles are the same there as well.

Janae Sharp: Right, [00:12:00] to have transparency. I would love to talk a little bit more about what you were saying about being proactive, you know, proactively facing those challenges.

It sounds like we talked a little bit about being proactive about creating a better data set and, you know, eliminating the black box. As we talk about health care and generative AI and ethics, like, what does that look like with a proactive health equity lens?

Chris Gibbons: Thank you. Yeah, I think it means a number of things.

One is, again, recognize we have a problem and hit it straight on. [00:12:36] Don't try to ignore it. Don't try to downplay it like it's really not a problem. And we've already got examples; this is not just something to think about in the future. We've got plenty of examples of where real harm has come to patients. I'll mention one or two.

There was an algorithm that was developed for doctors to use to help make treatment decisions. It was being used by a number of the large hospital systems in New York City for many years. It was thought to be a really good thing. Then a series of researchers came along and said, hey, let's study the outcomes to see how well that algorithm did. [00:13:12]

This was an FDA-approved algorithm. And lo and behold, when they studied a large number of the patients who had been treated under the algorithm, they found that this algorithm was prescribing the wrong treatment for African Americans as much as 47 percent of the time, and at the same time prescribing the right treatments for non-African Americans

most of the time. Same algorithm being used. Obviously, that algorithm is not being used anymore. Or the pulse oximeter, the thing that we put on your finger [00:13:48] during the pandemic to know how much oxygen is in your blood. That does not work as well in people with darker skin tones, and therefore doctors were likely recommending the wrong treatments sometimes based on that FDA-approved device, because it was giving incorrect information to the doctors.

The sad part about that example is, in fact, we knew that was the case for decades before the pandemic and did nothing about it. But here we are; now we know, now there is a [00:14:24] movement to know better and to do better. And so the first thing is recognizing this is not just a theoretical problem. This is a real problem that's already affecting millions of people today.

And then two, there are things that can be done, even if they're not perfect fixes yet. You have to base them on the problems that are there, that we know of, in AI. For example, it's been shown that these models are not one-and-done. You don't just create them once and have them work perfectly forever.

They actually degrade over time, like everything else. We all get old, right? [00:15:00] Well, these models, sometimes over

Janae Sharp: a very short period of time,

Chris Gibbons: if you were to ask them to perform the same function or ask the same question a few months or years later, they would not do as well at answering that question or performing those functions as they did when you started.

So like everything else, they can't just be assumed to always be right. We have to monitor them. We have to know when they're not working right. That's sort of [00:15:36] one problem. Another problem is they're good at answering some kinds of questions and doing some kinds of tasks, but others they don't do so well at.

And sometimes when they're not able to do it, particularly with generative AI, they make up an answer. It's called hallucination. It's a real thing. And generative AI is so good at hallucinating that the answers seem plausible. Unless you dig down deep, you would not know just by looking at it that this is an incorrect answer.

And so the challenge [00:16:12] is, you know, some people might make the argument: oh, that might happen, whatever, 5 percent of the time, 3 percent of the time, so it's good. Yeah, I don't buy that. Because if you're the person being told the wrong treatment or given the wrong answer, and treatment is based on that, and you get an illness or you die as a result, that's a 100 percent failure for you.

And we can't tolerate a system like that. A system where we don't know when this is happening, and we don't do anything about it, is worse. We know it's happening. So let's find out how much it's happening and try to develop ways to stop it from happening: [00:16:48] increased testing of all kinds, involving people from these marginalized populations in the design and development of these models.

You wouldn't ask me, as a doctor, to come in and do the electrical work in your house, and for good reason: I don't know how to do it, even if I'm doing the best that I can. Why do we believe that people with one background are the best people to design models for people with entirely different backgrounds, for which they have no credible expertise or experience?

It doesn't make any sense. The assumptions that they're going to make about their target users, under the best circumstances, doing [00:17:24] the best they can, are not likely to be the best ones that could be made for those populations. So involving those populations in the design and development of these algorithms, iteratively over time, is one approach that is being advocated and shows promise in significantly improving the development of these models and lowering the risks of these kinds of things happening.

So these are some of the things that we can do right now. We've got to do more work and find out more ways to stop it. But let's keep going forward.

Janae Sharp: Yeah, I think that's important. I like what you said about the importance of updating your [00:18:00] models, because if we're thinking about health interventions, like something that we've developed with AI, ideally, once they're released into the world, they'll have an impact, so you can't keep doing the same thing.

If someone already took care of that, that's right, you need to update that data. And the other part you were talking about at the end, I'd love to kind of look at that from a practical framework. You know, you work a lot in advising people on how to put together that ethical, you know, decision making.

And we talk to a lot of people who are making decisions for a health system or an insurance plan [00:18:36] about AI. They've heard these stories about outcomes, and they need, you know, sort of a checklist, or a way to look at that. So who needs to be involved in that? And could you give, like, two or three things that would be on that?

Chris Gibbons: Yeah, sure. Great question. You're right. And more and more people are saying, okay, we believe you, this is a problem. What do we do about it? I'm a developer. I want to develop a tool more equitably, or one that has less risk of this happening. What do we do about it? And for a long time, not a lot had been, you know,

[00:19:12] put together in a way to help these individuals. But in a project that I was working on with the American Medical Association over the last several years, we've put together one resource that's available: what seems to be the first toolkit around this very topic, how to develop digital solutions more equitably, and how to assess tools to see if there is actual evidence that the tool is likely to be more [00:19:48] equitable or less equitable.

And we call it the In Full Health toolkit. It's an assessment with examples and things, and it's freely available. The first version of it is online; if you go to infullhealth.org, you can download it for free from there. It's out there now. It was made about three years ago.

That was right as the pandemic was hitting, so things slowed down a little bit, and people haven't heard much about it. And now it's time to update it a bit, particularly the section on AI, because a lot has happened in the last three [00:20:24] or four years in AI. But this is a resource that's out there for developers, for solution developers, or for those who evaluate these solutions in one form or another, or for those who purchase solutions and want to purchase solutions that have evidence

of not creating inequities, or of helping to close inequities. So this is one of the tools that I've been working on that's out there. There isn't another one like this that we know of. There are principles and frameworks about the kinds of things you should do or stay away [00:21:00] from,

and those can be helpful as well. But this is an actual toolkit that sort of directs you step by step: Did you think about this? Did you do this? Document this as an evidence base for moving in this direction.

Janae Sharp: Oh, I didn't even know I was bringing up your tool. You know, I'm just glad that exists, because that's an important resource.

Let's see, I have one more question. Well, two more. The first one is: you talked a lot about creating with populations that are impacted, and as we know, like, even during the pandemic and [00:21:36] with some of the things happening now in healthcare, there are certain populations that are disproportionately negatively impacted.

Also, many of those people have less trust in the healthcare system. So what does that look like? Like, what is co-creating, or, like you mentioned, involving people in all those trials? What does that look like?

Chris Gibbons: Yeah, another great question. You know, what's sad is that we, as a society and parts of our world, have gotten so used to doing it without them that we don't know how to do it. There's actually a [00:22:12] lot written about how to do it. There's a science about how to do it. There are several scientific fields, from human factors and ergonomics to the computer sciences literature, that really start from a base saying, hey, we shouldn't consider technologies as things all by themselves.

You should consider them as what they call socio-technical systems. So they are integrations of people and technology, and there are nuances and understandings that you need from both sides in order to build the thing to work adequately with [00:22:48] the people to achieve the outcome that you want. So there is actually a ton written on this.

Janae Sharp: We're having a LeBlanc moment where you're like, what? Like, it's hard? It's literally written in the book.

Chris Gibbons: Oh yeah, tons of books, and not just over the last few years. I have books dating back to the seventies where this stuff was talked about: how to do it and how we need to do better at it.

But nonetheless, there are ways in the computer sciences literature, and more people are familiar with user-centered design and human-centered design. Some people are even talking about [00:23:24] humanity-centered design practices now. This is a popular way of thinking about it. In the public health sciences,

community-based participatory approaches to design and development have been advocated for a long time. So for those who are interested in doing this, there is a lot of information out there. We and others are working on bringing it together in a format that's easily digestible for those in this field who haven't seen it before.

But the evidence is there.

Janae Sharp: Right. So it might be new to them, but it's not new. That brings up the other question. Do you think we're [00:24:00] going in the right direction? If all these things already exist and we have access to them, where's the gap, and where's the future? Like, is there hope, or is this just, well, we know what we're supposed to do, but we're not going to do it?

Chris Gibbons: Oh no, there's absolute hope. If I didn't have hope for this, if I didn't think this was possible, I'd go shoot myself or something, I don't know. But I mean, no, there's absolutely hope. You know, as a world, as a society, as a nation, unfortunately, we tend to do things in silos. And those silos often don't talk to each other.

And therein [00:24:36] lies the problem, right? But one of the things that excites me about this: if you're going to solve these problems of inequity, it's inherently trans-, multi-, and cross-disciplinary, right? No one person, no one group, no one sector can solve it all by themselves. We're going to have to work together.

But therein lies the challenge: that often is hard to do. That's why it hasn't happened yet, because it's often easier to just think, talk, and [00:25:12] work with those who are trained and think and understand like you do. But nonetheless, we're seeing more and more people coming together from disparate and different disciplines, not just because they want to, but because it's becoming recognized that we're all in this together, and we all sink or swim together. So we might as well work together so that we can thrive together. So it's beginning to happen, and I'm very bullish on the future, and I think that we should continue going forward, as tough as doing that may be.

Janae Sharp: Yeah, I'm glad you said that, because you want to be aware of the [00:25:48] challenges, but also it's nice to know that there are people who are trained in this and who are trained in improving things.

And you're one of them, and you'll be at the digital medicine conference in December. So, you know, lastly, a fun question: like, what are you most excited about?

Chris Gibbons: Yeah, you know, I'm really looking forward to that event. It's really a great event, because some conferences I go to are show-and-tell conferences: here's the new big shiny thing we're developing. And those are great for learning what's out there. Some [00:26:24] conferences are, you know, the sort of highbrow academic conferences that the general audience really can't connect with, because it's just too much. But this one is kind of a mix of the best of both worlds.

We do have some vendors bringing some of their products there, but it's really about understanding the evidence and understanding the basis for going forward in this field, and talking about that, and bringing people together to create the kind of world that we all want to see and are trying to get to.

So it's such a collaborative [00:27:00] event. The colleagues that are there, the innovations that are there; I'm really excited not only to share on a panel about AI and equity in the future, but also to learn from others who are looking at other perspectives and other parts of this puzzle, and to figure out new ways of working together to achieve the goal.

Janae Sharp: I love that; like, new ways of working together will be critical. I want to thank you for being part of Health Impact and for meeting with me today. And more than that, I want to thank you for your work too, where you took something and you decided it was going to be better and made it part of your mission. [00:27:36]

Chris Gibbons: So thank you, Janae. It's been a pleasure being with you today. Thank you for, you know, being the voice that gets the word out there about what's happening and letting other people know, because that's critical. And that's how we all come together. So thanks for having me.

Janae Sharp: Thank you so much.