Digital Health Talks - Changemakers Focused on Fixing Healthcare

Human Factors in Healthcare AI: Where Patient Safety Meets Real-World Implementation

Episode Notes

Joining us is Kristen Miller, Co-Director of MedStar Health's Center for Diagnostic Systems Safety and Scientific Director of their National Center for Human Factors in Healthcare. As healthcare organizations invest billions in AI technologies, Kristen's research reveals that human factors engineering - the science of how humans interact with complex systems - determines whether AI becomes a safety enhancer or safety hazard, whether patients embrace or resist these tools, and whether healthcare teams achieve promised efficiencies or face new workflow disruptions.

Kristen Miller, Scientific Director, MedStar Health National Center for Human Factors in Healthcare

Megan Antonelli, Chief Executive Officer, HealthIMPACT Live

Episode Transcription

0:01 - INTRO

Welcome to Digital Health Talks. Each week we meet with healthcare leaders making an immeasurable difference in equity, access, and quality. Hear about what tech is worth investing in and what isn't as we focus on the innovations that deliver. Join Megan Antonelli, Janae Sharp, and Shahid Shah for a weekly no-BS deep dive on what's really making an impact in healthcare.

0:29 - MEGAN ANTONELLI

Hi everyone. Welcome to Digital Health Talks, where changemakers are focused on fixing healthcare. I'm Megan Antonelli, and today we're exploring the critical role of human factors engineering in healthcare AI implementation, examining both the remarkable successes and unexpected pitfalls that emerge when AI meets real-world clinical practice. I am excited to have Kristen Miller here today. She is the Co-Director of MedStar Health's Center for Diagnostic Systems Safety and Scientific Director of the National Center for Human Factors in Healthcare. As healthcare organizations invest billions in AI technologies, Kristen's research reveals that human factors engineering - the science of how humans interact with complex systems - determines whether AI becomes a safety enhancer or a safety hazard, whether patients embrace or resist these tools, and whether healthcare teams achieve promised efficiencies or face new workflow disruptions. Hi Kristen, how are you?

1:30 - KRISTEN MILLER

Great. Thank you, Megan. It's a pleasure to be here.

1:32 - MEGAN ANTONELLI

Yes, it's so fun to have you on the show. I know you've been a frequent speaker at HealthIMPACT, and you do such great work at MedStar that I'm excited to chat today and see what you've been up to. We've talked a lot offline about what you do and your role there, but if you could just tell our audience a little bit about what your work centers around with respect to human factors and AI - let's start there.

2:03 - KRISTEN MILLER

Great, yeah. Human factors, I think, is a pretty broad way to talk about systems engineering - all of the different factors that could go into any new initiative or implementation. So our team really takes that systems approach: we're looking at the people, the tools, the tasks, the built environment, the policies, the organizational culture, and really trying to understand the opportunities that AI is bringing, but also the potential challenges, and whether we can get ahead of them to ensure patient safety while also managing the impact on providers and at the system level, where everyone's looking to buy the best new shiny AI thing. So we apply some scientific rigor to understand how we can do that in the best way possible.

2:46 - MEGAN ANTONELLI

Yeah, and I mean, I think the conversation has really shifted towards AI - when we met, we were talking a lot more about just technology in general and its impact on patient safety. But when you're looking specifically at healthcare and AI implementation, what are some of the positive impacts, or areas where that's become an opportunity for human factors?

3:13 - KRISTEN MILLER

Right, yeah, I think there are plenty of success stories we see when AI is designed using that human factors approach. The tools really fit into the workflow instead of asking people to fight against it, and so we can see things like enhanced decision making, improved teamwork, and reduced cognitive load - that's huge when it's helping to support decisions and attention. I think most of the metrics that we're seeing in healthcare are focused on the provider and system level. So we look at things like reduced documentation; we talk about pajama time, how much documentation providers are doing when they get home after a shift. We can look at things like throughput: are we able to get people in and out of an urgent care faster? Streamlining tasks - there are certainly a lot of operational and administrative things that AI is helping to support. Those are all really important metrics. I worry that they're not telling the whole story, and so we really wanna think bigger than that - about cognitive work, communication, and really interaction with patients. So my recent work has focused on the patient perspective. How do we communicate the use of AI to patients? We wanna relieve the fear that they have that some new tools are sort of taking over the human work, and we think that we can do that with a more coordinated, transparent, responsive approach: let them know how AI is being used and what that means for their care. Then, collectively - for physicians, nurses, and patients - we can create a system that's more trusted, more usable, and safe, not just technically impressive.

4:45 - MEGAN ANTONELLI

Right. When you talk about that, I want to go a little bit deeper there. I like that - some of the indicators, you know, even if it's pajama time or some of the areas where we're seeing a lot of positive impact and a lot of positive adoption, which we don't always see in technology and healthcare. I guess, why do you question some of those positive impacts? Are there other potentially negative things that you're concerned about, or is it more that, you know, are we always measuring the right thing? I mean, when it comes down to it.

5:25 - KRISTEN MILLER

Yeah, I think measurement in health care is always hard. I think this one is particularly challenging, and we're really limited to a lot of the process metrics. I think our biggest concern is automation without adaptation - we're rolling things out, but we're not accounting for the variability of individual practices, and like any other health IT-related thing, we worry about alert fatigue, workflow disruption, our clinicians spending more time fixing something than helping the patient. And so even when these tools are well intentioned, they might not be used, or they might be unsafe. I do think we're hearing positive things. A lot of my examples are coming from the ambient digital scribe. When we figure out how to really integrate that well into practice, the clinicians are thrilled to have these tools that are helping them. That's a great example of being able to connect with the patient: we can make eye contact for this whole encounter; I don't need to be looking at the computer and taking notes. And we know that really changes the experience for both the patient and the provider. Generally, I think we're measuring things like accuracy. Are a lot of these tools making the right decisions? Are they finding that abnormality in a radiology image? And so certainly, again, there's tons of opportunity. We just wanna make sure that we're doing it the right way - the most transparent way - and we're fitting it right into the workflow.

6:49 - MEGAN ANTONELLI

Yeah, no, it's interesting. I mean, it's almost like there are always unintended consequences. Even if you think back to the initial implementation of electronic health records, where patient safety was certainly the impetus for all of it, right? We knew we needed to get better at tracking care and that technology would help. But then the unintended consequences and the workflow impacts sort of came about, right? But taking it back - you mentioned the patient, and what they want, what they're looking for in this, and maybe also what concerns they have. You're doing a lot of research in that space. Tell us a little bit about what they wanna know and how healthcare leaders can help them adapt and use this technology best.

7:42 - KRISTEN MILLER

Yeah, sure. So we started with a study last year where we brought in, I think, 17 patients to talk about what they would like to know about AI, and then more recently we've been doing co-design sessions in the evenings with a number of our patients from across the country, not just MedStar patients. And it's amazing, the intelligence that they're bringing - this is not some technology that they've never heard of before. They're well aware; they're reading the articles. And so what we're learning is that it's not necessarily the technical detail that they want; they just want transparency. They don't need to know the algorithmic breakdown, but they do wanna know: Is AI being used? What is it doing? How is it helping the provider? How is it helping me? And then some really real concerns: how's my data being protected? Is it secure? What if there's a data leak? And then I think the most interesting thing - not surprising, but most interesting - is: is a human still involved in my care? They're recognizing that the AI can be a partner to the provider, but they don't want it to be a replacement. And so we're working on ways to frame this, to give them that information at different points in time. Are we talking about AI when you first schedule your appointment? Are you getting it in an informed consent document that you're signing? Could we have some, you know, TikTok-style videos for those that are interested? I think there's not a one-size-fits-all, and so being able to deliver a similar message about how AI is being used in different ways will be really helpful. Again, talking about the ambient digital scribe - that's the one we're using across MedStar, and I know many other facilities are as well - that's being viewed as really positive because it's focused on the workflow and the documentation. But then when we talk about things like clinical interpretation - say you're writing a message in the patient portal and it's an AI that's responding, or there's a chatbot that you're texting, but it's not a human, it's an AI - that's where the patients are more cautious. They wanna know: is there a double check? Is a clinician looking at this before they sign off on it, or is it really just an algorithm? And they're bringing us really good questions, like: if it is AI, what's the difference between me just using ChatGPT at home? Really, where is the benefit, and what's the role of the health care system then in that? And so we talk a lot about clinician oversight. Again, I think it's just the transparency. If they knew that a provider was looking at the portal messages, editing them and responding to them, and looking at that EHR documentation, that would help - but I don't think that information is explained clearly to patients. I think we worry about those conversations - about, I don't know, the health literacy and the understanding - but also the very practical concerns: you have a 15-minute primary care visit, and if we spend 10 minutes of that talking about this ambient digital scribe, I'm not gonna be able to provide you the care that you need. So our goal is to not overwhelm patients but give them the information that they need, hold ourselves accountable, and let them know that the technology is there to support them and not substitute their care.

11:01 - MEGAN ANTONELLI

Yeah, I mean, it's like we always talk about: the technology needs to be invisible, right? And if you're spending half the time talking about what the technology might do or could do, then there's a drawback there. However, it's new, and there's an education piece that's certainly necessary - and we've seen that both on the physician and clinician side, but also with the patients. But I think it's really interesting in terms of where the concerns are, and of course that desire to still have the physician involved. What is the answer when they say, what's the difference between ChatGPT and the tools we're using?

11:45 - KRISTEN MILLER

I think we had an interesting conversation where one of the patients said, you know, I know that the physicians have that little book that they pull out of their pocket, right? And if we're using that information to answer my questions, I would feel better. So again, we're trying to help them understand in these different contexts. We're not using the ChatGPT that's pulling from anything that exists, and potentially misinformation, right? We have guardrails around the type of tools that we're using. We're using trusted sources. We're pointing towards evidence-based guidelines. So I think that's an important distinction, but perhaps the most important is that provider oversight, right? There's somebody double-checking - maybe a response has been drafted, but that's certainly not the final version. But we do need to think about that for these agentic AI tools and the chatbots: where does that oversight create more work for the provider than just allowing them to do the work in the first place? So there's certainly a trade-off.

12:45 - MEGAN ANTONELLI

Yeah, we were talking - I was talking to the folks at Heidi Health, actually - because they have a very personalized tool for clinicians that, as the tool learns, eventually eliminates the need for that customization or repositioning. So as these tools evolve, of course there's so much available for them to get to that place, but it takes time. And I think the privacy concerns - which we certainly saw and still see with the electronic health record - people want their information protected, and that call for transparency, that need for transparency, has been there with all tech. I remember in the beginning, when health systems were just putting Alexa in patient rooms, not necessarily to assist with care but with more administrative things, giving you access to your music and so on, there were still privacy concerns and people turning off those devices. So as we look at how to bring the patient along in this journey and build this in the way they want - improving that engagement - there's a lot to it. And then beyond the patient, when we talk about nurses and physicians, there's been a lot of talk, and I love it, because I think nurses are playing a real active role in the dialogue around how these tools should work - they're there on the front lines, and there's so much potential to alleviate that burden. What are you seeing in your research, in the work across different healthcare systems or at MedStar, in terms of the difference between those tools?

14:34 - KRISTEN MILLER

Yeah, it's a great question, because I don't think many people are talking about it. We're really focused on the physician side, and that's probably the nature of the decision making that's happening: if we think about AI being more about decision support and diagnostic reasoning, that does fall more to the physicians. Traditionally, I think the nurse tools are more about workflow orchestration - more of those operational things around tasks and alerts - and we're seeing some documentation, right, creating care plans. And so because these tools are more physician driven, I think we've left nurses out of the conversation. Our team is trying to change that, really bringing in nurses and talking about nursing-focused design. We have one study right now working with nurses to understand: if we were to create - I know I keep using ChatGPT as shorthand for a large language model - a tool that was focused just on nurses, would they use it, and what would they use it for? So we're trying to understand: is it about communicating with the patient? Is it looking up evidence-based guidelines? Is it creating those care plans? What are the actual tasks, and then the how, what, where, when, why? Because you can imagine a tool like that would bring some value, but there are different hesitations and boundaries that I think nursing has that physicians don't, so that context is really important to us. I think there's a general recognition that AI can help - it can help me do my work more efficiently, it can help me educate patients, prioritize tasks, whatever it is - but we wanna unpack what those differences are and make sure that nursing is getting a tool that's not meant for physicians, right? We're giving them tools that are meant for their work, their level of professional judgment, their teamwork - all of the things that are specific to the incredible work that the nurses are doing.

16:30 - MEGAN ANTONELLI

Yeah, it is interesting to think about the potential as well as, you know, the information overload to a certain degree - what is the potential of what these tools can do - and you still want to keep people within the guardrails of their own roles to optimize what it is they do and provide them those specific tools. So I do think that's really important, and people haven't made that distinction as much. We talked a little bit about patient errors, right? That's kind of where electronic health records seemed to have the most impact. And when we think about the risk of AI - I mean, I'm constantly correcting my AI, and you do spend a fair amount of time just checking it, making sure it did it right. When it comes to healthcare, that's really dangerous to some degree, and there's a lot of fear there. What are the guardrails being put in around AI to prevent new types of errors that we haven't even begun to think about, that could come at both the nursing and physician level?

17:42 - KRISTEN MILLER

Yeah, it's interesting, the idea of this rework, and one thing comes to mind from speaking with providers about the ambient digital scribe: one of our urgent care providers said, you know, it just doesn't sound like me. It doesn't write like me; it doesn't talk like me. And I think that's an important distinction - it's not just spitting out information that people agree to and that then goes into the note. They want it to feel like they wrote it, right? Whether they're personal with their patients and recalling conversations that happened, or they're more directed and simple. So again, there are a lot of complexities here. With the human factors lens there are certainly new types of errors, but I think more about making old errors faster, right? We're able to do things more efficiently, and so a lot of it is those same errors that we haven't been able to solve. We're really focused on how we're using AI in the diagnostic safety space, and this is the core human factors stuff that we love: how information is presented, how we're communicating and handling things like uncertainty, how teams are communicating, coordinating, and then acting on that AI input. I think there's a worry that the AI might inherently be wrong - it's inaccurate, it's made a mistake - but I think we still haven't solved these long-standing problems about how information is presented. Look in the electronic health record - we talk about hunting and gathering, how providers are going to every page in that tool: the medication list, their documentation, somebody else's documentation. There's a lot of inefficiency that we haven't figured out yet where I think AI could really help, presenting information without overloading and really pointing providers to the most salient things. We've been thinking about different ways to present this information to patients, providers, and healthcare systems too, and one example we've been building off of is a nutrition label. Everyone can interpret a nutrition label - what the ingredients are and some fast facts - so we're trying to replicate that: is there a simple, standardized summary that clarifies what the AI tool is doing, what it was trained to do, and what some of the limitations are? Acknowledging there are limitations, let's focus on what they are, right? We know whether this is more sensitive or more specific, or what the predictive probabilities are; if we just shared that information, I think it would help users make more informed choices - I'm gonna trust this, or I'm gonna trust it with skepticism. But right now I think it's sort of a black box, and so we're not messaging it at all. So we're really thinking about making sure everyone has the information about the safety - again, this idea of transparency - and then providers can calibrate their own confidence, understand that uncertainty, and detect when the AI might be wrong and when they need to spend a little more time really reviewing what the output is.

20:53 - MEGAN ANTONELLI

Yeah, no, it's super interesting. I mean, when you think about the need for it to have your voice - and you want it to have your voice - but then if you make an error, and a consistent error, do you train that error into it, right? So how do we ensure that the tool is trained enough that it doesn't allow the errors that impact patient care, but still allows that customization? It reminds me of the conversations we had about the electronic health record - this one-size-fits-all tool that wasn't specialty specific. And as we look at tailoring these large language models to specialty-specific clinicians and practice - whether it's the various physician specialties or, as you said, the nurse workflows - how do we ensure that it learns, and does that correctly, from a human factors standpoint? And where the risk is, and how we sort of mitigate the risk... are you looking specifically at different specialties across those two?

22:11 - KRISTEN MILLER

Yeah, so it's interesting - we've talked about the different user groups, and then I think there are a few other dimensions, specialties being one, and thinking about local fit: the activities that are happening, the pace, the communication strategies. Those are all obviously very different depending on the specialty, so again, one size does not fit all. When we're doing something in radiology, that workflow is different than primary care; oncology is different than emergency medicine. And so for each of these we wanna make sure that we're tailoring the interface - the information that we're seeing, the output of the AI, but also things like the timing and the data flow. It needs to fit into the environment that's specific to that specialty. The other really interesting thing that we're seeing is the level of experience, and again this concern that the AI is replacing the physician. In many instances, I know at MedStar we hold back on the AI tools for the junior trainees, right? We want everyone to have those core skills. And I think explaining that to patients and other stakeholders helps them understand that we're not just replacing people, and trainees aren't learning on this from day one - in primary care, for example, they don't get to use the ambient digital scribe until they're much further along in their training; they have to learn how to listen and take notes, and those are reviewed by the more senior staff. In radiology, for example, learners might be asked to read the film before an AI-assisted interpretation comes in. So again, this idea of de-skilling as an unintended consequence - we're making sure that we're really thinking about it. We don't want to shortcut any of these things with technology that may or may not be here, or look the same, in 10 years. And so matching assistance - what the AI is delivering - to the specialty and also to the expertise, I think, is huge.

24:05 - MEGAN ANTONELLI

Yeah, the whole deskilling thing is just too big for my brain to handle because I've been de-skilled for sure.

24:13 - KRISTEN MILLER

You know, I mean, it's easy. It's hard not to, because there are so many things, right? Take my 1,000-word paragraph and turn it into 200 words, right? And that happens in a second, right?

24:25 - MEGAN ANTONELLI

Well, think about even just Google, where we don't have to remember stuff anymore because we can just Google it - that's been the last 15 years - and now, with this, I don't even have to look it up, and I don't actually have to formulate a sentence about it or think cohesively. But then it is a different set of skills, and it will be, as it was with having all the information available: the skill becomes choosing the right information. And now, for clinicians, physicians, and pretty much anyone whose job involves expressing information, the skill is ensuring that you've both taken the correct information and expressed it clearly to those patients - I think it's so important. You know, you talked a little bit about how you get the clinicians - I'm sorry, the patients - what do you tell them and what do you share with them? I think you've done a fair amount of work around informed consent and what's involved there. To get into some of the specifics, what do they need to understand - what should organizations disclose?

25:50 - KRISTEN MILLER

Yeah, so I think there are some of the basics: what is this thing? What's the purpose? What's the function? How is it being used? I do think patients wanna know the context of this, and so, in general, what are the benefits - and not just selfishly, what are the benefits for me? We're hearing patients say, this makes my provider's job easier - tell me that, right? I wanna know how this is impacting the provider, with pajama time being a great example. I would love for my physician to spend the evening with their kids, right? So that's all I needed to know - thank you, I'm OK with you using this for my care. But I do think it's a lot about the trust piece and how this is really impacting me as a patient: how does this change anything for me? For the limitations and risks - what is it doing, and what can it do? And maybe that question is actually, are you aware of the risks? I might not need to know them, but I wanna know that you know what they are. There's a lot of conversation about accountability and oversight, making sure that there are some checks and balances and some guardrails - and again, I might not need to know what they are, but I want to know that there's some sort of governance plan in place. I think the privacy, security, and ownership one is huge. And this is similar for any study: patients don't like being the data point; they don't want to be the thing that this stuff is tested on. And so, understanding that this is secure, right? The audio recording from this very sensitive primary care visit is not gonna end up on the internet somewhere. It's not gonna be sold to a health insurance company. Patients are really seeing their voice in the same way that they think about a blood sample or their genetics - and think about AI and genomic medicine and where that's gonna go; it's gonna be even more problematic. So I think these are really good questions, just at the heart of what is this. And then one that keeps coming up is, who can I talk to for more information? Which I think is great - appreciating, give me this nutrition label, give me this informed consent, that's fine, and I do wanna go through our visit today, but what if I have more questions? Is there an open forum? Who is responsible for this at MedStar, so that I can come speak to them? Which again speaks to this interest and intelligence: they wanna be involved, they wanna know more, and they wanna know how to get there.

28:17 - MEGAN ANTONELLI

Yeah, I mean, it's that balance, right? The transparency, the education, and, you know, informed consent, as we call it. You want to give them the tools to understand it and then to continue to use it - to engage them - and then there's that balance of ensuring safety and user experience, right? I mean, it comes down to that. So when you look back at the research and all the work you've done on this over the years, as you're talking to healthcare executives and hospitals who are looking to make these investments, what are some of your top recommendations for balancing that - ensuring safety and that the user experience is maintained or, at best, improved?

29:20 - KRISTEN MILLER

Yeah, and I think, again, we're talking about AI - it is different in many ways, but similar to any other intervention: if you're worried about the end user, that's where you start, right? We're thinking about cost and we're thinking about implementation - is this even doing the thing that the provider needs it to do? So start with the end users, not the algorithm. That's why human factors testing is so important from day one: understanding what clinicians are experiencing in these real settings. We do worry about edge cases, and we can simulate those. And then for patients, include them in the co-design too. For us, our mission is about safety. It's also about equitable care, and not just innovation for innovation's sake, and so we're thinking about human performance and patient trust. I mean, I will say so much of medicine and technology and innovation is incredible - the future is wild - but I think there are huge opportunities to think about tools that are really gonna work for patients who have maybe been overlooked for some time. For example, how many people actually read their discharge summaries? Not many, right? We see them crumpled up in the garbage cans of the hospital. Could we use AI to really know what you as a patient need? If I knew your reading level, if I knew your health literacy level, if I knew how you wanted to be communicated to - do you want maybe softer, more nuanced tones, or do you just want the facts? I think there are huge opportunities in the way that things like an LLM could customize and personalize and tailor care for patients, which could make some big differences - some inroads in places we haven't really been able to fix before. And that's just on the patient side; similarly for the providers, right? Can we tailor and customize these things? Could the note sound like that provider? Could it write like that provider? I think we're in the infancy of this, and 10 months from now we're gonna be having a different conversation about what this looks like. But it really starts with the users, and making sure that whatever we're developing is really meeting their needs, and not at the sacrifice of safety or transparency.

31:29 - MEGAN ANTONELLI

Yeah, no, absolutely. Well, you know, we always like to talk about what's good in healthcare - we have the segment Five Good Things in Healthcare. So I always ask this last question: what are you most excited about? I feel like you might have touched on a little bit of it right there, because the future is bright and it's exciting. But if you had to think big picture, what's the good thing about all of this - or, even more broadly, what's coming in healthcare that you think is really gonna have that positive impact?

31:59 - KRISTEN MILLER

Yeah, I think it probably is that personalization. I think for me, my good thing right now is doing this work in partnership with patients and providers. We've got a lot of exciting things in the mix. Can we develop feedback tools for providers about their performance? Can we customize information for patients? I think there's a lot of opportunity, but we're doing it with everybody at the table. These sessions that we're doing with patients are my favorite thing right now - to hear their stories, understand their experiences, and say, we're equal partners in this, right? We're gonna figure this out together, but we need to do that with just open, honest feedback and a starting point of knowing: what are you most worried about, and how can I help fix that? And so we're excited to build some new things and get them out and tested, starting with this language - how are we communicating the use of AI - and see what really resonates with patients and how we can improve it. And we plan to share that with anyone else who needs it; nobody needs to reinvent the wheel. So if we find an informed consent script that patients really love and trust, we'll share that for other folks to use as well. So a lot more on the short horizon, you know, within the next few months.

33:14 - MEGAN ANTONELLI

Yeah, no, I think that personalization piece is so key, because in our normal healthcare system, pre-AI, we just don't have time to do the personalization that's needed to drive real patient engagement, or even literacy, education, or changed behaviors. And so with the tools that we have available, and the access that they provide, it really does create that promise of what technology has always said it would deliver - it makes it possible, right? So I love that. Well, thank you so much, Kristen. I could talk to you all day, but I know you have a very busy schedule, so thanks so much for joining us. What's the best way for our audience to find your research and get in touch?

34:06 - KRISTEN MILLER

Yeah, I think for anything AI or human factors, it's through the MedStar Health National Center for Human Factors in Healthcare, and then on the patient side - for AI or really anything in healthcare - it's our patient partner Diagnostic Center of Excellence. So I'll share that contact information. I would love to talk to anyone about really anything in this space, and thank you for the opportunity.

34:28 - MEGAN ANTONELLI

Awesome. Well, thanks so much, Kristen. Thanks to our audience. Till next time, keep transforming healthcare by remembering that the most sophisticated AI succeeds only when it serves humans who use it, both clinicians delivering the care and patients receiving it.

34:42 - OUTRO

That was Megan Antonelli. Thank you for joining us on Digital Health Talks, where we explore the intersection of healthcare and technology with leaders who are transforming patient care. This episode was brought to you by our valued program partners: Automation Anywhere, revolutionizing healthcare workflows through intelligent automation; Neteera, advancing contactless vital signs monitoring; Elite Groups, delivering strategic healthcare IT solutions; and Cello, securing healthcare identity management and access governance. Your engagement helps drive the future of healthcare innovation. Subscribe to Digital Health Talks on your preferred podcast platform, share these insights with your network, and follow us on LinkedIn for exclusive content and updates. Ready to connect with healthcare technology leaders in person? Join us at the next HealthIMPACT event. Visit healthimpactforum.com for dates and registration. Until next time, this is Digital Health Talks, where changemakers come together to fix healthcare.