Digital Health Talks - Changemakers Focused on Fixing Healthcare

When to Trust the Machine: AI Decision-Making in Healthcare with Professor Vasant Dhar

Episode Notes

As AI systems increasingly influence clinical decisions—from risk stratification to treatment recommendations—healthcare leaders face a critical question: When can we safely rely on AI, and when must human judgment remain in the loop?

Professor Vasant Dhar, NYU Stern professor, veteran AI researcher, and author of the newly released Thinking With Machines: The Brave New World of AI, joins Digital Health Talks to deliver what healthcare executives urgently need: a practical framework for evaluating AI reliability, recognizing model blind spots, and designing guardrails that actually work.

With decades of experience bringing machine learning to high-stakes environments and over one million downloads of his Brave New World podcast, Professor Dhar offers rare clarity on the mounting tension between rapidly advancing AI capabilities and our ability to evaluate their trustworthiness. Healthcare CIOs, CMIOs, and technology leaders will walk away with actionable insights for governing AI deployment—not just soundbites.

Vasant Dhar, Author, Thinking With Machines: The Brave New World of AI

Megan Antonelli, Chief Executive Officer, HealthIMPACT

Episode Transcription

00:00:00 Intro: Welcome to Digital Health Talks. Each week we meet with healthcare leaders making an immeasurable difference in equity, access and quality. Hear about what tech is worth investing in and what isn't as we focus on the innovations that deliver. Join Megan Antonelli, Jenny Sharp and Shahid Shah for a weekly no-BS deep dive on what's really making an impact in healthcare.

00:00:29 Megan Antonelli: Hi everybody. Welcome to Digital Health Talks. This is Megan Antonelli, and today we are tackling one of the biggest questions facing healthcare technology leaders today: When can we trust AI with clinical decisions, and when is human judgment non-negotiable? My guest is Professor Vasant Dhar, a professor at the NYU Stern School of Business and Center for Data Science and one of the world's foremost authorities on prediction, data science, and trust in AI. His new book, Thinking With Machines: The Brave New World of AI, just released by Wiley, provides a practical framework for evaluating AI reliability, something healthcare executives desperately need as opaque models increasingly influence everything from risk scores to treatment pathways. Professor Dhar has been studying machine intelligence and model risk since the 1990s. His Brave New World podcast has surpassed one million downloads, and his work has appeared in The New York Times, Wall Street Journal, and MIT Tech Review. He's one of the rare experts who can explain, not just speculate on, how to decide when a model is trustworthy. Professor Dhar, welcome to Digital Health Talks. It's such an honor to have you here.

00:01:37 Vasant Dhar: Delighted to be on the show, Megan.

00:01:39 Megan Antonelli: You know, we have been talking with so many people about AI and its impact on healthcare, so to have someone who's been studying it as long as you have is just great. Tell us a little bit about what you've been seeing and what you've been working on. Everybody has been talking about AI for the last four years, but it's been around for quite a bit longer. So tell us about your work.

00:02:04 Vasant Dhar: It's been around for a long time. You know, Megan, the reason I got into AI was because of healthcare. This was in 1979. I was a doctoral student wondering what to do with my life, and one of my senior colleagues came to me and said, hey, there's this guy who has built the world's first medical diagnostic system that covers the whole field of internal medicine, the entire field of internal medicine, and I want to have him offer a course in AI. And I said, what is AI? And he said, well, it's about getting computers to be smart. So I said, that sounds good to me. So he and I, along with two other students, walked up to the top floor of the medical school at the University of Pittsburgh. There was a lab called the Decision Systems Lab, and the professor was on the phone, so we waited outside his office. It was a long room, and in the middle of it there was a big screen connected to a computer at Stanford. This was the days of timesharing; there were no PCs. And there was a physician with snow-white hair puffing a cigar, discussing a case with a system called Internist, and he was discussing it via his assistant because he couldn't type. In those days, no one could type. I was watching this, and he entered a bunch of symptoms about a patient, a real case, and Internist went off and started asking a bunch of questions. He answered some of them; to some of them he said, I don't know. And then at some point, Internist asked him a question about whether there was pain in the lower right quadrant or something like that. And Jack Myers, the physician, said, why are you asking me this question? And I'll never forget the response, because it changed my life. It said: because the evidence you've given me so far is consistent with the following hypotheses, and this question will help me discriminate between the top two. Right. And I was like, holy smoke, how is the computer doing this? Because my notion of computers was that they were calculating machines. But this interaction with this physician, in 1979, just blew my mind. And that's what I decided I wanted to do with the rest of my life: I wanted to build these magical machines. And what a long, strange trip it's been since then. I start my book with that very interaction between Jack Myers and Internist. And then later in the book, I have the same interaction with ChatGPT. I gave ChatGPT the exact same symptoms, and I ran it multiple times. And guess what? It came up with a virtually identical differential diagnosis for the case. So that's what's amazing now: fifty-some years later, we actually have Internist in the large language model.

00:04:54 Megan Antonelli: And it's in everybody's hands, right? How crazy. It's funny, it reminds me, we were just at the American Heart Association, and at one of the sessions they did a stump-the-doctors session where they actually had three different language models going and three physicians on the stage. And it was incredible to see. It was cardiology, and they were very difficult questions, some taken from advanced exams, some taken from real stumpers that even the doctors caring for those patients hadn't figured out. So it was amazing to see, one, the difference between them. But also, and this is what really struck me, where the physicians were inconclusive, they ordered more tests; they could get more information. Whereas the confidence with which the language models said yes or no was always very high, and they made the diagnosis and they set the treatment plan. And that's where the risk and the nuance come in, right? Because it's when you have that human element and a team approach that you should also be using the AI. One of the things we didn't do in the session was actually pair physicians with AI, and that's where the magic sauce should be, right? So it is interesting to think that fifty years have gone by and here we are, finally getting to really use these tools in a broad way.

00:06:31 Vasant Dhar: Yeah, exactly. You know, I gave a talk last year at the CDC which had a provocative title, something like, Will AI Make Human Doctors Obsolete? I posed it as a question and discussed all the pros and the cons, right? And I actually thought that the audience would be very much against it, that it would not replace physicians. And to my surprise, they actually felt that the machine would do a better job, and some of them even felt that the machine might be more empathetic, strangely enough. That is one of the complaints that Eric Topol points to in his book Deep Medicine. He has this word cloud which describes how patients see physicians, and it's not flattering: unsympathetic, hurried, all that kind of stuff. So I was surprised that people actually felt the answer might be yes, that maybe it'll go the same way as automobile diagnosis, right, where you plug your car into a system, all kinds of sensors take measurements, and then the mechanic does the fixing, whereas the machine does the diagnosis.

00:07:51 Megan Antonelli: Well, there's so much there. If we think about the last twenty to thirty years, we've gone from purchasing books in a bookstore to barely even going to bookstores, although I did go to one recently and it was lovely. And there are these elements of what we hold to be true that, certainly for the younger generation, are no longer the same. They don't perceive their relationship with physicians the same way my parents perceived their relationship with physicians, and here we are in the middle. But when you think about that, and about what the hesitations are, you mentioned that you were surprised. What are some of the other misconceptions that you think healthcare leaders in particular hold around AI?

00:08:45 Vasant Dhar: You know, Megan, it's interesting that you said the relationship with the physician, right? Because in the old days, we used to have a relationship with physicians. We saw them over and over again, and they got to know us. Whereas now we have no relationship, because healthcare has become so specialized that you just see a specialist for this, you see a specialist for that, and no one's really looking after the big picture. Everyone's looking at their own little slice of the body, and that's what they're concerned about. So in a way, we've lost that relationship we used to have. That's one thing Eric Topol also said on our podcast: we've lost the care part of healthcare, in a big way, even as machines have become more specialized and the equipment has gotten better and sensors have gotten better. The specialization has been necessary to a large degree, but one of the unfortunate side effects is that we've lost that relationship with healthcare providers. And that's unfortunate. I guess one of the things we can talk about is how we can get that back, maybe.

00:09:57 Megan Antonelli: Right. Well, it comes down to the system of healthcare having made it much more of a commodity and a machine. You think back to Little House on the Prairie, where the doc came into the house. But of course, on the other side of it, I was just listening to Scott Galloway talk about his healthcare experience, and he does have that relationship, because when you have money, lots of it, you can have your concierge medicine and a totally different experience. So there's a spectrum of that healthcare experience and the healthcare that people get. But when we think about the healthcare we can provide to everyone, which I firmly believe we should, AI does, I believe, provide the ability to fill in some of the gaps. If we get in the way by saying, well, people won't have the relationship with their doctors anymore, as you said, 90 or 99 percent of people don't have that relationship anymore. So where is it that we can both adopt and use AI but mitigate the risk? I guess that's the big question.

00:11:15 Vasant Dhar: Yeah. And one of the things I say in my book, and by the way, Scott wrote the foreword for my book, is that I'm actually very optimistic about healthcare, especially physical healthcare; mental health is a separate domain that we can talk about. But I'm optimistic because I think at the moment we're getting the worst of both worlds. We're getting an assembly-line kind of process. I went to the emergency room last year because of a gash in my knee, and I went through six different quote-unquote providers who asked me the same question, and I felt like I'd gone through an assembly line. So at the moment we have humans acting like robots in the healthcare system, just following a process, and by the time you actually see the doctor, you're exhausted and you want to get out of there. But I'm optimistic because I think that AI, generative AI and large language models, has the capability to look at the trail, the exhaust of healthcare, and put it together into usable databases. I'll give you an example. At my age, a lot of men develop prostate issues, high PSA levels, prostate-specific antigen. I've had elevated PSA levels for, what, five or six years now, and I've seen two eminent urologists, really experienced guys in their 50s and 60s who have seen a lot of cases, but they're baffled. After two biopsies and four MRIs, they still can't tell me: I've seen 13,000 cases like yours, with the exact same PSA trajectory, and here were the outcomes. And I'm asking myself, why isn't there that kind of database? The answer is that we haven't really been doing systematic scorekeeping. Physicians see so many patients, and until a few years ago, God knows what actually made it into a system. Now more and more makes it into a system, notes, test results, all that kind of stuff, but it still isn't integrated; none of it makes it into a database. So physicians still function intuitively, and they do the best they can, because they're seeing so many people they can barely get done by the end of the day, and then there's record keeping and all the rest. They're not in a position to say, let's put together a database; no one's got the time to make that effort. And that's an area where I see tremendous potential with AI, where you can turn it loose on this collection of healthcare records, where the data is amorphous and all over the place, and it can actually put it together so that a physician can then say, I've seen so many cases like yours, and here are the outcomes. That would be more evidence-based. At the moment we don't have that. We've got process, we've got assembly line, but the data isn't being recorded systematically and no one's making sense of it. But it will happen. That's something I'm actually quite optimistic about.

00:14:41 Megan Antonelli: Yeah, and we've seen progress, right? In the last twenty years we've gone from entirely paper records, lots of faxing, nothing connected, nothing in any kind of data warehouse, to EHRs being implemented more widely. Now, the problem, of course, is that just because it's in a system doesn't mean that system is connected to everything else. So that ability to connect is here, and we're seeing it more. But then there are the issues of privacy, of who owns this, of what we can do with it. And I hear a lot about digital twins, creating the data to then analyze, de-identified and all of that. But what you said about diagnosis seemed like it would be the first frontier, right? It would be the easiest: research and diagnosis. And yet we're just at the tip of the iceberg in really being able to do that. Why do you think that is, in terms of that slow adoption, that hesitancy to really get there?

00:16:02 Vasant Dhar: Yeah. It really has been because we haven't done scorekeeping in a proper, consistent way. That will happen. Now, the interesting thing is, as I said, I gave the same set of symptoms that Internist had looked at to ChatGPT, and it did a great job of asking me a bunch of questions, just like Internist had, and then coming up with a differential diagnosis. So the knowledge required to do diagnosis is largely there. These LLMs have snarfed up all of the medical journals; they've snarfed up a lot of knowledge, and they're good as long as you can describe the case to them accurately. But there's the rub, right? In order to describe the case properly, you've got to have proper data, and we haven't gotten that yet. When we do, I think we're going to see a tremendous flourishing of AI in the diagnostic space, because the knowledge exists. It's just that we're not able to collect the data, make sense of it, and feed it to these medical diagnostic systems.

00:17:23 Megan Antonelli: Right. In your book, you talk a little bit about evaluating AI reliability, and there's a sort of framework for that. I think that's one of the biggest discussions now, right? Nobody in healthcare and health systems wants to deploy anything without knowing that it is current and reliable. So are there ways that health systems can assess whether those systems are ready for deployment?

00:17:51 Vasant Dhar: Yeah, that's a great question, and it feeds into this notion of when we should trust AI, when we should trust these machines with decisions. This is something I started thinking about ten or fifteen years ago, because I created a machine-learning-based hedge fund that trades automatically every day. It gets data and it trades, and I don't interfere with its decision making. Now, ironically, it's wrong almost half the time. My win rate is barely 50 to 54 percent, right?

00:18:29 Megan Antonelli: Otherwise you wouldn't be here. You'd be out on some yacht somewhere.

00:18:34 Vasant Dhar: No, no, no, what I was going to say is that it actually does quite well. It's hard to do better than 54 percent accuracy in predicting markets, because it's a very noisy and unpredictable problem, so I've actually done quite well with that rate of accuracy. But to me the question was this: why am I willing to trust an algorithm that's wrong almost half the time, whereas I don't trust AI yet with healthcare, and I don't trust a driverless car yet with driving me on the highway, even though it rarely makes any mistakes? When I thought about it, I realized that trust really depends on two things. It depends on how often the decision maker, the algorithm, will be wrong, but it also depends on the consequences of being wrong. So in finance, the consequences are not major. You lose a little bit of money on a trade, but if you've got a hundred positions, losing is part of the game. You should expect to lose money every once in a while, and you should expect to lose a fair amount of money every once in a while. That's baked into your expectations. When it comes to healthcare, if I've got a potentially aggressive cancer, do I really want to trust the algorithm? The consequences of error are really high. What if the machine is wrong? I want a physician, even though the physician may be wrong. I want that degree of comfort that someone with deep medical knowledge has looked at the problem, has analyzed the problem, and can hopefully explain to me what's going on in a way that perhaps the machine can't do right now. So for something like that, I'd be hesitant to trust an algorithm. On the other hand, if it's a low-cost-of-error situation like diet or lifestyle, there I would probably be much more willing to trust a machine with giving me advice, because chances are it's good, and even if it's wrong about some things, the consequence of error is not death, like it would be with an aggressive cancer. So it depends on the cost of error. Driverless cars rarely make mistakes, but the cost of error is very high, so we're hesitant to trust them. Urban taxis we're beginning to trust. I took a Waymo last year in San Francisco; it felt remarkably safe. But if I'm going at 20 or 30 miles an hour, that's different from going on the highway at 70 miles an hour. Speed kills. So trust really depends on the consequences of mistakes. And in my mind, people will trust machines with routine, low-cost-of-error kinds of decisions, things like wearables that monitor your body; that kind of stuff you can trust. And I think that should also take some of the pressure off the healthcare system, because the problem at the moment is that physicians are overburdened with routine cases as well, which leaves them less time to focus on the really serious cases. So in my mind, the role of AI will be to automate the low-cost-of-error cases, so that people can talk to a computer and get reasonable advice, but if it's really serious, they go see a doctor. I'll give you another example, Megan.
Three weeks ago I was speaking at an event, my book launch event near Columbia University. I went up, and I wasn't feeling great; it was a bit of a plod getting up there. Then I attended a reception, came back home, and I'm in bed and I told my wife, I'm not feeling any better, and I have a pain in my right rib cage. I thought maybe I was misaligned and needed to see my chiropractor. So I fed ChatGPT the symptoms, and what it came back with was potentially worrisome. I went to the ER right away, and it turned out I had pneumonia. And if I had not consulted ChatGPT, and this was the weekend, it was Saturday, I was thinking, maybe I'll see my doctor on Monday. The fact that I went in early and they put me on really strong antibiotics meant I recovered within a week, whereas it can take months to recover from pneumonia. So here was a situation where I actually used the AI. It was, to me, a low-cost-of-error situation: if it sent me to the ER and there was nothing to worry about, no harm done. I probably would have done nothing, but the very fact that it said you could have pneumonia, you could have a punctured lung, you could have all kinds of things that weren't giving me any degree of comfort, I decided to go see the physician.
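
[Editor's note: Professor Dhar's trust calculus, that trust depends on both how often a system errs and how costly an error is, can be sketched in a few lines of code. This is a minimal illustration with made-up numbers and tolerances, not anything taken from the book.]

```python
# Minimal sketch of the trust calculus described above: delegating a
# decision to a machine depends on error frequency AND the cost of error.
# All rates, costs, and tolerances below are hypothetical.

def expected_cost(error_rate: float, cost_per_error: float) -> float:
    """Expected loss per decision: P(error) times the cost when wrong."""
    return error_rate * cost_per_error

scenarios = {
    # name: (how often it's wrong, how bad a mistake is, what we tolerate)
    "trading":          (0.46, 1.0,      1.0),   # wrong ~half the time, cheap errors
    "diet advice":      (0.20, 2.0,      1.0),   # often imperfect, low stakes
    "cancer diagnosis": (0.02, 10_000.0, 50.0),  # rarely wrong, catastrophic errors
}

for name, (err, cost, tolerance) in scenarios.items():
    risk = expected_cost(err, cost)
    verdict = "machine alone" if risk <= tolerance else "human in the loop"
    print(f"{name:>16}: expected cost {risk:8.2f} -> {verdict}")
```

With these illustrative numbers, trading and diet advice fall below their tolerance and can be delegated, while the cancer scenario does not, mirroring the distinction drawn in the conversation.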

00:23:41 Megan Antonelli: Yeah. And to that end, there's no question that, in some ways, it's fine as long as it's providing more care or additive opinions, and not taking away that physician or nurse involvement, right? Allowing them not to do the things they don't need to do, but to practice where their expertise is. Much like the example I gave about being on stage: the clinicians by themselves and the language models by themselves are each capable, but they're more powerful together. And in the American healthcare system, which has so many gaps in access and quality, people can use it to get to that care faster. But yet there's some hesitancy. What we hear from a lot of the hospitals now is about governance, right? As you said, in certain industries, go ahead, use it, and that's okay, because if you're writing a blog post or editing video, there is no life-or-death consequence. Whereas if a physician says, okay, I'm just going to set up this chatbot and let it start diagnosing my patients, you have a risk. So how can health systems adopt, but also make sure it's not being used in this rogue way? Because as you try to put down the limitations, you've got people who are going to push those limits.

00:25:30 Vasant Dhar: You know, in healthcare, depending on what the problem is, one has to balance the cost of a false negative versus false positives, right? And that's something physicians calibrate themselves on: I don't want to miss something really serious. That's why, with my high PSA levels, I've had two biopsies, because my physician doesn't want to miss cancer. So they're fixated on avoiding those false negatives, even at the expense of more false positives: the patient has some degree of discomfort, you're worried that maybe something's wrong with you, or you have a biopsy that's uncomfortable. Physicians make that judgment about balancing those two kinds of costs. And at the moment, unfortunately, the reason you get so many false positives is for the reasons I was talking about earlier: there isn't sufficient data to give them the confidence that, yes, there have been 13,000 cases like this, and in only five did bad outcomes occur. They don't have that kind of thing, so at the moment people err toward more false positives than there should be. Same thing in, say, pregnancy. There's a case I describe in my book where the famous data scientist Michael Jordan's wife was pregnant; they went for a screening, and the machine showed some spots that were worrisome. Normally a physician would have said, well, let's do an amniocentesis. But that's not without risk either; there's a meaningful number of cases where it goes bad. But Michael Jordan, who happens to be a data scientist, dug in further, and he realized that the machine had been calibrated using data from a UK study. When he pushed them, they said, yeah, it's been giving a large number of false positives recently. So he decided not to do the amniocentesis, and they thankfully had a healthy baby. That's a case where he dug deeper into what the machine was saying and discussed it with the physician, and they jointly decided that it was probably the machine that was miscalibrated, and that the amniocentesis wasn't necessary. But these are really subtle and complex cases, where the physician and the patient sometimes have to make a judgment about the possibility that the machine is wrong.
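
[Editor's note: the false-negative versus false-positive balance described here is, in machine-learning terms, cost-sensitive thresholding. A hedged sketch with entirely synthetic data and illustrative costs, not clinical values:]

```python
# Sketch of the trade-off above: when missing disease (false negative)
# costs far more than an unnecessary follow-up (false positive), the
# cost-minimizing threshold on a risk score shifts downward, so the
# system flags liberally, just as the physicians described here do.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical risk scores for 1,000 patients, ~5% truly positive.
labels = rng.random(1000) < 0.05
scores = np.clip(labels * 0.4 + rng.random(1000) * 0.6, 0, 1)

COST_FN = 100.0  # cost of missing real disease (false negative)
COST_FP = 1.0    # cost of an unnecessary follow-up (false positive)

def total_cost(threshold: float) -> float:
    flagged = scores >= threshold
    fn = np.sum(labels & ~flagged)   # sick patients we failed to flag
    fp = np.sum(~labels & flagged)   # healthy patients we flagged
    return fn * COST_FN + fp * COST_FP

thresholds = np.linspace(0, 1, 101)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}")
# With COST_FN >> COST_FP the chosen threshold is low: many false
# positives are accepted to avoid missing disease.
```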

00:28:41 Megan Antonelli: Yeah. And thinking about all of this, there was a time when it was, oh, machine learning, predictive modeling; now everything is lumped into AI. But large language models versus other kinds of AI are different; there are different elements to this. And it gets into this question of reasoning versus pattern recognition. You talk a little bit about that, and maybe you could explain to our listeners what that difference is and why it matters, particularly in clinical applications.

00:29:18 Vasant Dhar: Yeah. I'll try to summarize this in two minutes. When I got into AI in the late seventies, the system I described, Internist, was called an expert system. There was a period of ten years over which the computer scientist who built the system and Jack Myers, who was the expert, interacted and built this knowledge base. It was all handcrafted, a huge effort. And that ran into a wall, because we know more than we can articulate, and it's often very difficult to separate expertise from common sense. One of the biggest assumptions in AI has been: common sense is too hard, let's focus on expertise. Machine learning shifted the emphasis of AI toward prediction: let's get lots of data and predict what's going on. Deep learning was more of the same, but it managed to get data directly from images, sounds, and so on, so you didn't have to describe an image to the computer; you could feed it the image. That was the magic of deep learning. The current paradigm, which I call general intelligence, is one where the machine knows something about everything, and the emphasis has gone back from just prediction to reasoning, thinking, planning, understanding, which used to be the vocabulary of AI in the seventies. We got away from it with machine learning because we became fixated on prediction. But now, with the emergence of these large language models and general intelligence, we've gotten more ambitious about AI: it's not just about prediction, it's about reasoning and thinking and explaining what's going on. That's what we now expect from these large language models. So when a physician uses one, the large language model, quote-unquote, understands the meaning of things, the meaning of symptoms, and can interact with the physician in their language. That's what's new and exciting; it wasn't the case prior to ChatGPT. So that's a tremendously useful functionality that modern AI has brought to the table: the ability to integrate good old prediction with actually reasoning more deeply about a case and thinking about it.

00:31:39 Megan Antonelli: But in some ways it is still limited by what we put in. As you said, that first model with Jack Myers, it was one physician, or one collective of physicians, training it on all the information he had access to. And even now, as these models are learning, building, growing, they're getting the information of all we've studied. Yet we know we haven't studied everything; we've had our own biases and our own focus, our own lenses or blinders on, with respect to what we've studied. So this whole study of everything would still be, to some degree, a snapshot of what we've put in there. And in some cases these models are quite opaque, right? It isn't like I can just go to Jack and say, hey, how did you train this? And I guess that gets to this: as hospitals and health systems, and even organizations outside of healthcare, are making decisions about which models to use, do you just go to the one that is the biggest, or is it better to have more highly specialized ones that you have more control over? Is that something you're seeing people make decisions about?

00:32:56 Vasant Dhar: Yeah. It'll depend on the problem. Because you're absolutely right, the question is how these large language models were trained, what data they looked at, and no one's entirely sure what that data was, except that they seem to do remarkable things. But do you expect them to be right and truthful all the time? No. Modern AI is not designed to be truthful; it's designed to make sense, and so truth has become a bit of a casualty. But to your question, it'll really depend on the kind of problem you're looking at. For routine kinds of cases, I think people will tend to trust the AI: chances are it's seen lots of the relevant data, the consequences of error are not that high, and it saves you a lot of time. Trust it. On the other hand, if it's something like a cancer of some sort, an unusual situation, then you really have to doubt the machine. Could it possibly have seen lots of cases like that? Probably not. And that's where human judgment comes in. And to the extent that we can get language models that are fine-tuned to certain diseases, those will probably have to be developed in-house. That is, you can't expect an OpenAI to provide a medical solution out of the box that covers the entire field of medicine equally well; that's probably a bit of a stretch. Chances are that hospitals and providers will have to take some of these models and fine-tune them for the sorts of problems they're looking at. So if they're looking at, say, Parkinson's or some sort of aggressive cancer, chances are they'll have to fine-tune these LLMs with that data to get better results from them. But we're a long way off from that. So in the interim, we're going to see a situation where these tools will be useful, but for unusual problems you'd want the physician to weigh in and really make a judgment about whether the machine is on the right track. At the moment, there is still no substitute for human experience when it comes to complex cases. We wouldn't trust the machine for that, because the cost of error is too high. And the other reason, as you point out, is that we don't know what the training data was. It could have had biases in it; it could be incomplete in many ways.

00:36:02 Megan Antonelli: And that transparency, despite Sam Altman being named the most important person in healthcare by Modern Healthcare, or whatever he was named recently, they aren't transparent. That model certainly isn't transparent about what has trained it, and that matters as patients make decisions about what we can and can't use, and about what physicians should use. What do you think about accountability? We've heard a lot of discussion about what health systems can or should be doing to either protect themselves or structure accountability if AI is used and then makes an incorrect decision.

00:36:49 Vasant Dhar: You know, it's very difficult to change human behavior, right? We are trained in a certain way; we follow processes. So I suspect that experienced physicians who are in their 50s and 60s are probably not going to change that much. These changes happen generationally. Younger physicians, who are more into using evidence and creating databases, I suspect what they will start doing, and what they should really start doing, is keeping a record of when they trusted the machine versus when they didn't trust the machine. I mean literally doing scorekeeping, and learning from it, because to me, that's the only way to really learn how well this black box is doing: through trial and error. And if many people start doing this and sharing their experiences, we could get to some really useful databases that tell us the areas in which these machines do really well versus those where human judgment is absolutely essential. We're not there yet, but I can see that we will go down that road. It's a question of when, not if, and it's also a question of people who are trained to think in this way being more willing to do it.
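
[Editor's note: the scorekeeping Professor Dhar recommends could be as simple as logging each decision to accept or override the model, along with the eventual outcome, so that trust can be audited per case type. A minimal sketch; the fields and case categories here are invented for illustration:]

```python
# Minimal sketch of clinician "scorekeeping": log whether the machine's
# advice was followed and how the case turned out, then summarize good
# outcomes when the machine was trusted versus overridden, by case type.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class TrustRecord:
    case_type: str          # e.g. "routine-derm", "aggressive-cancer" (hypothetical)
    machine_advice: str
    clinician_followed: bool
    outcome_good: bool

@dataclass
class ScoreKeeper:
    records: list = field(default_factory=list)

    def log(self, rec: TrustRecord) -> None:
        self.records.append(rec)

    def summary(self) -> dict:
        """Fraction of good outcomes when the machine was vs. wasn't trusted."""
        stats = defaultdict(lambda: {"trusted": [], "overridden": []})
        for r in self.records:
            key = "trusted" if r.clinician_followed else "overridden"
            stats[r.case_type][key].append(r.outcome_good)
        return {ct: {k: (sum(v) / len(v) if v else None)
                     for k, v in buckets.items()}
                for ct, buckets in stats.items()}

keeper = ScoreKeeper()
keeper.log(TrustRecord("routine-derm", "benign nevus", True, True))
keeper.log(TrustRecord("aggressive-cancer", "stage I", False, True))
print(keeper.summary())
```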

00:38:32 Megan Antonelli: And as we transition, there will be some in the middle, some early adopters; I think we see it now. When you think about it, in healthcare or outside of healthcare, with all the different industries and sectors you're looking at in terms of their appetite to adopt and move forward, and your book is called Thinking With Machines, so there are so many areas where that's applicable: where is the area of highest risk, the one you find most scary, versus the greatest promise?

00:39:09 Vasant Dhar: Well, my take on this is that the high-risk cases are, like I said, things like aggressive cancers. You probably shouldn't have a machine making those kinds of decisions. Now, having said that, I think one of the really promising areas of health is in integrating different modalities, such as images or sounds or smells. Smell is actually a really interesting new frontier in AI. There's this woman called Joy Milne who can smell Parkinson's with 100 percent accuracy. They've done blind tests on her, and she can smell it; she says it smells like wet cardboard or something like that. She only realized it because her husband had Parkinson's, and then 12 years later they went to a support group of Parkinson's patients, and she smelled it on everyone who walked into the room. That's when she realized she could smell it, and there were no biomarkers that existed. So to me, these are really interesting frontiers: just like you plug the car into a computer, we plug ourselves into a machine, and it smells us, it observes us, it does ultrasounds and things like that. That, to me, is the frontier of medical research: being able to integrate these kinds of sensors. But to come back to your question about high-risk and low-risk cases: like I said, lifestyle, diet, those kinds of things are low-risk cases. They're no-brainers, great use cases for AI. And as we begin to collect more data and do better record keeping, I think we will begin to trust AI more in the higher-difficulty cases as well. Because if you think about it, an experienced physician sees maybe a thousand, a few thousand cases in their lifetime, ten thousand cases if you're lucky. A machine will see millions, billions of cases. So in terms of the scope of data it is seeing or will see, the depth and the breadth of cases, to me the writing is on the wall: the machine will become much better at diagnosis, and eventually it will become much better even at the high-risk cases, where the physician then becomes a sounding board, and the physician's job becomes more like, okay, how do we fix this problem now? Similar to a mechanic's role: how do we fix the car? So it's the therapy part and the care part that I think will become more front and center for physicians, as it should be.

00:42:27 Megan Antonelli: Right. Well, to go back to your talk at the CDC, on whether physicians will be replaced by AI: I think we hear it in every sector. It's not will AI replace the physician; it's that the physician will be replaced by physicians using AI, right? Using the tools and amplifying themselves. And the promise and the hope is, in fact, that the more complex cases, the harder-to-figure-out things that we do miss, are exactly where we'll be able to utilize it. Right now that trust isn't quite there, but if we build it right, if we have the transparency, what I'm hearing is that we'll get to the place where we trust it enough. Are there things you see us doing, in adoption or even in our dialogue about it, where you think we're going in the wrong direction? Or are you generally positive about how things are moving forward, having seen this over the last fifty years and then this sprint in the last five?

00:43:41 Vasant Dhar: So, Megan, there's a word that you used that I want to bring some attention to, and that's "amplified." That's a key word, because I see this even in my students. One of the themes in my book is that we're seeing an impending bifurcation of humanity, where the smart get a lot smarter, they are amplified, whereas people who use the machine as a crutch will tend to fall by the wayside. And the same thing applies to healthcare. In order to use a tool and get more productive, you first need to be trained and capable of understanding. Once you understand something, then you can use a tool to make you more productive. If you don't understand something, the tool isn't going to make you more productive; it's just going to tell you something, and you can take it or leave it. So if you're in some rural area and you're not trained, and there's a shortage of physicians, and you have the AI, I guess you're just going to use it, because you're not capable of critiquing it. On the other hand, physicians who are trained and curious, who have that desire to understand what the machine is telling them, will see their skills amplified; they will become better physicians. So I'm optimistic that physicians who, for the most part, are well trained and curious and adopt the evidence-based view of medicine will become super docs. They'll become really good, because now they have this oracle at their fingertips. They're trained; they can use their knowledge and judgment to critique the machine, get confirmation when it's right, or doubt it when it's wrong. That's a tremendously valuable human skill, and it's not going to go away.

00:45:46 Megan Antonelli: No, absolutely. And that's where the optimism is for me as well, in terms of the gaps, and, as you said, the amplification of what is good and right and intelligent. Unfortunately, there's also the other side of it, those who use it as a crutch, who maybe don't have the foundation. And that's when I think about this younger generation: where does the future of work go? Will we really be replacing much of what's going on in the workplace with AI, and what does that leave for those in the workplace? A lot of that has yet to be answered. Is that an area you talk about in the book as well?

00:46:39 Vasant Dhar: Yeah, and it builds on what I was saying earlier. To me, it's not clear that AI is going to just replace human work. It'll replace certain kinds of work, but technology has always done that. We saw that in the eighties with automation, and Detroit went through some really heavy pain as things were being retooled. So you'll always have some degree of replacement. But to me, the more likely outcome is that the expectations we have of humans will go up. Humans become much more productive, and we expect more from them. That's always been something technology has nudged us toward: humans just need to up their game. As the tools get better, it's not sufficient to just push a button and say, here's the answer. As the tools get better, humans need to get better; they need to upskill themselves. And I'm seeing this in other areas as well, by the way, in finance too. An analyst might generate one report a month, but if he or she has a tool that can generate ten a day, then the expectation of that analyst goes up. It's the same with physicians, and I see the same in almost every area of human activity: your expectations of people just go up, because the tools they have at their fingertips are so much better. So I don't necessarily see this as a widespread replacement of humans. Yes, it'll replace humans who aren't very good, and they need to be replaced; that's probably a good thing. But people who are qualified and have that curiosity, I think it'll just amplify them and make them so much better at what they do.

00:48:27 Megan Antonelli: And I think we're seeing a little of that now. It might have actually been Scott Galloway again who was talking about it in terms of the broader employment numbers, where we're seeing large-scale layoffs at big companies, yet productivity is going up. So employment is going down, but productivity is going up. And in healthcare, where we've been talking about the physician shortage for many, many years, and the nursing shortage that has been top of mind for almost a decade now, I think that's to some degree one of the reasons there's been this quick adoption of some of these tools: they've been able to alleviate some of that, automate work that can be automated, and free up hours. But it's an interesting thing. And you mentioned upskilling and how that happens. Academia and education systems often don't move fast enough to get graduating classes out with that understanding. Perhaps now, with this much broader adoption, that's changing. But as a professor, as someone who's seeing the kids come up, what are your thoughts on ensuring that the next generations are ready for this?

00:49:57 Vasant Dhar: You know, that's a great question. As educators, we're groping our way through this phenomenon. The initial reaction of most educators was, oh, how are we going to assess people? They'll cheat with ChatGPT. So there was a sort of defensive reaction. But once we got past that, what we realized is that resistance is futile. This is an amazing tool, and people should use it. So the challenge really is on us as educators to change the way we teach. And to be honest, I teach three-hour classes, and now the first hour of class is just free-form discussion, because to me that is one of the best ways to learn: through discussion, through Q&A, as opposed to here are the facts, learn them. So it's changed education; it's changed the way we teach. Sometimes when I'm asked a question, my response is: you know what, ChatGPT will do a much better job of answering that question than I can, so consult it. But here's the way I would look at it. For us as educators, it's also upped the game. It's made us up our game and forced us to think about how we can teach better, given this amazing tool that people have at their fingertips. What are we now adding to the education beyond what they can get just talking to ChatGPT? That's a great question. By the way, the other thing I want to come back to, since you mentioned Scott and healthcare, is that one of the things he points to is that our costs in the US are double what they are in other developed countries for the same quality of healthcare. We spend something like $13,500 per capita on healthcare, whereas other developed countries spend half of that. And if we get to that same level, that's something like $2 trillion in savings. That's the potential benefit we're looking at from better technology, better-trained physicians, better processes. Because at the moment, like I said earlier, we've got the worst of both worlds.
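
[Editor's note: the savings figure is easy to sanity-check with round numbers. A back-of-the-envelope sketch, assuming roughly $13,500 per-capita US spending, half that in peer countries, and a US population near 335 million; these are approximations, not official statistics:]

```python
# Back-of-the-envelope check of the savings figure mentioned above.
US_PER_CAPITA = 13_500                 # approx. US health spending per person, USD
PEER_PER_CAPITA = US_PER_CAPITA / 2    # peer countries spend roughly half
US_POPULATION = 335_000_000            # approximate US population

savings = (US_PER_CAPITA - PEER_PER_CAPITA) * US_POPULATION
print(f"potential savings: ${savings / 1e12:.1f} trillion per year")
# -> about $2.3 trillion, consistent with the "two trillion" ballpark.
```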

00:52:16 Megan Antonelli: Yeah, he does talk about that. And within healthcare, sometimes we get in our own way of adopting and taking advantage of the technology that's there. What has been exciting over the last three years, since ChatGPT became accessible and brought AI into everyone's dialogue, is that the fear element, while still there... I'm still afraid of Waymos, by the way; whether I'm in my car or in one of them, I find the experience frightening. But you lean into that with your book, right? The subtitle is The Brave New World of AI. And there's that element of what we need to do to make sure it's amplifying the good, so that this ends up being a good story with a good ending rather than something we should be afraid of. I could talk about this all day, it's fascinating, but leave us with your thoughts on what we need to do to ensure that we're amplifying the good, and that the singularity of what it might become is a positive.

00:53:42 Vasant Dhar: To me, the lowest-hanging fruit really is in using the AI to make sense of all of this data that we generate every day, because at the moment a lot of it is just falling by the wayside, not available for good decision making. So to me, that's the lowest-hanging fruit, and that's something we need to fix. And it is fixable. If we can put into place processes that start making sense of all the data, that's going to be a huge step forward.

00:54:23 Megan Antonelli: Right, that is huge. I just saw some dancing robots, and you think about the human element of this, but what it comes down to is the data. This is just a new way of channeling the power of that data, to make it actionable knowledge. And when there is so much data, particularly in healthcare, whether as a physician, a nurse, or a patient, you're going through so much of it that this is a tool to help you navigate that.

00:55:01 Vasant Dhar: Yeah. And ironically, one of the things I muse about in the book is that even as healthcare has become so specialized, no one's keeping track of the overall picture, Scott's case notwithstanding, because he's got concierge service and is privileged. But for the rest of us, maybe the AI will actually become that generalist, the one that takes in the big picture, integrates all the data from our wearables, from our lab results, from our patient visits and all of that, and is actually capable of giving us good advice. I mean, how cool would that be?

00:55:41 Megan Antonelli: Right, and exactly what folks need. And that's where the relationship part of it comes in. I certainly have many health conversations with ChatGPT about myself and my kids, and I'm struck by the fact that it remembers things. I can talk to my physician and she doesn't always remember; I have to remind her a lot of the time, whereas ChatGPT doesn't need reminding. So it is an amazing tool, and it's about navigating trust with the knowledge that it has, but also with understanding. It's an exciting time. I really appreciate you coming on with us. I'm excited to read your book; I'm sorry I didn't get to it prior to this interview, but it will be on my list for over the holidays, and I'm excited to sit down with it. I hope you'll join us again, maybe in person at HealthIMPACT, where you can talk to folks about all of this.

00:56:45 Vasant Dhar: I'd love to, Megan. I've really enjoyed the conversation, and I hope you enjoy the book. I've written it for everyone: parents, students, grandmas, policymakers, my colleagues. It's written for everyone, and I've written it that way. So I hope your listeners read it and enjoy it as well.

00:57:05 Outro: Thank you for joining us on Digital Health Talks, where we explore the intersection of healthcare and technology with leaders who are transforming patient care. This episode was brought to you by our valued program partners: Automation Anywhere, revolutionizing healthcare workflows through intelligent automation; Natera, advancing contactless vital signs monitoring; Elite Groups, delivering strategic healthcare IT solutions; and SailPoint, securing healthcare identity management and access governance. Your engagement helps drive the future of healthcare innovation. Subscribe to Digital Health Talks on your preferred podcast platform, share these insights with your network, and follow us on LinkedIn for exclusive content and updates. Ready to connect with healthcare technology leaders in person? Join us at the next HealthIMPACT event. Visit HealthIMPACT Forum for dates and registration. Until next time, this is Digital Health Talks, where changemakers come together to fix healthcare.