Dr. Lloyd B. Minor, dean of the Stanford University School of Medicine, is a leading advocate for using technology to make health care more precise. Under Minor's tenure, Stanford Medicine has developed a focus on personalized, preventive health care and launched a new initiative on responsible artificial intelligence in health care. He also hosts a podcast, The Minor Consult. He spoke with POLITICO's Digital Future Daily about the rise of predictive medicine and how AI in health care is both overhyped and underhyped. This interview has been edited for length and clarity.

What's one underrated big idea?

The idea of prediction, prevention and early detection. In the United States, we have an amazing sick care system. We've pioneered major advances in transplantation surgery and ultra-complex procedures for advanced heart disease and advanced cancer. But we don't do a very good job of predicting and preventing disease, or of diagnosing it early. That's a real opportunity. It should be possible for each of us to have a much more accurate profile of our propensity for disease and to know what type of screening we need to undergo based on that profile.

What's a technology you think is overhyped?

The application of AI in medicine is both overhyped in terms of some of its immediate effects and underhyped in terms of its long-term effects. In the longer term, I think generative AI models are going to become more and more accurate. Already today, they are helping physicians supplement their medical knowledge: knowledge of complex diseases that occur rarely, that even the experts in a narrow field don't carry around in their heads. An example of overhyping would be saying that AI is going to eliminate entire fields of medicine or radically change the way health care is practiced in a short period of time.

What could government be doing that it isn't?
Polls have shown that people today, understandably, are deeply skeptical about applying AI to health and health care delivery because of its potential to do real harm. But the polls also indicate that we need much more of a dialogue with the public about the status of AI today, particularly generative AI. Where is it going in the future? How do we develop responsible approaches to oversight, regulation and compliance? Government plays a really important role as a convener of these dialogues.