William Stead, AB’70, MD’74, HS’73-’77, is the McKesson Foundation Professor of Biomedical Informatics and Medicine at Vanderbilt University and is one of the founders of the field of biomedical informatics. In the 1970s, first as a medical student and then while a nephrology fellow and member of the faculty at Duke, Stead worked with Ed Hammond, BSEE’57, PhD’67, director of the Duke Center for Health Informatics, and others to build The Medical Record, one of the first practical electronic medical record systems.
What has surprised you about how information systems have evolved?
What I have been and remain surprised by is the mismatch between the rapid advance of technology and the slow pace at which we’ve learned how to change the way we work to get the best fit between our people and their roles, the process of work, and the technology.
We had exponential growth in the power of the computer, along with the complete convergence of computing, communication, and media into the single medium that is today’s networked mobile IT infrastructure. This culminated in the years from 2000 to 2010, yet it took the pandemic to get us to figure out how to use telemedicine at anything close to scale.
Now we’re beginning to figure out, “When do we need a face-to-face visit? When do we need a video visit? When can we get by with a text message through the portal?” We’re not yet figuring that out systematically. So, the technology is way ahead of the people. And that’s been true for a long time.
What are the most significant ways information systems have changed biomedical research and patient care?
The biggest changes have been in research. There’s no way you could have the kind of collaborative research resources that we take for granted now, such as PubMed Central, GenBank, and ClinicalTrials.gov, without information systems. You couldn’t do them by paper. Computational pipelines, structural biology, how we visualize protein folding — you can’t do that any way but with a computer. It literally is impossible. Whole genome sequencing, machine learning, EHR-linked biobanks: those are all natively IT-enabled things. They basically shifted us to large-scale collaborative team science, working at multiple scales of biology.
In the clinical arena, things like order management, compliance, and billing are done fairly well. That said, the way those systems have been implemented, they take away cognitive attention from the clinical team, instead of augmenting the cognitive power of the clinical team. So, we have a real challenge because the electronic health records are essential, but at the moment, they’re highly distracting to the clinical team.
How do you envision artificial intelligence improving medical education?
People are saying, “Well, how are we going to give tests? Students may use AI to answer questions on the test.” Instead of trying to figure out how we protect the testing environment from the AI, I would think about how to design the test so that we know whether the doctor knows how to use their technology to get at the right answer. We’re going to have to design whole new tests. It’s not a matter of trying to figure out how to keep the technology out. There’s no point in doing that. Why do we want to train somebody to work one way to take a test and work another to take care of the patient?
What are the biggest potential benefits and potential risks of AI and machine learning?
I think the biggest benefits are clearly the ability to reduce or eliminate administrative tasks and the ability to generate hypotheses for experimental validation. Those are things that will be dramatically helpful.
Right now, we have human clinicians who lack the support they need to know whether they’re making the right diagnoses. There’s no good feedback system.
I imagine a world where we make it easy for the clinical team to note when they make a diagnostic decision, to note over time if and how that changes, and to get feedback about what happens downstream. Right now, that doesn’t happen. If you had that, then introducing AI into practice would be simple, because AI would simply be another member of the team. That’s what its actual role is going to be: to augment human intelligence.
The challenge is to figure out how to put in place the supports the clinical team needs to learn and improve as they practice and to incorporate the technology as a member of that team.
A lot has been written about the risk of bias, since the only data AI has is what it’s been fed, and that data will contain bias. That’s a well-known risk. Also, the AI doesn’t tend to know what it doesn’t know. It doesn’t know what its limits are. Of course, you can say the same thing about humans. That’s one of the things we try to teach good doctors in medical school: how to know their limits. I don’t think we’ve figured out how to teach the technology that.
Story originally published in DukeMed Alumni News, Fall 2023.