AI and med school: the Harvard perspective

In a fascinating interview published this week, JAMA Editor in Chief Dr Kirsten Bibbins-Domingo sat down with Harvard Medical School’s Dean for Medical Education and Professor of Neurology, Dr Bernard Chang, to discuss the potential impact of AI on student doctors and their educational experiences.


Dr Chang believes that the advent of AI will have an impact as significant as the arrival of the Internet, when search engines gave students access to a wide range of information, including medical texts, for a fraction of the effort and cost of traditional textbooks.

“The internet was around when I was in medical school in the mid-1990s, but it wasn’t really a source of medical information – certainly as students, we didn’t go to the internet to help in our courses or help to learn material,” he said. 

“And what many medical schools around that time did was evolve their curriculum from one that emphasized lectures, which of course are very efficient ways of transferring facts and knowledge, to small group discussion formats, which are more beneficial toward knowledge integration and knowledge analysis and interpretation and help students with oral presentation skills.” 

Dr Chang explained that students will need to be more ‘human’ in their doctoring skills than ever before, “working at the highest levels of cognitive analysis, engaging in the most personally nuanced forms of communication, and remembering the importance of the actual laying on of hands.” 

“We [must] quickly move our students toward doing even higher levels of cognitive analysis, higher levels of understanding the individual patient nuance, which I think might still be difficult for AI to handle,” Dr Chang said.  

“[This includes] higher levels of compassionate and culturally competent communication, which we know AI might have some difficulty with. And returning students to the primacy of the physical exam, which as far as I know, chatbots are not going to be replacing in the next few years.” 

He explained that even in its current, early form, ChatGPT could produce a ‘fairly good’ differential diagnosis when given a set of signs and symptoms, and that as its accuracy and efficacy improved with further development, it would free up more time for doctor-patient interaction and personalised forms of treatment.

“That’s something that we still need to teach our students, but maybe more quickly than before we can move our students to a level at which they’re working with that differential diagnosis to make it individualized to their particular patient,” Dr Chang said. 

“To consider some of the particular nuances and specifics of their patient’s history or their patient’s lived experience that ChatGPT really can’t take into account – and that’s where their role as medical students and future doctors can be most useful.” 

Dr Chang highlighted the need for medical schools to establish educational policies that ensure students still learn the basics and use AI tools as aids to their education and their work – rather than developing an unhealthy reliance on a technology that has proved to be heavily influenced by the content it’s trained on.

“Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence,” he said. 

“Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer.” 

“Another thing we’re telling our faculty is the importance of what’s now known as prompt engineering, which is knowing what questions to ask. It’s funny because that’s an old-fashioned thing we tell our students on the wards, right?  

“That when students say, ‘That patient was a poor historian,’ perhaps, in fact, it’s because you didn’t ask the right questions. And it’s the same thing with these generative AI tools. The quality of the prompt that you give it is proportional to the quality of the response that you’re going to get.” 

Dr Chang also pointed to another area where AI could have a significant impact: leveling the playing field for doctors and students for whom English is a second language, by eliminating some of the inherent language bias in administrative and communicative processes.

“It will force us to look more closely at the substance of what people are writing, the nature of their experiences, why they actually want to become physicians, what their visions for a career in medicine are,” he said. 

“And not just simply the surface readability or fluency of their language, because ChatGPT can make everybody sound fluent in that way.” 

The original and complete version of the interview is available here: https://jamanetwork.com/journals/jama/fullarticle/2811219?resultClick=1