
OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI)-powered chatbot that has become the fastest-growing Internet application in history. Generative AI, including large language models such as GPT, produces text similar to that written by humans and appears to mimic human thought. Trainees and clinicians are already using the technology, and medical education cannot afford to sit on the fence: the field must now grapple with the impact of AI.

There are many legitimate concerns about the impact of AI on medicine, including the potential for AI to fabricate information and present it as fact (known as “hallucination”), its impact on patient privacy, and the risk of bias embedded in source data. But we are concerned that focusing solely on these immediate challenges obscures the broader implications AI could have for medical education, particularly the ways in which the technology could shape the thinking patterns and care practices of future generations of trainees and physicians.

Throughout history, technology has upended the way physicians think. The invention of the stethoscope in the 19th century spurred the refinement of the physical examination and gave rise to the physician’s self-concept as a diagnostic detective. More recently, information technology has reshaped clinical reasoning. As Lawrence Weed, inventor of the problem-oriented medical record, put it, the way physicians structure data affects the way they think. Modern healthcare billing structures, quality-improvement systems, and current electronic medical records (and the ills associated with them) have all been profoundly shaped by this approach to documentation.

ChatGPT launched in the fall of 2022, and in the months since, it has shown the potential to be at least as disruptive as the problem-oriented medical record. ChatGPT has passed the United States Medical Licensing Examination, performed well on clinical reasoning assessments, and approaches the diagnostic thinking of physicians. Higher education is already grappling with the “end of the college essay,” and the same will surely soon be true of the personal statements students submit when applying to medical school. Major healthcare companies are partnering with technology companies to deploy AI widely and rapidly across the U.S. healthcare system, including integrating it into electronic medical records and voice-recognition software. Chatbots designed to take over some of physicians’ work are coming to market.

Clearly, the landscape of medical education has changed and is still changing, and medical education faces an existential choice: Will medical educators take the initiative to integrate AI into physician training and deliberately prepare the physician workforce to use this transformative technology safely and appropriately in clinical work? Or will external forces seeking operational efficiency and profit determine how the two converge? We firmly believe that curriculum designers, physician training programs, healthcare leaders, and accrediting bodies must start thinking about AI now.


Medical schools face a dual challenge: they must teach students how to apply AI in clinical work, and they must contend with students and faculty applying AI to academic work. Medical students are already using AI in their studies, asking chatbots to generate frameworks for understanding a disease and to anticipate teaching points. Faculty are considering how AI can help them design lessons and assessments.

The assumption that medical school curricula are designed by people now faces uncertainty: How will medical schools ensure the quality of curricular content that was not conceived by humans? How can schools maintain academic standards if students use AI to complete assignments? To prepare students for the clinical landscape of the future, medical schools must begin the hard work of integrating instruction on the use of AI into clinical skills courses, diagnostic reasoning courses, and training in systems-based practice. As a first step, educators can reach out to local teaching experts and ask them to develop ways to adapt the curriculum to incorporate AI. Revised curricula should then be rigorously evaluated and published, a process that has already begun.

At the graduate medical education level, residents and specialists in training need to prepare for a future in which AI will be an integral part of their independent practice. Physicians in training must become comfortable working with AI and understand its capabilities and limitations, both to support their clinical skills and because their patients are already using it.

For example, ChatGPT can make cancer-screening recommendations in language that patients find easy to understand, though its answers are not always accurate. Patient queries made with AI will inevitably change the doctor-patient relationship, just as the proliferation of commercial genetic testing and online medical consultation platforms has changed the conversation in outpatient clinics. Today’s residents and specialists in training have careers of 30 to 40 years ahead of them, and they will need to adapt as clinical medicine changes.


Medical educators should work to design new training programs that help residents and specialists in training build “adaptive expertise” in AI, enabling them to navigate future waves of change. Governing bodies such as the Accreditation Council for Graduate Medical Education could incorporate expectations about AI education into routine training-program requirements, which would form the basis of curriculum standards and motivate programs to change their training methods. Finally, physicians already working in clinical settings need to become familiar with AI, and professional societies can prepare their members for these new realities of medical practice.

Concerns about the role AI will play in medical practice are not trivial. The cognitive apprenticeship model of medical teaching has endured for thousands of years. How will this model fare when medical students use AI chatbots from the first day of their training? Learning theory emphasizes that hard work and deliberate practice are essential to the growth of knowledge and skill. How will physicians become effective lifelong learners when any question can seemingly be answered instantly by a chatbot at the bedside?

Ethical guidelines are the foundation of medical practice. What will medicine look like when it is assisted by AI models that filter ethical decisions through opaque algorithms? For nearly 200 years, physicians’ professional identity has been inseparable from our cognitive work. What will it mean to practice medicine when much of that cognitive work can be handed over to AI? None of these questions can be answered yet, but we must ask them.

The philosopher Jacques Derrida introduced the concept of the pharmakon, which can mean either “remedy” or “poison”; in the same way, AI presents both opportunity and threat. With so much at stake for the future of healthcare, the medical education community should take the lead in integrating AI into clinical practice. The process will not be easy, especially given rapidly changing conditions and the lack of guiding literature, but Pandora’s box has been opened. If we do not shape our own future, powerful technology companies will be happy to take on the job.


Post time: Aug-05-2023