While the medical use of artificial intelligence may currently appear confined to the labs and the realm of research in Australia, the reality is that many GPs in other countries are already using AI in the clinic.
By Eric Martin
Funding is now pouring into AI research and development at rates that often outstrip traditional areas of medicine such as antibiotics.
Last month, the European startup Nabla, which has already signed a partnership deal with a subsidiary of one of the biggest healthcare providers in the US, Kaiser Permanente, announced it had raised $24m to roll out its AI tool for doctors across the network.
The Nabla Copilot was launched in March last year and is already being used by nearly 20,000 clinicians across the US and Europe.
“Copilot is closely integrated with major electronic health records such as Epic and NextGen and already handles more than 3 million consultations a year in three languages,” Nabla’s co-founder and CEO Alexandre Lebrun said.
While Australian doctors may have only glanced at the strategic recommendations made in policy statements by the various national medical groups, the urgency of the need for clear guidelines is now apparent.
While the AMA believes that AI may support the delivery of healthcare that is safe, high quality and patient-centred, potentially advancing the healthcare system and the health of all Australians, the association has flagged a wide range of concerns that have serious legal and policy implications for healthcare providers.
“The integration of AI into models of healthcare delivery will create unforeseen consequences for the safety and quality of care and the privacy of patients, as well as for the healthcare workforce and the medical profession,” the AMA said.
It has repeatedly stressed that a human must always be ultimately responsible for communication throughout the patient’s journey, noting that “regulation must ensure that clinical decisions are made with specified human intervention points during the decision-making process.”
“The final decision must always be made by a medical practitioner and never by a person in a non-clinical role with the aid of AI, and this decision must be meaningful, not merely a tick-box exercise.
“Increasing automation of decision making such as this could result in adverse outcomes for groups with diverse needs, particularly if the data used have systemic biases embedded in AI algorithms.”
Given the high level of vulnerability to potential legal claims already faced by doctors, the AMA has highlighted the need for regulation that clearly establishes “responsibility and accountability for any errors in diagnosis and treatment.”
“There will be many instances where a practitioner determines that the appropriate treatment or management for a patient is different from the suggestion of an AI or automated decision-making tool. In the absence of regulation, compensation for patients who have been misdiagnosed or mistreated will be impossible to achieve.”
Healthcare providers, from hospitals to individual practitioners, need to establish robust and effective risk-management frameworks that ensure patient safety and guarantee the privacy of all involved.
The AMA has recommended registering AI-based software as a medical device with the TGA as one way to enforce robust standards nationally and ensure that adverse outcomes are reported.
“Our regulatory environment must ensure that AI tools developed by private profit-oriented companies do not undermine healthcare delivery nor trust in the system. If patients and clinicians do not trust AIs, their successful integration into clinical practice will ultimately fail,” the AMA said.
The need for extensive education for doctors and other health professionals on the ethical and practical application of AI in a clinical setting has been highlighted by every medical organisation in Australia, and the dean for medical education at Harvard Medical School, Dr Bernard Chang, recently gave an insight into how this training could be incorporated into future programs.
Dr Chang told JAMA that students would need to be more “human” in their doctoring skills than ever before, “working at the highest levels of cognitive analysis, engaging in the most personally nuanced forms of communication, and remembering the importance of the actual laying on of hands.”
“We [must] quickly move our students toward doing even higher levels of cognitive analysis, higher levels of understanding the individual patient nuance, which I think might still be difficult for AI to handle,” he said.
“This includes higher levels of compassionate and culturally competent communication, which we know AI might have some difficulty with, and returning students to the primacy of the physical exam, which as far as I know, AI is not going to be replacing in the next few years.”
Dr Chang said that as AI’s accuracy and efficacy increased with development, it would allow more time to be spent on doctor-patient interaction and personalised forms of treatment.