Some experts warn that doctors who dismiss artificial intelligence as a fad, or as robots set to bypass the healthcare system, risk being left behind and doing unnecessary work.
Cathy O’Leary reports
As recently as a decade ago, there was scepticism that artificial intelligence could – and would – play a significant role in healthcare.
After all, how could machines do the job of healthcare staff in examining, diagnosing and treating patients?
Even though monitoring equipment, from ECGs to blood glucose monitors, has long been used in medical care, there has been firm resistance to artificial intelligence, which many consider a bridge too far amid concerns about safety and privacy, as well as fears that machines will replace people.
And among those most wary have been doctors, for whom AI raises specific legal, professional and ethical challenges.
At the Royal Australian College of General Practitioners’ annual conference held in Perth late last year, a keynote speaker told the 1600 GP delegates that the risk was not losing their jobs to machines but rather being left behind.
Dr Jordan Nguyen, a world-renowned biomedical and AI engineer, said that with two brothers who are specialist GPs, he appreciated first-hand that many doctors were cautious about AI, often because they did not understand it.
“There are a lot of areas where AI will have an impact on general practice, and you’re going to hear about AI a lot more,” he said.
“It’s not that GPs are going to be replaced by robots and AI – so that’s not where the job losses are going to be – it’s more that GPs will be replaced by GPs who use AI.”
As the founder and chief executive of Psykinetic, a social business creating technologies to help people in the disability and aged care sectors, Dr Nguyen has helped develop, among other things, a mind-controlled wheelchair and an instrument to allow people to perform live music using eye movements.
Dr Nguyen told the RACGP conference that potential health applications of AI included automating administrative tasks, clinical decision support, enhanced patient triage, efficient e-health records management, predictive analysis for patient risk stratification, medical imaging assistance and virtual health help for patients.
He said there were many forms of AI, including digital ‘twins’ – digital representations of the patient, akin to a more sophisticated version of someone’s social media profile.
“You can then connect your data to the data of millions of people, which can then give you insights into your health that might not be obvious, such as predisposition to diabetes even though there is no family history,” he said.
“This is an application of machine-learning, and they’re now doing work to apply it to the human body, even to editing genes of living people.
“What it could mean is that some time down the track you’re able to say to your patients there is a very simple procedure if you’re going to have a child – it’s been deemed completely safe – and we can make sure your unborn child is never going to get cancer.”

Dr Nguyen told Medical Forum that resistance to the use of artificial intelligence within the medical profession was often because doctors had limited knowledge of its real-life applications and had concerns about patient privacy.
“I understand those concerns completely, and it’s a very important thing not to just go ‘all in’ when it comes to AI,” he said.
“That’s why it is important to show doctors practical applications because as soon as you can see it in the real world you can go ‘OK now I can see how it can be used’ rather than talking about the actual technology itself, which a lot of the time sounds scarier than it is.
“My brothers are both GPs and even with them there is some resistance when I talk to them, and while they’re identical twins, one is quite open to technology, while the other says ‘I’m too busy to have to understand any of this’ so it’s easier not to engage.
“My message is that this powerful technology can actually make your job easier, but I also realise that it’s about small steps, and it’s the stuff people are increasingly going to hear about, like easier transcription.
“It’s best to start small with something you’re comfortable with that will help you in your day-to-day role.”
Other speakers stressed that the benefits of AI were often dependent on the quality of the information and data underpinning it.
More GPs were starting to adopt medical AI scribe software – artificial intelligence programs that record in the background, capture consultations in real time and generate detailed notes ready to go into a patient’s medical record – going far beyond basic transcription.
Professor Richard Hobbs, from the University of Oxford, said AI and digital health would transform medicine, making it more accurate and personalised.
“But there are huge obvious risks and concerns that patients and practices have, and it’s very important that clinicians direct how this transformation of medicine is going to occur,” Prof Hobbs said.
“AI is a syndrome, it’s like heart failure – there are multiple ways to it, and multiple methodologies to the outputs – it’s not a single mechanism.”
Big-ticket items include machine learning, language processing and robotics.
“Part of the solution will be more automated management of health, such as AI-driven scribes, where the doctor’s and the patient’s speech is picked up, and this results in an automated clinical record, which is a start because it will save you time, and although you still need to check the entry, at least it’s a draft for you.
“But once we get into the large language models, you will be able to end up with automated coding systems, and eventually we’ll have systems which will have been tested enough to be reliable, and then it will be easier for the clinician.
“The dynamic of the consultation will change because you will be able to focus more on the consultation, and you can record the bits that you want, and you can get consent at the same time.”
Meanwhile, many industry groups have become more proactive in the AI space, including the Australian Medical Association, which has developed policies around the use of automated decision making (ADM) and large language models (LLMs) in healthcare.
In its submission to the Select Committee on Adopting Artificial Intelligence last June, the AMA said AI could improve the efficiency and quality of healthcare but also created risks for patients and the medical profession, including bias, discrimination and errors.
“Risks include a potential over-automation of decision making, poorly defined measures of accountability, transparency and liability, adverse outcomes for groups with diverse needs and misuse of patient information,” the AMA submission said.
“AI must never compromise medical practitioners’ clinical independence and professional autonomy. To avoid machine error or over-reliance on AI technology, decisions relating to patient care must always be made by a human, and this decision must be a clinically sound, meaningful decision, not merely a tick-box exercise.”
The AMA argues there is a grey area over who is responsible for errors in diagnosis and treatment related to the use of AI products, including compensation for patients who have been misdiagnosed or treated incorrectly.
It wants the Federal Government to address Australia’s poorly defined civil and criminal liability rules in relation to damage caused by artificial intelligence systems.
In Western Australia, the use of AI continues to expand by leaps and bounds, from remote primary care locations to hospital emergency departments.
WA-developed wearable biobands are showing promise in improving patient monitoring and safety, and in reducing reliance on manual vital sign checks and in-ward equipment.
In a recent trial, 35 intensive care patients at a Perth tertiary hospital who were transitioning to a general ward were fitted with AI wearable bands that captured their health data, giving nurses and doctors real-time monitoring while they attended to other cases and avoiding the need to wake patients for monitoring.
The biobands were developed by WA medtech company Medivitals using funding from the State Government’s Future Health Research and Innovation Seed Fund.
The technology has already been successfully trialled in a hospital-in-the-home program, allowing remote monitoring of patients and avoiding unnecessary hospital admissions, with the results due to be published soon.
Its developers say it has strong potential in hospital emergency department waiting rooms, enabling real-time monitoring of patients waiting for care and identifying signs of deterioration so that high-risk patients can be prioritised.
Professor Warren Harding, co-founder of Medivitals, said the bioband used software which could be customised to the needs of individual patients, while AI assistance could provide alerts to clinicians about abnormal vital signs.
“The device ensures that high-risk patients are identified and prioritised even before they reach a hospital bed, and this predictive approach addresses critical challenges such as bed capacity, overcrowding and staff shortages,” he said.
Prof Harding, who was recently appointed to the board of the Australian Digital Health Agency, said the technology could be applied in residential aged care and hospital-in-the-home care, allowing patients to avoid hospital while vital signs such as heart rate, oxygen saturation, temperature, blood pressure and respiratory rate were monitored remotely.
Professor Yogesan Kanagasingam, Medivitals co-founder and Chair of Digital Health and Telemedicine at the University of Notre Dame’s School of Medicine, said the technology also ensured data security and patient privacy.
Patient data was securely stored at the hospital or locally within Australia, ensuring compliance with privacy regulations, and by avoiding external servers and third-party cloud providers, the system minimised cyber risks.
Prof Kanagasingam, who is on the board of directors at the Indo-Asia Digital Health Centre for Innovation and Commercialisation, told Medical Forum there had been good feedback from patients and clinicians involved in the hospital ICU trial.
“The doctors think it’s very cool because the bioband technology is just providing the information but then the clinician still makes the decisions,” he said.
“Hospital-in-the-home is also one of the major areas of interest for this technology, because there is only a limited number of beds in hospital, and this is a way to allow patients to stay at home and for doctors and nurses to manage them.
“And the ED use is potentially very exciting, as it can alert clinicians if the patient’s condition changes, so it allows doctors and nurses to focus on what they’re doing.”

AI in ophthalmology
Prof Kanagasingam said there were other AI projects well underway in WA, including a system to grade and diagnose eye diseases such as diabetic retinopathy, glaucoma, age-related macular degeneration and cataracts.
It is the first AI system for eye diseases to receive regulatory approval in Australia, and the technology is now licensed to international company TeleMedC and used in Singapore, Europe, the Middle East and India.
Prof Kanagasingam said one of the most interesting recent trials involved bringing AI-based eye screening to two remote Aboriginal communities located near Fitzroy Crossing.
The partnership between the University of Notre Dame’s School of Medicine and the Foundation for Indigenous Sustainable Health secured $1.3 million in funding from the Australia-India Strategic Research Fund.
“We were the first to implement AI-based eye screening in such remote Aboriginal communities using Starlink satellite technology,” he said.
“The results were eye-opening – approximately 20% of those screened required urgent referrals to an eye doctor due to severe disease and remarkably many of them had never seen an eye doctor before.”
“We are also working with one of the world’s largest eye service providers, the Aravind Eye Institute in India, which sees over 8.5 million people a year.
“We are deploying our AI system from WA into rural and remote locations in South India where they train women from villages to screen local people and then use AI to grade and refer those who need treatment and surgery – this has been a very successful project.”
Meanwhile, the State Government has recently accelerated funding for a range of other AI research projects, including a personalised health platform known as Orva to help patients better understand their health; an AI-based initiative that can predict sepsis in emergency departments; and the use of AI data in diabetes care plans.