Generative AI tools such as ChatGPT are evolving rapidly and proving influential in healthcare, where some doctors have begun using them to write medical notes. In response, the Australian Medical Association has urged the Federal Government to ensure that any artificial intelligence protections guarantee clinicians make the final decisions. In May 2023, five hospitals in Perth's South Metropolitan Health Service were advised to stop using ChatGPT to write patient medical records after it emerged that some staff had been using the large language model for this purpose.
The Australian Medical Association (AMA) has submitted a discussion paper on safe and responsible AI to the Federal Government, arguing that strong rules and transparency are needed to protect patients and healthcare professionals as artificial intelligence spreads through the healthcare industry. Australia's AI regulation gap needs to be addressed, particularly in healthcare, where there is potential for patient injury from system errors, systemic bias embedded in algorithms, and increased risk to patient privacy. The final decision must always rest with a human, and that decision must be meaningful rather than a tick-box exercise.
AI protections should therefore ensure that clinicians make the final decisions and that patients give informed consent for any diagnosis or treatment involving AI. The AMA has also emphasized the protection of patient data and has called for ethical and equitable approaches to be built into AI regulations as they are framed.
The AMA suggests that the Australian Government, in formulating its own AI regulations, should consider the proposed EU Artificial Intelligence Act, which would categorise the risks of different AI technologies and establish an oversight board. AI may ultimately improve health outcomes for patients, but it is vital to get the systems right in the first place.