This article outlines the AMA’s position on the application of artificial intelligence (AI) and AI tools in healthcare. For the purposes of this position statement, AI tools include the application of automated decision making (ADM) and large language models (LLMs) in healthcare. The application of AI to healthcare is a relatively new but rapidly evolving area. While AI has the potential to benefit healthcare, the clinical and societal implications of AI in the healthcare environment are largely unknown and uncertain. In such a fluid and rapidly expanding environment, the development and implementation of AI technologies should be undertaken with appropriate consultation, transparency, accountability and regular, ongoing review to determine their clinical and societal impact and to ensure they continue to benefit, and not harm, patients, healthcare professionals and the broader community. This position statement outlines considerations and policy parameters to be addressed before AI tools are integrated into healthcare delivery. It is not guidance on how and where to implement AI in healthcare.
Definitions
The following definitions apply for the purposes of this position: (a) Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks usually associated with intelligent beings. (b) Automated decision making means any technology that either assists or replaces the judgement of human decision makers. (c) Large language models are AI models that can produce convincing natural language text output after being trained on large amounts of data. (d) Machine learning is an approach to artificial intelligence in which computers analyse large amounts of data and discover how to perform tasks rather than being programmed to do them. (e) Robotics in healthcare refers to the use of computers and software to precisely manipulate surgical instruments in a range of surgical procedures.
AI Principles in Healthcare
Ethical Principles
1. AI ethics is a set of values, principles and techniques that employ widely accepted standards of right and wrong to guide ethical conduct in the development and use of AI technologies.
2. It is the AMA’s position that the application of AI in healthcare should only occur with appropriate ethical oversight. There should be acknowledgement that this is a rapidly evolving field with varying degrees of understanding among clinicians, other healthcare professionals, administrators, consumers and the broader community.
3. The AMA supports AI in healthcare that is patient-centred and used to benefit the health and well-being of patients as well as the health of the broader community. The health interests of patients and the broader community should be the primary and guiding focus of all AI applications in healthcare.
4. AI should support doctors and the broader healthcare professions to serve the healthcare needs of patients and the broader community. AI should enhance, not replace, clinical decision-making and should contribute to quality improvement and clinical care optimisation.
5. AI must uphold patients’ right to make their own informed healthcare decisions.
6. The AMA’s position is that AI should never compromise the clinical independence and professional autonomy of medical practitioners.
7. The AMA calls for the development and application of AI in healthcare to be accountable and transparent to patients, the medical and health professions, and the broader community.
8. The application of AI in diagnosis and treatment must establish clear lines of accountability, outlining ultimate responsibility for any misdiagnosis or mistreatment. It is the AMA’s position that the final decision on patient care should always be made by a human.
9. The development and application of AI for healthcare should be inclusive and undertaken with appropriate consultation with the medical profession, other healthcare professionals and the broader community.
10. It is the AMA’s position that the application of AI in healthcare should never lead to greater health inequities for any population.
11. It is the AMA’s position that the use of AI in healthcare should protect the privacy of patient health information. It should continue to protect patients’ right to know what information is held about them, their right to access their medical records and their right to control their use and disclosure, with limited exceptions.
12. AI should protect the privacy of a patient’s personal information, including their medical records, by disclosing their information to others only with the patient’s express, up-to-date consent or as authorised by law. This applies to both identified and de-identified patient data.
13. The AMA calls for the application of AI in healthcare to be subject to regular reviews and audits for quality assurance, safety and clinical enhancement purposes. These reviews and audits should be transparent and accountable.
14. The AMA supports ongoing research into the clinical, ethical, legal and social aspects and impacts of AI in healthcare.
Development and Implementation Principles
15. It is the AMA’s position that technology can enable healthcare that is safe, high quality and patient-centred. Technology can improve and advance our healthcare system and the health of all Australians.
16. The AMA believes that, with appropriate policies and protocols in place, AI can support the delivery of healthcare in a number of ways, including assisting in diagnosis, recommending treatment, supporting transitions of care, and facilitating communication between practitioner and patient. A human – usually a medical practitioner – should always be ultimately responsible for decisions and communications and should have meaningful involvement at all stages of the patient journey.
17. Before any AI tool is used in clinical care, including as a decision-making tool, it must first be assessed against the requirements for registration as software as a medical device by the Therapeutic Goods Administration. This will ensure that reporting of adverse outcomes occurs.
18. It is the AMA’s position that human-delivered medical care should never be replaced by AI, but AI has the potential to assist in care delivery, reduce inefficiencies in the system, and lead to a more appropriate allocation of resources. AI is a means to achieving the goal of better healthcare, and can only support the doctor and patient to reach this goal.
19. Medical practitioners have a responsibility to advocate that patient health, well-being and privacy are at the forefront of all applications of AI in healthcare.
20. When adapting work processes to harness the possibilities of AI tools, healthcare organisations – from hospitals to individual private practitioners – must first establish robust and effective risk management frameworks that ensure patient safety and guarantee the privacy of all involved.
21. The integration of AI into models of healthcare delivery may create unforeseen consequences for patient care and privacy, as well as for the safety and quality of the healthcare workforce and the medical profession. This requires changes in education, training, supervision, examination, workforce management, research, and the practice of medicine.
22. It is the AMA’s position that AI tools used in healthcare should be co-designed, developed and tested with patients and medical practitioners. The AMA expects this to be part of a standard approach to developing and implementing AI in healthcare.
23. There will be many instances where a practitioner determines that the appropriate treatment or management for a patient is different from what an AI or automated decision-making tool suggests. Healthcare organisations should never establish protocols in which the clinical independence of the practitioner is reduced by AI or the final decision is made by a person in a non-clinical role with the assistance of AI. This extends to funders of healthcare delivery such as governments and insurers.
24. Tools using AI in healthcare should ensure inclusion and equity for all, regardless of race, age, gender, socioeconomic status, physical ability or any other determinant.
25. There are significant risks in increasing automation of decision-making, as this can lead to adverse consequences for groups with diverse needs in healthcare, especially if the data used for machine learning contains systemic biases that become embedded in AI algorithms. The application of AI should therefore be relevant to the target population:
AI tools used in specific countries for specific populations will need to be trained on data specific to those populations.
26. The AMA maintains that government has a key role in regulating the use and application of AI in healthcare to ensure it is used appropriately. This regulatory environment must ensure that AI tools developed by private, profit-oriented companies do not undermine healthcare delivery or erode trust in the system. If patients and clinicians do not trust AI tools, their integration into clinical practice will ultimately fail.
27. Medical defence organisations may have their own stipulations on the use of and engagement with AI by clinicians. Their policies should be appropriate in the context of emerging technologies. All medical practitioners should ensure they have appropriate indemnity insurance to enable them to integrate AI into their practice.
Regulation
1. It is the AMA position that AI requires regulation, as does any other technology involved in the diagnosis and treatment of patients. Government regulation of AI in healthcare must put in place adequate protections around patients and consumers, as well as healthcare professionals, to build trust in the system. Those protections should:
(a) support better patient outcomes, (b) ensure the final clinical decision is made by the medical practitioner, (c) ensure the treatment or diagnostic procedure is always consented to by the patient, and (d) ensure that patient and practitioner data are protected.
2. Appropriate regulation should be built on a strong evidence base, informed by advice from leading experts and adequately supported by government to deliver a quality regulatory framework. Government-level regulation should be in place, with specific governance arrangements tailored to individual services and programmes.
3. Regulation and oversight are important because the application of AI in healthcare poses significant risks, whether through patient injury from system errors, increased risks to patient privacy, or systemic bias embedded in algorithms.
4. The AMA believes that successful regulation of AI in healthcare will require a common set of agreed principles embedded in legislation that will establish a compliance baseline for all AI applications in healthcare. Those principles should be framed around appropriate governance of AI and should ensure the following: (a) safety and quality of care provided to patients, (b) patient data privacy and security, (c) appropriate application of medical ethics, (d) equity of access and equity of outcomes through the elimination of bias in AI and machine learning, (e) transparency in how the algorithms used by AI are developed and applied, and (f) that the final decision on treatment should always rest with the patient and medical professional, while at the same time recognising instances where responsibility must be shared between AI manufacturers, medical professionals and service providers (hospitals or medical practices).
5. In accordance with the above principles, new regulation is needed to mitigate existing and emerging risks with the application of AI in healthcare.
6. The prevention of bias in algorithms can be strengthened by equity in the inclusion of all populations in the data used for machine learning and AI. Data bias in AI algorithms can be avoided through mechanisms such as using diverse and inclusive programming groups with a wide and diverse range of high-quality and reliable data. Algorithms should be continually audited and updated to identify unintended biases and ensure they are based on the most current data available.
7. It is the AMA position that the privacy of patient and practitioner data should be a key objective for any healthcare organisation using AI, in particular large language models. The collection and sharing of patient data should only occur with appropriate patient consent, as patients own their health data. Strong data governance arrangements should be in place.
8. Any future regulation of AI in healthcare will need to ensure that AI is only used where it will genuinely contribute to improving patients’ health outcomes. This will need to be supported by evidence about best practice use of AI.
9. Regulation should ensure that clinical decisions are made with specified human intervention points during the decision-making process. The final decision should always be made by a human, and this decision should be meaningful, not just a tick-box exercise.
10. It is the AMA position that regulation must clearly establish responsibility and accountability for any errors in diagnosis and treatment. In the absence of regulation, compensation for patients who have been misdiagnosed or mistreated will be very difficult, if not impossible, to achieve. Regulation should make clear that the ultimate decision on patient care should always be made by a human, usually a medical practitioner.
11. Regulation must ensure that the choice to use AI technologies in healthcare rests with the clinician and is not imposed by hospital systems or other external decision makers, such as private health insurers or health administrators.
12. Regulation should not impose additional compliance burdens on the medical profession. The aim is to ensure that participants feel safe in the application of AI, fostering innovation and progress in this important field.
13. Regulatory agencies have a key role in ensuring rigorous testing of AI programs and applications, transparent communication of test results, and ongoing monitoring of AI performance to help identify and mitigate any negative consequences.
14. The AMA calls for a national governance structure to advise on the development of policy around AI in healthcare. This governance structure should include medical practitioners, patients, AI developers, health informaticians, advocates, healthcare administrators, medical defence organisations, and other relevant stakeholders.
Equity and Safety
1. It is the AMA position that AI technologies should be implemented in a manner that does not exacerbate inequalities in healthcare, including but not limited to those related to race, gender, or socioeconomic status. It is therefore paramount that AI technologies in healthcare are developed and implemented with appropriate ethics.
2. The implications surrounding the use of AI in healthcare must be adequately addressed and resolved before any widespread application. Key issues to be addressed with the use of smart machines in making healthcare decisions include:
(a) Safety and reliability – Prior to any rollout and application of AI, rigorous testing and clinical trials must be conducted to ensure AI technologies are safe to use and that patients’ health is not placed at risk by their use. In addition, regular monitoring, review and audits for quality assurance, safety and clinical improvement purposes must be conducted in a transparent and accountable manner. (b) Accountability – Ultimate responsibility for any misdiagnosis or mistreatment must be clear, in keeping with the AMA’s position that the final decision on patient care should always be made by a human. (c) Transparency and clarity – On a broader scale, it is important that doctors, other healthcare professionals, administrators, patients, and the broader community are informed about and understand how algorithms are used in clinical diagnosis and decision-making, including the ethical and clinical criteria used to determine decision-making parameters (including any underlying bias). At the clinical level, patients should be informed when a diagnosis or a recommended course of treatment has been determined by an AI program. (d) Privacy – The privacy of patient and practitioner data used for machine learning should be of paramount importance, not least to eliminate any possibility of future discrimination based on a person’s health status. (e) Equity, fairness and inclusiveness – Everyone should have equal access to technologies that are designed to ensure adequate healthcare; at the same time, data equity must be ensured, so that all relevant population groups are represented in the data used for machine learning in AI technologies.
3. Equity of access to AI-powered diagnostic services must be accompanied by equity of access to adequate treatments for diagnosed conditions. Doctors have a duty to provide treatment once a diagnosis is established, and that duty should not be diminished by patients’ and communities’ unequal access to healthcare.
4. Unproven AI technology should not be used during emergency situations such as pandemics or disaster responses. The urgency of need should not be used as a justification for the application of unproven technologies.
5. The application of AI tools in healthcare should never result in unbalanced healthcare policy. Governments should not invest disproportionately in AI tools with the goal of reducing public expenditure in healthcare by reducing human engagement.