Artificial Intelligence (AI) And Technology In Health Care: Overview And Possible Legal Implications

The main concerns in the health care system in the United States today are cost, quality, and access to care.1 In an effort to improve our health care system, innovators have begun to develop technology and artificial intelligence to assist in reaching these goals. In a health care setting, artificial intelligence can be used to improve the efficiency and quality of patient care, as well as to advance medical research. Today, approximately 86% of health care providers use at least one form of artificial intelligence in their practices. However, it is unclear how traditional tort liability would assign responsibility in the event of a medical error involving artificial intelligence technology. The use of artificial intelligence in a medical setting will inevitably create risk, as not all potential consequences of a new technology are foreseeable. There is currently limited guidance on tort liability for errors involving artificial intelligence in medical settings, as both the technology and its use are still evolving; however, traditional tort liability laws may be applicable.

Overview

Today, highly intelligent machines and sophisticated robots are performing complex tasks that were once thought to be within the exclusive capability of humans. This partnership of humans with technology is being realized in many tangible ways and is evident in health care settings today through the use of artificial intelligence technology in providing patient care. In general, artificial intelligence uses technology, including programmed computer systems, to process large amounts of data and recognize patterns within the data in order to complete specific tasks. This form of technology makes it possible for machines to “learn from experience” so that they can perform human-like tasks. In a health care setting, artificial intelligence refers to the use of artificial intelligence technology and automated processes to diagnose and treat patients who need care. Artificial intelligence relies on the power of predictive algorithms, which guide health care professionals in their practice of medicine.

The mechanisms that give rise to the recommendations made by predictive algorithms are often opaque: the way the algorithms compute their “logic” cannot be directly observed or explained. This is often referred to as “black-box” artificial intelligence. The “neural networks” behind the algorithms in black-box artificial intelligence are loosely modeled on the human brain, so that they can self-learn, make decisions, and provide accurate responses. Despite the ability of this technology to provide accurate responses, and to aid in improving the cost, access, and quality of patient care, the algorithms by which the technology operates have the potential to “become less intelligible to users and even the developers who originally programmed the technology.” This means that the artificial intelligence technology cannot demonstrate how it reached its conclusions. This is problematic in a medical setting because the artificial intelligence cannot explain its decision-making process in the way that a physician or health care provider would. Even if the algorithm could provide some explanation of how it came to its conclusions, that explanation would likely have no useful meaning in medical terms. Additionally, algorithms become even more complex as more data is made available, which refines the algorithm’s future predictions but also causes the algorithm to change over time.
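The “black-box” problem can be made concrete with a toy sketch. The network shape, feature encoding, and weights below are invented for illustration and are not drawn from any real clinical system; the point is only that a small neural network produces a risk score while none of its individual weights corresponds to a medically meaningful rule a physician could relay to a patient.

```python
import math

def sigmoid(x: float) -> float:
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical "learned" parameters of a tiny two-layer network.
# In a real system these emerge from training on large datasets;
# no individual weight maps to a clinical rule, which is the
# "black box" problem the text describes.
HIDDEN_WEIGHTS = [[0.8, -1.2, 0.5],
                  [-0.3, 0.9, 1.1]]
OUTPUT_WEIGHTS = [1.4, -0.7]

def risk_score(features: list[float]) -> float:
    """Return a risk score in (0, 1) from numeric symptom features."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in HIDDEN_WEIGHTS]
    return sigmoid(sum(w * h for w, h in zip(OUTPUT_WEIGHTS, hidden)))

# The model yields a number, but no clinical rationale for it.
score = risk_score([1.0, 0.0, 1.0])
print(round(score, 3))
```

Asking *why* the score came out as it did requires tracing sums of weighted activations, which, as the text notes, has no useful meaning in medical terms.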

Artificial Intelligence and Black-Box Medicine

Artificial intelligence technology in the form of “black-box” medicine is already being used in health care systems in many capacities, and has the potential to provide substantial benefits to patients. These automated processes not only assist in the diagnosis and treatment of patients, but are also gaining importance in the background processes that must occur in order to properly treat a patient. Arguments have been made that the use of these forms of technology allows tasks to be completed more quickly, while allowing health care providers to treat patients more efficiently.

Artificial intelligence is currently being used to process and analyze patients’ test results and data collected through patient interviews, to determine appropriate diagnoses from that data, and to present options for treatment methods while monitoring patients following treatment. For example, DXplain, a “decision support system” developed in the Laboratory of Computer Science at Massachusetts General Hospital, uses patient data to suggest diagnoses drawn from its knowledge base of diseases and clinical findings. This is an example of “machine learning techniques,” a subset of artificial intelligence that uses basic learning rules to find patterns in large amounts of data.
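The idea of matching patient findings against a knowledge base of diseases can be sketched in a few lines. This is a deliberately simplified illustration in the spirit of such decision-support systems, not DXplain’s actual method: the diseases, findings, and the overlap-based scoring rule are all invented here.

```python
# Toy knowledge base mapping diseases to known clinical findings.
# Entries are invented for illustration only.
KNOWLEDGE_BASE = {
    "influenza": {"fever", "cough", "fatigue"},
    "strep throat": {"fever", "sore throat", "swollen lymph nodes"},
    "common cold": {"cough", "sneezing", "sore throat"},
}

def rank_diagnoses(findings: set[str]) -> list[tuple[str, float]]:
    """Rank candidate diseases by the overlap (Jaccard similarity)
    between the patient's findings and each disease's known findings."""
    scored = []
    for disease, known in KNOWLEDGE_BASE.items():
        overlap = len(findings & known) / len(findings | known)
        scored.append((disease, overlap))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# A patient presenting with fever and cough: the highest-scoring
# candidate is suggested first.
ranked = rank_diagnoses({"fever", "cough"})
print(ranked[0][0])
```

Real systems score candidates over thousands of diseases and weighted findings, but the basic pattern, comparing observed data against stored associations, is the same.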

With the rise of vast amounts of data and the need for rapid access to it, medicine increasingly requires technology and artificial intelligence incorporating machine learning techniques to reach the above-mentioned goals of improving the cost, access, and quality of medical care. DXplain is just one example of a machine learning technology currently used in medicine. GermWatcher, a laboratory information system, is another machine learning technology currently used in medicine to “detect, track, and investigate infections in hospitalized patients.”

In addition to machine learning techniques, artificial intelligence is currently being used in medicine in other forms including robotic surgical systems.

As illustrated by these specific examples, artificial intelligence already assists in improving the quality and accessibility of healthcare, and has the potential to continue to provide significant advances in the field of medicine; however, this substantial potential comes with medical, technical, and legal challenges. Artificial intelligence in medicine must be safe and effective, and the question becomes how to protect and provide for patients while ensuring the efficient development and continued use of artificial intelligence technology in medicine.

Decision making with artificial intelligence

Artificial intelligence and machine learning algorithms will continue to have a significant impact on the decision-making processes, diagnosis, and treatment of patients in health care systems. The connection of these algorithms with patient data allows health care providers to “increase the accuracy and precision of their diagnoses and decisions” so that they are able to identify disease and treat patients with greater accuracy and precision than ever before. According to Shailin Thomas in Artificial Intelligence, Medical Malpractice, and the End of Defensive Medicine, the introduction of artificial intelligence “to medical diagnosis and decision making has the potential to greatly reduce the number of medical errors and misdiagnoses—and to allow diagnosis based on anatomical correlates to be ignored.” When such reliance is placed on an algorithm in decision making, potential malpractice claims become complicated by the reliance on artificial intelligence and the lack of explanation it provides for its decisions. When a physician pursues an inappropriate diagnosis or treatment in reliance on artificial intelligence and machine learning algorithms, it becomes unclear who should be responsible when an error occurs.

Current Tort Liability Laws Applicable to Artificial Intelligence

Our current tort liability laws may not be comprehensive enough to apply to medical error resulting from decisions made by artificial intelligence. The main source of concern in determining liability for these errors stems from the fact that we as humans cannot “see” the reasoning made by artificial intelligence technology. Questions then arise as to whether traditional product liability laws should apply, whether the manufacturer of the technology should be held liable, or if the health care provider treating the patient using artificial intelligence should be held liable for errors.

Typically, liability for medical errors falls under a negligence framework. Tort liability law in general, including liability for medical errors, typically serves the purpose of compensating injured parties and preventing unreasonably dangerous conduct. Courts have typically enforced the standards of practice that the medical profession sets through tort lawsuits. When a case involves medical error, physician liability is determined based on the perception of the physician as a trusted expert. This means that the treating physician is fully accountable for his or her decisions, and thus if the care provided is determined to be negligent or reckless, the physician will be held responsible.

Standards also evolve over time based on advances in medical research and technology. In judicial determinations for tort claims involving medical error, expert testimony concerning customary practices in a specific field of medicine becomes important. While these standards typically develop through the practices of health care providers, the standard of care can also be influenced by practice guidelines set by professional medical organizations or by legislative action. Because the artificial intelligence technologies and machine learning algorithms used in medicine are themselves still evolving, the standard of care associated with artificial intelligence has not yet settled. Customs have therefore not been established as they have been for more traditional medical technologies and practices.

Additionally, the current standards used to determine physician liability in cases of error involving artificial intelligence technology become challenging to apply when the error is caused by the technology and not necessarily the physician. The algorithms used in artificial intelligence technology currently used in medicine are beginning to have higher accuracy rates than physicians. Thus, it may be difficult to hold the physician solely liable and continue to apply a traditional negligence framework to medical error involving artificial intelligence since the physician should not necessarily be blamed for following the algorithm and/or artificial intelligence technology. However, the counterargument becomes that artificial intelligence technology is primarily used as an aid to health care providers in decision making, allowing the final decision in terms of diagnosis or treatment to always rest in the hands of the provider. In these scenarios, traditional tort liability principles may apply since the decision making will arguably be primarily made by the provider and not the artificial intelligence technology.

Applying Current Legal Theories to Artificial Intelligence

Questions still surface when medical errors are caused by malfunctions of artificial intelligence technology. Our current legal theories apply to the operations of humans, so it is still unclear how the theories would apply to artificial intelligence technology if it is operating in a more autonomous capacity or if decision making does not rest entirely in the hands of the medical provider. Some argue that it is not entirely fair for providers to be held solely liable for the errors or malfunctions of artificial intelligence technology and machine learning algorithms, especially when the technology is operating by more autonomous means under the supervision of the provider. Therefore, other types of tort liability theories may apply to artificial intelligence in medicine such as products liability and vicarious liability theories when a traditional negligence framework for physician liability and medical error may not apply.

Vicarious Liability and the Use of Artificial Intelligence

One possibility is the application of the doctrine of respondeat superior to place vicarious liability on the physician’s employer. If a physician commits a negligent act involving the use of artificial intelligence technology or machine learning algorithms while acting within the scope of their employment, the physician’s employer could potentially be held liable for the wrongdoing. Under this doctrine, health care providers and hospitals could also be held negligent for failing to properly train and/or supervise employees in the use of artificial intelligence technology if an error occurs. In addition, it has been questioned whether health care providers and hospitals should themselves evaluate the quality of artificial intelligence technology and machine learning algorithms before physicians use them in the course of treating their patients. Courts have not yet addressed this issue, as the technology is still developing, and the information needed to make such a determination is largely unavailable at this time. Currently, it is still unclear how courts will apply vicarious liability principles to the use of artificial intelligence in medicine.

Product Liability Laws Applicable to Artificial Intelligence

Under current products liability laws, the creators and manufacturers of artificial intelligence technology and machine learning algorithms that are currently being used in medicine could potentially be liable if an error occurs involving the technology. Liability could exist based on the product liability principle that if the product, the artificial intelligence technology, causes harm, then this harm is evidence of some defect within the technology.

The idea of imposing liability on the manufacturer is based on the logic that the manufacturer must pay for any harm caused by the technology. Arguably, the manufacturer is in the best position to design artificial intelligence technology so that it is safe and will prevent harm to users. The manufacturer and/or producers would also be in the best position to “absorb any economic loss stemming from the harm.” Medical devices and technology have at times been classified by courts as “unavoidably unsafe products,” meaning that they are made in a way that makes them incapable of being completely safe. A negligence standard would then be applied to a design alleged to be defective under products liability theories. This negligence standard focuses on the manufacturer’s duty to “use reasonable care to design … a product that is reasonably safe.” The standard of “reasonable care” is then decided based on what the manufacturer knew or reasonably should have known at the time the plaintiff was injured by the product. These standards have historically been applied by courts to medical devices and prescription drugs and their respective manufacturers, and could similarly be applied to the applications of artificial intelligence technology and machine learning algorithms currently used in medicine.

The application of a products liability theory to artificial intelligence can be complex. A designer of artificial intelligence technology does not necessarily know how the technology will function once it is used in a real-world medical setting, so it may be unfair to blame a person whose work was far removed from the actual operation of the technology in that setting. Because many entities and individuals, such as designers, engineers, and developers, work together to create artificial intelligence technology and systems, attributing blame to a single person is particularly difficult. Furthermore, arguments have been made that enforcing strict liability under traditional products liability theory would be very difficult because the algorithms in artificial intelligence technology currently being used in medicine are characteristically imperfect. For example, an algorithm developed by Stanford researchers has an accuracy rate of less than 75%. Even though this technology could benefit the medical community, under a strict liability theory it would incur liability every time a patient is misdiagnosed as a consequence of that accuracy rate.

Applying strict liability in this instance would not be beneficial because the production of this technology would likely slow down or stop altogether if such assured and immediate liability were imminent. This would be a setback for the medical community, because the full potential of artificial intelligence technology and machine learning algorithms would not be realized if development and production were to slow down or stop due to the threat of liability. This is why some argue that strict liability could be avoided altogether if the “unavoidably unsafe products” approach were applied, as previously mentioned. However, questions then arise concerning the duty to warn under an unavoidably unsafe products approach. Under this approach, a manufacturer would have to rely on a learned intermediary to warn the end user of the manufacturer’s product. For example, the manufacturer or developer of artificial intelligence technology would need to rely on a health care provider to warn the patient of the potential risks associated with the use of the technology. Liability for pharmaceutical and medical device manufacturers is often determined using this method, but it will likely not be easily applied to the use of artificial intelligence technology in medicine. First, because of the way artificial intelligence technology is tested before it is used, the potential risks or complications that may arise from its use will largely have been discovered, making an unknown error unlikely. Second, since the algorithms are designed to be very accurate, the unavoidably unsafe products approach will not apply to the use of artificial intelligence in medicine in the same way that it applies to medical devices or pharmaceuticals.

Summary

The use of artificial intelligence technology in a medical setting will inevitably create risks, as not all of the potential consequences of a new technology are known at first. As discussed, there is currently limited information on tort liability for errors involving artificial intelligence in medical settings, as both the technology and its use are still developing. Current tort liability laws may apply, yet it remains unclear under traditional tort liability what the legal implications would be in the event of a medical error involving artificial intelligence technology and machine learning algorithms.

Some argue in favor of applying traditional tort liability principles to artificial intelligence used in medicine because there is evidence of a “human hand” involved in machine-based decision making. Under this principle, a human who helps develop a piece of artificial intelligence technology or assists in its decision making is potentially “responsible for wrongful acts – negligent or intentional – committed by, or involving, the machine”. This argument is based on the idea that artificial intelligence technology is not fully autonomous, so it cannot be considered a “legal person” that can be held accountable under tort law for errors. The types of artificial intelligence used in medicine today fall into this category because the technology is not yet fully autonomous. For example, an app that assists a physician in diagnosing a patient and robotic surgical systems such as the da Vinci system both function under human supervision and with the input of data by humans. The artificial intelligence currently used in medicine is not fully autonomous in the way that the artificial intelligence used by driverless cars or fully independent drone aircraft is. Therefore, it is possible for current tort laws to be applied to the artificial intelligence and machine learning algorithms currently used in medicine because there is still evidence of a “human hand” in their use. In addition, current products liability laws could potentially apply.

However, many counterarguments to this theory exist, as some argue that it is unfair to blame those whose work was far removed from the actual use of the artificial intelligence technology. Strict liability would arguably not be beneficial, as the production of artificial intelligence technology would likely slow down or stop altogether if such certain and immediate liability were imminent under current products liability and strict liability theories. Many fear that this would be a setback for the medical community, as the full potential of artificial intelligence technology and machine learning algorithms would not be realized if development and production were slowed or stopped due to the threat of liability.

Conclusion

Today, artificial intelligence and machine learning algorithms are being used in the practices of approximately 86% of health care providers. Innovators began developing these forms of artificial intelligence technology in an effort to improve the efficiency and quality of patient care, as well as improve the cost and access to care while advancing medical research. Artificial intelligence technology already assists in improving the quality and access to health care and has the potential to continue to provide significant advances in the field of medicine; however, this substantial potential comes with medical, technological, and legal challenges.

The question arises as to how safe and effective artificial intelligence in medicine will be, and the issue then becomes how to protect and provide for patients. Current tort liability principles may apply, but the information needed to evaluate how these principles will apply is largely unavailable at this time. Courts have not yet addressed many of the issues surrounding the use of artificial intelligence technology and machine learning algorithms in medicine. The goal going forward will be to ensure the efficient development and continued use of artificial intelligence technology in medicine, but these developments may change the way current laws are applied in the context of contemporary health care liability issues. As artificial intelligence technology continues to develop, courts will need to address disputes surrounding its use in medicine, and these issues will likely become increasingly important.
