
Ethical, Legal And Other Challenges In The Use Of Artificial Intelligence (AI) In Health Care

The applications of Artificial Intelligence (AI), together with other technologies such as big data and robotics, promise transformational and disruptive changes in health care, whether in hospital administration and management or in pharmaceuticals, mental health, insurance, and preventive and predictive medicine. But these applications also pose new threats and risks that will demand policy and institutional frameworks to govern AI development and use. This paper focuses mainly on challenges at the individual level. With the increasing availability of health data and the use of AI to analyse such data for medical purposes, ethical, technical and resource-related questions will need to be addressed.

Other issues include quality, security, governance, privacy, consent and ownership, which have so far received less attention. Those studying the design and application of AI have raised concerns that humans should be able to understand why and how an AI system came to a certain decision. The processes AI follows, and the speed at which it deals with large amounts of information, are difficult for humans to follow. Many algorithms developed through machine learning (ML) are effectively untestable, and one cannot easily understand how and why the AI derived a certain answer. This low level of explainability leads to a lack of trust in the processes offered by AI and significantly erodes trust in AI systems as a whole (Schmelzer, 2019). The main obstacles to AI integration in the healthcare systems of low- and middle-income countries (LMICs) are also tied to digital health technologies that struggle to scale.

LMIC governments are in many cases unable to create coherent policies for population health. They often lack the financial and technical capacity to analyse disease burden and to devise treatment and monitoring processes that can be adopted across their states or an entire region. This prevents AI tools for population health from scaling nationally (USAID, 2019). Quality is another concern: AI needs high-quality data to produce consistent results, and such data are often not available in resource-poor settings. AI tools also require a robust digital health infrastructure. Among the barriers to feeding AI systems the necessary historical and real-time patient data is the low adoption rate of electronic medical records (EMRs) in LMICs, at less than 40 percent (World Bank, 2019; USAID, 2019).

Even in high-income countries, the quality of data determines how quickly AI tools can be put to use. For example, thousands of different systems exist in a typical UK hospital, and not all of them communicate with each other. An interconnected data infrastructure with ‘fast, reliable and secure interfaces, international standards for data exchange as well as medical terminology that defines clear terminology for communicating medical information’ is essential (Lehane et al., 2019).
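To illustrate what such data-exchange standards and shared terminology look like in practice (the source does not name a specific one), the sketch below assumes HL7 FHIR, a widely used health data exchange standard, and LOINC, a standard medical terminology; the patient reference and values are invented placeholders.

```python
# Minimal sketch, assuming HL7 FHIR and LOINC as example standards; the patient
# reference and measurement values below are invented, not real records.
observation = {
    "resourceType": "Observation",            # FHIR resource type for a measurement
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",      # standard terminology system
            "code": "718-7",                   # LOINC code for blood haemoglobin
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example-123"},   # hypothetical patient ID
    "valueQuantity": {
        "value": 13.2,
        "unit": "g/dL",
        "system": "http://unitsofmeasure.org",          # UCUM units standard
        "code": "g/dL",
    },
}

# Any system that understands FHIR and LOINC can interpret this record without
# bespoke, hospital-specific translation logic.
print(observation["code"]["coding"][0]["display"],
      observation["valueQuantity"]["value"])
```

The point of the example is not the particular standard but the principle: when every system labels the same measurement with the same code and units, records can move between hospitals, apps and AI tools without manual reconciliation.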

The protection of citizens’ health data is one of the key responsibilities of those handling sensitive data for AI purposes. Healthcare organizations must respond to growing cybersecurity challenges, and policymakers have the responsibility to create laws that ensure careful governance and security arrangements for stored data. For example, the collaboration between Google DeepMind and the Royal Free London NHS (National Health Service) Foundation Trust was heavily criticized in 2017 for the improper sharing of confidential patient data and their use in an app called Streams, which was designed to detect acute kidney injury and alert clinicians to it. The Information Commissioner’s Office (the independent authority for upholding information rights in the public interest, promoting openness by public bodies and data privacy for individuals) found that the Royal Free did not comply with the UK Data Protection Act when it transferred the data of 1.6 million patients to DeepMind. In reaching its decision, it took into account, first, that the application was still being tested when all the patients’ data were transferred and, second, that patients were not given proper and complete information about how their data, including patient outcomes and clinical reports, would be used in the trial (Information Commissioner’s Office, undated; Hearn, 2017). These examples demonstrate the complexity of establishing good ethical and legal frameworks for data sharing, the interoperability of systems and the ownership of software produced by such partnerships, as well as, according to the authors, a legal framework for clinical responsibility where errors occur (The Lancet, 2017).

These issues also include privacy. Health data is often held by governments, which may have incentives to sell it to private companies. In many cases users become the ‘product’ (in effect, patients’ data becomes monetizable). For example, in the US the pharmacy chain Walgreens collects data from prescriptions and then sends mailshots about clinical trials related to the customer’s disease. Walgreens is paid fees for this work by those recruiting patients for clinical trials and by drug manufacturers. According to Kalev Leetaru in Forbes magazine, ‘[…] Walgreens does not clearly make consumers aware when shopping that prescriptions will be collected for the goal of conducting medical tests on them or that they will have a chance to opt out of having their confidential medical information used […]’ (Leetaru, 2018). If companies like Walgreens can do this, technology companies that collect patient information may also be able to sell individuals’ sensitive health data to third parties. There are further ethical considerations. What obligation do technology companies have to alert the population if their AI produces results that reveal society-wide concerns, such as a potential outbreak of a highly contagious infectious disease?

Even when technology companies that use AI for health reporting detect health risks and flag them to governments, history shows that governments often overlook such risks or fail to inform citizens. For example, concerns about social and economic stability, together with a political structure poorly suited to warning of disease outbreaks, led Chinese leaders to take a long time to alert their people and the rest of the world that severe acute respiratory syndrome (SARS) was going to wreak havoc in 2003 (Huang, 2004).

Governance in this area is challenging. Policies related to health, technology and data protection vary widely from country to country, and LMICs sometimes lack the capacity or technical capability to develop comparable policies for population health. Furthermore, most of these countries also lack specific rules and regulations for the data and technology use that is integral to the development of AI. Accuracy must also be considered. The most recent concerns raised by the UK Information Commissioner’s Office relate to the accuracy of personal data in its collection, analysis and application. For example, data analysis may yield results that are not representative of the larger population, and hidden biases in datasets may result in incorrect predictions about individuals (Information Commissioner’s Office, 2017). Responsibilities are also not well defined. How should a government or regulatory system determine who is responsible for flawed AI-derived recommendations, given that AI-produced results involve a chain of complex processes – data collection, algorithm creation and use? Algorithms inevitably reflect the bias of their training data, and AI tools show bias reflecting the conditions in the high-income countries where they have been developed.

This is because algorithms require millions of historical health datapoints to provide accurate outputs appropriate for the geography and population, and these are often missing in low-resource settings (USAID, 2019). The questions of how AI algorithms were designed, and with what inputs, remain to be answered; they are central to their overall utility and to whether they are appropriate for high-, low- and middle-income settings. A recent study from Facebook’s AI Lab illustrates this hidden bias. Five off-the-shelf object recognition algorithms – Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition and IBM Watson – were asked to identify household objects collected from a global dataset. The algorithms made about 10 percent more mistakes when asked to identify objects from a household with a $50 monthly income compared with a household income of more than $3,500. The absolute difference in accuracy was also large: the algorithms were 15 to 20 percent more accurate for objects from the US than for objects from Somalia and Burkina Faso (Vincent, 2019).
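The kind of disparity the study reports is straightforward to measure once predictions are grouped by household characteristics. The sketch below is a minimal illustration of that measurement, assuming hypothetical group labels, objects and model outputs; it is not the actual evaluation protocol or data used by Facebook’s AI Lab.

```python
# Illustrative sketch only: computing per-group accuracy and the gap between
# groups. The group labels, objects and predictions are invented placeholders.
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if prediction == truth:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}


# Hypothetical evaluation records: (household group, true object, model output)
records = [
    ("income_under_50_usd", "soap", "food"),
    ("income_under_50_usd", "toothbrush", "toothbrush"),
    ("income_over_3500_usd", "soap", "soap"),
    ("income_over_3500_usd", "toothbrush", "toothbrush"),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores, f"accuracy gap: {gap:.0%}")
```

Reporting this gap alongside overall accuracy is what exposes bias that a single aggregate figure would hide.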

Health-related AI applications will require robust infrastructure and legal and ethical frameworks. Governments need to consider these issues when health-related AI applications are developed and implemented in high-, low- and middle-income settings. Governments, as well as the commercial and non-profit organisations developing AI solutions, will also need to consider the sustainability of business models. This will be difficult in low-resource settings where many key stakeholders do not have the finances to purchase such tools. As one representative of a private insurance company in East Africa noted, ‘I absolutely see the added value of risk management through AI tools, but I also know that such a solution will save me some money; unfortunately, nowadays I don’t have the budget to buy something that will start saving me some money 12 months from now’ (USAID, 2019). This insurer is one of many actors, alongside ‘LMIC governments that are already aware of the value that can be derived from deploying AI-based products, yet have neither sufficient resources for acquisition nor the human or internal IT resources to put the solutions to productive use’ (ibid.).

Equity issues do not just apply between countries; they also arise from the so-called digital divide, the differences in the degree of access that parts of the same society have to advanced technology such as 4G networks and smartphones. AI-enabled health tools delivered through mobile phone technology are an example of how better-connected populations and patients receive services such as medical advice and information through devices that poorer populations may not have access to. Governments engaged in integrating AI tools into health care systems will need to consider not only ethical and legal issues (such as privacy, confidentiality, data protection, ownership and informed consent) but also fairness, if AI and related technologies are to contribute to achieving health-related Sustainable Development Goal (SDG) targets. Ubenwa is one example: an AI application under development in Nigeria that supports SDG 3.2 – ending preventable deaths of newborns and children under the age of five by 2030 – by providing diagnosis that is 95% cheaper than existing diagnostic software.

The AI deployed is a type of ML system that accepts an infant’s cry as input and uses the amplitude and frequency patterns within the cry to give an instant diagnosis of birth asphyxia. Tests of the Ubenwa diagnostic software showed a sensitivity of over 86 percent and a specificity of 89 percent. One example of its use is a mobile application that uses the processing power of a smartphone to give an instant assessment of whether a newborn is at risk of asphyxia (Lewis, 2018). Ubenwa is not only cheap, and therefore readily available in low-resource settings, but also non-invasive (Ubenwa.ai). Technology trajectories and their impacts will vary according to local socio-economic conditions, so they will not be the same everywhere.
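For readers unfamiliar with the sensitivity and specificity figures quoted above, the sketch below shows how these two metrics are computed from a classifier’s predictions. It is a minimal illustration with made-up labels and predictions, not Ubenwa’s actual model or data.

```python
# Illustrative sketch only: sensitivity and specificity from binary predictions.
# The labels and predictions below are invented placeholders.

def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: lists of 1 (asphyxia present) and 0 (healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)   # share of true asphyxia cases detected
    specificity = tn / (tn + fp)   # share of healthy infants correctly cleared
    return sensitivity, specificity


# Hypothetical outcomes for ten infant cry recordings
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity: {sens:.0%}, specificity: {spec:.0%}")
```

High sensitivity matters here because a missed asphyxia case is far more costly than a false alarm, while specificity keeps the tool from overwhelming clinics with unnecessary referrals.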

A relevant and useful case study in this context is from India. The Indian government has recently released its AI strategy and listed healthcare as one of the priority areas for application in India (NITI Aayog, 2018a). The current government wants India to become a “garage” for developing AI solutions for the rest of the world. Most of the challenges India faces – whether in the type of diseases or the nature of health infrastructure – are similarly present in many other developing economies.

Anil Saini
