The gradual emergence of sophisticated AI systems and the convergence of technologies such as AI, the Internet of Things (IoT), and the Internet of Living Things (IoLT) pose a drastic risk to our privacy. Although the implementation of AI in the healthcare sector brings benefits, it also raises a number of ethical issues because of the lack of regulation governing its use and control. One of these concerns is data privacy. This creates a need to reconcile innovation with privacy and to develop effective data-protection mechanisms alongside AI implementation.
Machine learning and deep learning require enormous amounts of data for training and testing, which can be considered one of the major drawbacks of AI mechanisms. Moreover, with the help of AI-enabled healthcare tools, governments, corporations and individuals are able to use the personal data processed and stored by these technologies to draw statistical inferences about physical traits, race, creditworthiness, insurance risk, employment or academic ability, and so on. Although, ostensibly, it is anonymized patient data that is used for technological development, certain risks remain. The principle of non-maleficence dictates that healthcare providers “do no harm”; yet breaches of patient privacy can cause serious harm and lead to unintended consequences, potentially damaging a person’s employment or insurance coverage and allowing cyber attackers to obtain social security numbers and personal financial data.
Generating anonymized data by removing identifiable information from large databases is an increasingly formidable task. Even with painstaking effort, at least a residual risk of re-identification remains. This risk is particularly pronounced in ophthalmology, but ophthalmology is not the only area: facial recognition software can now be applied to three-dimensional reconstructions of computed tomography imaging of the head. In addition, machine learning algorithms can estimate a patient’s age from distinctive features of the periocular region, and gender, age and cardiovascular risk factors can be predicted from fundus images. Even data that is not a medical image can identify a person through its relationship with other information, as a patient’s data accumulates over time.
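To make the re-identification risk concrete, the following minimal Python sketch illustrates a simple linkage attack; all names, records and field values are hypothetical, invented purely for illustration. “Anonymized” medical records that retain quasi-identifiers (age, gender, postal code) are joined against a public dataset, and any unique match re-identifies the patient:

# Minimal linkage-attack sketch. All records below are hypothetical,
# invented for illustration only.

# "Anonymized" medical records: names removed, quasi-identifiers kept.
medical_records = [
    {"age": 34, "gender": "F", "postal_code": "400001", "diagnosis": "diabetic retinopathy"},
    {"age": 58, "gender": "M", "postal_code": "110002", "diagnosis": "glaucoma"},
]

# Publicly available records (e.g., a voter roll or social profile).
public_records = [
    {"name": "A. Sharma", "age": 34, "gender": "F", "postal_code": "400001"},
    {"name": "R. Verma", "age": 58, "gender": "M", "postal_code": "110002"},
]

QUASI_IDENTIFIERS = ("age", "gender", "postal_code")

def link(medical, public):
    """Yield (name, diagnosis) pairs where the quasi-identifier
    combination matches exactly one public record."""
    index = {}
    for person in public:
        key = tuple(person[q] for q in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])
    for record in medical:
        matches = index.get(tuple(record[q] for q in QUASI_IDENTIFIERS), [])
        if len(matches) == 1:  # a unique match defeats the anonymization
            yield matches[0], record["diagnosis"]

for name, diagnosis in link(medical_records, public_records):
    print(f"{name} -> {diagnosis}")

Even coarse quasi-identifiers often combine into unique signatures, which is why stripping names alone rarely guarantees anonymity.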
Thus, in the name of advancement, we are losing our privacy to a great extent. As Christina P. Moniodis rightly explains,
“The creation of new knowledge complicates data privacy law because it involves information that the person did not have and could not have disclosed intentionally or otherwise. Moreover, as our state becomes an ‘information state’ through an increasing reliance on information – such that information is described as the ‘lifeblood’ that sustains political, social, and business decisions – it becomes impossible to conceptualize all of the possible uses of information and all of the resulting harms. Such a situation poses a challenge for courts that are called upon to effectively anticipate and remedy invisible, evolving harms.”
Despite the privacy and security concerns associated with the implementation of AI, countries around the world are developing AI technologies and investing in them. In 2018, India actively recognized the use of AI across sectors from healthcare to education through the National Strategy for Artificial Intelligence, which authorized NITI Aayog to launch the National Program on AI, and through the report of the AI Task Force. AI mechanisms are entangled with every aspect of human life, ranging from finances, physical traits, genome, face and emotions to environment, culture and religion, which compounds the issues of data protection and privacy.
A. Misuse of Health Data by Authoritarian Governments
Data protection serves several personal, psychological and social functions of the right to privacy. An individual seeks protection for his or her health information when there is an apprehension that the information may become a subject of discrimination, embarrassment or harassment. The exploiter of health information can be anyone, close acquaintance or enemy. Disclosure may gratify those who are curious to know everything about others and feed idle gossip. Moreover, health information can also be abused by totalitarian regimes such as the Nazi regime: Adolf Hitler’s project of creating a “pure race” was a grave violation of the idea of an inclusive society. Arrogant rulers apply their own definitions of healthy and unhealthy individuals and treat the “unhealthy” as a liability to their so-called perfect society. Autocratic governments discriminate against their citizens on the basis of race, gender, sexual orientation, and so on.
Many governments have benefited from the advancement of AI, improved IoT, and IoLT. For example, portable genome sequencers such as the MinION and Matriarch use AI technology in epidemiology, assisting in predicting the risk of diseases. Another example is Sequenom Inc., which uses AI to interpret genetic code into relevant data. Based on the data generated by AI, governments and other regulatory bodies make informed decisions to combat and control the spread of disease and avert pandemics.
But what if a government has a perverse motive in pooling these data? Persecution of religious minorities is widely practiced across the world, especially in Africa and Asia, where minorities are subjected to ethnic cleansing in their own territories and reduced to statelessness or refugee status. Individuals whose sexual orientation does not conform to majority opinion are lynched in the name of social taboos. The unprecedented capabilities of new media technologies and AI mechanisms can reveal all of these attributes to such governments; armed with AI-derived insights into genetic and other physical traits, discriminatory governments can pursue political advantage. In K.S. Puttaswamy v. Union of India, the Court clearly stated that the scope of data protection safeguards includes the ‘principle of non-discrimination’, ensuring that data is pooled in a manner that does not show bias on the grounds of race, ethnic origin, political affiliation, religious views, health or sexual orientation.
B. Surveillance Capitalism by Big Corporations
Increasingly, private companies are deeply interested in individuals’ health information. A growing number of employers promote corporate fitness through fitness trackers in order to “create a culture of wellness,” “improve participant health status,” “increase employee productivity,” and “boost acquisition and retention.” Big data on health information has the potential to empower data controllers to act in a biased manner. This notion is best explained by the concept of “surveillance capitalism.” According to Shoshana Zuboff, “Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence,’ and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call the behavioral futures market.” Surveillance capitalists have grown enormously rich from these trading processes, as many corporations are willing to bet on our future consumer behavior.
In surveillance capitalism, the commodities for sale are the personal data of individuals or consumers, and this information is generated through mass surveillance on the internet. Consumer behavior is learned and thoroughly analyzed through such surveillance and stored specifically to influence our further behavior. The role of five players is notable here: Google, Facebook, Amazon, Microsoft and Apple. They harvest our behavior and experiences with the help of technological advances like AI, then monetize these personal data by selling them to third-party companies. Such data can be the key to earning money by targeting the consumer with ads, or to making decisions detrimental to the consumer’s interest, thereby violating his privacy. For example, nowadays, whenever we face trouble with our health, there is a tendency to search for a remedy at hand: we search the internet using the browsers on our phones or laptops, or by surfing various social media sites. The search history records our behavior and our concerns, which are used as a commodity by browser and social media companies. Thereafter, whenever we return to those sites, we are bombarded with the remedies we were looking for, even on sites where we never searched for such a remedy. This is a simple example we face every day, and it shows that our personal concerns circulate internally among various companies that thereby know our health conditions and information. This sensitive information is transferred to third-party companies as an asset traded to the detriment of our interests.
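As an illustration of the mechanism described above, the following minimal Python sketch shows how a search history can be distilled into an inferred health profile and used to select ads on unrelated sites. The queries, keyword mapping and ad copy are all hypothetical; real profilers use machine-learning models rather than keyword lists, but the privacy effect is the same:

# Behavioral-profiling sketch. All queries, keywords and ads are
# hypothetical, invented for illustration only.

search_history = [
    "persistent headache remedies",
    "migraine triggers food",
    "best budget laptop",
    "migraine medication side effects",
]

# Crude keyword-to-interest mapping standing in for an ML model.
HEALTH_KEYWORDS = {"headache": "migraine_risk", "migraine": "migraine_risk"}

def build_profile(queries):
    """Count how often each inferred health interest appears."""
    profile = {}
    for query in queries:
        for keyword, interest in HEALTH_KEYWORDS.items():
            if keyword in query:
                profile[interest] = profile.get(interest, 0) + 1
    return profile

def select_ad(profile):
    """The inferred, sensitive interest follows the user to other sites."""
    if profile.get("migraine_risk", 0) >= 2:
        return "Ad: new migraine relief -- ask your doctor"
    return "Ad: generic"

profile = build_profile(search_history)
print(profile)            # {'migraine_risk': 3}
print(select_ad(profile))

The point of the sketch is that the sensitive health attribute is never disclosed by the user; it is inferred, stored, and monetized entirely from behavior.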
Similarly, when we use fitness bands, they keep track of our health status and accumulate all such data for use as an asset. Even small companies encroach on personal data through such operations, and third-party brokers sell our personal information to big corporations, concentrating this wealth of personal data in the hands of online companies.
Pharmaceutical companies using artificial intelligence can colonize health by engineering addictive or de-addictive features into a drug, informed by the preferences and behavior of consumers gleaned from medical diagnoses and the online monitoring carried out by various corporations. Life insurance companies, by buying consumers’ personal data from other companies, can manipulate agreements with their customers and set policy terms unilaterally. This runs against the social security measures promised under the principles of constitutionalism and disrupts the bargaining power of patients and consumers.
Where health information is a valuable asset for healthcare providers, and is misused for commercial purposes, the hacking of medical databases thrives. Hackers are in the business of tracking the medical histories of individuals, including HIV reports. One such incident occurred in 2016, when the hacking of a Mumbai-based diagnostic and test centre’s database exposed the medical reports (including HIV reports) of about 35,000 patients. The database contained information on patients across India, many of whom were unaware that their details had been exposed. Again, in K.S. Puttaswamy v. Union of India, the court held that sharing an individual’s medical information without approval violates privacy.
The advertising, management and sale of products and services provided by such healthcare agents depend on individuals’ medical records. Health-related fears and concerns can be manufactured through abuses of artificial intelligence: consumer behavior patterns enable data controllers to shape public opinion, and rumors or fake news make money for wrongdoers.