Artificial Intelligence and Cybersecurity in the COVID-19 Pandemic

The COVID-19 pandemic has affected lives around the world in unprecedented ways. Since the outbreak began in Wuhan, China, in late 2019, the virus has spread rapidly and steadily across the globe. International organizations and scientists have begun to apply new technologies such as artificial intelligence (AI) to track the pandemic, predict where the virus will appear next, and develop effective responses.

First, many institutions are using AI to assess and discover drugs or therapies that can help treat COVID-19 and to develop prototype vaccines. AI has also been used to detect whether people have the new coronavirus by identifying visual signs of COVID-19 in lung-scan images, to monitor changes in body temperature through wearable sensors, and to provide open-source data platforms that track the spread of the disease. In the early stages of the pandemic, DeepMind used its AlphaFold AI system to predict and publish structures of proteins associated with the virus. Now that the vaccines from Pfizer, Moderna, and AstraZeneca have been approved and are being administered around the world, AI and other new technologies are also supporting this important effort. For example, the UK Medicines and Healthcare products Regulatory Agency (MHRA), in partnership with the UK unit of Genpact, a global professional services firm specializing in digital transformation, is using AI to track potential adverse effects of the vaccines on different population segments.

AI has also been used in applications beyond medicine. It has helped in the fight against misinformation on social media, tracking sensationalist or alarmist wording and identifying reliable and authoritative online sources. Many countries around the world have adopted AI applications to support the enforcement of lockdown measures, such as facial-recognition systems that identify people not wearing masks or mobile applications that trace people’s social contacts.

However, the fight against COVID-19 has also exposed AI’s inherent limitations. Existing systems learn by finding patterns in data: to achieve the expected performance, a system must be trained on high-quality inputs that model the desired behaviors. While this approach has been successful in AI applications with well-defined conditions and clear parameters, it is much less predictable in real-life scenarios. COVID-19 is new and complex, and the clinical and biological datasets needed to train AI systems are still scarce.

Similar limitations have been observed in the use of AI in finance. March 2020 was one of the most volatile months in the history of the stock market, and it is no surprise that trillions of dollars in market capitalization were wiped out by the pandemic. However, the market shock also hit dollar-neutral quantitative trading strategies (which hold long and short positions of equal value), even though most hedge funds were using AI to determine their portfolio composition. In fact, the quant funds relying on the most complex AI models may have suffered the largest losses. AI performed poorly because it is not suited to rare events such as COVID-19: there have been very few comparable shocks in the market, so the systems could not learn from past data.

Hence, AI’s role in the fight against COVID-19 is twofold. On the one hand, AI can support operators in their response to this unprecedented health crisis. On the other hand, the inherent limitations of these systems must be properly assessed before they can be trusted. This two-edged relationship between AI and COVID-19 offers the reader a useful metaphor for understanding the interaction between AI and cybersecurity. As in the fight against the pandemic, AI can both empower and undermine cybersecurity. In the case of the pandemic, shortcomings in the application of AI are mainly due to the unavailability of data of adequate quality. In the case of cybersecurity, however, the risks are inherent to the way AI systems function and learn, and addressing them often requires refining the underlying AI technology. Overall, this report will argue that AI can significantly improve cybersecurity practices, but that it can also enable new types of attacks and amplify existing security threats. The report will highlight this duality and suggest what measures should be considered to counter these risks.

What is Europe’s position on the intersection of AI and cybersecurity?

The European Commission’s Joint Research Centre report on AI in the EU, published in 2018, addressed various aspects of AI adoption, from economic to legal perspectives, including cybersecurity. The report acknowledges the dual nature of AI and cybersecurity and the potential threats this poses to the security of systems. Recognizing that ML is often not robust against malicious attacks, it points to “the need to better understand the limitations in the robustness of ML algorithms and to design effective strategies to mitigate these vulnerabilities”, noting that more research is needed in this area.

On 19 February 2020, the European Commission published its White Paper on Artificial Intelligence, outlining a strategy aimed at promoting the AI ecosystem in Europe. According to the White Paper, the EU will allocate funds which, combined with private resources, are expected to reach €20 billion per year. The strategy also involves creating a network of centers of excellence to improve the EU’s digital infrastructure and developing mechanisms to help small and medium-sized enterprises (SMEs) transform and re-engineer their business models. Based on the recommendations of the High-Level Expert Group on AI, the EU also defined fundamental requirements for AI implementation.

According to the White Paper, the requirements for high-risk AI applications may include the following key features:

  • Training data
  • Data and record-keeping
  • Information to be provided
  • Robustness and accuracy
  • Human oversight
  • Specific requirements for certain AI applications, such as those used for remote biometric identification purposes.

The AI White Paper contemplated adopting a flexible and agile regulatory framework limited to ‘high-risk’ applications in sectors such as healthcare, transport, policing and the judiciary. Following a public consultation held between 23 July and 10 September 2020, a follow-up regulation to the White Paper on AI was published on 21 April 2021.

The European Commission’s proposed “Regulation on the European approach to Artificial Intelligence” promotes ad-hoc protections for high-risk AI systems based on a secure development life cycle. The proposed requirements relate to high-quality datasets, documentation and record-keeping, transparency and provision of information, human oversight, robustness, accuracy and cybersecurity. However, when it comes to cybersecurity, the proposed text could more clearly outline the additional steps needed to achieve the security of AI systems.

As far as cybersecurity is concerned, the regulation provides that high-risk AI systems “shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities”. It also stipulates that the technical solutions aimed at ensuring the cybersecurity of high-risk AI shall include, where appropriate, measures to prevent and control attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), and model flaws. These requirements represent a fundamental step towards ensuring the required level of security for AI systems.
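To make the notion of ‘data poisoning’ concrete, here is a minimal, hedged sketch (not taken from the regulation or from the CEPS report): it flips a fraction of training labels in a synthetic dataset and compares the accuracy of a simple classifier before and after. The dataset, model choice and poisoning rate are illustrative assumptions only.

```python
# Minimal, illustrative data-poisoning sketch on synthetic data (toy example,
# not a description of a real attack on any deployed system).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for any security-relevant dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# "Poison" the training set by flipping 30% of the labels at random.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

# A model trained on the poisoned labels generally performs worse on clean test data.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```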

This CEPS Task Force supports this approach and proposes a number of recommendations to provide more concrete guidance on how to secure AI systems.

Promoting the AI sector in a timely manner is particularly relevant for Europe. Given that established market models are characterized by strong network and scale effects, first-mover advantages in the adoption of AI technologies are particularly strong. While fostering its AI ecosystem, the EU therefore needs to define how to make AI systems safe and reliable, how EU policies should be designed to get the most out of such an ecosystem, and what cybersecurity roadmap should be considered at the EU level.

Some definitions

Although the state-of-the-art literature is extensive, there appears to be no shared definition of what AI is. The definitions below clarify how AI is conceptualized for the purposes of this report.

The Organization for Economic Co-operation and Development (OECD) defines an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments”. This definition has also been adopted by the European Commission in its “Regulation on the European approach to artificial intelligence”.

In this study we distinguish between symbolic and non-symbolic artificial intelligence (AI). In symbolic (or traditional) AI, programmers use programming languages to write explicit rules that are hard-coded into the machine. Non-symbolic AI does not rely on hard-coded explicit rules: instead, machines process a broader set of data, deal with uncertainty and incompleteness, and autonomously extract patterns or make predictions.
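As a brief illustration of this distinction (a sketch with an invented scenario, not an example from the report), the snippet below contrasts a hand-coded symbolic rule with a minimal non-symbolic approach that derives its decision threshold from labelled example data.

```python
# Toy contrast between symbolic AI (hand-written rule) and non-symbolic AI
# (a parameter learned from data). Scenario and numbers are invented.

failed_attempts = [1, 2, 2, 3, 9, 11, 12, 15]   # failed logins per account
is_attack       = [0, 0, 0, 0, 1, 1, 1, 1]      # labels provided by analysts

# Symbolic AI: the decision rule is written explicitly by a programmer.
def symbolic_flag(n_failures):
    return n_failures > 5

# Non-symbolic AI (minimal learning): derive the threshold from the data,
# placing it midway between the two labelled groups.
benign_max = max(n for n, a in zip(failed_attempts, is_attack) if a == 0)
attack_min = min(n for n, a in zip(failed_attempts, is_attack) if a == 1)
learned_threshold = (benign_max + attack_min) / 2   # 6.0 for this toy data

def learned_flag(n_failures):
    return n_failures > learned_threshold

print(symbolic_flag(7), learned_flag(7))   # both flag 7 failed attempts
```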
Machine learning (ML) is a key tool in today’s AI systems. According to the OECD, ML is a set of techniques that allows machines to learn in an automated way, through patterns and inferences, rather than through explicit instructions from a human. ML approaches often provide machines with many examples of correct outcomes, but they can also define a set of rules and let the machine learn by trial and error. ML algorithms are generally divided into three broad categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the data given to the ML algorithm already contain the correct answer (for example, whether an email is spam), whereas in unsupervised learning the algorithm clusters the data into groups without prior information about how to divide them; both approaches learn and make predictions on the basis of this information. Reinforcement learning instead involves creating a system of rewards within an artificial environment in order to teach an artificial agent how to move through different states and how to act in a given environment.
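The short sketch below illustrates the first two categories on synthetic data (reinforcement learning is omitted for brevity); the dataset and models are illustrative assumptions, not examples from the report.

```python
# Illustrative sketch of supervised vs. unsupervised learning on toy data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two well-separated groups of points stand in for any real dataset.
X, y = make_blobs(n_samples=300, centers=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Supervised learning: the training data already contain the correct answers.
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the algorithm groups the data without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [int((clusters == c).sum()) for c in (0, 1)])
```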

Neural networks are a sub-category of ML. These systems are characterized by layers that process information in parallel and are composed of interconnected nodes that pass information to one another; the patterns of these connections represent the knowledge held by the system. As defined by the OECD, neural networks are multilayered, complex computing systems that operate by combining a large number of small operations. Together, these layers, which can number in the thousands or even millions, form a complex statistical machine. This architecture allows the network to learn and represent complex correlations between input data and output results: in other words, neural networks modify their own code to find and optimize the links between inputs and outputs. Their layered, connected way of processing information makes them suitable for a wide range of AI applications. Deep learning is a subset of ML based on large neural networks composed of hierarchical layers, which increase the complexity of the relationships between inputs and outputs; it is an architecture capable of implementing supervised, unsupervised and reinforcement learning.
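To ground the idea of layers and connection weights, the following is a minimal two-layer network written from scratch in NumPy and trained on the XOR problem; the architecture, learning rate and task are assumptions chosen purely for illustration.

```python
# Minimal two-layer neural network (one hidden layer) trained on XOR with NumPy,
# illustrating layers of interconnected nodes whose weights encode the learned knowledge.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights are the "connections" between the input, hidden and output layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the output of the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the connection weights to reduce the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())  # predictions should approach [0, 1, 1, 0]
```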

AI for cyber security and cyber security for AI

AI presents great opportunities in cybersecurity but, like any powerful general-purpose, dual-use technology, it also brings great challenges. AI can improve cybersecurity and defense measures, increasing the robustness, resilience and responsiveness of systems; yet AI, in the form of ML and deep learning, will also increase the sophistication of cyberattacks, enabling faster, better-targeted and more destructive attacks.

The application of AI in cybersecurity also raises security and ethical concerns. Among other things, it is unclear how responsibilities should be assigned for autonomous response systems, how to ensure that systems are behaving as expected, or what security risks the increasing anthropomorphization of AI systems poses.

This report will therefore explore the dual nature of the relationship between AI and cybersecurity. On the one hand, it will examine the possibilities that AI adoption offers for enhancing cybersecurity, which is especially important given the increase in cybersecurity breaches accompanying the COVID-19 crisis. On the other hand, it will focus on how cybersecurity for AI should be developed to make such systems safe and secure. In this regard, the report will explore the concept of attacks on AI, how AI-enabled systems can be subject to manipulation such as data poisoning and adversarial examples, and how best to protect AI systems from various types of malicious attack.
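To complement the data-poisoning sketch above, here is a hedged illustration of an ‘adversarial example’: a small, deliberately crafted perturbation that pushes a legitimate input towards a classifier’s decision boundary. The logistic-regression model, synthetic data and perturbation size are assumptions made purely for illustration.

```python
# Toy FGSM-style adversarial perturbation against a logistic-regression classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0:1]            # one legitimate input
w = model.coef_[0]    # learned weight vector
true_label = y[0]

# For logistic regression the loss gradient w.r.t. the input is (p - y) * w,
# so stepping along its sign increases the loss for the true label.
direction = np.sign(w) if true_label == 0 else -np.sign(w)
x_adv = x + 0.5 * direction   # epsilon = 0.5, chosen arbitrarily

print("original  prediction / P(class 1):", model.predict(x)[0],
      model.predict_proba(x)[0, 1].round(3))
print("perturbed prediction / P(class 1):", model.predict(x_adv)[0],
      model.predict_proba(x_adv)[0, 1].round(3))
```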
