
What is Artificial Intelligence (AI)? (With Definition, Developers, How It Works, and Its Types)

The term Artificial Intelligence was coined by computer scientist John McCarthy for the Dartmouth Conference in 1956, an event widely regarded in the academic world as the birthplace of AI. The term sprang from human imagination and drew society's attention to the possibility of intelligent machines and their impact on life in the future. At its core, it refers to the ability of machines to behave like the human brain, that is, to do things that require human-like thinking, such as reasoning, learning, solving problems, and perceiving.

However, the ideas behind AI are much older. Some of the most important early concepts came from British mathematician and computer scientist Alan Turing, including the Turing Test, a theoretical test to determine whether a machine could produce responses indistinguishable from a human's.
His 1950 paper "Computing Machinery and Intelligence" became a cornerstone of AI studies.

Symbolic AI was at the center of the AI tradition for most of the 1960s and 1970s; it grew from the strategy of programming computers to manipulate abstract symbols in order to solve problems based on logic.
This approach produced some of the first AI programs, among them ELIZA (1966), a pioneering natural language processing program by Joseph Weizenbaum that could simulate a conversation with a human. However, computing resources were scarce, and the amount of data needed for larger breakthroughs was not yet available.

In the 1980s, interest in AI rose again with investment in expert systems. Because medical and financial decision-making could be captured in explicit rules, expert systems were able to simulate the decisions of human experts in those fields.
Again, this could not be sustained: limitations in processing capacity and cost did not allow the scope of exploration that AI required. This led to another decline in interest, colloquially known as the "AI winter."

AI was reborn at the start of the new millennium with improvements in processing capability, the acquisition of large datasets, and new learning approaches, including, but not limited to, deep learning.
Deep learning flourished when, in 2012, researchers from the University of Toronto designed a deep neural network that achieved unprecedented accuracy in image recognition. Interest in AI research and its commercial applications surged again, and AI has since grown to touch every corner of the healthcare, transportation, finance, and even entertainment industries.

It continues to grow today, driving developments in fields such as natural language processing, autonomous systems, and robotics.
The future of AI looks even brighter the more deeply it is integrated into everyday life, but the decision-making and ethical issues society faces with AI must also be confronted. As the use of AI evolves, so does mankind's understanding of how seriously and profoundly it can change life.

Defining Artificial Intelligence: A Multifaceted Concept

Artificial intelligence refers to the development of computer systems or software that can perform tasks that would otherwise seem to require human intelligence. It covers a very wide range of activities such as problem-solving, learning, reasoning, decision-making, language comprehension, and perception. AI essentially aims to mirror or simulate human cognitive powers with the help of huge datasets and very complex algorithms. Even so, it is difficult to define AI with any precision, because the field has evolved over the decades and embraced many subfields with different approaches and goals. AI is most commonly divided into two broad categories: narrow (weak) AI and general (strong) AI.

Historically, the first ideas about AI emerged in the 1950s, based on the work of British mathematician and logician Alan Turing on thinking machines and his "imitation game."
His seminal 1950 paper introduced what became known as the Turing Test, a way of judging whether a machine could exhibit intelligent behavior similar to that of a human. The field of AI as an academic discipline soon came into being, along with the quest by early researchers to find ways to put human reasoning into machines.

The Dartmouth Conference of the mid-1950s gave the field momentum, with John McCarthy, Marvin Minsky, and others establishing the term "artificial intelligence." Decades of research followed, bearing fruit in the first expert systems of the 1970s and 1980s, which attempted to apply AI to specific areas such as medical diagnosis.
But these early systems were primitive and relied on manually encoded rules.

This led to the current term narrow or weak AI, describing systems that excel at one task but cannot transfer that ability to other applications. The best-known example is IBM's Deep Blue defeating chess grandmaster Garry Kasparov in 1997.
Virtual personal assistants such as Siri or Alexa are another example: they harness natural language processing and machine learning to perform specific tasks such as setting a reminder, reporting the current weather, or opening a given page on the internet.

General AI, or strong AI, is the concept of machines with human-like intelligence that can understand, learn, and perform a wide variety of intellectual tasks. For a long time, general AI lived mainly in science fiction: human-like robots, or creatures that could claim consciousness to the degree humans do.
Narrow AI is precisely the opposite; it is honed for a limited number of tasks and restricted to that scope. General AI attempts to fully approximate human cognitive abilities but so far remains an abstract construct, mainly for technical and ethical reasons.

Thus, the distinction between narrow and general AI reflects current constraints and limitations, at least as of this writing. Narrow, specialized AI systems, as defined above, are robust only within their niche and fail to generalize. General AI may hold the greatest promise, that of genuinely understanding humans, but it remains a goal toward which our understanding of intelligence and machine learning is only partially developed.

The Journey of AI: From Symbolic Logic to Machine Learning

Over the years, AI has seen several unprecedented developments in approach and methodology, moving from a rudimentary conceptual framework to advanced systems. The roots of AI lie in the early 1950s, when the field was adopted as an academic discipline. Early AI research used symbolic logic and rule-based systems that simulated human reasoning through complex sets of predefined rules. It was at the 1956 Dartmouth conference that the term "Artificial Intelligence" was first coined, and since then scientists have explored how human intelligence might be simulated in machines. At that time, rule-based systems were abundant, often referred to as "good old-fashioned AI" or GOFAI, since scientists believed that human cognition could be replicated simply by encoding logical rules.

However, this approach was soon found to be largely constrained.
In the 1970s and early 1980s it became clear that rule-based systems were seriously underpowered, especially for complex and ambiguous real-world problems. Such systems failed on tasks that require subtle intuition or the handling of unpredictable inputs, which eventually led to an "AI winter," a period of reduced funding and interest in AI research.

The late 1980s and early 1990s brought the rise of machine learning, which replaced explicit programming with approaches capable of making decisions based on patterns drawn from data. Deep learning, a type of machine learning that uses artificial neural networks, then took off in the early 2000s with advances in computing power and the availability of enormous amounts of data.
Inspired by the human brain's many interconnected neurons, deep learning networks are designed to digest huge datasets and extract subtle features from them, which allowed breakthroughs in image recognition, natural language processing, and other complex tasks.

Since then, deep learning has been the driving force behind most of AI. Notable milestones include Yann LeCun's pioneering neural network work in the late 1980s, AlexNet's ImageNet win in 2012, and the Transformer architecture in 2017, all of which pushed AI capabilities to unprecedented accuracy and sophistication. The field continues to evolve today with novel architectures and methodologies, such as reinforcement learning and generative models, unlocking new possibilities for AI in creativity, complex decision-making, and tasks beyond data analysis.

Ethical and Societal Considerations

The history of artificial intelligence amounts to nothing less than an exponential revolution, yet its future will depend as much on solutions to ethical dilemmas as on advances in technology. The story dates back to 1956, when the term "artificial intelligence" was coined at the conference that would later become famous as the birthplace of this branch of science, home to visionaries such as John McCarthy and Marvin Minsky who wanted to imagine machines thinking like humans. The first AI systems were rule-based programs that could solve specific tasks by following explicit logic rules. This was symbolic AI, later also known as "good old-fashioned AI," and these rule-based systems dominated the field in the 1960s and 1970s.

In the 1980s a new field emerged: expert systems, which could make decisions in concrete domains ranging from medical diagnosis to financial planning. They had drawbacks, however, namely inflexibility and the tedium of implementing rules by hand.
AI research therefore failed to meet expectations, and for the first time the field experienced what is today widely called an "AI winter," when funding and interest decreased dramatically.

The period from the late 1990s to the early 2000s was a turning point, with machine learning and more accessible data and computing than ever before.
Machine learning allowed AI to move from rigid rule-based practice to data-driven learning. The year 2012 marked a breakthrough in deep learning, a subfield of ML inspired by the structure of the brain: deep neural networks achieved great accuracy in both image and speech recognition, reigniting interest and investment in AI.

The past few years epitomize these trends, with capabilities expanding dramatically to create intelligent systems with incredibly wide applications. Deep learning has been applied to natural language processing, autonomous vehicles, recommendation systems, and much more.
Such developments raise important questions about biases in training data producing unfair or discriminatory outcomes when AI algorithms go unchecked. Another important concern is privacy: many AI systems depend on large amounts of personal data, which has sparked debates over data security and ownership.

Another essential question is job displacement through automation, so the overall economics of AI is also debated. Furthermore, as capabilities advance, questions of accountability grow over how far AI systems should be allowed to make life-and-death or similarly consequential decisions, as in certain autonomous vehicles or military uses.
With such progress, a key challenge for policymakers, researchers, and technologists has been ensuring that AI aligns with social values and does not violate basic rights.

The future of AI promises both benefits and responsibilities. Now is therefore the opportune moment, as AI transforms industries, to proactively address its ethical implications. Its benefits can only be harnessed if human rights are protected, transparency and fairness are upheld, and accountability in AI systems is maintained through collaboration among governments, private entities, and society.

Developers of Artificial Intelligence (AI)

Artificial intelligence existed as an idea for decades, but it matured into a field through the influence of several prominent people and organizations who pioneered its development. Some of the most important contributors and milestones in the history of AI are as follows:

1. Alan Turing (1912–1954): Often called the father of computer science and artificial intelligence, Turing laid the foundation for today's computers with his work in the 1930s.
In his 1950 paper "Computing Machinery and Intelligence" he proposed the famous Turing Test, a criterion for judging whether a machine could be said to exhibit human-like intelligence. Turing's concept of the universal machine has also been extremely influential in the development of theoretical computer science.

2. John McCarthy (1927–2011): In 1956, McCarthy organized the Dartmouth Conference, considered the formal birth of AI as a field of science. He coined the term "artificial intelligence" and played a central role in creating the programming language Lisp, invented in 1958.
Lisp quickly became the standard for AI research and was used to implement all kinds of AI algorithms.

3. Marvin Minsky (1927–2016) and Seymour Papert (1928–2016): Minsky co-founded the MIT AI Laboratory in 1959 and later worked closely with Papert there. Their 1969 book Perceptrons criticized early neural network efforts as inadequate in their simple forms.
While this seemed a setback at the time, work building on these ideas ultimately led to much more complex networks.

4. Geoffrey Hinton (born 1947): Regarded by many as one of the "fathers" of deep learning, Hinton is credited with helping demonstrate the power of backpropagation in 1986, changing the face of neural networks and setting the stage for the deep learning revolution we are experiencing today.
Much of his work since the 2000s has advanced topics such as speech recognition and image classification, and through his roles at Google and the University of Toronto his influence cannot be ignored.

5. Yann LeCun (born 1960): Known for his work in computer vision and deep learning, LeCun focused on convolutional neural networks (CNNs) in the late 1980s and early 1990s. His research on backpropagation algorithms and their applications in image recognition led to some of the most important breakthroughs in modern AI.
He also serves as Chief AI Scientist at Meta (formerly Facebook).

6. Andrew Ng (born 1976): Ng made significant contributions to making AI more accessible to the world. In 2011, he co-founded Google Brain, which focuses on large-scale machine learning and deep learning.
He has taught an enormous number of students through his courses on machine learning concepts, helping to advance the field. Ng also co-founded Coursera in 2012 to expand education and promote research in AI.

7. OpenAI (founded 2015): OpenAI was created with the mission of ensuring that the benefits of artificial general intelligence reach all of humanity.
Models developed by OpenAI, such as GPT-2 in 2019 and GPT-3 in 2020, have demonstrated the potential of AI in natural language processing. OpenAI has set benchmarks that distinguish it from other firms and has committed itself to developing AI responsibly.

8. DeepMind (founded 2010): Acquired by Google in 2014, DeepMind became famous through its AlphaGo program, which defeated world champion Go player Lee Sedol in 2016. DeepMind's research has greatly advanced reinforcement learning and brought real applications to healthcare and complex problem solving, such as its protein folding work with AlphaFold in 2020.


9. IBM Watson (launched in 2011): IBM Watson won over large public audiences when it beat human champions on the quiz show Jeopardy!, a demonstration that showcased the possibilities of AI in natural language understanding. IBM Watson is already being used in healthcare, finance, and customer service environments, an example of the versatility of AI technologies.


10. Baidu AI Team: Baidu, a leading Chinese technology company, established its dedicated AI research effort in 2013. Its main research focus has been on natural language processing and autonomous driving, and the work of the Baidu AI team has contributed greatly to advancing AI applications in China and worldwide.


11. Tesla (Autopilot, 2015): Tesla, an electric vehicle pioneer, has made significant advances in autonomous driving technology through AI-powered features such as Autopilot and Full Self-Driving. Its layered use of AI has deeply influenced the future of the automotive industry.


12. Facebook AI Research (FAIR), launched in 2013: Facebook's (now Meta's) research division produces open-source contributions that push the boundaries of AI, moving research on computer vision, natural language processing, robotics, and more into a more collaborative and innovative space.

These milestones and contributors reflect how AI has evolved through cutting-edge research, technological advances, and ethical considerations. Along this path, new contributors will surely emerge to continue shaping the future of artificial intelligence.

How Does Artificial Intelligence (AI) Work?

Artificial intelligence is created to allow computer systems to approximate human-like intelligence. AI systems must provide capabilities that normally demand human intelligence, such as thinking and problem-solving, learning, perception, and understanding language. AI depends on a combination of several technologies and approaches, some of which are discussed below:

1. Machine Learning: Machine learning is itself a branch of AI. It involves designing algorithms that help computers learn from data and make decisions or predictions based on that data.
The main types of machine learning are supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, algorithms are trained on labeled data. In unsupervised learning, they are trained to identify patterns in unlabeled data. In reinforcement learning, which is based on sequential decision-making, algorithms learn behavior from rewards and penalties.
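As a rough illustration of the supervised case, the sketch below trains a small classifier on scikit-learn's bundled Iris dataset; the dataset, the decision-tree model, and the train/test split are illustrative assumptions rather than a recommendation.

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed).
# The Iris dataset and the decision-tree model are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                  # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)          # hold out data for evaluation

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)                        # learn patterns from labeled examples

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```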

2. Neural Networks: Neural networks are the building blocks of deep learning, which itself grew out of machine learning. They consist of layers of interconnected nodes, known as neurons, that process and transform data, with an architecture loosely inspired by the human brain.
Deep neural networks have proven very powerful at finding patterns and complex representations in large datasets, especially in applications such as image and speech recognition.
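To make the idea of layers of interconnected neurons concrete, here is a minimal NumPy sketch of a single forward pass through a two-layer network; the layer sizes and random weights are placeholders, not a trained model.

```python
# Forward pass through a tiny two-layer neural network (NumPy only).
# Layer sizes and random weights are illustrative; no training is performed.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # one input example with 4 features

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)    # hidden layer -> output layer

hidden = np.maximum(0, x @ W1 + b1)              # ReLU activation in the hidden layer
logits = hidden @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over 3 hypothetical classes

print("class probabilities:", probs.round(3))
```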

3. NLP: Natural language processing enables computers to interpret and also generate human language.
Commonly known applications include text analysis, sentiment analysis, machine translation, and chatbots. Models such as GPT-3 are trained so that the content they produce in response reads as if written by a human.
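As a hedged illustration of text analysis, the toy sentiment classifier below combines a TF-IDF bag-of-words with logistic regression from scikit-learn; the handful of hand-written sentences is far too small for real use and exists only to show the shape of the pipeline.

```python
# Toy sentiment-analysis sketch (assumes scikit-learn). The tiny hand-made
# dataset is illustrative only; real systems train on far more text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I love this phone", "great battery life", "terrible screen",
         "I hate the camera", "works great", "awful and slow"]
labels = ["pos", "pos", "neg", "neg", "pos", "neg"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)                 # learn word-to-sentiment associations

print(model.predict(["the battery is great", "slow and awful camera"]))
```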

4. Computer Vision: Computer vision teaches computers to make sense of the visual world.
It enables machines to read images, understand what is happening in a video, and much more. Applications include image recognition, object detection, facial recognition, and even helping to guide self-driving vehicles.
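A minimal image-recognition sketch, assuming scikit-learn is available: it classifies the library's small 8x8 handwritten-digit images with a nearest-neighbour model, which stands in for the far larger convolutional networks used in practice.

```python
# Image-recognition sketch on scikit-learn's 8x8 handwritten digits.
# The dataset and the simple nearest-neighbour model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)        # each row is a flattened 8x8 pixel image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```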

5. Expert Systems: These AI systems capture the acumen of human experts for making decisions in particular domains.
They use knowledge bases and rules to answer questions, support decisions, and provide expert advice. Applications of expert systems appear mostly in the medical and financial fields.
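To show the knowledge-base-plus-rules idea, here is a toy rule-based sketch in plain Python; the symptoms, rules, and advice are entirely invented for illustration and are not real medical guidance.

```python
# Toy rule-based "expert system" sketch. Rules and advice are invented
# purely for illustration; this is not real medical guidance.
def triage(symptoms):
    rules = [
        ({"fever", "cough", "shortness_of_breath"}, "refer to a physician urgently"),
        ({"fever", "cough"}, "suggest rest and monitoring"),
        ({"headache"}, "suggest hydration and rest"),
    ]
    # Fire the first rule whose conditions are all present among the input facts.
    for conditions, advice in rules:
        if conditions <= symptoms:
            return advice
    return "no rule matched; consult a human expert"

print(triage({"fever", "cough"}))
```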

6. Reinforcement Learning: Here, an AI agent is trained to take a sequence of decisions in an environment while trying to maximize a reward signal.
Reinforcement learning algorithms are typically applied where an agent interacts with its environment and learns the best strategies by trial and error, for example in robotic arms or game playing.
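The sketch below is a minimal tabular Q-learning loop on an invented five-cell corridor (reach the rightmost cell for a reward); the environment, rewards, and hyperparameters are all assumptions chosen only to show the trial-and-error update.

```python
# Minimal tabular Q-learning sketch on a 1-D corridor: start at cell 0,
# reach cell 4 for a reward of +1. Environment and hyperparameters are invented.
import random

n_states, actions = 5, [-1, +1]           # actions: move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for _ in range(500):                      # training episodes
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print("best action per state:",
      [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states)])
```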

7. Algorithm development: This covers the progression from relatively simple models, for example linear regression or decision trees, to deep learning architectures in which AI systems can automatically discover patterns and relationships.
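To contrast the two simple model families named above, this sketch fits a linear regression and a shallow decision tree to the same toy one-dimensional data (assumes NumPy and scikit-learn); the data-generating rule y ≈ 2x is made up for the example.

```python
# Fit a linear regression and a decision tree to the same toy data.
# The noisy y ≈ 2x relationship is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=1.0, size=200)   # noisy linear relationship

linear = LinearRegression().fit(X, y)
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

print("linear model slope ~", round(linear.coef_[0], 2))       # close to 2.0
print("tree prediction at x=5 ~", round(tree.predict([[5.0]])[0], 2))
```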

8. Feedback Loop: AI systems are characterized by a feedback loop in which a system's outputs are evaluated and the results of that evaluation are fed back to improve the system. The process is iterative, so performance improves over time.
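A bare-bones sketch of this evaluate-and-improve loop: gradient descent repeatedly measures the error of a one-parameter model and feeds that error back to adjust the parameter; the data points and learning rate are invented for the example.

```python
# Feedback-loop sketch: evaluate the error, feed it back, adjust, repeat.
# The data points and learning rate are invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # roughly y = 2x
w, lr = 0.0, 0.01                  # initial parameter guess and learning rate

for _ in range(200):
    errors = [w * x - y for x, y in zip(xs, ys)]            # evaluate current outputs
    grad = sum(2 * e * x for e, x in zip(errors, xs)) / len(xs)
    w -= lr * grad                                          # feed the error back

print("learned slope ~", round(w, 2))   # approaches 2.0 over the iterations
```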


In short, AI systems learn from data and experience; they adapt their behavior and perform better over time. The field is vast and still broadening, with new techniques and approaches being designed for increasingly complex tasks.

Types of Artificial Intelligence (AI)

Artificial intelligence, with its long history, is divided into several types depending on what it can do and what it can be used for. Here are the types, along with a very brief historical overview of each:

1. Narrow or weak AI

Narrow or weak AI systems are described as "narrow" in the sense that they are specifically programmed to accomplish a particular task or group of tasks, and "weak" in the sense that they cannot think for themselves like the human brain. The concept began to gain ground in the 1950s, led by the formal establishment of AI as a field of research at the 1956 Dartmouth Conference. Pioneering applications include chess-playing computers, such as IBM's Deep Blue, which defeated world champion Garry Kasparov in 1997; more recent examples are voice assistants like Siri, launched in 2011, and customer service chatbots that apply natural language processing to answer very specific questions.

2. General or strong AI

General AI, or Artificial General Intelligence (AGI), would be truly human-like in its cognitive abilities, in the sense that it could understand, learn, and perform any intellectual task that a human can. As of the 2020s, AGI is still largely speculative. Researchers have theorized about it since the 1950s, when Alan Turing introduced the Turing Test to measure a machine's ability to imitate human-like intelligence. It remains to be seen whether and how AGI can actually be developed.

3. Machine learning

Machine learning is a specific area of AI with roots in the 1950s, based on the idea of developing algorithms through which a computer can learn directly from experience using data. The term "machine learning" was coined by machine learning pioneer Arthur Samuel in 1959. It uses techniques such as supervised learning, unsupervised learning, and reinforcement learning to enable computers to detect patterns in data without being explicitly programmed.

4. Deep learning

Deep learning is a subset of machine learning. Its origins date back to the 1960s, when early neural networks were developed, but the modern era began in the 2010s, when AlexNet won the ImageNet competition in 2012. Deep learning uses multilayer neural networks and underpins speech and image recognition.

5. Reinforcement learning

Reinforcement learning has become a field in its own right over the past few decades, with seminal work by researchers including Richard Sutton and Andrew Barto. It trains AI agents to perform tasks in a given environment by choosing actions that maximize the rewards received. Applications include game playing, such as DeepMind's AlphaGo defeating the world champion at Go in 2016.

6. Natural Language Processing (NLP)

Coming into existence in the 1950s, NLP is the field that focuses on helping computers understand and generate human language. Among its major breakthroughs, Transformer models such as BERT (2018) and GPT (2018) revolutionized translation, sentiment analysis, and conversational AI in the late 2010s.

7. Computer vision

Computer vision is said to have begun in the 1960s with efforts to create systems capable of interpreting simple images. The real progress occurred in the 2000s and 2010s, driven largely by deep learning. Today it powers systems such as facial recognition, driverless cars, and object detection.

8. Expert systems

In the early 1970s the first programs were created that could simulate the decision-making skills of a human expert in a particular field, the first stage of what became known as expert systems. The best-known expert system is MYCIN, developed in the 1970s to diagnose bacterial infections.

9. Neural network

Neural networks began in the 1940s as the first models inspired by the brain. A second major wave of interest came with early perceptron models developed by Frank Rosenblatt in 1958. Modern neural networks, in their current forms, are the basic components of deep learning and have proven extremely useful in applications such as pattern recognition and clustering.

10. Cognitive computing

Cognitive computing is the branch of AI that went mainstream with IBM Watson's victory on the game show Jeopardy! in 2011. It is a sub-domain of AI that mimics human thinking using natural language processing and pattern recognition, among other AI techniques. The discipline focuses on difficult, high-level decision-making tasks, mimicking human problem-solving skills.

All of the above categories of AI reflect the historical development of the field over several decades, with successive breakthroughs building on previous successes, each taking us a step closer to progressively more sophisticated forms of intelligence.

Robotics:
AI-based robots are designed to act autonomously or semi-autonomously to complete specific tasks, and the field has gained real momentum in recent decades. The history goes back to the 1950s, when George Devol designed the first digitally operated robot, called Unimate.
In 1961, Unimate was installed on a General Motors assembly line to perform repetitive operations such as stacking metal parts. Over time, and by the late 20th century, the application of AI in robotics evolved, paving the way for technology that allowed robots to perform more complex operations. By the 2000s, robots could perform household tasks such as vacuuming floors, with the first Roomba released in 2002. AI-powered robots have since made complex surgery possible, for example the da Vinci Surgical System, approved by the FDA in 2000. Robotics continues to pair with AI to deliver more advanced and autonomous capabilities.

1. Autonomous system

Decades have passed since the first autonomous systems were conceived, but the last few years have brought huge advances, both practical and conceptual. Work on self-driving vehicles began in the 1980s, when a team of engineers at a Munich university partnered with Mercedes-Benz to develop an early driverless car. Google's self-driving car project, which began in 2009, grew out of that interest and innovation, and companies like Tesla and Waymo now test their autonomous vehicles on public roads. Similarly, AI-powered drones use computer vision and AI algorithms for surveillance and package delivery. Smart home systems, the concept of homes that can do things themselves, took hold in the 2010s, and technologies like the Amazon Echo have changed how people interact with their homes.

2. Virtual Agent

Virtual agents, or virtual assistants, originated with ELIZA, created at MIT in the 1960s as one of the earliest NLP systems. Fast forward to 2011, when Apple released Siri, putting virtual assistants into mainstream consumer electronics; Google Assistant, Amazon Alexa, and others followed. Their capabilities and uses have spread, but the purpose of these AI entities remains to understand human questions, handle tasks, provide information, and even make small talk. Virtual agents are now part of our daily lives: in our phones, at home, and in customer service systems around the world.

3. Sentiment analysis

In its early years, sentiment analysis used simple keyword-based methods to detect whether text was positive, negative, or neutral. It gained popularity with the advent of social media in the early 2000s, and AI techniques have improved it remarkably in the last decade. Deep learning models using neural networks capture emotion more accurately and can even detect sarcasm. Companies now monitor customer opinions on social media, analyze reviews, and track brand reputation in nearly real time.

4. Predictive Analysis

Predictive analytics has its origins in statistical techniques established in the 1940s and 1950s, but it was only widely applied from the late 1980s and 1990s onward, when computer-based forecasting models spread across industries. Big data and machine learning exploded during the 2000s, enabling much more complex predictive analytics as companies began using large amounts of information to forecast trends and behaviors. Predictive analytics is now applied across domains such as financial forecasting, where stock prices and market trends are predicted. It is also very useful for marketing: companies use it to understand customer behavior and create targeted advertisements. If AI continues to advance as it has, predictive analytics will penetrate decision-making far more deeply than it does today.

Father of Artificial Intelligence (AI)

John McCarthy was an American computer scientist who played a central role in making AI a recognized field. In 1955, McCarthy coined the term "artificial intelligence," meaning the ability of machines to do what humans typically do intelligently. The following year, in 1956, he organized the event that formally established AI as an academic field: the Dartmouth Workshop. The workshop brought together some of the brightest minds of the time, people like Marvin Minsky and Claude Shannon, to discuss the possibilities of machine intelligence. McCarthy's contribution amounted to far more than coining a term and organizing an event.

In 1958, he also created the programming language Lisp, which over time became a basis of AI research and remains influential to this day. He developed concepts such as time-sharing and contributed fundamental ideas in logic and automated reasoning, which formed the foundation for much future progress and innovation in AI.
He began teaching at Stanford University in 1962, where he continued his work on AI systems and their potential applications. Other scientists also did seminal work on the development of AI, yet perhaps no one provided the vision and groundwork for the field as McCarthy did.

Mother of Artificial Intelligence (AI)

The concept of AI does not come with an origin story in the sense of a single founder; it is the product of work by many different people who contributed at different times. The history of AI thus begins long before the 20th century, with early reflections on the nature of thought, reasoning, and computation.

One of the first contributors to ideas that would eventually find their place in AI was the English mathematician Ada Lovelace, who worked with Charles Babbage on the Analytical Engine in the 1840s. In her notes on the subject, she envisioned machines capable of manipulating symbols and doing more than merely calculating numbers.
Lovelace did no work on AI as such, but she laid a foundation for discussing machines that might one day do things other than computation.

Fast forward to the 20th century, and Alan Turing, a British mathematician and logician, enters the picture.
In 1936 he described the Turing Machine, an abstract construct that could mimic any possible computational process, which opened the era of algorithmic computation underlying AI. Turing's work inspired further research on machine intelligence when, in 1950, he published his seminal paper "Computing Machinery and Intelligence," in which he developed a method, now known as the Turing Test, for assessing a machine's ability to exhibit intelligent behavior.

The founding of AI as a discipline was formalized at a conference held at Dartmouth in 1956, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. It was at that conference that the term "artificial intelligence" was adopted and very ambitious objectives were set for producing machines that operate according to human-like thought processes.
During the 1960s and 1970s, AI work flourished, particularly in symbolic logic, natural language processing, and early forms of machine learning.

But the 1980s also brought a series of setbacks, known as "AI winters," when expectations grew too high and technological progress was too little.
The field flourished once again in the 1990s and 2000s as computational power, available data, and new machine learning techniques increased. In 1997, IBM's Deep Blue made headlines for defeating world chess champion Garry Kasparov.

AI has evolved rapidly over the past decade, driven by deep learning, with developments such as Google DeepMind's AlphaGo defeating the world champion at Go in 2016. AI now permeates applications ranging from autonomous vehicles to personalized recommendations, none of which would be possible without the rich history built by so many contributors and a constantly evolving technology landscape.

Read Also:

  1. Artificial Intelligence And Cybersecurity in Covid-19 pandemic
  2. Applications of Artificial Intelligence and Associated Technologies
  3. Artificial Intelligence in Internet Services
  4. Artificial Intelligence (AI) and The Future of The Internet

 

Anil Saini
