
What is Artificial Intelligence (AI)?


Artificial intelligence uses computers and machines to simulate the human mind’s problem-solving and decision-making abilities.

One of many definitions to appear over the past few decades is John McCarthy’s, offered in a 2004 paper: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

However, decades before this definition, the artificial intelligence conversation began with Alan Turing’s landmark paper, “Computing Machinery and Intelligence,” published in 1950. In it, Turing, often called the “father of computer science,” asks: is it possible for machines to think? From there he proposes what is now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a machine’s and a human’s text responses. Although the test has been the subject of extensive criticism since its publication, it remains an important part of AI’s history and of the philosophy of language.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which became one of the leading textbooks in the field. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting:

Human approach:

  • Systems that think like humans
  • Systems that act like humans

Ideal approach:

  • Systems that think rationally
  • Systems that act rationally

In a nutshell, artificial intelligence is a field that combines computer science and robust datasets to solve problems. The terms “artificial intelligence” and “machine learning” are often mentioned together, and sometimes used interchangeably, to describe this broad field. These disciplines use AI algorithms to build expert systems that make predictions or classifications based on input data.
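As a rough illustration of that idea, the sketch below (assuming scikit-learn is installed) trains a small classifier on a handful of labelled examples and then classifies a new input; the loan-approval framing and all of the numbers are purely hypothetical.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical data: [age, annual income] and the past decision.
X_train = [[35, 52000], [22, 18000], [47, 91000], [29, 24000]]
y_train = ["approve", "decline", "approve", "decline"]

# Fit a classifier on the labelled examples, then classify a new applicant.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

print(model.predict([[31, 67000]]))  # predicted class for new input data
```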

Today, a lot of hype still surrounds AI development, which is to be expected of any emerging technology. According to Gartner’s hype cycle, product innovations such as self-driving cars and personal assistants follow “a typical progression of innovation, from initial enthusiasm to disillusionment and finally to an understanding of the innovation’s relevance and role in a market or domain.” As Lex Fridman noted in his 2019 MIT lecture, we are at the peak of inflated expectations and approaching the trough of disillusionment; as discussions about the ethics of AI begin to emerge, the first signs of that trough are already visible.

Artificial intelligence classifications—weak AI vs. strong AI

Artificial Narrow Intelligence (ANI), also known as Weak AI, is AI trained and focused on a limited set of tasks. Most of today’s AI is weak AI. “Narrow” is the more accurate term for this form of AI, because it is anything but weak: it enables sophisticated applications such as Siri, Alexa, and self-driving cars like Tesla’s.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). AGI, or general AI, is a theoretical form of AI in which a machine would think like a person: able to solve problems, learn, and plan for the future. ASI, also known as superintelligence, would surpass the intelligence and capabilities of the human brain. Although strong AI remains entirely theoretical, with no real-world applications today, researchers continue to work toward it. In the meantime, the best example of ASI may come from science fiction: HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Machine learning vs. deep learning

To avoid confusion, it’s important to understand the differences between deep learning and machine learning. There are several subfields in artificial intelligence, but deep learning is essentially a sub-discipline within the larger area of machine learning.

Neural networks are the building blocks of deep learning. In the context of deep learning, “deep” refers to a neural network with more than three layers, which includes the inputs and outputs.
The main difference between deep learning and machine learning is how each learns. Deep learning automates much of the feature extraction, removing some of the manual human involvement otherwise required and enabling the use of larger datasets. As Lex Fridman remarked in the same MIT lecture referenced above, deep learning can be thought of as “scalable machine learning.” Classical, or “non-deep,” machine learning is more dependent on human intervention: human experts construct the hierarchy of features needed to understand the differences between data inputs, which typically requires more structured data to learn from.

Although “deep” machine learning can use labeled datasets, also known as supervised learning, to inform its algorithm, it doesn’t strictly require them. It can ingest unstructured data (e.g., text or photos) in its raw form and automatically determine the attributes that distinguish different categories of data from one another. Classical machine learning, by contrast, tends to rely on human intervention to prepare and structure that data before learning can begin.
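For a concrete, if toy, picture of what “more than three layers” means, here is a minimal sketch assuming TensorFlow/Keras is available; the layer sizes and the 20-feature input are arbitrary placeholders rather than a recommendation.

```python
import tensorflow as tf

# A small "deep" feed-forward network: an input layer, three hidden layers,
# and an output layer, i.e. more than three layers in total.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                       # 20 raw input features
    tf.keras.layers.Dense(64, activation="relu"),      # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),      # hidden layer 2
    tf.keras.layers.Dense(32, activation="relu"),      # hidden layer 3
    tf.keras.layers.Dense(1, activation="sigmoid"),    # binary prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # prints the layer stack and parameter counts
```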

Applications of artificial intelligence (AI)

AI systems are being used in a variety of ways in the real world today. Listed here are some typical examples:

Speech recognition: Also known as computer speech recognition or speech-to-text, this capability uses natural language processing (NLP) to convert human speech into a written format. Many mobile devices incorporate speech recognition to perform voice searches (for example, Siri) or to make messaging more accessible.
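As one possible illustration (not the only approach), the sketch below uses the open-source Python SpeechRecognition package to transcribe an audio file; the file name is hypothetical, and recognize_google sends the audio to a free web API, so it requires an internet connection.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a recorded voice note (hypothetical file) and capture its audio data.
with sr.AudioFile("voice_note.wav") as source:
    audio = recognizer.record(source)

# Convert the speech to text via Google's free web recognizer.
print(recognizer.recognize_google(audio))
```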

Customer service: Online virtual agents are increasingly replacing human service representatives along the customer journey. They answer frequently asked questions and provide personalized advice, such as cross-selling products or recommending sizes, transforming the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps such as Slack and Facebook Messenger, and tasks usually handled by virtual assistants and voice assistants.
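To make the idea tangible, here is a deliberately simple, rule-based sketch of a virtual agent in plain Python; real virtual agents typically use NLP models rather than keyword matching, and the questions and answers below are invented.

```python
# Toy FAQ bot: match keywords in the customer's message to a canned answer,
# otherwise hand off to a human representative. All content is hypothetical.
FAQ = {
    ("return", "refund"): "You can return any item within 30 days for a full refund.",
    ("shipping", "delivery"): "Standard shipping takes 3 to 5 business days.",
    ("size", "sizing"): "A sizing chart is available on every product page.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in FAQ.items():
        if any(word in text for word in keywords):
            return answer
    return "Let me connect you with a human representative."

print(reply("How long does delivery take?"))
```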

Computer vision: This technology enables computers and systems to analyze digital photos, videos, and other visual inputs and take action based on the information they receive. The capacity to provide recommendations is what distinguishes it from simple image recognition tasks. Powered by convolutional neural networks (CNNs), computer vision has applications in photo tagging on social media, radiological imaging in healthcare, and self-driving cars in the automotive industry.
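As a rough sketch of what a CNN looks like in code (again assuming TensorFlow/Keras), the model below classifies 32x32 RGB images into 10 classes; the layer sizes are illustrative rather than tuned for any real task.

```python
import tensorflow as tf

# A minimal convolutional neural network for image classification.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                    # 32x32 RGB images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),     # learn local visual features
    tf.keras.layers.MaxPooling2D(),                       # downsample feature maps
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),      # probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```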

Recommendation engines: Using data gathered from previous purchases, AI algorithms can identify patterns that help develop more effective cross-selling strategies. Online retailers use this to recommend relevant add-ons to customers during the checkout process.
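A toy version of that pattern-finding step, in plain Python: count which items have been bought together in past orders (the data here is invented) and suggest the most frequent companions of whatever is in the cart.

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders.
past_orders = [
    {"laptop", "mouse", "laptop sleeve"},
    {"laptop", "mouse"},
    {"phone", "phone case"},
    {"laptop", "laptop sleeve"},
]

# Count how often each pair of items appears in the same order.
co_counts = Counter()
for order in past_orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(cart_item, top_n=2):
    """Suggest the add-ons most often purchased alongside cart_item."""
    scores = {other: n for (item, other), n in co_counts.items() if item == cart_item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("laptop"))  # e.g. ['laptop sleeve', 'mouse'] on this toy data
```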

Stock portfolio optimization: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.

The History of Artificial Intelligence: Significant Dates and Persons

The idea of ‘a machine that thinks’ dates back to ancient Greece. Since the advent of electronic computing, the following events and milestones in the progress of artificial intelligence are worth noting:

  • Alan Turing publishes Computing Machinery and Intelligence in 1950. To address the question “Can machines think?”, the legendary WWII codebreaker proposes the Turing Test, intended to determine whether a computer can demonstrate the same intelligence as a human. The value of the Turing Test has been debated ever since.
  • John McCarthy coins the phrase “artificial intelligence” at the first official AI conference, held at Dartmouth College in 1956. (McCarthy would go on to develop the Lisp programming language.) That same year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever functioning AI software program.
  • Frank Rosenblatt develops the Mark 1 Perceptron, the first computer that ‘learned’ via trial and error. Later, Marvin Minsky and Seymour Papert publish Perceptrons, which becomes both a milestone work on neural networks and, for a time, a deterrent to further neural network research.
  • The 1980s: Neural networks that use a backpropagation algorithm to train themselves become widely employed in AI applications. Decades later, Baidu’s Minwa supercomputer would use a special kind of deep neural network, the convolutional neural network, to identify and categorize images more accurately than the average human.
  • In 2016, DeepMind’s AlphaGo program, powered by a deep neural network, beats the world champion Go player Lee Sedol. The victory is remarkable in light of the enormous number of possible moves as the game progresses (nearly 14.5 trillion after just four moves!). Google had purchased DeepMind back in 2014 for a reported USD 400 million.
