Artificial intelligence is the replication, by computers, of human intellectual processes like learning, reasoning, and adaptation. Its areas of application include image recognition, language translation, surveillance, and speech recognition. This article discusses the meaning, history, workings, and examples of artificial intelligence, as outlined below.
Artificial Intelligence Definition
Artificial intelligence is the execution, by computers, of activities commonly associated with humans.
Although there are many different forms of artificial intelligence, it is generally defined by the ability of computer systems to carry out sophisticated processes that require some form of adaptive intellectual capacity.
Factors that must be considered when defining artificial intelligence include:
- Simulation of human intellectual processes
- Relevance in various industries
- Links shared with other branches of computer science
Artificial intelligence is a branch of computer science that involves developing systems and applications able to perform functions that typically require human intelligence.
The above definition highlights the central role of systems and applications, which suggests another way to define artificial intelligence.
Artificial intelligence (AI) is the assemblage of systems and applications that are designed to carry out human-simulated processes.
It is also necessary to note that terms like ‘human intelligence’, ‘human-related’ and ‘simulated’ in the definitions so far refer to the capabilities of smart computer systems. This brings us to yet another definition.
Artificial intelligence is the branch of computer science concerned with the development of smart devices, and the use of these devices to perform sophisticated, intellectual tasks.
When we say ‘intellectual,’ we refer to the exhibition of attributes associated with human psychology. This is another perspective from which artificial intelligence can be viewed.
Artificial intelligence is the concept of building and applying computer tools that are equipped with functionalities like analysis, adaptation, reasoning and learning.
These functionalities are all associated with humans, and the ability of computers to simulate and exhibit the same functionalities, is the basic foundation of artificial intelligence.
While it is possible for artificial intelligence systems to function independently, they often work collaboratively in an integrated digital network of devices and applications.
We may therefore state that artificial intelligence is an integrated application of computers designed to handle human-like cognitive tasks. Examples of the application of artificial intelligence in this area include the Internet of Things (IoT) and smart building technology, also known as domotics.
Because artificial intelligence is generally developed on the basis of human abilities, it is often regarded as a simulation, or emulation, of these abilities.
Therefore, in some definitions, artificial intelligence is portrayed as an aspect of computer science that mimics the cognitive functions associated with humans, such as problem-solving and learning.
History of Artificial Intelligence
One of the earliest known works in the field of modern artificial intelligence is a study conducted by Warren McCulloch and Walter Pitts in 1943, in which they carried out analyses and made propositions for the design of an artificial neuron system.
Based on this study and other findings, Donald Hebb developed the Hebbian learning rule six years later, in 1949. The rule proposes a mechanism for modifying the connections between artificial neurons.
A 1950 publication titled Computing Machinery and Intelligence by the English polymath Alan Turing proposed a test to assess the advancement of artificial intelligence by measuring the thinking capability of a computer.
The test, known as the Turing Test, has remained relevant in artificial intelligence assessment for decades.
In 1952, a brain neuron model known as the Hodgkin-Huxley model was developed by Alan Hodgkin and Andrew Huxley.
The model simulates the ion-transfer mechanisms that control signal generation and propagation along the axon. It was also instrumental in defining the mechanism of cognition, and further inspired the development of artificial intelligence.
Allen Newell and Herbert A. Simon are credited with developing the first artificial intelligence program, Logic Theorist, in 1955.
The program was designed on the basis of mathematical algorithms, and was effectively applied in proving mathematical theorems.
In 1956, John McCarthy, an American computer scientist, coined the term ‘Artificial Intelligence’ alongside Rochester, Shannon and Minsky in the proposal for the Dartmouth Conference in New Hampshire. This was one of the key steps in the establishment of AI as a distinct field of study and a branch of computer science.
As the field of computer programming developed, it became clear that computer programs would play a foundational role in the design and implementation of artificial intelligence.
In 1954, George Devol developed Unimate, one of the first programmable robots ever invented.
The first chatbot was designed in 1966 by Joseph Weizenbaum, a German-American professor of computer science at MIT. The application, named ELIZA, was created as part of efforts to assess the practical capabilities of artificial intelligence.
In 1969, the first general-purpose mobile robot was built, serving as one of the earliest practical demonstrations of artificial intelligence.
Wabot, a humanoid robot, was developed in Japan in 1972.
Between 1974 and 1980, there was a decline in the development and productivity of the artificial intelligence sector.
During this period, referred to as the ‘First AI Winter’, minimal financial, bureaucratic and intellectual support was provided for AI projects.
The main reason behind the first AI winter was a decline in general interest in artificial intelligence.
This decline in interest was in turn caused by limitations in the functionality of AI systems at the time, such as very low processing speeds and small system memory.
With the introduction of integrated computer networks in the 1980s, artificial intelligence experienced a resurgence in interest, funding, and advancement.
At the same time, a new development known as the ‘Expert System’ was introduced. This system was designed to simulate the decision-making ability of a human expert.
In 1980, the American Association for Artificial Intelligence held its first national conference at Stanford University.
The expert system was based on a simple software design that used programmed rules to analyze and solve problems. Expert systems found application in the medical, commercial and banking sectors, and remained relevant until the late 1980s.
From 1987 to 1993, another decline in funding and development occurred in the field of artificial intelligence. This is referred to as the ‘Second AI Winter’.
The reason behind the decline in funding for AI projects at this time was the slow rate of advancement of the technology, which was coupled with high costs of development and implementation.
From the early 1990s, another new development in AI technology emerged, called ‘intelligent agents’. Intelligent agent technology improved on the efficiency and analytical abilities of earlier developments.
It is important to note that intelligent agent technology is the precursor to recent artificial intelligence technologies like virtual assistants and big data analytic systems.
The early 1990s also saw the introduction and growth of the concepts of machine learning and deep learning. These developments were a big leap from earlier AI technologies, because they added adaptive functions to AI systems, which had previously possessed only analytical capabilities.
In 1990, the American computer scientist Robert Schapire introduced the concept of boosting, in which several weak learning algorithms are combined iteratively into a single, stronger one. Artificial intelligence technology was improved on the basis of this concept.
One of the AI functionalities improved by these developments in the 1990s is speech recognition. The introduction of deep learning and machine learning was instrumental in the advancement of speech recognition technology.
In 1997, Deep Blue, an artificial intelligence program developed by IBM, competed successfully against world chess champion Garry Kasparov. This was an illustration of a significant level of advancement in AI technology.
The year 2002 saw the advent of home-based AI applications with the creation of the first robotic vacuum cleaner, the Roomba. This was also one of the early developments in the field of smart home technology.
Artificial intelligence began to prove its financial viability from the mid-2000s. This involved the boom in virtual assistance, data sharing and social media technologies, with the emergence of companies like Facebook and Twitter.
It is also relevant to note that artificial intelligence is one of the aspects of technological development which are considered crucial for the achievement of sustainable development.
Recent years have seen AI innovation from technology companies like Google, IBM, Amazon and Facebook.
In 2014, an AI chatbot named Eugene Goostman was reported to have passed the Turing test, suggesting that it could interact on a par with humans.
However, the conditions of the test and the implications of its results have been disputed.
Future developments in artificial intelligence are likely to gain application in various sectors such as transport, manufacturing, agriculture, energy, finance and health.
How Artificial Intelligence Works
Artificial intelligence works by the use of algorithms to analyze and process data, thereby identifying patterns that can be used to make intelligent decisions. Through analysis, artificial intelligence can learn, adapt, and draw correlations which it may apply to improve its performance.
In the process of data assessment, artificial intelligence combines multiple datasets, which are compared and examined to identify correlative, overlapping features.
The analysis of data by AI systems is a continuous, iterative process used to learn about the data while measuring and testing the AI system’s level of expertise.
Artificial intelligence works on the basis of analysis, decision-making, measurement and testing. These are all typically human cognitive functions, and they form the basis on which AI tools and systems are programmed.
–Analysis is the first step in the AI cognitive process. At this stage, the artificial intelligence application assembles datasets, which it examines using a set of defined rules known as algorithms.
Algorithms provide a guide on how the data should be handled in order to perform a given task. In the process of following this guide, the AI system is soon able to identify distinct patterns with which it can better understand and handle the given data.
Based on the patterns and correlations observed during analysis, the artificial intelligence system is able to make decisions about how to handle the data in order to carry out a function or perform a task.
In decision-making, the AI system develops its own rules and methods on how to make the best use of the data which is fed into it.
This can be considered similar to the human learning process, and is the focus of AI technologies like machine learning and deep learning.
The outcome of the decision-making process is usually a modified set of algorithms that enhance the performance of the AI system.
–Measurement and Testing can both be seen as part of the overall AI operation process. Artificial intelligence works based on continuous evaluation, assessment and improvement.
The goal of measurement and testing is to continuously review the patterns in the available data, as well as the algorithms used by the AI system to handle and process this data.
By reviewing these parameters, the AI system can remain efficient and effective. Measurement and testing help improve the system’s performance, while identifying and correcting errors in the mode of operation.
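The analysis, decision-making, and measurement-and-testing cycle described above can be sketched in a few lines of Python. Everything here (the toy dataset, the single-threshold ‘rule’, the learning rate) is an illustrative assumption, not a real AI system:

```python
def run_learning_loop(data, labels, steps=50, lr=0.1):
    """Iteratively adjust a single threshold that separates two classes."""
    threshold = 0.0
    for _ in range(steps):
        # Analysis: apply the current rule (the 'algorithm') to the data.
        predictions = [1 if x > threshold else 0 for x in data]
        # Measurement and testing: count how many predictions are wrong.
        errors = sum(p != y for p, y in zip(predictions, labels))
        if errors == 0:
            break  # the rule now fits the observed patterns
        # Decision-making: modify the rule to reduce the error.
        for x, y in zip(data, labels):
            pred = 1 if x > threshold else 0
            threshold += lr * (pred - y)
    return threshold

data = [0.2, 0.4, 0.6, 0.9, 1.1, 1.4]
labels = [0, 0, 0, 1, 1, 1]
t = run_learning_loop(data, labels)
print([1 if x > t else 0 for x in data] == labels)  # True
```

The loop's output is a modified rule (here, an adjusted threshold), mirroring the point above that decision-making yields a modified set of algorithms that enhance the system's performance.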
We can also understand how artificial intelligence works by considering the working principles of various branches of artificial intelligence like deep learning, machine learning, computer vision, robotics, neural networks, and expert systems.
–Deep Learning works by analyzing data continuously using a logical algorithmic framework.
Based on the analyses, patterns are identified, which are used to classify the data. The overall process of analysis and classification is used to predict outcomes, and make decisions.
–Machine Learning works by the three-stage process of training, validation and testing.
In the training stage, data is fed into the AI system. This data is called the Training Data, and it is used by the AI system to develop a set of rules or algorithms.
When the training data is analyzed repeatedly, patterns are identified, which are used to develop the algorithms.
Validation is the stage in which the algorithms that have been developed from the training data, are assessed to determine their suitability and effectiveness. In order to validate an algorithm, the AI system compares the actual results of data with its own predictions.
Testing is the stage in which the algorithms are continuously modified, usually on the basis of the outcome of validation. If an algorithm is found unsuitable for handling the data effectively, it is modified, thereby improving performance.
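As a rough illustration of the three stages, the following Python sketch derives a one-threshold rule from training data, then checks it on held-out validation and test data. The dataset, the 60/20/20 split, and the threshold rule are all hypothetical simplifications:

```python
# Toy dataset: values 0-19, labelled 1 from 10 upward.
data = list(range(20))
labels = [1 if x >= 10 else 0 for x in data]
pairs = list(zip(data, labels))

# Deterministic 60/20/20 split: of every five samples, three train the
# model, one validates it, and one tests it.
train = [p for i, p in enumerate(pairs) if i % 5 < 3]
val = [p for i, p in enumerate(pairs) if i % 5 == 3]
test = [p for i, p in enumerate(pairs) if i % 5 == 4]

# Training: develop a rule, i.e. a threshold halfway between the classes.
ones = [x for x, y in train if y == 1]
zeros = [x for x, y in train if y == 0]
threshold = (min(ones) + max(zeros)) / 2  # 8.5 on this split

def accuracy(split):
    # Validation/testing: compare the rule's predictions with the labels.
    return sum((x >= threshold) == bool(y) for x, y in split) / len(split)

print(accuracy(val), accuracy(test))  # 1.0 0.75
```

The imperfect test accuracy is the signal that, in a real system, would trigger the modification of the algorithm described above.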
–Computer Vision works by analyzing visual data in detail, and identifying regularities and definite patterns that can be used to classify the data.
Artificial intelligence systems with computer vision functionality are designed to closely examine images and interpret them as a series of pixels. This interpretation helps the system understand the visual data, such that it can process, alter and define it.
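A toy Python example of this pixel-level analysis follows. The ‘image’ is an invented grid of intensity values, and the detector is deliberately simplistic, looking only for the kind of regularity described above (a sharp brightness jump between columns):

```python
# A 3x4 'image' as a grid of pixel intensities (0-255): dark on the
# left, bright on the right.
image = [
    [10, 12, 200, 210],
    [11, 14, 205, 208],
    [9, 13, 198, 212],
]

def find_vertical_edge(img, jump=100):
    """Return the column index where average brightness jumps sharply."""
    for col in range(len(img[0]) - 1):
        # Average intensity of this column and the next one.
        left = sum(row[col] for row in img) / len(img)
        right = sum(row[col + 1] for row in img) / len(img)
        if right - left > jump:
            return col + 1
    return None

print(find_vertical_edge(image))  # 2: the bright region begins at column 2
```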
–Robotics works by the assessment of data using a set of programs or algorithms, followed by decision-making, and the execution of a function based on the decisions made.
Typically, a robot comprises physical hardware components integrated with software that controls them.
When decisions are made by the software, it relays these decisions as a set of rules (algorithms) to the hardware components, or to other software components, which then perform a task or execute a function according to the command.
The mechanical parts of most robots make them broadly analogous to biological systems, which move according to data analyzed, and decisions made, by the brain.
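The assess, decide, and execute pattern described above might be sketched as follows, with invented sensor readings standing in for hardware input and a position counter standing in for actuators:

```python
def decide(distance_cm):
    """Software layer: turn a sensor reading into a command."""
    if distance_cm < 20:
        return "turn"  # obstacle too close: change direction
    return "forward"

def act(command, position):
    """'Hardware' layer: execute the command (here, just update state)."""
    if command == "forward":
        return position + 1
    return position  # turning leaves position unchanged in this sketch

sensor_readings = [100, 80, 50, 15, 60]  # distances to an obstacle (cm)
position = 0
for reading in sensor_readings:
    command = decide(reading)  # decision-making in software
    position = act(command, position)  # relayed to the 'actuators'

print(position)  # 4: moved forward on four of the five readings
```

A real robot replaces both functions with hardware I/O, but the control flow, decisions relayed from software to the components that execute them, is the same.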
–Neural Networks work like human neurons, by identifying patterns, correlations and links between aspects of data, and using these parameters to process the data and make decisions.
A neural network receives and handles data in series of layers. These include the input, output and intermediate layers.
When the artificial intelligence (neural network) system identifies a relationship or similarity of pattern between datasets, it establishes a link or node between these datasets. The assemblage of links and relationships between datasets is what constitutes the neural network.
Based on the links that have been identified, the neural network develops a pattern of assessment (or ‘reasoning’) that is best suited to the data.
It uses this pattern to make predictions and execute functions with the data.
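A minimal sketch of such a layered network is shown below, with an input, an intermediate (hidden) layer, and an output layer. The weights are hand-picked illustrative values; a real network would learn them from data:

```python
import math

def sigmoid(x):
    """Squash any value into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    # Each node sums its weighted inputs and applies an activation.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs))) for ws in weights]

hidden_weights = [[0.5, -0.6], [-0.4, 0.9]]  # 2 inputs -> 2 hidden nodes
output_weights = [[1.2, -1.1]]               # 2 hidden -> 1 output node

def predict(inputs):
    hidden = layer(inputs, hidden_weights)   # intermediate layer
    return layer(hidden, output_weights)[0]  # output layer

score = predict([1.0, 0.0])
print(0.0 < score < 1.0)  # True: the sigmoid keeps the output in (0, 1)
```

The weight connecting each pair of nodes plays the role of the link, or node, between related pieces of data described above.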
–Expert Systems work by applying data from a knowledge base to solve problems in a particular field which correlates with the data.
Essentially, expert systems do not function like other artificial intelligence systems which actively analyze data and make decisions.
Rather, they are equipped with a predefined database that can provide solutions to a given scope of problems. The system examines the problem and determines its relationship to the data in its knowledge base.
Expert systems therefore have a narrow range of applicability.
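A tiny, hypothetical expert system in Python illustrates the idea: a fixed knowledge base of if-then rules is matched against the facts of a problem, with no learning involved. The medical rules below are invented for illustration only:

```python
# Predefined knowledge base: (conditions, conclusion) pairs.
knowledge_base = [
    ({"fever", "cough"}, "suspected flu"),
    ({"fever", "rash"}, "suspected measles"),
    ({"headache", "light_sensitivity"}, "suspected migraine"),
]

def diagnose(symptoms):
    """Return conclusions whose conditions are all present in the facts."""
    facts = set(symptoms)
    return [conclusion for conditions, conclusion in knowledge_base
            if conditions <= facts]  # subset test: all conditions satisfied

print(diagnose(["fever", "cough", "fatigue"]))  # ['suspected flu']
```

Problems outside the knowledge base simply yield no conclusion, which is why, as noted above, expert systems have a narrow range of applicability.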
Examples of Artificial Intelligence
Practical examples of artificial intelligence include:
- Virtual Assistants
- Facial Recognition and Detection
- Self-driving Vehicles
- Internet Search Engines
- Maps and Navigation
- Robots in Manufacturing
- Healthcare Management Automated Tools
Artificial intelligence is the simulation of human cognitive functions like analysis, learning, reasoning and decision-making, by computers.
The origin of modern artificial intelligence can be traced to the mid-twentieth century, from which studies and developments in the AI field commenced.
Artificial intelligence works by the analysis of datasets, recognition of patterns, development of algorithms, and execution of functions.
Some branches of artificial intelligence are machine learning, deep learning, neural networks, computer vision, and robotics. Each of these aspects of AI works on the same basic principles.
Examples of artificial intelligence include facial recognition systems, self-driving cars, chatbots, manufacturing robots, virtual assistants, maps and navigation tools, and healthcare automated tools.
1). Amisha; Malik, P.; Pathania, M.; Rathaur, V. K. (2019). “Overview of artificial intelligence in medicine.” Journal of Family Medicine and Primary Care 8(7):2328. Available at: https://doi.org/10.4103/jfmpc.jfmpc_440_19. (Accessed 31 March 2022).
2). Bellis, M. (2019). “Who Pioneered Robotics?” Available at: https://www.thoughtco.com/timeline-of-robots-1992363. (Accessed 31 March 2022).
3). Bhat, S. (2020). “Expert Systems in Artificial Intelligence (AI).” Available at: https://www.mygreatlearning.com/blog/expert-systems-in-artificial-intelligence/. (Accessed 31 March 2022).
4). Bi, Y.; Guan, J.; Bell, J. (2008). “The combination of multiple classifiers using an evidential reasoning approach.” Artificial Intelligence 172(15):1731-1751. Available at: https://doi.org/10.1016/j.artint.2008.06.002. (Accessed 31 March 2022).
5). Brown, R.E. (2020). “Donald O. Hebb and the Organization of Behavior: 17 years in the writing.” Mol Brain 13, 55. Available at: https://doi.org/10.1186/s13041-020-00567-8. (Accessed 30 March 2022).
6). Colson, E. (2019). “What AI-Driven Decision Making Looks Like.” Available at: https://hbr.org/2019/07/what-ai-driven-decision-making-looks-like. (Accessed 31 March 2022).
7). Deoras, S. (2016). “AI Winter – What Is It and Whether It Is Relevant Today?” Available at: https://analyticsindiamag.com/ai-winter-whether-relevant-today/. (Accessed 31 March 2022).
8). Dickson, B. (2018). “What is the AI winter?” Available at: https://bdtechtalks.com/2018/11/12/artificial-intelligence-winter-history/. (Accessed 31 March 2022).
9). Foote, D. (2022). “A Brief History of Artificial Intelligence.” Available at: https://www.dataversity.net/brief-history-artificial-intelligence/. (Accessed 31 March 2022).
10). Frankenfield, J. (2021). “Artificial Intelligence (AI).” Available at: https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp. (Accessed 30 March 2022).
11). Fumo, D. (2017). “Types of Machine Learning Algorithms You Should Know.” Available at: https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861?gi=84b6abd60e3e. (Accessed 31 March 2022).
12). Gualtiero, P. (2004). “The First Computational Theory of Mind and Brain: A Close Look at Mcculloch and Pitts’s “Logical Calculus of Ideas Immanent in Nervous Activity””. Synthese. 141 (2): 175–215. Available at: https://doi.org/10.1023/B:SYNT.0000043018.52445.3e. (Accessed 30 March 2022).
13). Gugerty, L. (2006). “Newell and Simon’s Logic Theorist: Historical Background and Impact on Cognitive Modeling.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50(9):880-884. Available at: https://doi.org/10.1177/154193120605000904. (Accessed 30 March 2022).
14). Jacob, R. (2016). “Thinking Machines: The Search for Artificial Intelligence”. Distillations. 2 (2): 14–23. Available at: https://www.chemheritage.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence. (Accessed 31 March 2022).
15). Kelley, K. (2022). “What is Artificial Intelligence: Types, History, and Future.” Available at: https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/what-is-artificial-intelligence. (Accessed 30 March 2022).
16). Kenyon, T. (2021). “How are social media platforms using AI?” Available at: https://aimagazine.com/ai-strategy/how-are-social-media-platforms-using-ai. (Accessed 31 March 2022).
17). Kerr, J. (2013). “The history of the Roomba.” Available at: https://fortune.com/2013/11/29/the-history-of-the-roomba/. (Accessed 31 March 2022).
18). Lemke, H. U.; Melzer, A. (2019). “Back to the roots of AI and their relevance for health care today.” Minimally Invasive Therapy & Allied Technologies, 28:2, 65-68. Available at: https://doi.org/10.1080/13645706.2019.1596955. (Accessed 30 March 2022).
19). Lincoln, K. (2018). “Deep You.” Available at: https://www.theringer.com/tech/2018/11/8/18069092/chess-alphazero-alphago-go-stockfish-artificial-intelligence-future. (Accessed 31 March 2022).
20). Liu, N.; Xie, F.; Siddiqui, F. J.; Ho, A. F. W.; Chakraborty, B.; Nadarajan, D. G.; Tan, K. B. K.; Ong, M. E. H. (2022). “Leveraging Large-Scale Electronic Health Records and Interpretable Machine Learning for Clinical Decision Making at the Emergency Department: Protocol for System Development and Validation.” JMIR Res Protoc 2022 (Mar 25); 11(3):e34201. Available at: https://www.jmir.org/themes/797-artificial-intelligence. (Accessed 30 March 2022).
21). McFadden, C. (2021). “15 Engineers and Their Inventions That Defined Robotics.” Available at: https://interestingengineering.com/15-engineers-and-their-inventions-that-defined-robotics. (Accessed 30 March 2022).
22). Nagarajan, D. (2019). “Continuous delivery of data drives continuous intelligence.” Available at: https://www.ibm.com/blogs/journey-to-ai/2019/03/continuous-delivery-of-data-drives-continuous-intelligence/. (Accessed 31 March 2022).
23). Nagyfi, R. (2018). “The differences between Artificial and Biological Neural Networks.” Available at: https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7. (Accessed 31 March 2022).
24). Nuyts, R. (2019). “The Rocky History of Artificial Intelligence.” Available at: https://health2047.com/2019/01/30/the-rocky-history-of-artificial-intelligence/. (Accessed 30 March 2022).
25). Opperman, A. (2019). “What is Deep Learning and How does it work?” Available at: https://towardsdatascience.com/what-is-deep-learning-and-how-does-it-work-2ce44bb692ac. (Accessed 31 March 2022).
26). Schofield, J. (2014). “Computer chatbot ‘Eugene Goostman’ passes the Turing test.” Available at: https://www.zdnet.com/article/computer-chatbot-eugene-goostman-passes-the-turing-test/. (Accessed 31 March 2022).
27). Schwiening, C. (2012). “A brief historical perspective: Hodgkin and Huxley.” The Journal of Physiology 590(Pt 11):2571-5. Available at: https://doi.org/10.1113/jphysiol.2012.230458. (Accessed 30 March 2022).
28). Venables, C. (2019). “An Overview of Computer Vision.” Available at: https://towardsdatascience.com/an-overview-of-computer-vision-1f75c2ab1b66. (Accessed 31 March 2022).
29). Viejo, C. G.; Torrico, D. D.; Dunshea, F. R.; and Fuentes, S. (2019). “Development of Artificial Neural Network Models to Assess Beer Acceptability Based on Sensory Properties Using a Robotic Pourer: A Comparative Model Approach to Achieve an Artificial Intelligence System.” Beverages. Available at: https://doi.org/10.3390/beverages5020033. (Accessed 31 March 2022).