A Perspective on Artificial Intelligence
This is a paper I wrote back in high school that I thought was worth sharing. I did get some things wrong (Symbol Manipulation is not the main field of research in AI), but I think it’s still somewhat relevant right now.
Artificial Intelligence is a relatively new field of scientific endeavor, only starting in the mid-1950’s with the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College. Since then, many approaches have been taken to achieve the ultimate objective: replicating human intelligence in computers. Such a feat of engineering would be one of the greatest achievements in human history. But before we delve into the methods for making an intelligent computer, we must first ask ourselves: What is human intelligence? According to the paper “Mainstream Science on Intelligence”, human intelligence is a mental capability that involves the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” In this article, I will describe the widely accepted approach to Artificial Intelligence and its problems, and justify why senses and biological feedback mechanisms are essential for human intelligence.
The dominant approach to AI is Symbol Manipulation, or Symbolic AI. Symbolic AI is based on the premise that reality is inherently logical and therefore truth can be obtained by reasoning. The approach gives a computer symbols (in place of concepts and objects) and rules for manipulating those symbols; with these, the computer should be able to reason out new knowledge. Symbolic AI was researched extensively from the 1950’s to the 1980’s. The most notable of the Symbolic AI projects were the Logic Theorist, a program written to prove logic statements, and the General Problem Solver, an attempt at a universal problem solver. The Logic Theorist succeeded early on in proving many of the theorems in Russell and Whitehead’s Principia Mathematica (McCorduck). Inspired by the success of the Logic Theorist, the General Problem Solver came four years later, employing general heuristic rules to solve problems. It eventually failed because, in general, “domain specific skills and knowledge do not generalize across areas” (Psych. Dept. of University of Toronto).
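To make the idea concrete, here is a minimal sketch of symbol manipulation in Python. This is not code from the Logic Theorist or the General Problem Solver; the facts and the single rule are invented for illustration. Reasoning is reduced to mechanically applying rules to symbols until no new symbols can be derived.

```python
# Minimal sketch of symbol manipulation (forward chaining).
# Facts are symbols; rules say how new symbols follow from old ones.
# The facts and rule below are invented for illustration.

facts = {("socrates", "is_a", "human")}
rules = [
    # if ?x is_a human, then ?x is_a mortal
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def apply_rules(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            p_var, p_rel, p_obj = premise
            c_var, c_rel, c_obj = conclusion
            for subj, rel, obj in list(derived):
                if rel == p_rel and obj == p_obj:          # the fact matches the premise
                    new_fact = (subj if c_var == p_var else c_var, c_rel, c_obj)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(apply_rules(facts, rules))
# {('socrates', 'is_a', 'human'), ('socrates', 'is_a', 'mortal')}
```

Everything the program “knows” has to be written down as an explicit symbol or rule, which is exactly where the trouble described next begins.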
Later on, Symbolic AI ran into an odd stumbling block. Many advanced disciplines, like mathematics, were conquered by this approach with relative ease, but common sense was seemingly impossible to penetrate. This problem eventually came to be known as the Common Sense Knowledge Problem. For example, if I told you that I was in Paris, you would by extension also know that my left arm was in Paris. The computer, on the other hand, couldn’t tell you that. How could such an intellectually advanced computer be stumped by this simple matter? The problem is twofold.
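Continuing the sketch above, the Paris example looks something like this (again with invented facts): the system can only report what has been stated or explicitly derived, and nobody thought to write the rule about left arms.

```python
# Sketch of the common sense gap, in the style of the previous example.
# The facts and queries are invented for illustration.

facts = {("alice", "is_in", "paris")}

def query(facts, fact):
    """The system only 'knows' what has been stated or derived explicitly."""
    return fact in facts

print(query(facts, ("alice", "is_in", "paris")))           # True
print(query(facts, ("alice_left_arm", "is_in", "paris")))  # False: no rule says that
# a person's arm is wherever the person is, so someone would have to hand-code that
# rule, and then the one about hair, wallets, shadows, and so on without end.
```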
First, our knowledge of common sense is derived from our context. What people generally refer to as common sense is the shared knowledge from our shared experiences as humans. You knew my left arm was in Paris because you (hopefully) share with me the experience of having a left arm, and knowing that it is attached to your body. The human experience is built from our sensory experience, which makes it crucial for the understanding of common sense and human intelligence.
Second, even with the shared human context, the computer would need to be able to discern the relevance and significance of things in its current situation. The question, then, is whether significance can be represented digitally. Hubert Dreyfus, an influential American philosopher and critic of early AI, argues that it cannot, saying, “significance can’t be constructed by giving meaning to brute facts — both because we don’t normally experience brute facts and, even if we did, no value predicate could do the job of giving them situational significance.” In short, no, significance cannot be represented digitally. Our biological systems serve as the barometer of our well-being. As humans, we constantly strive to optimize our situation; we seek to maximize pleasure and minimize pain. Even if some people might choose the more painful option in a situation, it is with the promise of a reward later on. Biological feedback mechanisms create the standard by which we judge something’s significance.
To understand human intelligence’s dependence on biological feedback mechanisms and senses, it helps to look at it from an evolutionary standpoint. In the beginning, there were single-celled organisms without brains, which over the course of time evolved into land-dwelling creatures with brains. These creatures developed brains in order to process sensory information more efficiently and to coordinate actions better suited for survival. Brains developed for the body. Isolating the human brain from its bodily context will do you no good; the human brain, and the intelligence we wish to replicate, were shaped around the body’s senses and biofeedback mechanisms. The only way, then, for a computer to understand significance in context would be for the computer to have senses and biological feedback mechanisms just like humans.
A more recent approach to AI that encompasses these aspects of intelligence is embodied cognition. According to Margaret Wilson, the theory of embodied cognition claims that “cognitive processes are deeply rooted in the body’s interactions with the world.” Because this theory also considers the senses and biofeedback mechanisms in the process of human cognition, it soundly addresses the problems of significance, relevance and common sense presented by Symbolic AI.
One could argue that our ability to think about things not relevant to our current situation demonstrates that our bodies are not essential for advanced cognition. But as discussed by Nunez et al., even our understanding of the most abstract concepts is grounded in our sensory experiences. In their paper, they illustrate how the comprehension of the continuity of functions depends on simple metaphors of “moving, growing, oscillating, approaching values, and reaching limits,” all of which are basic human experiences. Great mathematicians like Leonhard Euler often used motion metaphors when describing and defining concepts in mathematics. Our abstractions are based, in some way or another, on our sensory experiences.
You could also contend that human intelligence is achievable in a statistical manner. That is to say, you could program a computer to act as a human would, according to a statistical model of human behavior in each specific situation. It would be able to learn from human behavior and would act according to context. The problem with this approach is that it produces the outward appearance of intelligence but lacks the things that cause intelligent behavior to happen. Because of this, a computer using the statistical approach would not be able to actively synthesize new ideas or connections between concepts, and therefore could not be considered human-intelligent.
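A minimal sketch of what such a statistical mimic might look like follows. The situations, actions, and counts are invented for illustration; a real system would learn a far richer model from far more data.

```python
# Sketch of the "statistical" approach: record what humans did in each
# situation, turn the counts into probabilities, and pick actions from
# that distribution. All data below is invented for illustration.
import random
from collections import Counter, defaultdict

# Observed (situation, action) pairs, e.g. logged from human behavior.
observations = [
    ("greeted", "smile"), ("greeted", "smile"), ("greeted", "wave"),
    ("asked_question", "answer"), ("asked_question", "answer"),
]

# Build a per-situation count of actions.
model = defaultdict(Counter)
for situation, action in observations:
    model[situation][action] += 1

def act(situation):
    """Choose an action with probability proportional to how often humans chose it."""
    counts = model[situation]
    actions, weights = zip(*counts.items())
    return random.choices(actions, weights=weights, k=1)[0]

print(act("greeted"))  # "smile" about 2/3 of the time, "wave" about 1/3
```

The program reproduces the surface statistics of human behavior in each situation, but nothing in it corresponds to the significance or context that produced that behavior in the first place.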
In conclusion, although symbol manipulation has achieved great results in the field of Artificial Intelligence, computers must be given senses and biological feedback mechanisms in order to achieve human-caliber intelligence. The brain and the body are intricately interconnected and cannot be considered separately with regard to the pursuit of Artificial Intelligence.
Works Cited

“Artificial Intelligence | Early Work in AI.” Artificial Intelligence | Early Work in AI. N.p., n.d. Web. Apr.-May 2014.
Dreyfus, Hubert L. “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian.” Philosophical Psychology 20.2 (2007): 247–68. Web. Apr.-May 2014.
Gottfredson, Linda S. “Mainstream Science on Intelligence: An Editorial With 52 Signatures, History, and Bibliography.” Intelligence 24.1 (1997): 13–23. Web. Apr.-May 2014.
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: A.K. Peters, 2004. 161–70. Web. Apr.-May 2014.
Nunez, Rafael E., Laurie D. Edwards, and Joao F. Matos. “Embodied Cognition as Grounding for Situatedness and Context in Mathematics Education.” Educational Studies in Mathematics 3rd ser. 39.1 (1999): 45–65. Web. Apr.-May 2014.
Wilson, Margaret. “Six Views of Embodied Cognition.” Psychonomic Bulletin & Review 9.4 (2002): 625–36. Web. Apr.-May 2014.