The Field of Artificial Intelligence: Mechanical Machines
Attempting to Create Biological 'Thinking'
Artificial intelligence, abbreviated AI, is a combination of the fields of science,
physiology, and philosophy. The main purpose of AI is to create machines that can think.
But in order to determine if a machine is thinking, ". . . it is necessary to define
intelligence. To what degree does intelligence consist of, for example, solving complex
problems, or making generalizations and relationships?" ("Introduction" Internet).
Answers needed to be found for these questions before anyone could begin work on a
self-reliant learning machine. After years of painstaking research and perseverance,
scientists were able to initiate AI, and it has come a long way since.
Before electronics, AI was only theory, but it was one component that pushed the
commencement of the 'electronic birth' in 1943. This 'birth' gave scientists the tools
necessary to physically invent an intelligent machine. The original dozen scientists
quickly grew to thousands of engineers and specialists("Introduction" Internet). When
started in 1956, AI was an ". . . idealism . . . that was going to be a powerful force for the
good of humanity. But that idealism is being squeezed out, instead, by hypocrites who
crave money, status, and power" ("Artificial Intelligence, and Robot Wisdom" Internet). This is
all too prevalent today. Sadly, ". . . 'experts' have turned AI into a battle for territory,
obstructing progress, obscuring their trivialities behind impressive-sounding jargon, and
turning this fundamental, urgently important domain of science into an exclusive club,
with artificially limited 'union cards'. . ." ("Artificial Intelligence, and Robot Wisdom" Internet).
Although scientists with these 'union cards' were able to keep AI research an 'exclusive
club', we must come to understand that artificial intelligence has been in development
for many years and is integral to the computer field, has practical uses and could prove to
be an advantage to society. Most importantly, functional AI is very probable in the future.
Although the computer was around for the technological movement, the link
between computers and human intelligence wasn't made until approximately 1950.
Scientists believe that this first realization should be credited to Norbert Wiener. He was
one of the first Americans to make observations on the principle of feedback theory. To
understand this theory, it is easiest to use the example of the normal household
thermostat: "It controls the temperature of an environment by gathering the actual
temperature of the house, comparing it to the desired temperature, and then responding
by turning the heat up or down"("Beginnings" Internet). The most important research in
feedback loops was that Wiener theorized ". . . that all intelligent behavior was the result
of feedback mechanisms. And these mechanisms could possibly be simulated by
machines"(qtd. in "Beginnings" Internet). This discovery by Wiener would greatly
influence the thinking behind AI in the future.
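The thermostat feedback loop described above can be sketched as a short program. This is only an illustrative model: the specific temperatures, step sizes, and the crude room behavior below are assumptions made for the example, not details taken from the sources cited in this paper.

```python
# A minimal sketch of Wiener's thermostat example: gather the actual
# temperature, compare it to the desired temperature, and respond.

def thermostat_step(actual, desired):
    """Compare actual to desired temperature and choose a response."""
    if actual < desired:
        return "heat on"
    elif actual > desired:
        return "heat off"
    return "hold"

def simulate(actual=15.0, desired=20.0, steps=10):
    """Run the feedback loop against a crude room model."""
    history = []
    for _ in range(steps):
        action = thermostat_step(actual, desired)
        # Assumed room behavior: heating warms the room by 1 degree per
        # step; with the heat off, the room cools by half a degree.
        if action == "heat on":
            actual += 1.0
        elif action == "heat off":
            actual -= 0.5
        history.append((round(actual, 1), action))
    return history

print(simulate())
```

Run repeatedly, the loop drives the room temperature toward the desired setting and then holds it there, which is the self-correcting behavior Wiener saw as the seed of intelligent action.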
Later, in 1955, two scientists, Newell and Simon, developed The Logic Theorist.
This program was considered the first true AI program. It is agreed that ". . . the impact
that The Logic Theorist made on both the public and the field of AI has made it a crucial
stepping stone in developing the AI field"("Beginnings" Internet). Despite these new
technologies and ideas, the AI field greatly lacked organization. This was corrected by
John McCarthy, regarded as the father of AI, who ". . . organized a conference to draw the
talent and expertise of others interested in machine intelligence for a month of
brainstorming"("Beginnings" Internet). This conference was deemed "The Dartmouth
Summer Research Project on Artificial Intelligence"("Beginnings" Internet). From that
point, the name artificial intelligence stuck to any research having to do with the
creation of intelligent machines.
Now that they had a name for their research, scientists needed to find some
reasons for continuing it. The scientists began seeking realistic goals that AI could
accomplish. Patrick Henry Winston, a professor from the Massachusetts Institute of
Technology(MIT), stated that "The central goals of artificial intelligence are to make
computers more useful and to understand the principles which make intelligence
possible"(1). Nevertheless, one must not overlook the fact that with AI ". . . a computer
system can be trained quickly, has virtually no operating cost, never forgets what it
learns, never calls in sick, retires, or goes on vacation"("Scope" Internet). But still the
question remained on how to use it. For example, at one time, ". . . people once
considered an intelligent computer as a possible substitute for human control over
nuclear weapons, citing that a computer could respond more quickly to a threat"
("Scope" Internet). However, before diving headfirst into AI, the military found the
need to integrate AI systems slowly into tools and weapons. It first put AI software to the
test during Desert Storm. AI-based technologies were used in missile systems (utilizing
the feedback ideals of AI for accurate radar targeting), heads-up displays in cockpits,
and other advancements. In addition to military applications, AI has also moved into the
common home, through computer programs for ". . . the Apple Macintosh and IBM
compatible computer, where such programs as voice and character recognition have
become recently available"("AI Put To The Test" Internet). Through AI, simple things like steady-picture
camcorders have come to the general public. With greater demand and a larger market
for AI-related technology, new advancements are becoming available more rapidly than
ever thought possible.
So if AI is already in use, why continue researching it? Winston, the professor
from MIT, believes that in the near future, ". . . we must use our energy, food, and human
resources wisely and we must have high quality help from computers to do it." As the
world grows larger and more complex than imagined, "the computers must help not only
by doing ordinary computing, but also by doing computing that exhibits
intelligence"(Winston 2). Winston also proposes many recommendations of what he
thinks AI could or should be doing soon. He states that in farming, computers should help
control pests, prune trees, and enable selective harvesting of mixed crops. In
manufacturing, computers should be doing assembly and inspection jobs of all kinds. In
hospitals, computers should help with diagnosis, monitor patients, manage treatment, and
make beds. Winston believes that computers with intelligence would be an invaluable
resource to humans(2).
Despite the realization that people are constantly finding new uses for AI, it
would be a newcomer's mistake to forget that it may also hold dangers to the traditional
acceptance of what a machine can do. We have learned from experience that people
don't always welcome new methods or materials as soon as they are available. This holds
true for AI, but scientists believe this is the wrong thing to do. First of all,
. . . we should be prepared for a change. Our conservative ways may
stand in the way of progress. AI is a new step that is very helpful to
society. Machines can do jobs that require detailed instructions followed
and mental alertness. AI with its learning capabilities can accomplish
those tasks but only if the worlds [sic] conservatives are ready to change
and allow this to be a possibility. It makes us think of how early man
finally accepted the wheel as a good invention, not something taking away
from its heritage or tradition("What We Can Do" Internet).
Also, in order to be ready to welcome the advantages accompanying AI,
. . . we must be prepared to learn about the capabilities of it. The more we
get out of the machines the less work is required by us. In turn less injuries
and stress to human beings. Human beings are a species that learn
by trying, and we must be prepared to give AI a chance seeing AI as a
blessing, not an inhibition("What We Can Do" Internet).
Finally, people must prepare for the worst with AI. As we do know from history, nothing
starts out perfectly. So ". . . something as revolutionary as AI is sure to have many kinks
to work out"("What We Can Do" Internet). But people always seem to have the fear that
". . . if AI is learning based, will machines learn that being rich and successful is a good
thing, then wage war against economic powers and famous people?"("What We Can Do"
Internet). These are the risks we have to be prepared for and must be willing to take in
order to advance technology. However, although the fear of the machines is there, we
must remember that ". . . their capabilities are infinite"("What We Can Do" Internet). To
control AI, we need to bear in mind that
AI machines are like children that need to be taught to be kind, well
mannered, and intelligent. If they are to make important decisions, they
should be wise. We as citizens need to make sure AI programmers are
keeping things on the level("What We Can Do" Internet).
We are responsible for making sure they do their job correctly, so that ". . . no future
accidents occur"("What We Can Do" Internet).
So now the warnings and the base for AI development are both there. Current
projects in AI appear to function with undeniable success. Consequently, the future of
AI seems limitless, or so people think. But one fundamental question about AI remains
unanswered: Is it possible for computers - simple machines - to actually think in the same
manner as does the human mind? As Marc Leepson points out in the Editorial Research
Report "Artificial Intelligence",
There is no doubt that computers can be programmed to make inferences.
But it has yet to be proven that an inanimate object can be imbued with
human knowledge and the ability to learn. A computer, after all, has no
intrinsic intelligence. It is a machine that manipulates symbols that it
recognizes, but does not understand the meaning of the symbols it manipulates.
Therefore, although able to crunch millions of numbers or symbols per second,
computers still cannot emulate the many and widely varied processes of the human mind.
According to Tom Alexander, author of many published AI reference materials, such
processes include ". . . the rich associations, metaphors and generalizations that language
invokes in people and that constitute the essence of meaning and thought . . . which
consists less of logic and recognizing symbols than it does of mental images and
analogies -- things no one has been able to define in terms computers can grasp"(106).
Today's programs only create an ". . . empty mimicry. . . " of human intelligence and
action. They still only have ". . . the limited repertoire of a clockwork doll rather than a
respectable simulation of human intellect"(106).
One barrier to simulating human intellect is commonly referred to as the "Aha!"
phenomenon: humans can sometimes solve a problem spontaneously, without using any
traceable logical path. In spite of that, researchers believe that one day
it may be possible to program a computer with something virtually identical to human
common sense and intuitive capabilities. "Common sense is a question of how much you
know about a domain," stated Ramesh Patil, an assistant professor of computer science
at MIT. "The kind of reasoning that goes on in making common-sense reasoning we are
starting to get a very good handle on." Nevertheless, he said, "building a program that
would have as much common sense as you and me is still out of reach"(qtd. in Leepson
635). Currently the only computers capable of this are found in science fiction novels,
but it is believed that in just a matter of time a fully thinking machine will be as common
as the wristwatch.
Until that time, however, we must remember the current situation. Herbert Simon
best states it by saying,
It is not my aim to surprise or shock you -- but the simplest way I can
summarize is to say that there are now in the world machines that can
think, that can learn and that can create. Moreover, their ability to do these
things is going to increase rapidly until -- in a visible future -- the range of
problems they can handle will be coextensive with the range to which the
human mind has been applied(qtd. in "Introduction" Internet).
We must keep in mind that AI has been developing for over 30 years and it may take that
much longer before another major breakthrough, like The Logic Theorist, happens again.
The artificial intelligence field may bring with it too many benefits to abandon the
research now.

Works Cited
Alexander, Tom. "Why Computers Can't Outthink the Experts." Fortune 20 August
"Artificial Intelligence, and Robot Wisdom." Online. Internet. 15 May 1997.
"Artificial Intelligence Put to the Test." Online. Internet. 15 May 1997.
"Beginnings of Artificial Intelligence, The." Online. Internet. 15 May 1997.
"Introduction to Artificial Intelligence, An." Online. Internet. 15 May 1997.
Leepson, Marc. "Artificial Intelligence." Editorial Research Report 16 Aug 1985:
"Scope of Expert Systems, The." Online. Internet. 15 May 1997.
"What We Can Do With Artificial Intelligence." Online. Internet. 15 May 1997.
Winston, Patrick Henry. Artificial Intelligence. Reading, Massachusetts: