Notes From CS Undergrad Courses FSU
The notion that gaining knowledge constitutes a transgression against the laws of God (or, more aptly, nature) is deeply ingrained in Western thought. This can be seen through examples such as the Garden of Eden or Prometheus. In other words, the quest for knowledge is a dangerous one.
Other examples of this phenomenon within literature include Frankenstein's monster, the works of Dante and Milton, and many Shakespearean plays. More practical examples in real life include the creation of nuclear weapons and now the rising threat of artificial intelligence. Modern technology has only caused these fears to grow, creating a sentiment of imminent danger in many.
In my eyes this seems to follow a narrative of blissful ignorance: to know less is to be less aware of the negatives, so your view of life stays purely subjective and it becomes possible to alienate yourself from the reality of consequence.
The logical starting point within history for artificial intelligence is Aristotle. For Aristotle, change was the basis of science, which he described as "the study of things that change". He distinguished between matter and form. This separation is the philosophical basis for symbolic computing as well as data abstraction.
Within Aristotle's epistemology (the analysis of how humans "know" their world), he referred to logic as an instrument, because he felt the study of thought itself was the basis of all knowledge. Within his Logic he investigated whether propositions can be true because they are related to other statements that are true.
If we know that “all men are mortal” and that “Socrates is a man”, then we can conclude that “Socrates is mortal”.
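As a tiny illustration (not from the text), this kind of syllogistic inference can be written as a forward-chaining program; the facts and the single rule below are placeholders chosen to mirror the Socrates example.

```python
# A minimal forward-chaining sketch of the syllogism above.
# Facts are (predicate, subject) pairs; the rule encodes "all men are mortal".
facts = {("man", "Socrates")}
rules = [("man", "mortal")]  # if man(X) then mortal(X)

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}  -> Socrates is mortal
```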
Renaissance thought introduced science as a means of replacing mysticism in order to further understand nature. Scientists and philosophers alike realized that thought, the way knowledge is represented and manipulated in the human mind, was an essential part of scientific study. The major development shaping this change in the modern view of our world was the Copernican revolution. Because religion viewed the earth as the center of the solar system, a solar system created for man, a major separation arose between scientific observation and the teachings of religion.
Francis Bacon offered an additional contribution to scientific methodology: the analysis of features. Bacon described an object's form as equivalent to the sum of its features, and additionally stated that objects can be described by the features they do not possess. This is foundational for the later creation of the truth table.
The Pascaline, created by Blaise Pascal, was a famous calculating machine allowing for arithmetic computation using mechanics. Although it was limited to addition and subtraction, it served as the basis for the automation of mathematical computation. This thought process birthed the idea of automating the actions of humans, as can be seen in Pascal's statement in Pensées (1670).
"The arithmetical machine produces effects which approach nearer to thought than all the actions of animals."
Pascal's work inspired Gottfried Wilhelm von Leibniz, who in 1694 produced the Leibniz Wheel. This machine introduced both multiplication and division, marking a major step towards the creation of "Artificial Intelligence".
René Descartes was a central figure in the development of modern thought and our understanding of the mind. Descartes attempted to formulate a basis for reality purely through introspection, systematically rejecting the senses since they proved untrustworthy. This led him to doubt even the physical world, and even his own existence.
I think therefore I am.
[!NOTE] I would like to note, here's the philosopher in me coming out, that the statement "I think therefore I am" presumes the existence of the self, which makes it circular.
This is an important moment within thought, as it is one of the first separations between the idea of the mind and the physical world. This idea underlies the methodology of AI, as well as many other fields, even extending to high-level mathematics.
The foundation for the study of AI, with respect to the separation between the mind and the body, follows from the view that the mind and body are not fundamentally different entities, and that mental processes can be expressed in terms of mathematics much the same as physical processes. Mental processes are achieved through physical systems (e.g. the brain), therefore they are bound together.
For rationalists the external world is re-created through the clear and distinct ideas of mathematics. Many AI programs favor rationalist thinking as their basis for creation. Some critics see this rationalist bias as part of AI's failure at complex tasks such as understanding human behavior.
"Nothing enters the mind except through the senses"
Bayes' theorem demonstrates that, by learning the correlations between actions and their effects, we can determine the probability of their causes.
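As a hedged sketch of what this looks like numerically (the probabilities below are made-up illustrative values, not from the text), Bayes' theorem computes the probability of a cause given an observed effect:

```python
# Bayes' theorem: infer the probability of a cause from an observed effect.
# P(cause | effect) = P(effect | cause) * P(cause) / P(effect)
# The numbers below are invented for illustration.

p_cause = 0.01                  # prior probability of the cause
p_effect_given_cause = 0.9      # likelihood of the effect when the cause is present
p_effect_given_no_cause = 0.05  # likelihood of the effect otherwise

# Total probability of observing the effect at all.
p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_no_cause * (1 - p_cause))

posterior = p_effect_given_cause * p_cause / p_effect
print(f"P(cause | effect) = {posterior:.3f}")   # ~0.154
```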
Immanuel Kant was strongly influenced by the writings of Hume. He attempted to synthesize the rationalist and empiricist traditions.
[!NOTE] Hi me again, Kant was definitely influenced by Hume but 20 years down the drain.
For Kant, knowledge contains two collaborating energies: an a priori component coming from the subject's reason, and an a posteriori component coming from active experience.
Kant claims that passing images or representations are bound together by the active subject and taken as the diverse appearances of an identity, of an “object”.
Euler introduced the study of representations that can abstractly capture the structure of relationships in the world, as well as the discrete steps within a computation about these relationships. The creation of graph theory allowed for state space search, which is used widely within artificial intelligence. The nodes of a state space graph represent the possible stages of a problem solution; the arcs are inferences, moves in a game, or other steps within a problem's solution. State space graphs provide a tool for measuring the structure and complexity of problems, as well as for analyzing the efficiency and correctness of solution strategies.
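A minimal sketch of state space search, assuming a toy graph (the states and connections below are invented for illustration), might use breadth-first search over the nodes and arcs:

```python
from collections import deque

# A toy state space: nodes are problem states, arcs are legal moves.
state_space = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["C", "goal"],
    "C": ["goal"],
    "goal": [],
}

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal by exploring states level by level."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for successor in graph[state]:
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

print(breadth_first_search(state_space, "start", "goal"))  # ['start', 'B', 'goal']
```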
Charles Babbage's difference engine was a machine for computing the values of certain polynomial functions and was the forerunner of his analytical engine. He included within his analytical engine many modern features, such as the separation of memory and processor, which he referred to as the store and the mill.
The importance of Boole’s accomplishment is in the extraordinary power and simplicity of the system he devised: three operations, “AND” (denoted by ∗ or ∧), “OR” (denoted by + or ∨), and “NOT” (denoted by ¬), formed the heart of his logical calculus.
Boole's influence has carried into modern forms of logic, including the design of modern computers. Boole's system additionally provided a basis for binary arithmetic. Frege later added the logical "IMPLIES" and "EXISTS" operations, and formalized the mathematical concept of a proof using these concepts. First-order predicate calculus offers all of the tools necessary for creating a theory of representation for artificial intelligence.
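A small sketch of Boole's calculus (plus Frege's IMPLIES, expressed here via its standard equivalence with NOT/OR) can be enumerated as a truth table; the layout below is just one illustrative way to print it:

```python
from itertools import product

# Truth table for Boole's AND, OR, and NOT, plus Frege's IMPLIES.
# (p IMPLIES q) is logically equivalent to (NOT p) OR q.
print("p q | AND OR NOT-p IMPLIES")
for p, q in product([False, True], repeat=2):
    row = [p and q, p or q, not p, (not p) or q]
    print(int(p), int(q), "|", *[int(value) for value in row])
```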
Russell and Whitehead's work is very important to the foundations of artificial intelligence. Their goal was to derive the whole of mathematics through formal operations on a collection of axioms.
Intelligence is a form of information processing, and formal logic is an important representational tool for AI research. Tools are essential to understanding information; they shape the basis for semantic networks and models of semantic meaning. A tool is developed to solve a specific problem, then refined to fit other use cases, leading to the development of new tools.
The Turing test attempts to measure the performance of an intelligent machine against that of a human being. This is arguably the only thing we can compare against to decipher intelligence. The test isolates the machine and the interrogator in order to ensure the interrogator is unbiased. The interrogator is free to ask any question in order to uncover the computer's identity. Turing argues that if the interrogator is unable to identify the human, then the machine is assumed to be intelligent.
Important features of Turing's test are its objectivity (the machine is judged only by the content of its answers) and its removal of any bias toward physical appearance or biological makeup.
Some criticisms of this test are that it does not test abilities requiring perceptual skill or manual dexterity, and that it needlessly constrains machine intelligence to a human mold (intelligence is not bound only to human intelligence). Machine intelligence may in fact simply differ from human intelligence.
The Lady Lovelace Objection states that machines can only do as they are told and consequently cannot perform original actions.
Another objection towards machine intelligence asserts that it is impossible to create a set of rules that will tell an individual exactly what to do in every possible circumstance, whereas humans can respond to an infinite number of situations in a reasonable manner. However, as we can see through modern generative AI, this objection does not necessarily hold.
Herbert Simon argues, however, that humans only possess originality and variability due to the richness of their environment.
An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself.
Intelligence emerges from the interactions of individual elements of a society. Culture is just as important in forming human intelligence as humans are in forming culture.
So far the discussions have been from the rationalist perspective. Forms of relativism question the basis of language, science, and society. Wittgenstein instructed us to reconsider the basis of meaning in natural and formal languages.
Witty old boy stated that our understanding of what something is, is our culture's understanding of the object. Take the example of a throne: what makes it different from a chair? Wittgenstein extended these theories to science and mathematics as well.
For Husserl as well as Heidegger intelligence was not knowing what is true but rather knowing how to cope in a world that was constantly changing and evolving. For existentialists / phenomenologists intelligence means survival in the world, rather than a set of logical propositions about the world.
Neural models of intelligence emphasize the brain's ability to adapt to the world in which it is situated by modifying the relationships between neurons. They capture patterns of relationship. Intelligence may be reflected by the collective behaviors of large numbers of very simple interacting, semi-autonomous agents.
Agents are elements of society that can perceive aspects of their environment either directly or indirectly through cooperation with other agents.
Knowledge-based systems, though they have achieved marketable engineering successes, still have many limitations in the quality and generality of their reasoning. These include their inability to perform commonsense reasoning or to exhibit knowledge of rudimentary physical reality, such as how things change over time.
Newell and Simon proposed that the necessary and sufficient condition for a physical system to exhibit intelligence is that it be a physical symbol system: a system must understand and interpret symbols. The Chinese Room argument poses a counterexample: a subject who knows no Chinese sits in a room with a rulebook, receives questions in Chinese (for the example it could be any language), and mechanically follows the rules to feed responses back in Chinese. To an outside observer it appears that the subject within the room understands Chinese; however, we know this not to be true. The Chinese Room argument aims to show that digital computers, regardless of how intelligent they may seem, cannot be shown to have a mind, understanding, or consciousness. This is, however, not an argument against AI research, because it does not explicitly limit the amount of intelligence a machine can display, only its understanding of what it is perceiving.
[!Note] I love this argument (Chinese Room) because it shows a limitation of AI, as well as dispels some worries about AI. The AI cannot "want" to take over the world and kill all humans; it is at the behest of us because it knows not what it does. Connected in a way to the Lovelace objection as well.
This is the distinction between what we call strong and weak AI. Strong AI possesses the ability to understand the information it is being fed and to respond based on its understanding. Weak AI, on the other hand, may simulate a response that can be perceived as understanding; however, it does not truly understand the subject matter. It is merely simulating the ability to think. The argument thus must center on intention rather than our perceptions of thinking. This would, however, effectively disprove Newell and Simon's hypothesis.
The two fundamental concerns of AI are knowledge representation and search. Search is a problem-solving technique that explores the space of problem states, that is, the successive and alternative stages of the problem-solving process. Take chess, for example: a program may consider many board positions as well as the moves that follow from them. Newell and Simon argue that this is essential to the basis of human problem solving.
Heuristics allow us to determine which alternatives in the search space are worth exploring, and they constitute a large area of AI research. Heuristics operate in much the same way (supposedly) as human intelligence: we seek out alternative solutions in order to optimize our results and find a more meaningful path. This is common within many board games, chess for example, where an outcome can be a win or a loss and a meaningful path is one that leads to a win. The exploration of alternate paths and the outcomes they can lead to is crucial to finding a suitable path (a win in chess).
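As a hedged sketch of heuristic search (the graph and heuristic values below are invented for illustration), greedy best-first search always expands the state whose heuristic estimate looks most promising:

```python
import heapq

# Toy graph and heuristic: the heuristic estimates distance to the goal.
graph = {
    "start": ["A", "B"],
    "A": ["goal"],
    "B": ["goal"],
    "goal": [],
}
heuristic = {"start": 3, "A": 1, "B": 2, "goal": 0}

def best_first_search(graph, heuristic, start, goal):
    """Explore states in order of their heuristic estimate (greedy best-first)."""
    frontier = [(heuristic[start], [start])]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for successor in graph[state]:
            heapq.heappush(frontier, (heuristic[successor], path + [successor]))
    return None

print(best_first_search(graph, heuristic, "start", "goal"))  # ['start', 'A', 'goal']
```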
It is interesting to note that back in 2007 checkers was solved: enough of the state space was searched to prove that perfect play by both sides leads to a draw, so the game's outcome is fully determined.
Early efforts at theorem proving were difficult, seeing as any reasonably complex logical system can generate an infinite number of provable theorems; without guidance through heuristics, a large number of the proofs are irrelevant. This issue led to the development of the alternative, ad hoc strategies that humans seem to use in solving problems, which are the basis of expert systems. In the modern day we do have tools in which logic is automated; however, they are used mainly to verify the design of computer programs and the control of complex systems.
An important insight gained from early work in problem solving was the importance of domain-specific knowledge. This is the basis of expert systems. Expert systems are constructed by obtaining domain-specific knowledge and relying upon it to solve specific problems. The domain expert will critique the system's behavior until it has achieved the desired level of performance.
One of the earliest expert systems was known as DENDRAL. DENDRAL was designed to infer the structure of organic molecules from their chemical formulas and mass spectrographic information about the chemical bonds present in the molecules. DENDRAL addresses the problem of a large search space by applying the heuristic knowledge of expert chemists to structure the problem. Extensions of DENDRAL are currently used in chemical and pharmaceutical laboratories throughout the world.
"Common Sense" problems are more loosely defined, which means that they are more difficult to solve in the world of computing. This is because they do not have specific knowledge that we can place onto them. This has led to many problems in developing expert systems due to:
However, Expert systems have proved their value in many important applications.
Natural language understanding has been a long-standing goal of artificial intelligence, due to the central role language plays in human intelligence. Real understanding behind natural language processing requires background knowledge about the domain of discourse and the idioms used in that domain, as well as an ability to apply general contextual knowledge to resolve the omissions and ambiguities that are a part of normal human speech.
Consider, for example, the difficulties in carrying on a conversation about baseball with an individual who understands English but knows nothing about the rules, players, or history of the game. Could this person possibly understand the meaning of the sentence: “With none down in the top of the ninth and the go-ahead run at second, the manager called his relief from the bull pen”? Even though all of the words in the sentence may be individually understood, this sentence would be gibberish to even the most intelligent non-baseball fan.
Modeling Human Performance
Often non-human problem solving, chess for example, is more performant than its human counterpart. Many psychologists have adopted the language and theories of computer science in order to better describe human intelligence. These techniques have provided an improved vocabulary for describing human intelligence, as well as offering a way to empirically test, critique, and refine their ideas.
Planning and Robotics
R&D has begun in an effort to design robots to perform tasks with a degree of flexibility as well as responsiveness to the world around them. Planning is a difficult problem due to the size of the space as well as the possible movement sequences.
One method that humans typically use is hierarchical problem decomposition. Take planning a trip, for example. The problems that you need to address when planning a trip are packing, flights, hotels, location, events, etc. These problems are all handled separately, yet they join together to form the larger problem of planning the trip. While humans can easily accomplish this task, it is difficult to perform on a computer: the subproblems need to be well understood, which requires sophisticated heuristics and extensive knowledge of the problem domain.
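A minimal sketch of hierarchical problem decomposition, assuming the made-up trip-planning subtasks below, expands a high-level goal into an ordered list of primitive steps:

```python
# Hierarchical problem decomposition: a high-level goal is broken into
# subproblems, each solved separately. The tasks below are illustrative only.
decomposition = {
    "plan trip": ["arrange travel", "arrange stay", "prepare"],
    "arrange travel": ["book flights"],
    "arrange stay": ["book hotel"],
    "prepare": ["pack bags", "plan events"],
}

def expand(task, tree):
    """Recursively expand a task into its primitive (leaf) steps, in order."""
    if task not in tree:          # primitive task: no further decomposition
        return [task]
    steps = []
    for subtask in tree[task]:
        steps.extend(expand(subtask, tree))
    return steps

print(expand("plan trip", decomposition))
# ['book flights', 'book hotel', 'pack bags', 'plan events']
```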
Robots must respond to change within their environment, as well as be able to detect and correct errors in their own plans, in order to be considered intelligent (though this is not the only criterion).