The original intent of AI research, begun in the 1950s, was to create human-level (or higher-than-human-level) artificial intelligence. This goal has not been attained. Though Turing proposed (in 1950) a test to evaluate AI by means of (keyboard) conversation with human beings, no AI program to date understands human language well or carries on an intelligent conversation. In terms of real-world behavior, not even animal-level intelligence has been implemented on machines: no robot would survive in a real-world environment. Most current research in AI aims at solving smaller, domain-specific problems (narrow AI).
Here, after glancing over the history of AI, I'll briefly comment on the current status of research aimed at realizing human-level artificial intelligence.
Within the traditional symbolist approach, research on cognitive architectures bore fruit such as SOAR and ACT in the 1970s and 1980s. These cognitive architectures hold various kinds of knowledge, solve given tasks by planning, and learn from experience. Some of these systems are still maintained and have incorporated subsymbolic approaches.
Let us now turn to the treatment of human language. After Chomsky's formalization of syntax, also in the 1950s, quite a few computational grammatical theories were proposed and implemented by the 1980s. As they were not necessarily successful in terms of actual use, engineering research shifted to statistical methods in the 1990s. While statistical methods have proved useful for information retrieval such as Web search and for certain forms of machine translation (e.g., Google Translate), they tend to neglect the semantics and pragmatics of human language.
The recent major trend in theoretical AI is research on machine learning methods. After the neural network (connectionist) approach became popular in the 1980s, genetic algorithms and then support vector machines came into fashion. Currently, Bayesian approaches are in vogue, and deep learning is also gaining popularity.
Now, research aimed at realizing human-level artificial intelligence shows some signs of revival. Since the term artificial intelligence has come to refer to the collection of techniques for solving domain-specific tasks, the originally intended AI is now called 'Artificial General Intelligence' or 'Strong AI' (though the latter term originally refers to the philosophical position that a computer can have a mind). AGI conferences have been held since 2008. In terms of real-world behavior, more researchers are involved in robotics to explore the 'embodied' aspects of intelligence. In particular, cognitive/developmental robotics is an interesting new field in which robots are to learn as human children do. In terms of human language use, we see another new research area in "symbol emergence," where artificial systems learn linguistic rules and concepts instead of being pre-programmed with them.
Another new area is found in the modeling of brains. While previous neural network research dealt with relatively simple mathematical models, the new research tackles the modeling of brain structures to create new cognitive architectures or architectures for machine learning. There is also an approach called the 'Bayesian brain,' which assumes that the brain performs statistical data processing in the manner of Bayesian statistics.
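The Bayesian-brain idea can be illustrated by its simplest case: combining a Gaussian prior belief with one noisy observation to obtain a posterior belief. The following is a minimal sketch (the function name and numbers are illustrative, not taken from any particular brain model):

```python
def bayes_update(mu0, var0, x, var_noise):
    """Combine a Gaussian prior N(mu0, var0) with one noisy
    observation x (noise variance var_noise) via Bayes' rule.
    With Gaussians, the posterior is Gaussian too, and precisions
    (inverse variances) simply add."""
    precision = 1.0 / var0 + 1.0 / var_noise
    mu = (mu0 / var0 + x / var_noise) / precision
    return mu, 1.0 / precision

# Prior belief centered at 0; an observation of 2 with equal
# uncertainty pulls the estimate halfway and shrinks the variance.
mu, var = bayes_update(0.0, 1.0, 2.0, 1.0)
```

On this hypothesis, perception is exactly this kind of compromise between prior expectation and sensory evidence, weighted by their respective reliabilities.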
Now, let me point out some aspects lacking from current AI research toward human-level intelligence. As a prerequisite, a comprehensive cognitive architecture incorporating cognitive functions such as planning and various kinds of memory is a must. Research efforts without such a cognitive architecture (or without plans to have one) are piecemeal and will not achieve human-level intelligence by themselves.
In terms of human language use, current cognitive architectures do not have sufficient language processing functionality. While one could add a state-of-the-art language analyzer, such as one based on a unification grammar (e.g., HPSG or LFG), to a cognitive architecture, such a system requires considerable human labor to furnish lexical entries and grammar. Moreover, traditional computational linguistic models assume classical categories in their semantics, so they cannot withstand the criticism from cognitive linguistics that the semantic categories human beings use are not classical ones. A cognitive architecture must therefore incorporate a mechanism that discovers and learns rules and concepts in the way symbol emergence research assumes.
AI has a problem called the Frame Problem (in Dennett's version): to be practical, an intelligent system must solve a task in real time by finding the relevant knowledge. It does not have time to enumerate all of its knowledge and evaluate its relevance. Associative memory (which may be implemented as a neural network) could avoid the problem in practice, as such a memory retrieves relevant knowledge first. In this regard, a practical cognitive architecture must be based on some form of associative (subsymbolic) memory.
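The point can be sketched with a toy associative memory that recalls stored facts by similarity to a cue, rather than having the reasoner deliberate over everything it knows. This is only an illustration under simplifying assumptions (bag-of-words cosine similarity in pure Python; the linear scan inside `recall` stands in for content-addressable lookup, which a neural associative memory would perform in a single settling step):

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words vector as a word -> count mapping.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AssociativeMemory:
    def __init__(self):
        self.items = []  # (vector, fact) pairs

    def store(self, fact):
        self.items.append((vectorize(fact), fact))

    def recall(self, cue, k=1):
        """Return the k stored facts most similar to the cue.
        The caller never enumerates irrelevant knowledge itself."""
        v = vectorize(cue)
        ranked = sorted(self.items,
                        key=lambda it: cosine(v, it[0]),
                        reverse=True)
        return [fact for _, fact in ranked[:k]]

m = AssociativeMemory()
m.store("the bomb is on the wagon")   # echoing Dennett's robot example
m.store("the battery is in the robot")
m.store("the spare wheel is in the garage")
```

Given the cue "where is the bomb", such a memory surfaces the bomb fact first, which is precisely the behavior a real-time reasoner needs.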
One might argue that the cognitive function demarcating human beings is that of imagination. Fauconnier and Turner argue that the crux of human mental life is what they call conceptual blending in imagination. If they are correct, then to realize human-level intelligence, it is necessary to implement conceptual blending.
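Conceptual blending can be caricatured in code: selected attributes of two input spaces are projected into a new blended space, as in the classic 'houseboat' example. The sketch below is of course a drastic simplification (frames as dictionaries, projection as an explicit attribute list; real blending also produces emergent structure not present in either input):

```python
def blend(inputs, projection):
    """Naive conceptual-blend sketch.
    inputs: name -> frame (attribute dict) for each input space.
    projection: (source_name, attr) pairs selecting which attribute
    of which input space is projected into the blend."""
    return {attr: inputs[src][attr] for src, attr in projection}

house = {"function": "dwelling", "location": "land"}
boat = {"function": "transport", "location": "water", "floats": True}

# 'houseboat' blends the dwelling function of a house with the
# location and buoyancy of a boat.
houseboat = blend({"house": house, "boat": boat},
                  [("house", "function"),
                   ("boat", "location"),
                   ("boat", "floats")])
```

Note that even deciding *which* attributes to project is, in humans, an imaginative act; that selection step is exactly what a mechanical implementation would have to learn.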
In sum, realizing human-level intelligence would require a comprehensive cognitive architecture with subsymbolic association, language processing by means of symbol emergence, and conceptual blending. The symbol emergence part is perhaps the hardest, as it would require a very effective unsupervised machine learning (clustering) algorithm.
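To make the clustering requirement concrete, here is a minimal k-means sketch: the simplest member of the family of unsupervised algorithms a symbol-emergence system would need for carving concepts out of raw experience (an actual system would need far richer models, e.g. nonparametric Bayesian ones; the deterministic initialization below is only for reproducibility):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over 2-D points. Initializes centers from the
    first k points for determinism; a real implementation would use
    random restarts or k-means++."""
    centers = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of sensory-like data points.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers, clusters = kmeans(points, 2)
```

The hard part for symbol emergence is not this mechanical step but choosing the representation, the number of categories, and the features, none of which k-means decides for you.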
Finally, as a desideratum, cognitive architectures invented for AGI should be as simple as possible (for theoretical and software engineering reasons). It would be nice to have an architecture that realizes the above-mentioned functions with simpler mechanisms and fewer principles.
For more information on AGI, please refer to the resource page of the AGI society.