In this article, a few requirements for the realization of AGI are listed and choices among related approaches are discussed (Japanese version).
The purpose of this article is to give a simple overview toward the realization of AGI.
Requirements for AGI
- Cognitive architecture capable of planning and plan execution
- Associative memory based on statistics (necessary for coping with the frame problem)
- Linguistic functions (for the more general function of handling symbols, conditions such as those discussed in Fodor and Pylyshyn (1988) should be met)
- Embodiment (if cognitive development matters)
- Language acquisition (if cognitive development matters, E and B are required)
* The list is not meant to be exhaustive; obviously, there are other necessary cognitive functions such as episodic memory, reinforcement learning and information binding. The items are chosen for the sake of discussion.
Choices of approaches
While there are many approaches toward AGI, not many address all of the requirements above. Below, alternative ways of classifying the approaches are discussed.
- Embodiment
If an approach deals with embodiment, it should involve (cognitive) robotics. A cognitive robotics approach that also deals with language (acquisition) can be found here; it might be considered one of the most comprehensive approaches to AGI.
- Distributed representation vs. symbolic representation
As perceptual processing such as computer vision is largely distributed in nature, many approaches adopt distributed representation in this domain.
Approaches vary depending on whether they adopt distributed representation in other domains (cf. the requirements listed above).
- Kinds of distributed representation
Examples include neural network models and Bayesian networks (a toy Bayesian-network inference sketch is given after this list).
Neural models vary in how closely they follow real neural networks (cf. BICA below).
Some approaches are hybrids of neural and Bayesian networks (e.g., BESOM, which combines SOM with the Bayesian network).
- Symbolic representation
While there are classical knowledge representations such as frames, scripts and production rules, it is empirically known that at least some statistical method/representation must be used alongside them to solve real-world problems. Besides approaches that incorporate probabilistic elements into classical representations such as production rules (e.g., ACT-R), various probabilistic inference systems (such as probabilistic programming languages, NARS and PLN@OpenCog) have been proposed.
- Emergent conceptual learning
A well-known example is the neural-network-inspired SOM (a minimal SOM sketch is given after this list).
Emergent conceptual learning may be regarded as a kind of hidden-variable estimation. In the area of deep learning, hidden variables are estimated with auto-encoders and restricted Boltzmann machines. Other approaches may adopt HMM-like models or statistical methods such as LDA for hidden-variable estimation.
- Associative memory
Supervised learning can be seen as associative memory in a broad sense. There are approaches using distributed representation such as neural/Bayesian network models, and approaches using statistical methods such as LDA (most recent document search methods fall under the latter); a toy associative-memory sketch is given after this list.
- BICA (Biologically-Inspired Cognitive Architecture)
Approaches to AGI vary depending on whether, or to what extent, they mimic living organisms (notably the brain). While WBE (whole brain emulation) is the extreme case, there are more abstract approaches such as Leabra, which mimics brain organs; SPAUN/Nengo, which is based on the spiking neuron model (a toy spiking-neuron sketch is given below); BESOM, which uses the Bayesian network while mimicking brain organs; and HTM, which mimics the neocortex (only SPAUN/Nengo and Leabra may currently realize cognitive architectures). DeSTIN, which incorporates reinforcement learning with hierarchical temporal learning, is loosely inspired by the neocortex and the basal ganglia. Psi/MicroPsi is a more psychological model, though it is also inspired by neural networks. In general, the more an approach is inspired by the brain, the more it tends to adopt emergent distributed representation, as the brain does.
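Since Bayesian networks recur throughout the classification above, the following is a minimal sketch of what inference in one looks like: computing a posterior by enumeration on a two-node network. The variables, probabilities and function names are invented for illustration and are not taken from BESOM, PLN or any other system mentioned above.

```python
# Toy Bayesian network: Rain -> WetGrass, with made-up probabilities.
# Inference is done by enumeration (Bayes' rule), which is feasible
# only for tiny networks like this one.

p_rain = {True: 0.2, False: 0.8}                      # P(Rain)
p_wet_given_rain = {True: {True: 0.9, False: 0.1},    # P(WetGrass | Rain)
                    False: {True: 0.2, False: 0.8}}

def posterior_rain(wet_observed=True):
    """Return P(Rain | WetGrass = wet_observed)."""
    joint = {r: p_rain[r] * p_wet_given_rain[r][wet_observed]
             for r in (True, False)}
    z = sum(joint.values())            # P(WetGrass = wet_observed)
    return {r: p / z for r, p in joint.items()}

print(posterior_rain(True))   # {True: ~0.53, False: ~0.47}
```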
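To make emergent conceptual learning more concrete, here is a minimal self-organizing map (SOM) in NumPy. It is a generic textbook SOM, not the variant used in BESOM or any other system above; the grid size, decay schedules and toy data are arbitrary choices.

```python
# A minimal self-organizing map (SOM): unit weights self-organize so
# that nearby units on the grid come to represent similar inputs.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid_w=8, grid_h=8, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D SOM on `data` (n_samples x n_features)."""
    n_features = data.shape[1]
    # One weight vector per grid unit.
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates, used for the neighborhood function.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Find the best-matching unit (BMU) for this input.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Move the BMU and its grid neighbors toward the input.
            grid_dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights += lr * h[:, :, None] * (x - weights)
            step += 1
    return weights

# Toy usage: three clusters in 3-D input space map onto grid regions.
data = np.vstack([rng.normal(loc, 0.1, (50, 3))
                  for loc in ([0, 0, 1], [1, 0, 0], [0, 1, 0])])
som = train_som(data)
```

After training, nearby units on the grid end up responding to similar inputs; this topology-preserving organization is what makes the SOM a simple example of conceptual learning without labeled data.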
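For associative memory, the sketch below uses a classical Hopfield network, one simple way of storing patterns and recalling them from noisy cues. It only illustrates the general notion; the neural/Bayesian/LDA-based approaches mentioned above realize association quite differently.

```python
# A minimal Hopfield-network sketch of associative memory: stored
# binary (+/-1) patterns are recalled from noisy cues.
import numpy as np

def store(patterns):
    """Hebbian storage of +/-1 patterns (n_patterns x n_units)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w, cue, steps=10):
    """Run a few rounds of synchronous updates starting from the cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three random memories
w = store(patterns)
# Flip roughly a quarter of the bits of the first memory as a noisy cue.
noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=64)
print(np.array_equal(recall(w, noisy), patterns[0]))  # usually True
```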
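Finally, as for spiking neuron models such as the one underlying SPAUN/Nengo, below is a toy leaky integrate-and-fire (LIF) neuron. This standalone sketch does not use the Nengo API; the time constant, threshold and input current are arbitrary illustrative values.

```python
# A leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current, leaks toward rest, and emits a spike
# (followed by a reset) when it reaches threshold.
import numpy as np

def lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; `current` is the input per time step."""
    v = v_rest
    spikes = []
    for i_t in current:
        # Euler step of dv/dt = (-(v - v_rest) + i) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset   # fire and reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant super-threshold input produces a regular spike train.
spike_train = lif(np.full(1000, 1.5))
print(spike_train.sum(), "spikes in 1 simulated second")
```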