Thursday, January 26, 2023

Memo: AGI Research Map 2023-01

This memo gives an overview of AGI research with Fig. 1, "AGI Research Map 2023-01," shown below.

Fig. 1 AGI Research Map 2023-01

1. The Choice of Approaches

Fig. 1.1
The upper left portion of the figure shows the approach choices; every branch not labeled No represents a Yes.

If you don’t go after human cognitive functions, you’d obtain an AGI that is not (necessarily) human-like (e.g., General Problem Solver or AIXI).
Note: "Not-human-like” AGI may not efficiently process tasks that are efficiently processed by humans.

If you go after human cognitive functions, you have a choice of whether to go after human modeling (i.e., cognitive science).  If you don’t go after human modeling, you may go after a functionally human-like (but structurally not human-like) AGI (a rather engineering-oriented approach).  If you go after human modeling, you have a choice of whether to mimic the brain.  If you don’t mimic the brain, you would go after (cognitive) psychological modeling.  You can mimic the brain and still take an engineering approach (reverse engineering).

If you go after human cognitive functions, you would also go after embodiment (in 3D space) and the implementation of episodic memory.

2. Problem Solving

The upper right portion of the figure is a classification of problem-solving capabilities. There are two broad categories there: statistical problem solving and constraint satisfaction, both of which AGI should use.

In statistical problem solving, predictions and decisions are made based on statistics.  Machine learning is a type of statistical problem solving.
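
To make this concrete, here is a minimal Python sketch of prediction and decision based on statistics; the observations and utilities are made up for illustration.

    from collections import Counter

    # Hypothetical observations: weather recorded on cloudy mornings (made up).
    observations = ["rain", "rain", "dry", "rain", "dry", "rain"]

    counts = Counter(observations)
    p_rain = counts["rain"] / len(observations)      # statistical prediction

    # Decision by expected utility (made-up utilities).
    utility = {
        ("umbrella", "rain"): 0, ("umbrella", "dry"): -1,
        ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 0,
    }

    def expected_utility(action):
        return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

    best = max(["umbrella", "no_umbrella"], key=expected_utility)
    print(f"P(rain) = {p_rain:.2f}, decision: {best}")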

Constraint satisfaction requires finding a solution that satisfies given conditions (constraints).  Logic (deduction) and GOFAI generally belong to this category.  In constraint satisfaction, statistical information can be used to make the search more efficient.
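
A similarly minimal sketch of constraint satisfaction: backtracking search on a toy graph-coloring problem.  The variables, domains, and conflict constraints are illustrative; a comment marks where statistical information could guide the search.

    # Backtracking search for a toy graph-coloring problem.
    def solve(assignment, variables, domains, conflicts):
        if len(assignment) == len(variables):
            return assignment                          # all constraints satisfied
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:                     # statistics could order these values
            if all(assignment.get(nb) != value for nb in conflicts[var]):
                result = solve({**assignment, var: value}, variables, domains, conflicts)
                if result is not None:
                    return result
        return None                                    # no value satisfies the constraints

    variables = ["A", "B", "C"]
    domains = {v: ["red", "green"] for v in variables}
    conflicts = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}  # adjacent regions must differ

    print(solve({}, variables, domains, conflicts))
    # e.g. {'A': 'red', 'B': 'green', 'C': 'green'}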

Mathematics is a deductive system, so its operation requires constraint satisfaction.
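
As a tiny illustration of deduction, the following sketch applies modus ponens to illustrative Horn rules until no new facts follow; finding a derivation can be seen as satisfying the constraints the inference rules impose.

    facts = {"A"}                          # illustrative axiom
    rules = [({"A"}, "B"),                 # A -> B
             ({"A", "B"}, "C")]            # A and B -> C

    changed = True
    while changed:                         # apply modus ponens until nothing new follows
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)                           # {'A', 'B', 'C'} (in some order)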

Statistics uses mathematics (though it may not perform deduction while in operation).

Causal inference uses both statistics and constraint satisfaction.
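
One way to picture the combination, as a hedged sketch: an assumed causal graph acts as the constraint that licenses a statistical (back-door) adjustment.  The data below are simulated and made up.

    import random
    random.seed(0)

    # Simulated records (z, x, y) with made-up numbers: Z raises both the chance
    # of treatment X and the chance of outcome Y, confounding the X -> Y effect.
    data = []
    for _ in range(10000):
        z = random.random() < 0.5
        x = random.random() < (0.8 if z else 0.2)
        y = random.random() < (0.7 if x else 0.3) + (0.2 if z else 0.0)
        data.append((z, x, y))

    def p_y_given(x_val, z_val):
        rows = [y for z, x, y in data if x == x_val and z == z_val]
        return sum(rows) / len(rows)

    def p_z(z_val):
        return sum(1 for z, _, _ in data if z == z_val) / len(data)

    # Back-door adjustment licensed by the assumed graph:
    # P(y | do(x)) = sum_z P(y | x, z) P(z)
    effect = sum((p_y_given(True, z) - p_y_given(False, z)) * p_z(z)
                 for z in (True, False))
    print(f"estimated effect of X on Y: {effect:.2f}")   # close to the simulated 0.4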

While hypothesis generation (abduction) is constraint satisfaction in nature, statistical information helps hypothesis generation.  In mathematics, hypotheses are created to be proved by deduction.
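
A minimal sketch of abduction along these lines: candidate hypotheses must explain the observation (the constraint), and the statistically most plausible one is preferred.  The hypotheses and numbers are invented.

    observation = "wet_grass"

    # Each made-up hypothesis: what it would explain, and a prior probability.
    hypotheses = {
        "rain":       {"explains": {"wet_grass", "wet_street"}, "prior": 0.30},
        "sprinkler":  {"explains": {"wet_grass"},               "prior": 0.10},
        "earthquake": {"explains": {"broken_window"},           "prior": 0.01},
    }

    # Constraint: keep only hypotheses that explain the observation...
    candidates = {h: v for h, v in hypotheses.items() if observation in v["explains"]}

    # ...then prefer the statistically most plausible of them.
    best = max(candidates, key=lambda h: candidates[h]["prior"])
    print(best)   # 'rain'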

Algorithm generation (programming) is a kind of constraint satisfaction and is a key element for self-improving superintelligence.
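
A minimal sketch of algorithm generation as constraint satisfaction: enumerate programs in a tiny, made-up expression language until one satisfies all given input/output examples.

    from itertools import product

    # Input/output examples the synthesized program must satisfy (the constraints).
    examples = [(0, 1), (1, 3), (2, 5)]       # target behavior: f(x) = 2 * x + 1

    # Candidate programs: f(x) = a * x + b with small integer coefficients.
    def synthesize():
        for a, b in product(range(-3, 4), repeat=2):
            if all(a * x + b == y for x, y in examples):
                return f"f(x) = {a} * x + {b}"
        return None

    print(synthesize())   # 'f(x) = 2 * x + 1'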

Human beings have all the problem-solving capabilities mentioned here.

Scientific practice is a (social) activity in which all of the problem-solving capabilities are put to use.

3. Human-specific Cognitive Capabilities

The bottom center of the figure lists human-specific cognitive capabilities (i.e., non-human animals do not have them).  If you go after human cognitive functions, you have to realize these capabilities.

Linguistic functions have been considered the hallmark of human intelligence (cf. the Turing Test).  Certain social intelligence, such as intention understanding and theory of mind, is also considered to be unique to humans.  According to [Tomasello 2009], causal thinking is also unique to humans (humans always ask about causes).  As human children grow, they also develop a concept of quantity that is not found in other animals (mathematical intuition).

With regard to language, the subfields of linguistics, i.e., syntax, semantics, and pragmatics, are listed (phonology is omitted).  If you are for generative grammar, you would go for constructive semantics as well.  Meanwhile, the kind of semantics successfully used in machine learning is distributional semantics (and embeddings).  Since constructive semantics is necessary for the precise interpretation of sentence meaning, the two kinds of semantics would have to be integrated.
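
As an illustration of distributional semantics, the following sketch approximates word meaning with co-occurrence vectors and compares them by cosine similarity; the toy counts are made up, and real systems use learned embeddings.

    import math

    # Made-up co-occurrence counts: rows are words, columns are context words.
    vectors = {
        "cat": [10, 8, 0, 1],
        "dog": [9, 7, 1, 1],
        "car": [0, 1, 12, 9],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    print(f"cat~dog: {cosine(vectors['cat'], vectors['dog']):.2f}")   # high similarity
    print(f"cat~car: {cosine(vectors['cat'], vectors['car']):.2f}")   # low similarity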

If you go after development, you would also go after language acquisition, where a system acquires language by interacting with existing language speakers in its environment (as human infants do); it learns the meanings of linguistic expressions by inferring the intent with which others use language.  If you don’t go after development, you might go after systems that learn from corpora (as current large-scale language models do).

4. Essential Elements and Development Priorities

All the capabilities listed in the problem-solving section are required for AGI.

Some human-specific cognitive capabilities are optional when you pursue a not-necessarily-human-like AGI; for example, an AGI agent that communicates with humans in logical formulae may not need human social intelligence or human language-acquisition capabilities.

The arrows in the figure show use relationships among functions.  You would have to develop the functions that are used before those that use them.  For example, the mathematical capability would require a deductive engine to be implemented beforehand.
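
As an illustration, deriving a development order from such use relations amounts to a topological sort; the sketch below (Python 3.9+) uses a dependency graph assembled from relations mentioned in this memo, not the figure's exact arrows.

    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # capability -> capabilities it uses (which must be developed first);
    # the edges are assembled from relations mentioned in this memo.
    uses = {
        "deduction": {"constraint satisfaction"},
        "mathematics": {"deduction"},
        "statistics": {"mathematics"},
        "causal inference": {"statistics", "constraint satisfaction"},
    }

    print(list(TopologicalSorter(uses).static_order()))
    # a possible order: ['constraint satisfaction', 'deduction', 'mathematics',
    #                    'statistics', 'causal inference']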

5. Capability Settings and Testability

In designing an artifact, you have to specify its capabilities (functional specifications) in advance.  While the capability settings in AGI design must be specific enough to design tasks that test them, the tasks must be "large enough" to cover functional generality.  The trade-off between specificity and generality is a subject of discussion with regard to the definition of generality in AGI research.

Reference

[Tomasello 2009] Tomasello, M.: The Cultural Origins of Human Cognition, Harvard University Press (2009).