Sunday, December 30, 2012

Status Quo of Artificial General Intelligence

The original intent of AI research, which started in the 1950s, was to create human-level (or higher-than-human-level) artificial intelligence.  This goal has not been attained.  Though Turing proposed (in 1950) a test to evaluate AI by means of keyboard conversation with human beings, there is as yet no AI program that understands human language well and carries out intelligent conversation.  In terms of real-world behavior, not even animal-level intelligence has been implemented on machines: no robot would survive in a real-world environment.  Most current research in AI is aimed at solving smaller problems (narrow AI).

Here, after glancing over the history of AI, I'll briefly comment on the current status of research aimed at realizing human-level artificial intelligence.

Within the trend of the traditional symbolist approach, research on cognitive architectures bore fruit such as Soar and ACT in the 1970s and 1980s.  These cognitive architectures hold various kinds of knowledge, solve given tasks by planning, and learn from experience.  Some of these systems are still maintained and have incorporated subsymbolic approaches.

Let us now turn to the treatment of human language.  After Chomsky's formalization of syntax, also in the 1950s, quite a few computational grammatical theories were proposed and implemented through the 1980s.  As they were not necessarily successful in actual use, engineering research shifted to statistical methods in the 1990s.  While statistical methods have been useful for information retrieval such as Web search and for certain forms of machine translation (e.g., Google Translate), they tend to neglect the semantics and pragmatics of human language utterances.

The recent major trend in theoretical AI is research on machine learning methods.  After the neural network (connectionist) approach became popular in the 1980s, genetic algorithms and then support vector machines came into fashion.  Currently, Bayesian approaches are in vogue, and deep learning is also gaining popularity.

Now, research aimed at realizing human-level artificial intelligence shows some signs of revival.  Since the term 'artificial intelligence' has come to refer to a collection of techniques for solving domain-specific tasks, the originally intended AI is now called 'Artificial General Intelligence' or 'Strong AI' (though the latter originally refers to the philosophical position that a computer can have a mind).  AGI conferences have been held since 2008.  In terms of real-world behavior, more researchers are involved in robotics to explore the 'embodied' aspects of intelligence.  In particular, cognitive/developmental robotics is an interesting new field in which robots are to learn as human children do.  In terms of human language use, we see another new research area in 'symbol emergence,' where artificial systems learn linguistic rules and concepts instead of having them pre-programmed.

Another new area is found in the modeling of brains.  While previous neural network research dealt with relatively simple mathematical models, the new research tackles the modeling of brain structures to create new cognitive architectures or architectures for machine learning.  There is also an approach called the 'Bayesian brain,' which assumes that the brain performs statistical data processing in the manner of Bayesian inference.
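
To make the Bayesian-brain idea concrete, here is a minimal sketch (in Python) of the kind of belief updating it postulates; the hypotheses, the prior, and the likelihoods are all invented for illustration.

-----
# Minimal sketch of Bayesian belief updating, as the 'Bayesian brain'
# hypothesis supposes the cortex does.  All numbers are illustrative.

# Prior belief over two hidden causes of a sensory input.
prior = {"dog": 0.5, "cat": 0.5}

# Likelihood of observing the feature "barks" under each cause.
likelihood = {"dog": 0.9, "cat": 0.05}

# Posterior by Bayes' rule: P(h|e) = P(e|h) P(h) / P(e).
evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

print(posterior)  # {'dog': 0.947..., 'cat': 0.052...}
-----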

Now, let me point out some aspects lacking in current AI research toward human-level intelligence.  As a prerequisite, a comprehensive cognitive architecture incorporating cognitive functions such as planning and various kinds of memory is a must.  Research without such a cognitive architecture (or without plans to have one) is piecemeal and will not achieve human-level intelligence by itself.

In terms of human language use, current cognitive architectures do not have sufficient language processing functionality.  While one could add a state-of-the-art language analyzer, such as one based on a unification grammar (e.g., HPSG or LFG), to a cognitive architecture, such a system requires considerable human labor to furnish lexical entries and grammar rules.  Moreover, traditional computational linguistic models assume classical categories in their semantics, so they cannot withstand the criticism from cognitive linguistics that the semantic categories human beings use are not classical ones.  So a cognitive architecture must incorporate a mechanism that discovers and learns rules and concepts, as symbol emergence research assumes.

AI has a problem called the Frame Problem (in Dennett's version): to be practical, an intelligent system must solve tasks in real time by finding relevant knowledge, yet it does not have time to enumerate all of its knowledge and evaluate its relevance.  Associative memory (which may be implemented as a neural network) could practically avoid the problem, as such a memory retrieves relevant knowledge first.  In this regard, a practical cognitive architecture must be based on some form of associative (subsymbolic) memory.
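
To illustrate the point (this is my own toy sketch in Python, not any particular architecture's mechanism), an associative memory lets the cue itself select relevant knowledge, with no explicit relevance reasoning; the feature vectors below are invented for illustration.

-----
# Toy sketch of content-addressable (associative) retrieval:
# the cue selects relevant knowledge; nothing is enumerated
# and judged for relevance symbolically.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Illustrative memory: feature vectors indexing pieces of knowledge.
memory = {
    "birds fly":      [1.0, 0.9, 0.0],
    "penguins swim":  [1.0, 0.1, 0.9],
    "stoves are hot": [0.0, 0.0, 0.2],
}

def recall(cue):
    # Return the stored item whose vector best matches the cue.
    return max(memory, key=lambda k: cosine(memory[k], cue))

print(recall([1.0, 0.8, 0.1]))  # -> birds fly
-----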

One might argue that the cognitive function demarcating the human being is that of imagination.  Fauconnier and Turner argue that the crux of human mental life is what they call conceptual blending in imagination.  If they are correct, then to realize human-level intelligence, it is necessary to implement conceptual blending.

In sum, realizing human-level intelligence would require a comprehensive cognitive architecture with subsymbolic association, language processing by means of symbol emergence, and conceptual blending.  Perhaps the symbol emergence part is the hardest, as it would require a very effective unsupervised machine learning (clustering) algorithm.
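
For concreteness, here is a minimal sketch (in Python, with an invented threshold and data) of the kind of incremental, unsupervised clustering I have in mind, roughly in the spirit of incremental learners such as SOINN rather than any specific algorithm: each input either joins the nearest existing 'concept' or founds a new one.

-----
# Toy sketch of incremental, unsupervised concept formation:
# assign an input to the nearest cluster, or spawn a new one.
# The threshold and the data are illustrative only.

THRESHOLD = 1.0   # maximum distance to join an existing cluster
clusters = []     # each cluster: {"center": [...], "n": count}

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def observe(x):
    if clusters:
        c = min(clusters, key=lambda c: dist(c["center"], x))
        if dist(c["center"], x) < THRESHOLD:
            c["n"] += 1   # move the center toward the new input
            c["center"] = [m + (a - m) / c["n"]
                           for m, a in zip(c["center"], x)]
            return
    clusters.append({"center": list(x), "n": 1})

for x in [[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [5.1, 4.9]]:
    observe(x)

print(len(clusters))  # -> 2 emergent 'concepts'
-----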

Finally, as a desideratum, cognitive architectures invented for AGI should be as simple as possible (for theoretical and software engineering reasons).  It would be nice to have an architecture that realizes the above-mentioned functions with simpler mechanisms and fewer principles.
[Image: A Map around AGI]

Friday, December 28, 2012

Raison d'Être of Qualia

If you define a quale (plural: qualia) to be a piece of sense data given to conscious scrutiny, then it may be easy to tell why such things exist.  Sometimes the brain needs to go back to the sense data to check in detail whether something has gone wrong.  This qualia business is carried out in the early sensory cortices.  If these cortices are destroyed, not only perception but also the ability to form images vanishes (Damasio, Descartes' Error, p. 99).  Consciousness is the sequential cognitive process that integrates all qualia (= sense data) for planning actions.  This is, of course, a functional explanation and does not answer the question whether qualia-free zombies could exist...

Monday, December 17, 2012

Machine Learning Field Map w.r.t. Artificial Mind

A map of machine learning technologies that have come to my attention...

Acronym Locator:
BESOM: a cerebral cortex model based on Bayesian networks
DeSTIN: a scalable deep learning architecture that relies on a combination of unsupervised learning and Bayesian inference
DP: dynamic programming
DPM: Dirichlet process mixture
EM: expectation–maximization algorithm
HDP: hierarchical Dirichlet process
HMM: hidden Markov model
HPYLM: hierarchical Pitman-Yor language model
HTM: Hierarchical Temporal Memory
iHMM: infinite hidden Markov model
MCMC: Markov chain Monte Carlo
NPYLM: nested Pitman-Yor language model
SOINN: Self-Organizing Incremental Neural Network
SOM: self-organizing map


Saturday, November 10, 2012

Finding Nicholas Cassimatis

I came across an article by Dr. Cassimatis today (thanks to the AGI mailing list).

The following is my summary of the article entitled
"A Cognitive Substrate for Achieving Human-Level Intelligence" (AI Magazine 2006).

The profusion problem:
For intelligent systems to be broadly functional and robust, the profusion of knowledge, data structures, and algorithms must be integrated.
The cognitive substrate hypothesis:
There is a cognitive substrate solving a relatively small set of computational problems, with which other cognitive tasks can be solved.
"These include reasoning about temporal intervals, causal relations, identities between objects and events, ontologies, beliefs, and desires."
Findings in linguistic semantics (e.g., Jackendoff) support the hypothesis. 
Polyscheme cognitive architecture (Cassimatis 2005)
The common function principle (CFP): Many AI algorithms can be implemented in terms of the same basic set of common functions.
The multiple implementation principle (MIP): Each common function can be implemented using multiple computational methods.  (A toy sketch of these two principles follows this summary.)
The article tries to show a parallel between (folk) physical concepts and grammatical concepts.
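
To illustrate the two principles (with my own toy rendering in Python — this is not Polyscheme's actual interface), one 'common function,' such as deciding identity between objects, can be implemented by multiple computational methods:

-----
# Toy rendering of CFP/MIP (my sketch, not Polyscheme's API):
# one common function, several interchangeable implementations.

class IdentityMatcher:
    """Common function: do two percepts denote the same object?"""
    def same(self, a, b):
        raise NotImplementedError

class SymbolicMatcher(IdentityMatcher):
    # Logical implementation: identical attribute sets.
    def same(self, a, b):
        return a == b

class SimilarityMatcher(IdentityMatcher):
    # Subsymbolic implementation: enough overlapping attributes.
    def same(self, a, b, threshold=0.7):
        return len(a & b) / max(len(a | b), 1) >= threshold

x = {"red", "round", "small"}
y = {"red", "round", "small", "shiny"}
print(SymbolicMatcher().same(x, y))    # -> False
print(SimilarityMatcher().same(x, y))  # -> True (overlap 3/4)
-----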

I agree with the problem setting (the profusion problem) and am also hopeful about the cognitive substrate hypothesis.  I am less sure about Polyscheme, as it is a more practical architectural matter, though the two principles would be desirable.

For anyone pursuing a brain-inspired cognitive architecture, the integration issue would be an even keener problem, as such researchers often start from learning issues and leave other problems (planning, language processing, etc.) aside... (This is not a comment on the article above, but a more general remark.)

Dr. Cassimatis is on the faculty of the Cognitive Science Department at Rensselaer and has founded a search technology company, SkyPhrase.

Thursday, November 8, 2012

Suffix finder

Well, this isn't really an AI topic, but an elementary NLP exercise.
The AWK script below extracts word suffixes from a word histogram (a file with one word and its frequency count per line).

-----

#! /usr/bin/awk -f
# Creates the suffix histogram of a word histogram.
# Input: one word per line, followed by its frequency count.
/^[^_]/ {
  words[$1] = $2  # word histogram
}
END {
  for (x in words) {
    if (length(x) > 3) {                              # word length > 3
      for (i = length(x)-1; i >= length(x)-3; i--) {  # suffix length <= 3
        stem = substr(x, 1, i)
        # Count the suffix only when its stem is itself a word.
        # (The 'in' test does not create an array entry, so it is
        # safe while iterating over the array with for-in; no copy
        # of the array is needed.)
        if (stem in words) {
          suffix = substr(x, i+1)
          suffixes[suffix] += words[x]  # total frequency of the suffix
          suffix_count[suffix]++        # number of distinct words with it
        }
      }
    }
  }
  for (key in suffixes) {
    # Print only suffixes shared by at least 10 distinct words.
    if (suffix_count[key] >= 10) print key, suffixes[key]
  }
}
-----
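
To reproduce the listing below (the script and histogram file names here are made up for illustration), run the script on a histogram file and sort by the second column: awk -f suffixes.awk adam_histogram.txt | sort -k2 -rn
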
Result from the CHILDES/Brown/Adam corpus (sorted by counts):


's 2664
s 1812
r 935
y 539
ed 446
es 444
e 282
n 250 # broke-n, etc.
d 168
er 145
ly 135
'd 76


Monday, October 22, 2012

BESOM

Last week a fellow student introduced me to a cognitive model called BESOM, which 'combines a Bayesian network and Self-Organizing Maps.'  It was proposed by Yuuji ICHISUGI and purports to model the cerebral cortex.  It appears to be quite interesting as a biologically inspired computational cognitive model.  The author seems to be well known among the people interested in this area in Japan.

Tuesday, October 9, 2012

Reentry

As I became a (research) student at an AI laboratory this month (2012-10), I am starting to write in this blog on AI-related topics.

My writings on AI in English so far are found at:
https://sites.google.com/site/rondelion/Home/artificial-mind
and in a blog article on Jeff Hawkins:
http://rondelion.blogspot.jp/2012/02/on-intelligence.html

What I am going to study is language acquisition by machines.  Hopefully the study will develop into full-fledged artificial general intelligence research.

As the building blocks of artificial intelligence, I'm going to use various kinds of self-organizing (neural) network algorithms.  While I'm hoping for the use of symbols to emerge from sub-symbolic patterns, as would occur in human brains consisting of neurons, current neural network simulators are difficult to work with (among other reasons, they are computationally demanding and not apt for incremental learning).  Some self-organizing (neural) networks are (claimed to be) less problematic, and I'm hoping they are good enough for simulating symbol emergence.
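
As a minimal sketch of what such a self-organizing network does, here is a bare-bones one-dimensional self-organizing map in Python; the grid size, learning rate, neighborhood radius, and data are all invented for illustration.

-----
# Bare-bones self-organizing map (SOM): inputs are mapped onto a
# one-dimensional grid of units; nearby units come to respond to
# similar inputs.  All parameters and data are illustrative.
import random

GRID = 5      # number of units on the map
RATE = 0.3    # learning rate
RADIUS = 1    # neighborhood radius on the grid

random.seed(0)
weights = [[random.random(), random.random()] for _ in range(GRID)]

def best_unit(x):
    # The unit whose weight vector is closest to the input wins.
    return min(range(GRID),
               key=lambda u: sum((w - a) ** 2
                                 for w, a in zip(weights[u], x)))

def train(x):
    win = best_unit(x)
    for u in range(GRID):
        if abs(u - win) <= RADIUS:  # winner and its grid neighbors
            weights[u] = [w + RATE * (a - w)
                          for w, a in zip(weights[u], x)]

data = [[0.1, 0.1], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
for _ in range(50):
    train(random.choice(data))

# Typically, distinct units now respond to the two input clusters.
print(best_unit([0.15, 0.1]), best_unit([0.85, 0.85]))
-----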

I'm aware that simply accumulating neural-network-like modules wouldn't make an intelligence.  Later studies should focus on realizing what is called executive function over a collection of such modules.

More later...