Friday, December 18, 2015

Thought on Architecture

The recent posts showed the agents under development in action.
Perhaps it is time to stop and think before continuing development...

For the moment, the agents have a very simple architecture, with a blackboard-style internal communication system.  Namely, an agent has an input buffer and a state buffer, which 'codelets' read from and write to, respectively.  As both buffers are Python dictionaries (hash tables), they can contain any data, accessed by names/keys.

Codelets are registered as 'rules,' each with a condition part and an action part.  In each execution cycle, the codelets/rules whose conditions are met 'fire,' and each may try to modify the content of the state buffer.  Each codelet has a score, and if more than one codelet tries to modify the same part of the state buffer (the same name/key), the value written by the codelet with the highest score (the winner) is chosen.
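The cycle described above can be sketched in Python.  This is a minimal illustration of the mechanism, not the actual simulator code; the names (`Codelet`, `run_cycle`) and buffer layout are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Codelet:
    condition: Callable[[dict, dict], bool]  # reads (input_buffer, state_buffer)
    action: Callable[[dict, dict], dict]     # returns proposed writes to the state buffer
    score: float

def run_cycle(codelets, input_buffer, state_buffer):
    """One execution cycle: fire every codelet whose condition is met,
    then resolve conflicting writes by keeping the highest-scoring proposal."""
    proposals = {}  # key -> (score, value)
    for c in codelets:
        if c.condition(input_buffer, state_buffer):
            for key, value in c.action(input_buffer, state_buffer).items():
                if key not in proposals or c.score > proposals[key][0]:
                    proposals[key] = (c.score, value)
    for key, (_, value) in proposals.items():
        state_buffer[key] = value
    return state_buffer

# Two codelets competing for the same key: the higher score wins.
see = Codelet(lambda i, s: i.get('agent_seen', False), lambda i, s: {'reward': 1.0}, 2.0)
idle = Codelet(lambda i, s: True, lambda i, s: {'reward': 0.0}, 1.0)
state = run_cycle([see, idle], {'agent_seen': True}, {})
# state['reward'] is 1.0: 'see' (score 2.0) beats 'idle' (score 1.0)
```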



In the recent posts, I mentioned affect terms such as 'reward' and 'urge.'  In fact, they correspond to variables with the same names in the state buffer, and the agents 'smile' when they read a positive 'reward' there.

While the mechanism is quite simple, generic and usable, there are a few points to consider for future modifications:

  • Is it appropriate as a cognitive architecture?
    To make it look more like a respectable cognitive architecture, the affect mechanism should perhaps have its own modules.
  • Is it biologically plausible?
    (The answer is, of course, no.)  To make it biologically plausible, the architecture should mimic that of the brain.  More simply put, real brains are not supposed to use a blackboard architecture.
  • Symbolic representation?
    The current system uses symbolic representation (Python dictionaries) for internal communication.  Besides the fact that the brain does not use such symbolic representation, a vector representation would be preferable when the system is to be controlled by machine learning algorithms.

Tuesday, December 15, 2015

Spontaneous walk

After 'looking at each other' for a while, the agents get bored (develop the urge to move) and start moving.  (The previous post showed footage in which they follow each other, then stop and smile when they meet.)
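The boredom dynamic can be approximated by a leaky accumulator.  This is only my guess at a minimal model, not the simulator's actual code, and the constants are arbitrary:

```python
def update_urge(urge, stimulated, gain=0.1, decay=0.2, threshold=1.0):
    """Grow the urge to move while nothing happens; discharge it on stimulation
    (e.g. meeting another agent).  Returns (new urge, whether to start moving)."""
    urge = urge * decay if stimulated else urge + gain
    return urge, urge >= threshold
```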

 
The simulator was made with V-REP and Python.
The code is found on GitHub.

Friday, December 4, 2015

Adding Emotional Expressions to Agents

Facial expressions were added to the agents in my simulation environment. They follow each other, and they stop and smile when they meet (kind of cute :-D).  (The smile is driven by the 'reward' given when an agent meets another.)

 
The simulator was made with V-REP and Python.
The code is found on GitHub.

Friday, November 20, 2015

AGI in Japan, 2015

More than a year has passed since I wrote the article 'AGI in Japan, 2014,' and some of the AGI-related activities in Japan, then grass-roots, have since been institutionalized. The following is an update, mostly in chronological order.

November 2014: Dwango Co., Ltd., a media company, launched its AI laboratory. Its current function is 'promotive': it serves as a communication hub for AGI researchers.

April 2015: Chiba Institute of Technology inaugurated STAIR (Software Technology and Artificial Intelligence Research) Laboratory (HP in Japanese), headed by Akinori Yonezawa.


The annual convention of the Japanese Society for Artificial Intelligence (JSAI) hosted a two-part session on AGI. Part I mainly discussed models of the hippocampus, and Part II mainly discussed the social and ethical aspects of AI. The entire timetable (in Japanese) can be accessed from here, where you may find other topics related to AGI.

July 2015: The New Energy and Industrial Technology Development Organization of Japan (NEDO) announced a plan to create a center of excellence in AI (news release in Japanese). NEDO has a large funding base and at least one of AIRC's intelligent robot projects is funded by NEDO. (Ref. an English article from NEDO on robots)

August 2015: The Whole Brain Architecture Initiative (WBAI), an NPO explicitly aiming for the realization of AGI, was inaugurated. Its main objective is to foster a research community based on the approach where researchers build cognitive architecture with machine learning modules while mimicking the brain. It held the first hackathon in September and a workshop at BICA 2015 (Lyon) in November. The winners of the hackathon (students) were present at the BICA conference.

September 2015:
A project on Symbol Emergence in Robotics was accepted by CREST, a funding program of the Japan Science and Technology Agency (JST) under the Ministry of Education, Culture, Sports, Science and Technology (MEXT) (ref. Job Info of the project in English). The project will hold a kickoff event in November. The community has been quite active in aiming for human-level AI in robotics. Tadahiro Taniguchi, one of the major proponents of the approach, is currently visiting Imperial College.

On September 29, WIRED Japan held its first Singularity Summit (Japanese site). Lav Varshney and Ben Goertzel were among the speakers. Another speaker, Takuya Matsuda, the author of ‘The Year 2045 Problem (2045年問題)’, is regularly hosting the Singularity Salon, by the way.

The Special Interest Group for AGI (Japanese site) was officially recognized by the Japanese Society for Artificial Intelligence (JSAI). The SIG had been an unofficial group, mainly surveying articles on AGI. The SIG held an inaugural mini-symposium on September 30th, where Ben Goertzel gave a talk. After the symposium, there were discussions about potential collaborations towards open AGI research among groups worldwide (such as OpenCog and WBAI). The SIG will hold its first workshop in December.


While they may not be directly related to AGI, companies in Japan are now entering the AI field as AI gains popularity. For example, Recruit Holdings and Toyota have set up their own AI research sections, inviting researchers from abroad. In the venture domain, companies like Preferred Infrastructure, which released Chainer (a neural network framework), and Nextremer are engaged in AI and machine learning research.

Monday, October 26, 2015

A Simple Virtual Environment for Multi-Agent Simulation

I created a simple simulation environment with two agents (to be developed further and used in language acquisition experiments later). At this moment, they just follow each other and stop when they meet.

 
The simulator was made with V-REP and Python.
The code is found on GitHub.

Tuesday, April 21, 2015

Research Plan Updated (2015-04)

I haven't written a post on this blog for a long time.  I have been loosely affiliated with the AI lab at Dwango Co., Ltd., where topics such as brain-mimetic AI are discussed.  Meanwhile, I have changed my research plan on language acquisition so that it focuses more on the acquisition of verbs.

Overview

The plan is to create a system that performs language acquisition (symbol grounding), particularly the acquisition of verbs.  The reason for the focus on verbs is their pivotal role in sentence formation.  Human infants usually start acquiring verbs (among other words) at around 1.5 years of age.  The system will follow the human (infant) language acquisition process in a simplified simulated environment.  While the representations for words and concrete objects, including agents, are given, the system must form concepts of motion patterns for itself and map verbs to those motion concepts.

Verbs and Motion

While not all verbs represent motions, the current plan focuses on motion verbs.  For the system to associate motion verbs with an internal representation of motion, it must have that representation beforehand.
(For a comprehensive review of the subject, please refer to Action Meets Word: How Children Learn Verbs, Oxford, 2006.)

The Representation of 'Intended' Motion

As there are myriad motion patterns in the environment, infants must attend to certain types of motion to recognize them.  One possibility, as suggested by Meltzoff (2007), is that they first learn to recognize their own motion and then generalize it to other agents' motion.  As the internal representation of one's own motion is associated with the representation of 'intention,' infants would come to associate similar motions of other agents with a representation of intention, so that those motions are recognized as 'intended motion.'
Meltzoff, A. N. “The ‘Like Me’ Framework for Recognizing and Becoming an Intentional Agent.” Acta Psychologica 124.1 (2007): 26–43.

Associating Motion with a Verb


This may be the most difficult part of the current plan.  As in any acquisition of word-referent relations, the relation between words and referents is many-to-many: in a given situation, many words may be uttered in a series, while there are many candidate referents.  To determine the referential relation, the learner must do detective work on hypotheses.  If a word (verb) is uttered with a noun whose referent is known, it is likely to be related to something (or some motion) of that referent.
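One way to start the detective work is with simple cross-situational statistics: count word-referent co-occurrences across situations and let the consistent pairings stand out.  A minimal sketch (the class name and data format are my own, not part of the plan):

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Accumulate word-referent co-occurrence counts across situations and
    pick, for each word, the referent it co-occurs with most often."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, referents):
        # In one situation, every uttered word co-occurs with every candidate referent.
        for w in words:
            for r in referents:
                self.counts[w][r] += 1

    def best_referent(self, word):
        cand = self.counts[word]
        return max(cand, key=cand.get) if cand else None
```

After a few situations in which 'push' is heard, the motion shared by all of them out-counts the varying objects, so the ambiguity shrinks with each observation.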

Simulation

In the simulation world reside the language learner (LL) and agents who use a language (LU-s). LU-s are programmed to talk to LL, as human parents do. Both have their own repertoires of behavior, such as moving around. LL is programmed to acquire language use from the history of behavior and utterances of the LU-s and itself.

Exterior objects are given to LL with their abstract properties in this simulation, as visual pattern recognition is out of scope.

Words are also given to LL as segmented strings, as word segmentation from phonetic streams is out of scope.

Social and internal rewards are given to LL: LU-s' behaviors such as 'smiles,' as well as events in accordance with LL's predictions, yield rewards inside LL.

LL probabilistically copies (part of) LU-s' utterances and this babbling is reinforced by rewards.
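A minimal sketch of reward-modulated babbling, under the assumption that heard utterances are stored with weights, sampled in proportion to those weights, and strengthened by reward (names and the weighting scheme are hypothetical):

```python
import random

class Babbler:
    """Store heard utterances with weights; babble by weighted sampling;
    reinforce whatever utterance earned a reward."""
    def __init__(self):
        self.weights = {}

    def hear(self, utterance):
        # A newly heard utterance enters the repertoire with a base weight.
        self.weights.setdefault(utterance, 1.0)

    def babble(self):
        utts = list(self.weights)
        return random.choices(utts, weights=[self.weights[u] for u in utts])[0]

    def reinforce(self, utterance, reward):
        # Reward makes the utterance more likely to be babbled again.
        self.weights[utterance] += reward
```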

According to Tomasello, language learning requires recognizing the intentions of speakers. At the least, LL must be able to attend to what LU-s refer to with their utterances. In the case of motion, while the agent (LL or LU) is salient enough, it may be difficult to identify the motion segment being referred to.

A means of visualization will be introduced, without which it is difficult to understand what is going on in the simulation...

Learning

In the simulation above, LL must learn the motion patterns of itself and the LU-s with a time-series learning mechanism such as an RNN.  Learned recurrent patterns may be categorized with categorizers such as k-means or SOINN.  (A combination of a recurrent network and a categorizer might be realized within the DeSTIN framework.)  The scheme for mapping verbs to motion-pattern representations is to be determined.  (ref. a potentially related paper: W. Takano and Y. Nakamura
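For the categorization step, plain k-means over fixed-length windows of motion features would be a first cut.  This sketch uses NumPy only; the feature extraction (e.g. by an RNN) mentioned above is out of scope here:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Categorize fixed-length motion-feature windows (rows of X) into k
    clusters with standard Lloyd-style k-means."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct data points.
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each window to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned windows.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two obvious motion categories in a toy 2-D feature space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(X, 2)
```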