Thursday, December 14, 2017

AGI in Japan, 2017

This year again saw a number of entities in Japan working on AGI.  (See my reports for 2014, 2015, and 2016.)

The SIG-AGI of the Japanese Society for AI held sessions at the JSAI annual convention and three workshops, as well as an informal meeting with Marcus Hutter [slides].  (See the event page in Japanese.  Its Facebook group [in Japanese] now has more than 2,000 members.)

The AI and Society symposium was held in Tokyo in October, gathering prominent speakers.  A more closed conference on Beneficial AI was co-located with the symposium.  Araya was involved in both conferences.  It had previously also been involved in the General AI Challenge organized by GoodAI.  Ryota Kanai, its president, appeared on a TV talk show (in English) on the social issues of AI...

WBAI (an NPO in which I am involved) had the following activities:
In addition, Hiroshi Yamakawa, the Chairperson of the NPO, gave a long interview on AGI with FLI.

A group called TeamAI recently announced that they are starting an AGI project with the 'Lean Startup Method.'

More 'professional' or national-level funded research programs such as AIRC, NEDO, and CREST are working on topics related to AGI, such as intelligent robotics, though it is rather difficult to find their English pages...  (a rare find: a NEDO robotics conference in California)

I found the homepage of the newly founded Systems Intelligence Division of Osaka University...[2017-12-15]

Sunday, June 4, 2017

Research Update (2017-06)

I haven't written about my research for a long time.  Last time, I wrote about making an agent learn verbs.  I pursued that direction and wrote a paper to appear at the BICA 2017 conference (slides).  The research differed from the previous plan in a few points:
  • The agent (LL) learns first- and second-person pronouns as well as verbs.
  • The learning environment does not use the robot simulator, as that would take too much time.
For the first point, I noticed that the issue of personal pronouns cannot be avoided when dealing with a human-like language.  While a learning agent might learn them by observing other agents' language use, it might also learn them solely by interacting with a single caretaker.  My research pursued the second possibility, which had not been addressed in previous work.
In the research, the learner 'babbled' while speaking and acting, both with limited choices (it was assumed that the repertoires had been learnt beforehand).  Utterances and actions were reinforced by the caretaker's acknowledgment.  Some utterances were also reinforced when the caretaker took expected actions.  While the internal representations denoting actions and agents were given as symbols, their relation to words was learned through a language game with the caretaker.  (I'll put the link to the paper here when it is published.  The source code can be found on GitHub at https://github.com/rondelion/Lingadrome/tree/master/Io&Tu/ .)
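To make the mechanism concrete, here is a minimal sketch in Python (not the actual Lingadrome code; the symbol and word inventories, the ε-greedy babbling, and the 0/1 acknowledgment reward are my own simplifications) of a learner that babbles symbol-word pairings and strengthens those the caretaker acknowledges:

```python
import random
from collections import defaultdict

class BabblingLearner:
    """Toy learner: associates given internal symbols with words
    from a pre-learnt repertoire via the caretaker's acknowledgment."""
    def __init__(self, symbols, words):
        self.symbols = symbols             # internal representations (given as symbols)
        self.words = words                 # word repertoire (assumed learnt beforehand)
        self.weights = defaultdict(float)  # (symbol, word) -> association strength

    def babble(self, symbol, epsilon=0.3):
        """Mostly utter the strongest association; sometimes explore at random."""
        if random.random() < epsilon:
            return random.choice(self.words)
        return max(self.words, key=lambda w: self.weights[(symbol, w)])

    def reinforce(self, symbol, word, reward, lr=0.1):
        """Move the association strength toward the received reward."""
        self.weights[(symbol, word)] += lr * (reward - self.weights[(symbol, word)])

# The caretaker acknowledges only the pairing it considers correct (toy ground truth).
caretaker = {"SELF": "watashi", "OTHER": "anata", "EAT": "taberu", "WALK": "aruku"}
learner = BabblingLearner(list(caretaker), list(caretaker.values()))

for _ in range(2000):                      # repeated rounds of the language game
    symbol = random.choice(learner.symbols)
    word = learner.babble(symbol)
    reward = 1.0 if caretaker[symbol] == word else 0.0  # acknowledgment
    learner.reinforce(symbol, word, reward)

for s in learner.symbols:                  # after learning, babbling becomes systematic
    print(s, "->", learner.babble(s, epsilon=0.0))
```

In the actual setup, of course, reinforcement also comes from the caretaker taking the expected action, and the utterances are two-word sentences rather than single words.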
In this work, the language model for the two-word sentences was fixed; i.e., a sentence consists of a subject and a verb in that order.  For the next research topic, I'm thinking of making an agent learn language models, perhaps with an LSTM.
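For illustration only (this is not part of the published work, and the vocabulary and sentences are made up), a word-level LSTM language model over such subject-verb sentences could look like this PyTorch sketch:

```python
import torch
import torch.nn as nn

# Toy vocabulary for two-word (subject + verb) sentences.
vocab = ["<s>", "watashi", "anata", "taberu", "aruku", "</s>"]
stoi = {w: i for i, w in enumerate(vocab)}

class TinyLSTMLM(nn.Module):
    """Minimal word-level LSTM language model."""
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)                 # next-word logits at each position

# Training data: subject-verb sentences wrapped in start/end tokens.
sentences = [["<s>", "watashi", "taberu", "</s>"],
             ["<s>", "anata", "aruku", "</s>"]]
data = torch.tensor([[stoi[w] for w in s] for s in sentences])

model = TinyLSTMLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    logits = model(data[:, :-1])           # predict the next word at each step
    loss = loss_fn(logits.reshape(-1, len(vocab)), data[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A model like this could in principle replace the fixed subject-verb template, letting the learner pick up the word order from the caretaker's utterances instead of having it given.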