Sunday, June 4, 2017

Research Update (2017-06)

I haven't written about my research for a long time.  Last time, I wrote about making an agent learn verbs.  I pursued that direction and wrote a paper to appear at the BICA 2017 conference (slides).  The research differed from the previous plan in a few points:
  • The agent (LL) learns first- and second-person pronouns as well as verbs.
  • The learning environment does not use the robot simulator, which would have taken too much time.
For the first point, I realized that the issue of personal pronouns cannot be avoided when dealing with a human-like language.  While a learning agent might acquire them by observing other agents' language use, it could also learn them solely through interaction with a single caretaker.  My research pursued the second possibility, which had not been addressed in previous work.
In this research, the learner 'babbled' by speaking and acting, both from limited repertoires (it was assumed that the repertoires had been learnt beforehand).  Utterances and actions were reinforced by the caretaker's acknowledgment.  Some utterances were also reinforced when the caretaker took the expected actions.  While the internal representations denoting actions and agents were given as symbols, their relation to words was learned through a language game with the caretaker.  (I'll post a link to the paper when it is published.  The source code can be found on GitHub at https://github.com/rondelion/Lingadrome/tree/master/Io&Tu/ .)
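The babble-and-reinforce loop can be sketched roughly as follows.  This is only an illustration of the idea, not the actual Lingadrome code: the vocabulary, the weight table, and the caretaker's acknowledgment rule are all my own assumptions here.

```python
import random

random.seed(0)  # for reproducibility of this toy run

PRONOUNS = ["I", "you"]     # first- and second-person pronouns
VERBS = ["jump", "turn"]    # pre-learnt action repertoire (illustrative)

def babble(weights):
    """Pick a two-word utterance at random, biased by learnt weights."""
    utterances = list(weights)
    total = sum(weights[u] for u in utterances)
    r = random.uniform(0, total)
    for u in utterances:
        r -= weights[u]
        if r <= 0:
            return u
    return utterances[-1]

def reinforce(weights, utterance, acknowledged, lr=0.5):
    """Strengthen an utterance when the caretaker acknowledges it."""
    if acknowledged:
        weights[utterance] += lr

def caretaker_acknowledges(utterance):
    # Toy caretaker: in this illustration it only acknowledges "I jump",
    # i.e. the learner correctly reporting its own action.
    return utterance == ("I", "jump")

weights = {(p, v): 1.0 for p in PRONOUNS for v in VERBS}
for _ in range(200):
    u = babble(weights)
    reinforce(weights, u, caretaker_acknowledges(u))

best = max(weights, key=weights.get)
print(best)
```

Only the acknowledged utterance ever gains weight, so after enough trials it comes to dominate the learner's babbling, which is the essence of the reinforcement scheme described above.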
In this work, the language model for the two-word sentences was fixed; i.e., a sentence consists of a subject and a verb in that order.  For the next research topic, I'm thinking of making an agent learn language models themselves, perhaps with an LSTM.
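To make the contrast concrete, here is a deliberately simple stand-in for that idea: instead of an LSTM, a bigram-count model that learns the subject-then-verb word order from example sentences rather than having it fixed.  The vocabulary and example sentences are illustrative, not from the actual experiments.

```python
from collections import defaultdict

# Example sentences following the subject-verb order (illustrative).
sents = [["<s>", "I", "jump"], ["<s>", "you", "turn"],
         ["<s>", "I", "turn"], ["<s>", "you", "jump"]]

# Count word-to-next-word transitions.
bigrams = defaultdict(lambda: defaultdict(int))
for s in sents:
    for a, b in zip(s, s[1:]):
        bigrams[a][b] += 1

def most_likely_next(word):
    """Predict the most frequent follower of `word`, or None."""
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else None

# The model recovers the word order from data: after the sentence
# start it expects a subject, and after a subject it expects a verb.
print(most_likely_next("<s>"))
print(most_likely_next("I"))
```

An LSTM would generalize this to longer sentences and distributed representations, but the learning target is the same: the word-order regularities that were hand-coded in the current work.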