AGI-related sessions, such as one on autonomy, emergence, and general AI architecture and another on a new ecosystem woven out of humans and AI (both in Japanese), were held at the annual convention of the Japanese Society for Artificial Intelligence (JSAI) in June.
SIG-AGI@JSAI held three workshops (see the event page in Japanese). Through these workshops, I learned about recent advances in Japan, including:
- Susumu Katayama's work on AIXI
  While AIXI has been thought to be incomputable and without any implementation, there is actually a computable variant and a Haskell implementation, which won an award at the General AI Challenge in 2017! (The formula after this list shows where the incomputability comes from.)
- Symbol Emergence in Robotics
  I knew of their advances before, but I learned that one of their results is now well established: Tomoaki Nakamura et al. showed that a robot can learn a (subset of) human language in a multi-modal environment; a toy illustration of the word-grounding idea also follows this list. This is good news for the realization of human-like AGI. I have not yet been able to locate introductory articles on it in English among their publications.
- RGoal Architecture
  The architecture by Yuuji Ichisugi et al. can plan its actions (i.e., in a zero-shot manner) based on reinforcement learning; a rough sketch of the general idea appears below as well. Again, English articles on it should eventually be listed among their publications.
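
For background on the first item: AIXI (due to Marcus Hutter) chooses actions by an expectimax over all environment programs for a universal Turing machine, weighted by a Solomonoff-style prior. The sum over all programs is what makes the original agent incomputable, which is why a computable variant with a working implementation is notable. The formula below is the textbook formulation, not Katayama's variant:

```latex
% Standard AIXI action selection (Hutter's formulation).
% At cycle k with horizon m, the agent picks the action maximizing expected
% future reward, where the expectation runs over every program q for the
% universal machine U that is consistent with the interaction history,
% weighted by 2^{-\ell(q)} (a Solomonoff-style prior over environments).
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[\, r_k + \cdots + r_m \,\bigr]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```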
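
The robotics result is of course far beyond what a toy can capture, but to make the "grounding words in multi-modal experience" idea concrete, here is a hypothetical, minimal sketch (not the models of Nakamura et al., which as I understand involve much richer multimodal categorization): perceptual features are clustered without supervision, words are attached to clusters by co-occurrence, and a new object can then be named. All names and parameters here are made up for illustration.

```python
# Toy sketch of word grounding via unsupervised clustering plus co-occurrence.
import random
from collections import Counter

random.seed(0)

# Simulated multi-modal experience: a 2-D perceptual feature paired with a heard word.
PROTOTYPES = {"ball": (1.0, 1.0), "cup": (5.0, 1.0), "book": (3.0, 5.0)}

def observe():
    word, (x, y) = random.choice(list(PROTOTYPES.items()))
    return (x + random.gauss(0, 0.4), y + random.gauss(0, 0.4)), word

data = [observe() for _ in range(300)]
feats = [f for f, _ in data]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def kmeans(points, k=3, iters=20):
    """Plain k-means over the perceptual features (no word labels used)."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def inertia(points, centers):
    return sum(min(dist2(p, c) for c in centers) for p in points)

# Several restarts to avoid poor local optima of k-means.
centers = min((kmeans(feats) for _ in range(10)), key=lambda c: inertia(feats, c))

# Associate each perceptual cluster with the words heard around its members.
assoc = [Counter() for _ in centers]
for feat, word in data:
    assoc[min(range(len(centers)), key=lambda i: dist2(feat, centers[i]))][word] += 1

def name(feat):
    """Name a (possibly unseen) object by its cluster's dominant word."""
    i = min(range(len(centers)), key=lambda i: dist2(feat, centers[i]))
    return assoc[i].most_common(1)[0][0]

print(name((0.9, 1.2)))   # a previously unseen object near the "ball" prototype
```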
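
As for the RGoal item, here is a minimal, hypothetical sketch in the spirit of goal-conditioned reinforcement learning: tabular values are learned for reaching individual goals, and the learned policies are then chained through subgoals with no further training ("zero-shot" composition). The environment, function names, and parameters are all made up for illustration; as far as I understand, RGoal's actual mechanism (treating subgoal selection itself as an action) is more sophisticated than this toy.

```python
# Toy sketch: goal-conditioned Q-learning on a line, then zero-shot chaining of
# the learned policies through subgoals. NOT the actual RGoal algorithm.
import random

random.seed(0)
N = 10                      # states 0..N-1 arranged on a line
ACTIONS = [-1, +1]          # step left / right

def step(s, a):
    return max(0, min(N - 1, s + a))

# Q[(s, g)][a]: value of taking action a in state s when the goal is g.
Q = {(s, g): {a: 0.0 for a in ACTIONS} for s in range(N) for g in range(N)}

def train(episodes=10000, alpha=0.5, gamma=0.9, eps=0.2):
    for _ in range(episodes):
        s, g = random.randrange(N), random.randrange(N)
        for _ in range(2 * N):
            if s == g:
                break
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(s, g)][x]))
            s2 = step(s, a)
            r = 1.0 if s2 == g else 0.0
            target = r + (0.0 if s2 == g else gamma * max(Q[(s2, g)].values()))
            Q[(s, g)][a] += alpha * (target - Q[(s, g)][a])
            s = s2

def reach(s, g, max_steps=50):
    """Follow the learned greedy policy toward goal g."""
    path = [s]
    for _ in range(max_steps):
        if s == g:
            break
        s = step(s, max(ACTIONS, key=lambda x: Q[(s, g)][x]))
        path.append(s)
    return path

train()
# Zero-shot composition: the route "go to 7, then to 2" was never trained as a
# single task, yet it falls out of chaining the learned goal-conditioned policies.
route = reach(0, 7)[:-1] + reach(7, 2)
print(route)
```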
The Whole Brain Architecture Initiative (an NPO), in which I am personally involved, carried out activities such as:
- The annual hackathon, this time on gaze control, which laid a basis for brain-inspired cognitive architecture
- A symposium at the JNNS 2018 conference
- A proposal for function map-driven development for AGI
- Compiling Requests for Research together with Project AGI, based in Australia
As for brain-inspired AI, the academic project Correspondence and Fusion of Artificial Intelligence and Brain Science has been actively pursuing research on biological intelligence.