While I have reported my research activities on this blog, it would be difficult to find a thread running through them from the posts alone, so in this post I tidy them up and present the remaining issues.
Thematic summary
The aim of my research has been to understand human cognitive functions by implementing them (the synthetic approach) in a biologically plausible way. Since I have been interested in AGI, the functions of interest include fluid intelligence and language acquisition. While the overall purpose is to create a language-learning agent, this cannot be done all at once, so I have made my implementation efforts piecemeal. To cope with limited compute and avoid unnecessary complication, I made the settings (environments) as simple as possible, so that pattern recognition, which is not essential to the subject matter, would not be needed. In my efforts on fluid intelligence, I have prioritized architecture over learning, as fluid intelligence concerns the architecture that enables solving unseen tasks. Below I list the articles I wrote, grouped by theme and in chronological order.
Research on language acquisition
- 'Simulating the Usage Acquisition of Two-Word Sentences with a First- or Second-Person Subject and Verb' (BICA 2017)
  An agent was implemented that learns language use by interacting with a 'caretaker' in a simple language, guided by social reward from the caretaker.
- Implementation of a Parser without Grammar with Neural Sequence Memory (blog post, 2025)
  Note: almost all CFG parsers ever built are symbolic (i.e., not neural) and require a hand-crafted explicit grammar.
- A Language Model Grounded to a Simple Visual Environment with Active Vision (blog post, 2025)
  A blog post on the implementation.
Research on fluid intelligence
- Information Binding with Dynamic Associative Representations (workshop at the AGI-13 conference, 2013)
  A conceptual paper claiming that information binding with neural networks can be realized by temporal associative traversal. The idea was partly implemented in 2025: see the post above on the language model.
- The Match-to-Sample Task as the First Milestone toward the Realization of Fluid Intelligence (2020)
  A conceptual blog post on fluid intelligence (see Table 1 for the distinction between fluid intelligence and crystallized intelligence).
- A Model of Fluid Intelligence based on Examining Experienced Sequences (2022)
  A conceptual blog post on rule/policy discovery.
- Solving a Delayed Match-to-Sample Task with Sequential Memory (2023)
  An implementation of the model from the previous post.
- Implementation of Neural Sequence Memory (2024)
  A blog post on the implementation.
- A Neural Model of Rule Discovery with Relatively Short-Term Sequence Memory (2024)
  An arXiv article reporting an implementation of rule discovery using the neural sequence memory described in the previous post.
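For readers unfamiliar with the delayed match-to-sample task that recurs in the list above, here is a minimal sketch of a trial and a trivial memory-based solver. This is an illustration of the task structure only, not my implementation; the function names and the trial format are my own assumptions.

```python
import random

def dmts_trial(stimuli=('A', 'B', 'C'), delay=3):
    """Generate one delayed match-to-sample trial.

    A sample stimulus appears first, followed by `delay` blank steps,
    then a choice display containing the sample and one foil.
    Returns the observation sequence and the index of the correct choice.
    """
    sample = random.choice(stimuli)
    foil = random.choice([s for s in stimuli if s != sample])
    choices = [sample, foil]
    random.shuffle(choices)
    observations = [sample] + [None] * delay + [tuple(choices)]
    return observations, choices.index(sample)

def solve_with_memory(observations):
    """Trivial solver: retain the sample across the delay and pick the match."""
    memory = observations[0]      # store the sample stimulus
    choices = observations[-1]    # the final choice display
    return choices.index(memory)

obs, answer = dmts_trial()
assert solve_with_memory(obs) == answer
```

The point of the task is the delay: the agent must carry information over blank steps, which is why it serves as a probe of (sequence) memory rather than of perception.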
Remaining Issues
Chronological Summary (you can skip)
This blog started in 2012, when I stated my interest in AGI and the executive function.
In May 2013, I made an illustration of major human cognitive functions 'to serve for AGI designs.'

Cognitive functions in my post (2013)
Also in 2013, I wrote and presented a paper entitled 'Information Binding with Dynamic Associative Representations' for a workshop at the AGI-13 conference. I believe the idea that the brain binds information by traversing associative links is still valid. In a post in August, I stated that I would begin creating a language-capable agent (rover) to verify the idea. In that post, I listed the following functions for a core cognitive architecture:
- Time-series pattern recognizers
- Attention and situation assessment
- Episodic memory
- Backtracking
In September, I changed my approach to focus on language acquisition with a robot simulator, and included labeling and relation learning among the things to be implemented. After struggling with robot simulation, I dropped the simulator and created a new plan in June 2014, which still largely holds.
I decided to focus on verb learning in 2015, struggled again with a robot simulator through 2016, again dropped the simulator, and wrote and presented a paper entitled 'Simulating the Usage Acquisition of Two-Word Sentences with a First- or Second-Person Subject and Verb' for BICA 2017.
I then suspended my own research to focus on surveys of brain-mimetic AI for the Whole Brain Architecture Initiative until 2020.
In 2020, I became interested in fluid intelligence and related tasks such as matrix reasoning tasks and match-to-sample tasks (see Table 1 of the post for the distinction between fluid intelligence and crystallized intelligence).
In 2021, I started implementing a brain-mimetic cognitive architecture (first try, an implementation plan for visual working memory tasks, the cortico-BG loop). In 2022, I implemented an agent to solve visual working memory tasks and summarized the effort so far. The attempt lasted until the implementation of a brain-inspired visuomotor agent in early 2024, and I dropped it partly because it complicated things too much. Through those efforts, I became interested in implementing vision and working memory.
While examining issues in working memory and fluid intelligence, I found sequence memory (memory of sequences) to be important (my post in 2022), and in 2023 I implemented a system that solves a delayed match-to-sample task with sequential memory. In 2024, I implemented a neural sequence memory and used it in a neural model of rule discovery.
In 2025, I implemented a neural parser to handle a simple context-free language with the neural sequence memory.
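To make concrete what 'a simple context-free language' means here, consider a nested-bracket language as an illustrative stand-in (the actual language in the parser post may differ). A conventional symbolic recognizer for it, the kind of explicit, hand-specified baseline that a grammar-free neural parser contrasts with, is a few lines with a stack:

```python
def is_balanced(s, pairs={'(': ')', '[': ']'}):
    """Symbolic recognizer for a nested-bracket language using a stack.

    Arbitrarily deep nesting cannot be tracked with finite memory,
    which is what makes such context-free languages a meaningful
    test for a neural parser.
    """
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(pairs[ch])      # expect this closer later
        elif ch in pairs.values():
            if not stack or stack.pop() != ch:
                return False             # wrong or unmatched closer
        else:
            return False                 # symbol outside the alphabet
    return not stack                     # all openers were closed

assert is_balanced('([])[]')
assert not is_balanced('([)]')
```

The stack here plays the role that some form of memory must play in a neural parser; the recognizer itself encodes the grammar explicitly, which is exactly what the grammar-free approach avoids.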
Also in 2025, I implemented a language model grounded to a simple visual environment with active vision. With this work, I returned to my early plan for language acquisition from the 2010s, combining vision, attention, labeling, and information binding. (Here I used a statistical language model rather than the syntax-based approach I aimed at in the earlier neural parser implementation.)