Monday, October 22, 2012
BESOM
Last week a fellow student introduced me to a cognitive model called BESOM, which 'combines a Bayesian network and Self-Organizing Maps.' It was proposed by Yuuji ICHISUGI and purports to model the cerebral cortex. It appears to be quite an interesting, biologically inspired computational cognitive model. The author seems to be well known among people interested in this area in Japan.
Tuesday, October 9, 2012
Reentry
As I became a (research) student at an AI laboratory this month (2012-10), I am starting to write in this blog on AI-related topics.
My writings on AI in English so far are found at:
https://sites.google.com/site/rondelion/Home/artificial-mind
and in a blog article on Jeff Hawkins:
http://rondelion.blogspot.jp/2012/02/on-intelligence.html
What I am going to study is language acquisition by machines. Hopefully the study will develop into full-fledged artificial general intelligence research.
As building blocks of artificial intelligence, I'm going to use kinds of self-organizing (neural) network algorithms. While I'm hoping for the use of symbols to emerge from sub-symbolic patterns, as it would occur in human brains consisting of neurons, current neural network simulators are difficult to deal with (for reasons such as being computationally demanding and not apt for incremental learning). Some self-organizing (neural) networks are (claimed to be) less problematic, and I'm hoping they are good enough for simulating symbol emergence.
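To make "self-organizing (neural) network" a bit more concrete, here is a minimal sketch of a classic self-organizing map (SOM) in Python/NumPy. It only illustrates the general technique (a grid of units competitively pulled toward inputs), not the particular algorithm I plan to use; the grid size, learning-rate schedule, and neighborhood function are arbitrary choices for the example.

```python
# A minimal self-organizing map (SOM) sketch -- an illustration of the kind of
# self-organizing network algorithm mentioned above, not a specific model.
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D SOM on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    n_features = data.shape[1]
    # One weight vector per map unit, initialized randomly.
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates, used to compute neighborhood distances on the map.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]

    for epoch in range(epochs):
        # Decay the learning rate and neighborhood radius over time.
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in data:
            # Find the best-matching unit (BMU) for this input vector.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            # Pull each unit's weights toward the input, scaled by h.
            weights += lr * h[:, :, None] * (x - weights)
    return weights

# Example: organize 500 random 3-D points (e.g., colors) on a 10x10 map.
som = train_som(np.random.default_rng(1).random((500, 3)))
```

The appeal of this family of algorithms for my purpose is that learning is local and incremental: each input only nudges nearby units, so the map can keep adapting as new data arrives.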
I'm aware that simply accumulating neural-network-like modules wouldn't make an intelligence. Later studies should focus on the realization of what is called executive function over a bunch of such modules.
More on this later...