Perhaps it is time to stop and think before continuing the development...
For the moment, the agents have a very simple architecture with a blackboard-style internal communication system. Namely, an agent has an input buffer and a state buffer; 'codelets' read from the former and write to the latter. As both buffers are Python dictionaries (hash tables), they can contain any data, accessed by names/keys.
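To make this concrete, here is a minimal sketch of the two-buffer setup. The names (Agent, input_buffer, state_buffer) are my illustrative assumptions, not necessarily those in the actual code:

```python
class Agent:
    def __init__(self):
        self.input_buffer = {}   # codelets read from here
        self.state_buffer = {}   # codelets write their results here

agent = Agent()
agent.input_buffer["vision"] = [0.2, 0.8]  # any data, accessed by name/key
agent.state_buffer["reward"] = 0.0
```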
Codelets are registered as 'rules,' which have a condition part and an action part. In each execution cycle, the rules whose conditions are met 'fire,' and their actions may try to modify the contents of the state buffer. Each codelet has a score, and if more than one codelet tries to modify the same part of the state buffer (the same name/key), the value written by the codelet with the highest score (the winner) is chosen.
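A hypothetical sketch of that cycle follows; the names (Rule, run_cycle) and the exact conflict-resolution details are assumptions for illustration, not the actual implementation:

```python
class Rule:
    def __init__(self, condition, action, score):
        self.condition = condition  # callable: input_buffer -> bool
        self.action = action        # callable: input_buffer -> dict of proposed writes
        self.score = score

def run_cycle(rules, input_buffer, state_buffer):
    proposals = {}  # key -> (score, value) of the current winner
    for rule in rules:
        if rule.condition(input_buffer):  # the rule 'fires'
            for key, value in rule.action(input_buffer).items():
                # keep only the write proposed by the highest-scoring codelet
                if key not in proposals or rule.score > proposals[key][0]:
                    proposals[key] = (rule.score, value)
    for key, (_, value) in proposals.items():
        state_buffer[key] = value

# Two rules competing for the same key; the higher-scoring one wins.
rules = [
    Rule(lambda inp: "food" in inp, lambda inp: {"urge": 0.0}, score=2.0),
    Rule(lambda inp: True,          lambda inp: {"urge": 1.0}, score=1.0),
]
state = {}
run_cycle(rules, {"food": "apple"}, state)
print(state)  # {'urge': 0.0}
```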
In recent posts, I mentioned affect terms such as 'reward' and 'urge.' In fact, they correspond to variables with the same names in the state buffer, and the agents 'smile' when they read a positive 'reward' there.
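The key names 'reward' and 'urge' follow the post; the rest of this fragment is an assumed sketch of how a consumer of the state buffer might react to them:

```python
state_buffer = {"reward": 0.0, "urge": 0.0}

state_buffer["reward"] = 1.0   # suppose some winning codelet wrote this

if state_buffer["reward"] > 0:
    print("smile")             # the agent 'smiles' on a positive reward
```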
While the mechanism is quite simple, generic and usable, there are a few points to consider for future modifications:
- Is it appropriate as a cognitive architecture?
To make it look more like a respectable cognitive architecture, perhaps the affect mechanism should have its own modules.
- Is it biologically plausible?
(The answer is, of course, no.) To make it biologically plausible, the architecture should mimic the brain's architecture. Or, more simply, real brains are not supposed to use a blackboard architecture.
- Symbolic representation?
The current system uses symbolic representation (Python dictionaries) for internal communication. Besides the fact that the brain does not use such symbolic representations, a vector representation would be preferable when the system is to be controlled by machine learning algorithms (a minimal sketch follows the list).
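One possible first step toward the last point is flattening the symbolic state buffer into a fixed numeric vector that a learning algorithm could consume. The key list and the zero-default encoding below are assumptions for illustration:

```python
import numpy as np

STATE_KEYS = ["reward", "urge", "arousal"]  # an assumed, fixed ordering

def state_to_vector(state_buffer):
    """Flatten named numeric entries of the state buffer into a dense vector."""
    return np.array([float(state_buffer.get(key, 0.0)) for key in STATE_KEYS])

print(state_to_vector({"reward": 1.0, "urge": 0.3}))  # [1.  0.3 0. ]
```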