The LU (caretaker) now 'instructs' the LL (language learner) with commands such as 'stay,' 'turn,' 'come,' and 'go to the blue item' (in Interlingua).
"Luca, turn right." |
"Good!" |
LU also 'declares' (thinks aloud) its own actions before carrying them out.
"I go to the blue one." |
Now I guess I should go for language learning.
Here is a tentative plan:
(For the time being, I'll set aside LU's declaration mode, which requires LL to recognize LU's behavior.)
- The association of LU's words with LL's own actions
  As this should be done at a conceptual level, it requires (see the sketches after this plan):
  - Generation of action concepts
  - Generation of word concepts
  - Reinforcement learning of action under instruction
  - Association between word concepts and action concepts
  - Creating a language (syntactic) model linked with the concepts
- Babbling
  LL will utter based on the language model (a loop like the one sketched after this plan).
  - If the utterance matches its action, LU repeats it and gives a reward. This reinforces the babbling.
  - If not, LU describes LL's action after saying "No." This alters the conceptual association and the language model.
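To make the word–action association step a bit more concrete, here is a minimal Python sketch: a Hebbian-style co-occurrence table between word concepts and action concepts. The concept IDs, the learning rate, and the update rule are placeholders for illustration, not a committed design.

```python
# A minimal sketch of the word-concept / action-concept association.
# Concept IDs, learning rate, and the update rule are assumptions for
# illustration only.
from collections import defaultdict


class ConceptAssociator:
    """Hebbian-style co-occurrence table between word and action concepts."""

    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        self.strength = defaultdict(float)  # (word_concept, action_concept) -> strength

    def observe(self, word_concept, action_concept, reward=1.0):
        """Strengthen the link when a word co-occurs with an action.

        A negative reward (e.g. after LU says "No.") weakens the link.
        """
        key = (word_concept, action_concept)
        self.strength[key] += self.lr * (reward - self.strength[key])

    def action_for(self, word_concept):
        """Return the action concept most strongly associated with a word."""
        candidates = {a: s for (w, a), s in self.strength.items() if w == word_concept}
        return max(candidates, key=candidates.get) if candidates else None


if __name__ == "__main__":
    assoc = ConceptAssociator()
    for _ in range(10):
        assoc.observe("turn", "TURN_RIGHT")          # "Luca, turn right." + LL turns
        assoc.observe("go_blue", "GOTO_BLUE_ITEM")   # "go to the blue item" + LL goes
    print(assoc.action_for("turn"))                  # -> TURN_RIGHT
```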
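"Reinforcement learning of action under instruction" could start as something as simple as a tabular learner keyed on the instruction concept, rewarded whenever LU says "Good!". The action set, the epsilon-greedy exploration, and the reward values below are assumptions for illustration.

```python
# A sketch of reinforcement learning of action under instruction:
# a tabular learner keyed on (instruction concept, action), rewarded when
# LU says "Good!".  The action set, exploration rate, and reward values
# are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["STAY", "TURN_LEFT", "TURN_RIGHT", "COME", "GOTO_BLUE_ITEM"]


class InstructedLearner:
    def __init__(self, epsilon=0.2, alpha=0.1):
        self.q = defaultdict(float)  # (instruction, action) -> estimated value
        self.epsilon = epsilon       # exploration rate
        self.alpha = alpha           # learning rate

    def act(self, instruction):
        """Pick an action for the current instruction (epsilon-greedy)."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(instruction, a)])

    def learn(self, instruction, action, reward):
        """Move the value of acting this way under this instruction toward the reward."""
        key = (instruction, action)
        self.q[key] += self.alpha * (reward - self.q[key])


if __name__ == "__main__":
    learner = InstructedLearner()
    for _ in range(500):
        action = learner.act("turn_right")
        # LU rewards the correct response ("Good!"); otherwise no reward.
        learner.learn("turn_right", action, 1.0 if action == "TURN_RIGHT" else 0.0)
    print(learner.act("turn_right"))  # usually TURN_RIGHT after training
```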
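And a rough sketch of the babbling loop: LL samples an utterance from a toy unigram language model, and LU either repeats a matching utterance and rewards it, or says "No." and describes LL's action, which shifts both the word–action association and the model. The match test and the update magnitudes are again just placeholders.

```python
# A minimal sketch of the babbling loop.  The unigram "language model",
# the utterance-action match test, and the update sizes are assumptions
# made for illustration.
import random
from collections import defaultdict


class Babbler:
    """LL's babbling: utter from a toy language model and learn from LU's feedback."""

    def __init__(self, utterances):
        # Toy language model: an unnormalised preference per utterance.
        self.pref = {u: 1.0 for u in utterances}
        # Word concept -> action concept association (cf. the associator above).
        self.word_to_action = defaultdict(str)

    def babble(self):
        """Sample an utterance in proportion to the current preferences."""
        words = list(self.pref)
        return random.choices(words, weights=[self.pref[w] for w in words])[0]

    def caretaker_feedback(self, utterance, action):
        """LU's response: reward a match, otherwise say "No." and describe the action."""
        if self.word_to_action[utterance] == action:
            self.pref[utterance] += 1.0           # the reward reinforces the babbling
            return f'LU repeats "{utterance}" and says "Good!"'
        # Mismatch: LU says "No." and describes LL's action, which re-associates
        # the describing word with the action and dampens the wrong utterance.
        describing_word = action.lower()
        self.word_to_action[describing_word] = action
        self.pref.setdefault(describing_word, 1.0)
        self.pref[utterance] = max(0.1, self.pref[utterance] - 0.5)
        return f'LU says "No. {describing_word}."'


if __name__ == "__main__":
    luca = Babbler(["turn", "stay", "come"])
    luca.word_to_action["turn"] = "TURN"
    utterance = luca.babble()
    print(utterance, "->", luca.caretaker_feedback(utterance, "TURN"))
```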