Saturday, November 16, 2024

Implementation of Neural Sequence Memory

I forgot to report on the neural sequence memory I implemented in May.

GitHub: https://github.com/rondelion/SequenceMemory

Animals (including human beings) can keep things in (short-term) memory to perform tasks.  This ‘working memory’ includes sequence memory: we can memorize sequences of events.  While it may seem that sequence memory could be implemented with an associative memory that associates each input with the next, that is not the case, because different inputs may follow the same input: e.g., A⇒B and A⇒C in ABAACCDAB…  Thus, a proper sequence memory must have ‘latent’ states to represent positions within a sequence.
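
To see the problem concretely, here is a toy illustration (my own, not code from the repository) of a pairwise associative memory losing information on that very sequence:

    # Naive pairwise association: map each input to the set of its successors.
    def build_pairwise_memory(sequence):
        memory = {}
        for current, nxt in zip(sequence, sequence[1:]):
            memory.setdefault(current, set()).add(nxt)
        return memory

    memory = build_pairwise_memory("ABAACCDAB")
    print(memory["A"])  # {'A', 'B', 'C'} (order may vary): 'A' alone cannot determine its successor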

The specifications of the implementation are as follows (a code sketch follows the list):

  • A latent state is represented as a one-hot vector, which guarantees independence among the states.  The number of states corresponds to the number of events to be memorized.
  • Latent states have mutual associative links with the input.
  • Latent states have forward and backward associative links among themselves to represent a sequence.
  • It memorizes a sequence from a single exposure by instantly reinforcing the associative links (as in the ‘short-term potentiation’ of synapses).
  • It can ‘replay’ a memorized sequence when cued with an input stimulus.
  • Latent states have decaying activation so that the least activated state can be ‘recycled.’
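
Here is a minimal sketch (assuming NumPy) of how these pieces could fit together.  It is my own reconstruction from the bullet points above; the names (SequenceMemory, W_in, W_fwd, etc.) and the details are mine, not the repository's:

    import numpy as np

    class SequenceMemory:
        """Minimal sketch of the specification above (not the repository code)."""

        def __init__(self, n_states, input_dim, decay=0.9):
            self.W_in = np.zeros((n_states, input_dim))   # input -> state links
            self.W_out = np.zeros((input_dim, n_states))  # state -> input links
            self.W_fwd = np.zeros((n_states, n_states))   # forward links among states
            self.W_bwd = np.zeros((n_states, n_states))   # backward links among states
            self.activation = np.zeros(n_states)          # decaying activations
            self.decay = decay

        def memorize(self, sequence):
            """One-shot storage: links are reinforced instantly (cf. short-term potentiation)."""
            prev = None
            for x in sequence:
                self.activation *= self.decay
                s = int(np.argmin(self.activation))        # recycle the least activated state
                self.W_fwd[s, :] = self.W_fwd[:, s] = 0.0  # detach the recycled state
                self.W_bwd[s, :] = self.W_bwd[:, s] = 0.0
                self.W_in[s] = x                           # mutual links with the input
                self.W_out[:, s] = x
                if prev is not None:
                    self.W_fwd[prev, s] = 1.0              # forward link in the sequence
                    self.W_bwd[s, prev] = 1.0              # backward link
                self.activation[s] = 1.0
                prev = s

        def replay(self, cue, max_steps=10):
            """Replay: the cue activates a latent state, then forward links drive the chain."""
            s = int(np.argmax(self.W_in @ cue))
            outputs = [self.W_out[:, s]]
            for _ in range(max_steps - 1):
                if self.W_fwd[s].max() == 0.0:             # end of the memorized chain
                    break
                s = int(np.argmax(self.W_fwd[s]))
                outputs.append(self.W_out[:, s])
            return outputs

    # Toy usage: memorize A B A C as one-hot inputs, then replay from the cue A.
    A, B, C = np.eye(3)
    mem = SequenceMemory(n_states=8, input_dim=3)
    mem.memorize([A, B, A, C])
    print([int(np.argmax(v)) for v in mem.replay(A)])      # [0, 1, 0, 2], i.e. A B A C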

The idea here is similar to the competitive queuing model (see Bullock, 2003; Houghton, 1990; Burgess, 1999).
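
For reference, the core of competitive queuing can be sketched in a few lines: items are activated in parallel with a primacy gradient, the most active item wins each step and is then suppressed.  The snippet below is my toy illustration, not code from the cited papers:

    import numpy as np

    items = ["A", "B", "C", "D"]
    activation = np.array([1.0, 0.8, 0.6, 0.4])   # primacy gradient (parallel plan)
    output = []
    for _ in items:
        winner = int(np.argmax(activation))       # competitive choice (winner-take-all)
        output.append(items[winner])
        activation[winner] = -np.inf              # suppress the selected item
    print(output)  # ['A', 'B', 'C', 'D']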

The figure below shows an input sequence (top) and the remembered sequence (bottom):
