
For detailed information, see PalimpsestMemoriesANewHighCapacityForgetfulLearningRuleForHopfieldNetworks, the paper from which much of this information is quoted.

PalimpsestLearningRules are based on the principle of keeping the size of the weight matrix elements bounded.

These LearningRules store patterns until capacity is reached; as further patterns are presented, the network forgets older patterns in favour of newer ones. The network continues to remember the most recent p patterns, where p is the PalimpsestCapacity of the network.

Palimpsest storage prescriptions are given generally by the local rule:

w^m_ij = (1/n) * phi( n*w^(m-1)_ij + epsilon * Xi^m_i * Xi^m_j )

where Xi^m is the pattern to be stored, phi is some function, and n is the size of the network. w^m_ij = w^m_ji is the weight matrix after the mth pattern is stored.
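
As a concrete illustration (not from the paper), here is a minimal NumPy sketch of one step of this general rule. The paper leaves phi as "some function", so taking phi to be clipping to [-1, 1] and the epsilon value are illustrative assumptions:

{{{
import numpy as np

def palimpsest_update(W, xi, epsilon=0.3):
    """One step of the general palimpsest local rule.

    W:  (n, n) symmetric weight matrix w^(m-1)
    xi: length-n pattern of +/-1 values (Xi^m)

    phi is taken here to be clipping to [-1, 1], which keeps the
    weight elements bounded; the specific choice is an assumption.
    """
    n = xi.shape[0]
    phi = lambda x: np.clip(x, -1.0, 1.0)  # bounds the weight elements
    return phi(n * W + epsilon * np.outer(xi, xi)) / n
}}}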

The problem with PalimpsestMemories is that they tend to have very low capacities: less than 0.05n, compared with 0.14n for the HebbRule.

PalimpsestMemoriesANewHighCapacityForgetfulLearningRuleForHopfieldNetworks modifies the above rule to:

w^0_ij = 0 for all i, j

w^m_ij = 0 for i == j

w^m_ij = w^(m-1)_ij + [ Xi^m_i * Xi^m_j - Xi^m_i * h^m_j - h^m_i * Xi^m_j ] / n, otherwise

where h^m_i = sum(k=1 to n)[ w^(m-1)_ik * Xi^m_k ]
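
A direct transcription of this modified rule into NumPy might look as follows (a sketch; the function name is mine, not the paper's):

{{{
import numpy as np

def modified_palimpsest_update(W, xi):
    """One step of the modified rule.

    W:  (n, n) weight matrix w^(m-1), with zero diagonal
    xi: length-n pattern of +/-1 values (Xi^m)
    """
    n = xi.shape[0]
    h = W @ xi  # local fields h^m_i = sum_k w^(m-1)_ik * Xi^m_k
    W_new = W + (np.outer(xi, xi) - np.outer(xi, h) - np.outer(h, xi)) / n
    np.fill_diagonal(W_new, 0.0)  # enforce w^m_ij = 0 for i == j
    return W_new
}}}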

If you want more, such as what this has to do with fractals, try the paper. :) But essentially, with the fractal measure and probabilities developed there, this PalimpsestLearningRule has a PalimpsestCapacity of roughly 0.25n.
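
As a rough, illustrative check of the palimpsest behaviour (not an experiment from the paper), one can feed a stream of random patterns through the modified rule and test which of them remain fixed points of the sign dynamics. With a capacity around 0.25n, most of the newest patterns should be stable while the oldest are forgotten:

{{{
import numpy as np

rng = np.random.default_rng(0)
n = 100
W = np.zeros((n, n))
patterns = [rng.choice([-1.0, 1.0], size=n) for _ in range(60)]

# Store each pattern in turn with the modified rule above.
for xi in patterns:
    h = W @ xi  # local fields under the previous weights
    W = W + (np.outer(xi, xi) - np.outer(xi, h) - np.outer(h, xi)) / n
    np.fill_diagonal(W, 0.0)

# Crudely, a pattern is "remembered" if it is a fixed point of sign(W x).
def is_fixed_point(W, xi):
    return np.array_equal(np.sign(W @ xi), xi)

stable = [is_fixed_point(W, xi) for xi in patterns]
print("stable among the 20 newest patterns:", sum(stable[-20:]))
print("stable among the 20 oldest patterns:", sum(stable[:20]))
}}}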

