For now, see LearningRules.
Or, hell, for now we can jump straight to ForgetfulLearningRules (aka PalimpsestLearning aka PalimpsestMemories)
"Attractor networks such as HopfieldNetworks are used as autoassociative content addressable memories. The aim of such networks is to retrieve a previously learnt pattern from an example which is similar to, or a noisy version of, one of the previously presented patterns."
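That retrieval behaviour can be sketched with a minimal HopfieldNetwork in NumPy (pattern count, network size, and noise level below are illustrative assumptions, not anything from the quote): store a few &plusmn;1 patterns with the Hebbian outer-product rule, then clean up a corrupted probe by iterating synchronous sign updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of units (illustrative choice)

def train_hebbian(patterns):
    # Hebbian storage: W = (1/n) * sum of outer products of the patterns,
    # with the diagonal zeroed so units do not self-excite.
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=20):
    # Synchronous updates of +/-1 states; ties broken toward +1.
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

# Store 3 random patterns (well under the ~0.138*n capacity limit).
patterns = rng.choice([-1.0, 1.0], size=(3, n))
W = train_hebbian(patterns)

# Corrupt ~10% of the bits of the first pattern, then retrieve it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
noisy[flip] *= -1
out = recall(W, noisy)
```

At this low loading the noisy probe falls back into the stored pattern's basin of attraction, which is exactly the content-addressable behaviour the quote describes.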
One problem with standard LearningRules is that they suffer from CatastrophicForgetting: once the number of stored patterns exceeds the network's capacity (roughly 0.138N for a HopfieldNetwork), recall of *all* patterns collapses at once, rather than the oldest memories fading gracefully.
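A minimal sketch of the contrast, using an exponential-decay forgetting rule as one simple palimpsest scheme (the decay factor 0.85 and the sizes are arbitrary assumptions): overloading a plain Hebbian HopfieldNetwork destroys recall of even the most recently stored pattern, while the decaying version keeps recent patterns at the price of forgetting old ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

def recall_ok(W, pattern, steps=10):
    # Probe with the clean pattern itself: does it sit in its own attractor?
    x = pattern.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return bool(np.array_equal(x, pattern))

# 100 patterns in a 64-unit net: far beyond the ~0.138*n capacity.
patterns = rng.choice([-1.0, 1.0], size=(100, n))

W_std = np.zeros((n, n))  # plain Hebbian accumulation
W_pal = np.zeros((n, n))  # decaying ("palimpsest") accumulation
lam = 0.85                # per-pattern decay factor (assumed value)
for p in patterns:
    inc = np.outer(p, p) / n
    np.fill_diagonal(inc, 0.0)
    W_std = W_std + inc        # every pattern kept forever -> overload
    W_pal = lam * W_pal + inc  # old increments fade exponentially

# Overloaded Hebbian net: even the newest pattern is unrecoverable.
# Palimpsest net: newest pattern recalled, oldest long since forgotten.
print(recall_ok(W_std, patterns[-1]),
      recall_ok(W_pal, patterns[-1]),
      recall_ok(W_pal, patterns[0]))
```

The decay bounds the effective number of superimposed patterns, so the crosstalk noise saturates instead of growing without limit; that is the basic idea behind the palimpsest schemes above, though published versions (e.g. bounded synapses) differ in detail.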
[http://staff.aist.go.jp/eimei.oyama/hpcle.html] (learning models of the coordinate transformation in the human visual feedback controller; inverse kinematics?)
[http://www.spie.org/web/oer/september/sep00/ltconstr.html] (new neural network enables machines to anticipate what they will see) - loop learning, not backpropagation; more biologically plausible. hmm. or more technically, [http://citeseer.nj.nec.com/hinton00training.html] (the original paper)
[http://www.lans.ece.utexas.edu/ulg/] (unsupervised learning group at UT Austin)