ContinuousTime["NeuralNetwork"]s with random asymmetric connections become ["Chaotic"] asymptotically as the number of neurons n -> infinity, provided the origin is not a stable fixed point. However, ["Chaos"] can be constructed with only a FullyConnectedNetwork of two ["Neuron"]s, one excitatory and one inhibitory. PeriodDoublingsToChaosInASimpleNeuralNetwork shows this analytically by constructing a conjugacy to PeriodDoubling. With properly chosen weight matrices, an inhibitory neuron and an excitatory neuron together can represent an oscillator of arbitrary period.
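A minimal discrete-time sketch of such a two-neuron module. The specific weights, biases, and gain below are illustrative assumptions of mine, not the values from the paper; they just encode one excitatory neuron and one inhibitory neuron:

```python
import math

def sigma(x):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-x))

def step(x, y, g):
    """One update of a two-neuron module: x is excitatory (positive
    outgoing weights), y is inhibitory (negative incoming weight on x).
    g is an overall gain parameter; the numbers are illustrative."""
    x_next = sigma(g * (2.0 * x - 4.0 * y) + 1.0)
    y_next = sigma(g * (3.0 * x) - 2.0)
    return x_next, y_next

def orbit(g, steps=300, x0=0.2, y0=0.2):
    """Iterate the module and return the trajectory."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        x, y = step(x, y, g)
        traj.append((x, y))
    return traj
```

Sweeping g and plotting the tail of `orbit(g)` would show whether the module settles to a fixed point, a periodic cycle, or something more irregular as the gain grows.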
This route of ["PeriodDoubling"]s to ["Chaos"] is actually robust against small perturbations of the chosen weight matrices, implying that chaos itself is a strange attractor for ["NeuralNetwork"]s. In fact, randomly generated weight matrices have a positive probability of exhibiting ChaoticDynamics. The chaotic parameter space is explored further for two- and three-neuron networks in ComplexDynamicsAndTheStructureOfSmallNeuralNetworks.
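One way to probe that claim numerically is to estimate the largest Lyapunov exponent of a discrete-time map x -> tanh(Wx) for randomly drawn W; a positive estimate is the standard numerical signature of chaos. The map, the gain, and the two-trajectory estimator here are my own assumptions, not taken from the cited papers:

```python
import math
import random

def apply_map(W, x):
    """One step of the discrete-time network x -> tanh(W x)."""
    n = len(x)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(n)))
            for i in range(n)]

def lyapunov_estimate(W, steps=2000, eps=1e-8):
    """Two-trajectory estimate of the largest Lyapunov exponent:
    track a perturbed copy, log its growth, renormalize each step."""
    n = len(W)
    rng = random.Random(0)
    x = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    y = [xi + eps / math.sqrt(n) for xi in x]
    total = 0.0
    for _ in range(steps):
        x, y = apply_map(W, x), apply_map(W, y)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y))) or 1e-300
        total += math.log(d / eps)
        # pull the companion trajectory back to distance eps from x
        y = [a + (b - a) * (eps / d) for a, b in zip(x, y)]
    return total / steps
```

Drawing many W with i.i.d. entries and counting how often the estimate comes out positive gives a crude empirical probability-of-chaos as a function of the weight scale.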
For the same parameter values there may be more than one attractor, even more than one ChaoticAttractor. This coexistence of attractors often gives rise to a so-called GeneralizedHysteresis effect, which may serve as a kind of short-term memory. This kind of flipping could be relevant to the perceptual flipping of ambiguous figures in the visual domain, or to hysteresis effects in finger-tapping. What appears most relevant to the discussion or interpretation of "cognitive dynamics" are the basins of attraction of these systems, not the specific attractor; the properties of basin boundaries may matter more to understanding than the exact "shape" of the representing attractor.
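The hysteresis effect is easy to reproduce even in a single self-exciting neuron x -> sigma(w*x + b): for large enough self-weight w there is a range of bias b with two stable fixed points, and quasi-statically sweeping b up versus down leaves the neuron on different branches. The particular w and sweep range below are my choices for illustration:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def settle(x, w, b, iters=200):
    """Relax x -> sigma(w*x + b) to the nearby stable fixed point."""
    for _ in range(iters):
        x = sigma(w * x + b)
    return x

def sweep(bs, w=8.0):
    """Quasi-statically sweep the bias b, carrying the state along
    (continuation), recording the settled state at each b."""
    x, states = 0.5, []
    for b in bs:
        x = settle(x, w, b)
        states.append(x)
    return states

bs_up = [-8.0 + 0.1 * k for k in range(81)]   # b: -8 -> 0
up = sweep(bs_up)                             # follows the low branch
down = sweep(list(reversed(bs_up)))           # follows the high branch
# up[40] and down[40] are both the state at b = -4.0, inside the
# bistable window, yet they land on different attractors
```

Plotting `up` against `down` over the swept biases traces out the classic hysteresis loop; the memory is exactly "which basin of attraction the state was carried into."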
PeriodDoublingsToChaosInASimpleNeuralNetwork notes that while tanh (another common NeuronActivationFunction) is not TopologicallyConjugate to the SigmoidFunction, it does exhibit the same PeriodDoubling route to ["Chaos"].
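Worth noting alongside this: the logistic sigmoid and tanh are related by an exact affine change of variables, sigma(x) = (1 + tanh(x/2)) / 2, which is why the two activation functions behave so similarly even when the resulting network maps fail to be topologically conjugate. A quick numerical check:

```python
import math

def sigma(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

# sigma(x) == (1 + tanh(x/2)) / 2, up to floating-point error
for x in [-5.0, -1.0, 0.0, 0.5, 3.0]:
    assert abs(sigma(x) - (1.0 + math.tanh(x / 2.0)) / 2.0) < 1e-12
```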
But what does this represent in the biological system? A charge is not going to oscillate chaotically; it would be the spike train that does. Fractal character has been evidenced in spike trains in various studies. It would be nice to display this graphically, so I need to do more reading.
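One standard quantity in those spike-train studies is the Fano factor F(T) = Var(N_T)/E[N_T] of spike counts in windows of length T: it stays near 1 at every T for a Poisson process, but grows as a power law in T for fractal spike trains. A baseline sketch for the Poisson case (the rate, duration, and window size are arbitrary choices of mine):

```python
import random

def poisson_spike_train(rate, duration, seed=0):
    """Homogeneous Poisson spike times via exponential intervals."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return spikes
        spikes.append(t)

def fano_factor(spikes, duration, window):
    """Variance/mean of spike counts in non-overlapping windows."""
    n_windows = int(duration // window)
    counts = [0] * n_windows
    for s in spikes:
        i = int(s // window)
        if i < n_windows:
            counts[i] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var / mean

spikes = poisson_spike_train(rate=20.0, duration=500.0)
f = fano_factor(spikes, 500.0, window=0.5)  # ~1000 windows, ~10 spikes each
```

A fractal spike train would instead show F(T) growing like a power of T at large window sizes, which is what the power-law Fano factor analyses look for.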
#[http://qil.bu.edu/pdfs/teich/JOSA-14-529-1997.pdf]
#[http://masc-mac.lboro.ac.uk/~masc/research.html]
For more on how neural nets map to biological systems, see (extensively):
#[http://archive.cs.uu.nl/pub/RUU/CS/techreps/CS-2003/2003-008.pdf]
More on spike encodings and information theory:
#[http://www.bme.jhu.edu/~xwang/courses/papers/Borst_NatNeurosci1999.pdf]
notes:
* a minimal ChaoticNeuromodule
topics to cover:
- ["Neuron"]s
- ["NeuralNet"]s
- ["NeuralPlasticity"]
subset of papers referenced:
* BifurcationProcessesAndChaoticPhenomenaInCellularNeuralNetworks
* PeriodDoublingsToChaosInASimpleNeuralNetwork
* DynamicsOfVocabularyEvolution
* AttractorSwitchingByNeuralControlOfChaoticNeurodynamics
* ["ACellularGeneticAlgorithmForTrainingRecurrentNeuralNetworks"]
* LearningInPulseCoupledNeuralNets
* TemporalHebbianLearningInRateCodedNeuralNetworks
* TheImportanceOfChaosTheoryInTheDevelopmentOfArtificialNeuralNetworks
* ConstructiveRoleOfChaosInNeuralSystems
* AnElectrophoreticCouplingMechanismBetweenEfficiencyModificationOfSpineSynapsesAndTheirStimulation
* ActinBasedPlasticityInDendriticSpines
Questions I had after reading... things to look into, and such.
- "Where does ["Learning"] come from?"
- "What is TheBindingProblem?"
- "How much information can be stored in a chaotic network?"
- "How many ["Neuron"]s/connections are in the ["Brain"]?"
- "How many of the connections are changing how quickly?"
- "What does that say about information kept/lost/ignored?"
- "What is a FeigenbaumParameter?"
- "What was ["Innervation"] again?"
- "Are cells in chemical gradients more likely to misbuild proteins and mutate?"
- "What are ["PurkinjeCell"]s?"
links to look at and include:
[http://www.dhushara.com/book/paps/chaos/bchaos1.htm] is a LOT of information.
[http://www.dhushara.com/book/paps/consc/brcons1.htm#anchor217145] also good
[http://www.dhushara.com/book/paps/chaos/Genrefs/book/brainp/Chaoq.htm#anchor224454] yowsa
[http://citeseer.nj.nec.com/storkey99efficient.html] (on the mathier side, but good to have: efficient covariance matrix methods for bayesian gaussian processes and hopfield neural networks)