COOPERATIVE BEHAVIOR IN NEURAL SYSTEMS: Ninth Granada Lectures
887 (2007); http://dx.doi.org/10.1063/1.2709580
When viewed at a certain coarse grain, the brain appears to be a relatively small dynamical system composed of a few dozen interacting areas that perform a number of stereotypical behaviors. It is known that even relatively small dynamical systems can reliably generate robust and flexible behavior if they are located near a second-order phase transition, because of the abundance of metastable states at the critical point. The approach pursued here assumes that some of the most fundamental properties of the functioning brain are possible because it is spontaneously located at the border of such an instability. In these notes we review the motivation, the arguments, and recent results, as well as the implications of this view of the functioning brain.
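The role of criticality can be illustrated with a toy branching process (a sketch of ours, not part of the lectures; all parameters are illustrative): at branching ratio σ = 1 avalanches of activity have a heavy, power-law-like size distribution, while subcritical dynamics dies out exponentially fast.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, cap=100_000):
    """Total size of an avalanche seeded by one active unit, each active
    unit triggering a Poisson(sigma) number of descendants."""
    size, active = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)  # sum of Poisson offspring
        size += active
    return size

def tail_fraction(sigma, n=20_000, threshold=100):
    """Fraction of avalanches larger than `threshold`."""
    sizes = np.array([avalanche_size(sigma) for _ in range(n)])
    return np.mean(sizes > threshold)

frac_crit = tail_fraction(1.0)  # critical: heavy power-law-like tail
frac_sub = tail_fraction(0.5)   # subcritical: exponential tail
print(frac_crit, frac_sub)
```

At σ = 1 a sizeable fraction of avalanches is large; below criticality essentially none are, which is the "abundance of metastable, long-lived activity" the abstract appeals to in its simplest form.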
887 (2007); http://dx.doi.org/10.1063/1.2709581
It has been known for about a century that psychophysical response curves (perception of a given physical stimulus vs. stimulus intensity) have a large dynamic range: many decades of stimulus intensity can be appropriately discriminated before saturation. This is in stark contrast with the response curves of sensory neurons, whose dynamic range is small, usually covering only about one decade. We claim that this paradox can be solved by means of a collective phenomenon. By coupling excitable elements with small dynamic range, the collective response function shows a much larger dynamic range, due to the amplification mediated by excitable waves. Moreover, the dynamic range is optimal at the phase transition where self‐sustained activity becomes stable, providing a clear example of a biologically relevant quantity being optimized at criticality. We present a pedagogical account of these ideas, which are illustrated with a simple mean field model.
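A minimal mean-field sketch of this mechanism, in the spirit of the Kinouchi-Copelli picture of coupled excitable elements (our own parameterization; all values are illustrative, not the authors'): the dynamic range, in decades of stimulus between 10% and 90% of the response range, is larger at the critical branching ratio σ = 1 than below it.

```python
import numpy as np

def response(h, sigma, K=10, t_max=3000):
    """Mean-field stationary activity of three-state excitable units
    (quiescent -> active -> refractory -> quiescent) driven at rate h;
    sigma = K*p is the branching ratio of the coupling."""
    p = sigma / K
    rho = np.array([1.0, 0.0, 0.0])  # quiescent, active, refractory
    trace = []
    for t in range(t_max):
        # a quiescent unit fires from the stimulus (prob. h) or from
        # any of its K active neighbours (prob. p each)
        fire = rho[0] * (1.0 - (1.0 - h) * (1.0 - p * rho[1]) ** K)
        rho = np.array([rho[0] - fire + rho[2], fire, rho[1]])
        if t >= t_max - 300:
            trace.append(rho[1])
    return np.mean(trace)

def dynamic_range(sigma):
    """Decades of stimulus between 10% and 90% of the response range."""
    hs = np.logspace(-5, 0, 60)
    F = np.array([response(h, sigma) for h in hs])
    lo = F[0] + 0.1 * (F[-1] - F[0])
    hi = F[0] + 0.9 * (F[-1] - F[0])
    h10, h90 = np.interp([lo, hi], F, hs)
    return 10.0 * np.log10(h90 / h10)

dr_crit = dynamic_range(1.0)  # at the phase transition
dr_sub = dynamic_range(0.5)   # subcritical coupling
print(dr_crit, dr_sub)
```

Below criticality the collective response is roughly linear in the stimulus, so the element-level dynamic range is barely amplified; at σ = 1 the square-root-like response at weak stimuli stretches the usable range by many decibels.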
887 (2007); http://dx.doi.org/10.1063/1.2709582
Behavioral, neurophysiological, and theoretical studies are converging on a common theory of decision‐making that assumes an underlying diffusion process which integrates both the accumulation of perceptual and cognitive evidence for making the decision and the motor choice in one unifying neural network. In particular, neuronal activity in the ventral premotor cortex (VPC) is related to decision‐making while trained monkeys compare two mechanical vibrations applied sequentially to the tip of a finger to report which of the two stimuli has the higher frequency (Romo et al. 2004, Neuron 41: 165). Specifically, neurons were found whose response depended only on the difference between the two applied frequencies, the sign of that difference being the determining factor for correct task performance. We describe an integrate‐and‐fire attractor model with realistic synaptic dynamics, including AMPA, NMDA, and GABA synapses, which can reproduce the decision‐making‐related response selectivity of VPC neurons during the comparison period of the task. The populations of neurons for each decision in the biased‐competition attractor receive a bias input that depends on the firing rates of neurons in the VPC that code for the two vibrotactile frequencies. We found that if the connectivity parameters of the network are tuned, using mean‐field techniques, so that the network has two stable stationary final attractors, each related to one of the two possible decisions, then the firing rate of the neurons in whichever attractor wins reflects the sign of the difference between the frequencies being compared but not the absolute frequencies. Thus Weber’s law for frequency comparison is not encoded by the firing rate of the neurons in these attractors. An analysis of the nonstationary evolution of the network dynamics shows that Weber’s law is instead implemented in the probability of transition from the initial spontaneous firing state to one of the two possible attractor states.
In this way, statistical fluctuations due to finite size noise produced by the spiking dynamics play a crucial role in the decision‐making process.
887 (2007); http://dx.doi.org/10.1063/1.2709583
The brain has to process and react to an enormous amount of information from the senses in real time. How is all this information represented and processed within the nervous system? A proposal of nonlinear and complex systems research is that dynamical attractors may form the basis of neural information processing. Here we show that this idea can be successfully applied to the human auditory system, and can explain our perception of pitch.
887 (2007); http://dx.doi.org/10.1063/1.2709584
Oscillatory activity of cells has been the topic of many studies. Oscillatory activity can be due to action potential firing, corresponding to the well‐known Hodgkin‐Huxley (HH) type dynamics of ion channels in the cell membrane, or due to IP3‐mediated calcium oscillations in the endoplasmic reticulum (ER), which cause periodic oscillations of calcium transients in the cytosol. In this study we show that coupling these two oscillatory mechanisms may reveal a complex, rich spectrum of both stable and unstable cell states with hysteresis. The predicted bistability corresponds to experimentally observed states. This illustrates that the different behavior of cells is not the consequence of differentiation into cells with different properties, but rather reflects different states of a single cell type.
887 (2007); http://dx.doi.org/10.1063/1.2709585
A traditional view in neuroscience is that information arriving through one channel, i.e. a synapse, is encoded through a single code in the signal, e.g., the rate or the precise timing of the incoming events. However, not all the neural readers have to be interested in the same aspect of a common input signal, especially in multifunctional networks that can take advantage of several simultaneous codes. Multiple codes can be used to discriminate or contextualize certain inputs, even in single neurons. Dynamical mechanisms can add to the existing hard‐wired connectivity for this task. Recent experiments have revealed the existence of neural signatures in the activity of bursting cells of invertebrate central pattern generators. These signatures consist of cell‐specific spike timings in the bursting activity of the neurons. The signatures coexist with the information encoded in the frequency and/or phase relationships of the slow waves. The functional role of these neural fingerprints is still unclear. Based on experiments and using conductance‐based models, we discuss the origin and the role of neural signatures as a part of a multicoding strategy for single cells in different types of neural circuits.
887 (2007); http://dx.doi.org/10.1063/1.2709586
Odor processing in the animal olfactory system is still an open problem in modern neuroscience. It is a common understanding that the spatial code provided by the activity distribution of the olfactory receptor cells (ORC) in the presence of an odorant is transformed into a spatio‐temporal code in the mitral cell (MC) layer in mammals, or in the projection neurons (PN) in insects, which is decoded later along the neural path. The putative role of the spatio‐temporal coding is to disambiguate the stimulus by recasting it in a more robust representation that allows odor separation, categorization, and recognition. Oscillations due to lateral inhibition among MCs (or PNs) may play an important part in the code, as may neural adaptation. To shed some light on their possible role in olfactory processing, we study the properties of a simple network model. Upon presentation of a randomly distributed input, it responds with a rich spatio‐temporal structure in which two distinct phases are observed. We discuss their properties and implications for information processing.
887 (2007); http://dx.doi.org/10.1063/1.2709587
Ever since the pioneering work of Hodgkin and Huxley, biological neuron models have consisted of ODEs representing the evolution of the transmembrane voltage and the dynamics of ionic conductances. It is only recently that maps (or difference equations) have begun to receive attention as valid neuron models. They can not only be computationally advantageous substitutes for ODE models, but, since they accommodate chaotic dynamics in a natural way, they may also reproduce rich collective behaviors, which we explore here.
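A well-known example of such a map-based neuron (one illustrative choice, not necessarily the map used in the lectures) is the two-dimensional Rulkov map, whose fast variable produces chaotic spiking-bursting for α > 4:

```python
import numpy as np

def rulkov(alpha=4.3, sigma=0.1, mu=0.001, n=50_000, x0=-1.0, y0=-2.9):
    """Rulkov map: fast membrane-like variable x, slow variable y.
    For alpha > 4 the fast map is chaotic over a range of y, and the
    slow drive switches the neuron between silence and spike bursts.
    Parameter values here are illustrative."""
    x = np.empty(n)
    y = np.empty(n)
    x[0], y[0] = x0, y0
    for t in range(n - 1):
        x[t + 1] = alpha / (1.0 + x[t] ** 2) + y[t]   # fast subsystem
        y[t + 1] = y[t] - mu * (x[t] + 1.0) + mu * sigma  # slow drive
    return x, y

x, y = rulkov()
print(x.max(), x.min())
```

A single iteration costs two arithmetic lines instead of an ODE solver step, which is the computational advantage the abstract refers to; coupling many such maps is a standard way to explore the collective regimes.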
887 (2007); http://dx.doi.org/10.1063/1.2709588
A method for the suppression of collective synchrony in a population of globally coupled units is presented. Desynchronization is achieved by organizing an interaction between the ensemble and a passive oscillator; this is accomplished by a feedback technique. A significant property of our approach is that the stimulation signal vanishes as soon as the control is successful. The technique is illustrated by simulation of a model of an isolated population of neurons, which suggests a possible application of our technique in neuroscience.
887 (2007); http://dx.doi.org/10.1063/1.2709589
Using a realistic model of activity‐dependent synapses, we study the detection of coincident spikes by a postsynaptic neuron. In this context, the interplay between short‐term depression and facilitation is analyzed. We have computed, both numerically and analytically, the degree of correlation between the postsynaptic response and the input signal. Our study shows that facilitation strongly enhances spike detection compared with the situation in which depression is the only considered synaptic mechanism. In addition, facilitation determines the existence of an optimal input frequency value, which allows for the best performance within a wide (maximum) range of the neuron firing threshold. This fact could be important in coding relevant information in neural systems constituted by neurons with a high variability in their firing thresholds, as occurs in some cortical areas.
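The interplay described here can be sketched with a standard Tsodyks-Markram-type synapse under regular stimulation (parameter values are illustrative, not taken from the paper): with facilitation, the steady-state response per spike is maximal at an intermediate input frequency, consistent with the optimal-frequency effect the abstract reports.

```python
import numpy as np

def steady_state_efficacy(f, U=0.1, tau_rec=0.2, tau_fac=1.0):
    """Steady-state response per spike (u+ * x-) of a synapse with
    short-term depression (recovery time tau_rec, in s) and facilitation
    (tau_fac), driven by a regular spike train of rate f (Hz)."""
    T = 1.0 / f
    ef, er = np.exp(-T / tau_fac), np.exp(-T / tau_rec)
    # facilitation: u jumps by U(1-u) at a spike, decays to 0 in between;
    # fixed point of u just before (u-) and just after (u+) a spike:
    u_minus = U * ef / (1.0 - (1.0 - U) * ef)
    u_plus = u_minus + U * (1.0 - u_minus)
    # depression: resources x drop to x(1-u+) at a spike and recover
    # toward 1 in between; fixed point just before a spike:
    x_minus = (1.0 - er) / (1.0 - (1.0 - u_plus) * er)
    return u_plus * x_minus

rates = np.linspace(0.5, 100.0, 400)
E = np.array([steady_state_efficacy(f) for f in rates])
f_opt = rates[np.argmax(E)]
print(f_opt)
```

At low rates facilitation has decayed between spikes (small response); at high rates depression exhausts the resources; the product is largest at a few hertz for these parameters.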
Highly synchronized noise-driven oscillatory behavior of a FitzHugh-Nagumo ring with phase-repulsive coupling
887 (2007); http://dx.doi.org/10.1063/1.2709590
We investigate a ring of N FitzHugh‐Nagumo elements coupled in a phase‐repulsive fashion and subjected to a (subthreshold) common oscillatory signal and independent Gaussian white noises. This system can be regarded as a reduced version of the one studied in [Phys. Rev. E 64, 041912 (2001)], although externally forced and subjected to noise. The noise‐sustained synchronization of the system with the external signal is characterized.
887 (2007); http://dx.doi.org/10.1063/1.2709591
We study neural connectivity in cultures of rat hippocampal neurons. We measure the neurons’ response to an electric stimulation as the connectivity is gradually lowered, and characterize the size of the giant cluster in the network. The connectivity undergoes a percolation transition described by the critical exponent β ≃ 0.65. We use a theoretical approach based on bond percolation on a graph to describe the process of disintegration of the network and to extract its statistical properties. Together with numerical simulations, we show that the connectivity in the neural culture is local, characterized by a Gaussian degree distribution rather than a power-law one.
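The bond-percolation picture can be sketched with a minimal union-find simulation on an Erdős-Rényi-style random graph (a generic stand-in for the culture's local graph; sizes and degrees are illustrative): the giant-cluster fraction collapses once the mean degree drops below the percolation threshold.

```python
import numpy as np
from collections import Counter

def giant_fraction(n, mean_degree, seed=0):
    """Fraction of nodes in the largest connected cluster of an
    Erdos-Renyi-style random graph, found with union-find."""
    rng = np.random.default_rng(seed)
    p = mean_degree / (n - 1)   # bond probability
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n - 1):
        # sample the open bonds from node i to nodes i+1 .. n-1
        for k in np.nonzero(rng.random(n - 1 - i) < p)[0]:
            ri, rj = find(i), find(i + 1 + k)
            if ri != rj:
                parent[ri] = rj             # merge the two clusters
    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

g_conn = giant_fraction(2000, 3.0)  # well-connected network
g_dil = giant_fraction(2000, 0.5)   # strongly diluted network
print(g_conn, g_dil)
```

Above the threshold (mean degree 1 for this graph family) most nodes belong to one giant cluster; below it the largest cluster is a vanishing fraction of the network, which is the disintegration process the abstract analyzes.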
887 (2007); http://dx.doi.org/10.1063/1.2709592
The retrieval abilities of spatially uniform attractor networks can be measured by the average overlap between patterns and neural states. Metric networks (with local connections), such as small‐world graphs modelled by two parameters, the connectivity γ and the randomness ω, however display a richer distribution of memory attractors. We found that metric networks can carry information structured in blocks without any global overlap. There is a competition between global and block attractors. We propose a way to measure the block information, related to the fluctuations of the overlap over the blocks. The phase diagram, with the transition from local to global information, shows that the stability of the blocks grows with dilution but decreases with the storage rate, and disappears for random topologies.
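The overlap diagnostic mentioned here can be illustrated on a standard fully connected Hopfield network (a simplified stand-in for the metric networks of the paper; sizes and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 5
xi = rng.choice([-1, 1], size=(P, N))  # stored binary patterns
J = (xi.T @ xi) / N                    # Hebbian couplings
np.fill_diagonal(J, 0.0)               # no self-coupling

s = xi[0].copy()                       # start from pattern 0 ...
flip = rng.random(N) < 0.15            # ... with 15% of units flipped
s = np.where(flip, -s, s)

for _ in range(20):                    # deterministic updates
    s = np.sign(J @ s)
    s[s == 0] = 1

overlap = (s @ xi[0]) / N              # overlap with the stored pattern
print(overlap)
```

The global overlap approaches 1 when the cue is retrieved; the paper's point is that in metric networks retrieval can instead be block-wise, so the global overlap vanishes while per-block overlaps (and their fluctuations) remain informative.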
887 (2007); http://dx.doi.org/10.1063/1.2709593
We present a theoretical framework which allows one to study, both theoretically and numerically, the effect of including activity-dependent mechanisms in the dynamics of synapses in simple neural networks. In particular, we study synaptic changes on different time scales, from less than a millisecond (fast synaptic noise) to the scale of learning (say, years). In some limits of interest, as a consequence of such dynamics, the fixed-point solutions or attractors lose stability and the system shows an enhanced response to changing external stimuli. In some conditions, this results in a novel phase in which the neural activity continuously jumps among different activity patterns.
887 (2007); http://dx.doi.org/10.1063/1.2709594
The neural activity of the human brain is dominated by self‐sustained activity. External sensory stimuli influence this autonomous activity, but they do not drive the brain directly. Most standard artificial neural network models, however, are input driven and do not show spontaneous activity.
It constitutes a challenge to develop organizational principles for controlled, self‐sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought process is characterized, within this approach, by a time series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. Subsequent transient states are characterized by large associative overlaps and are identical to acquired patterns; memory states, the acquired patterns, thus have a dual functionality.
In this approach the self‐sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered out.
Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics needs a long‐term working‐point optimization, which acquires within the dHAN concept a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self‐induced activities and Hebbian‐type learning rules.
887 (2007); http://dx.doi.org/10.1063/1.2709595
The competition between pattern reconstruction and sequence processing is studied here in an exactly solvable feed‐forward layered neural network model of binary units and patterns near saturation. We show results for both symmetric and asymmetric sequence processing, each competing with pattern reconstruction represented by a Hebbian interaction, in order to compare the two kinds of sequence processing. Phase diagrams of stationary states are obtained, and a new phase of cycles of period two is found for a weak Hebbian term in the case of symmetric sequence processing, independently of the number c of condensed patterns, which have macroscopic overlaps with the states of the network. In contrast, the stability of these cycles depends strongly on c. These results differ from those for the competition between a Hebbian interaction and an asymmetric sequence-processing interaction, in which the period of the cycles is c and the stability of the solutions does not depend on c. The dynamics of the macroscopic overlaps in the stationary cyclic phase is analyzed in both models.
887 (2007); http://dx.doi.org/10.1063/1.2709596
Control theory is a mathematical description of how to act optimally to gain future rewards. In this paper I give an introduction to deterministic and stochastic control theory and an overview of possible applications of control theory to the modeling of animal behavior and learning. I discuss a class of non‐linear stochastic control problems that can be solved efficiently using a path integral or by Monte Carlo sampling. In this control formalism the central concept of cost‐to‐go becomes a free energy, and methods and concepts from statistical physics can be readily applied.
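The free-energy identity can be sketched in the simplest setting (a one-dimensional toy problem of our own choosing, with unit noise and control cost): the cost-to-go is J = -λ log E[exp(-φ(x_T)/λ)], with the expectation over uncontrolled diffusion paths, so Monte Carlo sampling of uncontrolled dynamics estimates it directly.

```python
import numpy as np

# Toy setup (ours): dynamics dx = u dt + dW with noise variance nu,
# control cost (1/2) u^2, end cost phi(x_T) = x_T^2 / 2, so lambda = nu
# and J(x0, 0) = -lambda * log E[exp(-phi(x_T)/lambda)], the expectation
# taken over *uncontrolled* paths, i.e. x_T ~ N(x0, nu*T).
rng = np.random.default_rng(2)
x0, T, nu, lam = 1.0, 1.0, 1.0, 1.0

xT = x0 + np.sqrt(nu * T) * rng.standard_normal(200_000)
J_mc = -lam * np.log(np.mean(np.exp(-xT**2 / (2.0 * lam))))

# closed form of the same Gaussian integral, for comparison
J_exact = -lam * (-0.5 * np.log(1.0 + nu * T / lam)
                  - x0**2 / (2.0 * (lam + nu * T)))
print(J_mc, J_exact)
```

For this quadratic end cost the expectation is a Gaussian integral with a closed form, so the sampling estimate can be checked exactly; for genuinely non-linear problems only the sampling route remains, which is the point of the path-integral formalism.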
887 (2007); http://dx.doi.org/10.1063/1.2709597
We study a system whose dynamics are governed by predictions of its future states. A general formalism and concrete examples are presented. We find that the dynamical characteristics depend on how to shape the predictions as well as on how far ahead in time to make them. We also report that noise can induce oscillatory behavior, which we call “predictive stochastic resonance”.
887 (2007); http://dx.doi.org/10.1063/1.2709598
Anharmonic interactions in lattices may sustain robust oscillatory modes and (nonlinear) waves, including solitons. This is illustrated here using the exponentially repulsive interaction introduced by Toda. To cope with friction and dissipation, which are always present in real systems, and hence to make solitons and other excitations robust, an appropriate input-output energy balance is added to the dynamics, following Lord Rayleigh. Noise (and hence temperature) is also incorporated by embedding the system in a Gaussian white-noise environment (thermal bath). For the particular case of a lattice ring with six units, it is shown how such a Toda-Rayleigh lattice can be used as a central pattern generator of three different oscillatory modes. These three modes are shown to map onto three walking gaits of insects (hexapods): metachronal (low speed), caterpillar (medium speed), and tripod (fast speed). An electronic implementation of the Toda-Rayleigh lattice ring (diodes easily realize exponential interactions) is also discussed, including leg motor controls for a hexapod robot. Finally, the Toda-Rayleigh mechanical lattice is converted into an electromechanical, wire-like lattice electric conductor. This is done by considering the lattice units as positive ion cores and adding free electrons to the system. The coupling of Toda dynamics with Coulomb interactions yields remarkable current-field/voltage and current-temperature characteristics in the presence of an external electric field. An Ohmic/non-Ohmic transition is possible in the lattice conductor, a feature that permits one to regard it as a neural-like conveyor of subsonic (Ohmic) and fast supersonic (non-Ohmic) electric or other signals.
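The frictionless core of such a lattice is easy to simulate (a minimal sketch of ours, with the Rayleigh pumping and thermal noise of the paper omitted; units and amplitudes are arbitrary): a Toda ring integrated symplectically conserves its energy to high accuracy.

```python
import numpy as np

def toda_forces(q, a=1.0, b=1.0):
    """Forces in a ring of units interacting via the Toda potential
    V(r) = (a/b)(exp(-b r) - 1) + a r for each neighbour separation r."""
    r = np.roll(q, -1) - q          # r[i] = q[i+1] - q[i] on the ring
    e = np.exp(-b * r)
    return a * (np.roll(e, 1) - e)  # bond (i-1,i) minus bond (i,i+1)

def toda_energy(q, p, a=1.0, b=1.0):
    r = np.roll(q, -1) - q
    return 0.5 * np.sum(p**2) + np.sum((a / b) * (np.exp(-b * r) - 1) + a * r)

# leapfrog (velocity Verlet) integration of a frictionless N = 6 ring
N, dt, steps = 6, 0.01, 20_000
rng = np.random.default_rng(3)
q = 0.1 * rng.standard_normal(N)    # small random displacements
p = 0.1 * rng.standard_normal(N)    # small random momenta

E0 = toda_energy(q, p)
f = toda_forces(q)
for _ in range(steps):
    p += 0.5 * dt * f
    q += dt * p
    f = toda_forces(q)
    p += 0.5 * dt * f
E1 = toda_energy(q, p)
print(E0, E1)
```

The Rayleigh mechanism of the paper would add a velocity-dependent pumping/damping term to the momentum update, turning the conservative ring into a self-sustained oscillator; this sketch only verifies the conservative Toda backbone.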