{"title": "How Oscillatory Neuronal Responses Reflect Bistability and Switching of the Hidden Assembly Dynamics", "book": "Advances in Neural Information Processing Systems", "page_first": 977, "page_last": 984, "abstract": null, "full_text": "How Oscillatory Neuronal Responses Reflect \n\nBistability and Switching of the Hidden \n\nAssembly Dynamics \n\nK. Pawelzik, H.-V. Bauert, J. Deppisch, and T. Geisel \n\nInstitut fur Theoretische Physik and SFB 185 Nichtlineare Dynamik \n\nUniversitat Frankfurt, Robert-Mayer-Str. 8-10, D-6000 Frankfurt/M. 11, FRG \n\nttemporary adress:CNS-Program, Caltech 216-76, Pasadena \n\nemail: klaus@chaos.uni-frankfurt.dbp.de \n\nAbstract \n\nA switching between apparently coherent (oscillatory) and stochastic \nepisodes of activity has been observed in responses from cat and monkey \nvisual cortex. We describe the dynamics of these phenomena in two paral(cid:173)\nlel approaches, a phenomenological and a rather microscopic one. On the \none hand we analyze neuronal responses in terms of a hidden state model \n(HSM). The parameters of this model are extracted directly from exper(cid:173)\nimental spike trains. They characterize the underlying dynamics as well \nas the coupling of individual neurons to the network. This phenomenolog(cid:173)\nical model thus provides a new framework for the experimental analysis \nof network dynamics. The application of this method to multi unit ac(cid:173)\ntivities from the visual cortex of the cat substantiates the existence of \noscillatory and stochastic states and quantifies the switching behaviour \nin the assembly dynamics. On the other hand we start from the single \nspiking neuron and derive a master equation for the time evolution of the \nassembly state which we represent by a phase density. This phase density \ndynamics (PDD) exhibits costability of two attractors, a limit cycle, and \na fixed point when synaptic interaction is nonlinear. 
External fluctuations can switch the bistable system from one state to the other. Finally we show that the two approaches are mutually consistent and that both therefore explain the detailed time structure in the data.

1 INTRODUCTION

A few years ago, oscillatory and synchronous neuronal activity was discovered in cat visual cortex [1-3]. These experiments backed earlier considerations about synchrony in neuronal activity as a mechanism to bind features, e.g., of an object in a visual scene [4]. They triggered broad experimental and theoretical investigations of detailed neuronal dynamics as a means for information processing and, in particular, for feature binding. Many theoretical contributions tried to reproduce and explain aspects of the experimentally observed phenomena [5]. Motivated by the experiments, the models were particularly designed to exhibit spatial synchronization of permanent oscillatory responses upon stimulation by a common, connected stimulus like a bar. Most models consist of elements which exhibit a limit cycle after a simple Hopf bifurcation.

The experimental data, however, contain many details which the present models do not yet completely incorporate. One of these details is the coexistence of regular and irregular episodes in the data, which interchange in an apparently stochastic manner. This interchange can be observed in the signals from a single electrode [6] as well as in the time-resolved correlation of the signals from two electrodes [7]. In this contribution we show that the observed time structure reflects a switching in the dynamics of the underlying neuronal system. This will be demonstrated by two complementary approaches:

On the one hand we present a new method for a quantitative analysis of the dynamical system underlying the measured spike trains.
Our approach gives a quantitative description of the dynamical phenomena and furthermore explains the relation between the collective excitation in the network, which is not accessible experimentally (i.e. hidden), and the contributions of the single observed neurons in terms of transition probability functions. These probabilities are the parameters of our Ansatz and can be estimated directly from multi unit activities (MUA) using the Baum-Welch algorithm. Especially for the data from cat visual cortex we find that there are indeed two states dominating the dynamics of collective excitation, namely a state of repeated excitation and a state in which the observed neurons fire independently and stochastically.

On the other hand, using simple statistical considerations we derive a description for a local neuronal subpopulation which exhibits bistability. The dynamics of the subpopulation can either rest on a fixed point, corresponding to the irregular firing patterns, or follow a limit cycle, corresponding to the oscillatory firing patterns. The subpopulation can alternate between both states under the influence of noise in the external excitation. It turns out that the dynamics of this formal model reproduces the observed local cortical signals in much detail.

2 Excitability of Neurons and Neuronal Assemblies

An abstract model of a neuron under external excitation e is given by its threshold dynamics. The state of the neuron is represented by its phase φ_s, which is the time passed since the last action potential (φ_s = 0). The threshold θ is high directly after a spike and falls off in time, and the neuron can fire again when e exceeds θ. In case of noise or internal stochasticity, an excitability description of the dynamics of the neuron is more adequate.
It gives the probability p_f to fire again in dependence of the state φ_s, with p_f(φ_s) = σ(e − θ(φ_s)) and σ some sigmoid function. A monotonously falling threshold θ then corresponds to a monotonously increasing excitability p_f. Such a description neglects any memory in the neuron going beyond the last spike. In particular this means for an isolated neuron that p_f can be easily calculated from the inter-spike interval histogram (ISIH) p_h using the relation p_h(t) = p_f(t) · (1 − ∫_0^t p_h(t') dt'). In that case also the autocorrelation function can be calculated from p_h(t) via G(τ) = p_h(τ) + ∫_0^τ p_h(t) G(τ − t) dt.

The excitability formulation sketched above is not valid for a neuron which is embedded in a neuronal assembly. However, we may use this Ansatz of a renewal process to describe the activation dynamics of the whole assembly (see section 5). The phase φ_b = 0 here corresponds to the state of synchronous activity of many neurons in the assembly, which we call a burst for convenience. Since the dynamics of the network can differ from the dynamics of the elements, we expect the function p_f^b(φ_b), which now describes the burst excitability of the whole assembly, to be different from the spike excitability p_f(φ_s) of the single neuron.

A simple example for this is a system of integrate-and-fire neurons in which oscillatory and irregular phases emerge under fixed stimulus conditions ([8, 9] and section 5). Contrary to the excitability of the single refractory element, the burst excitability p_f^b of the system has a maximum at φ_b = T which expresses the increased probability to burst again after the typical oscillation period T, i.e. the maximum represents a state o of oscillation. The assembly, however, can miss a burst around φ_b = T with a probability p_{o→s} and switch into a second state s in which the probability p_{s→o} to burst again is reduced to a constant level.
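Both renewal relations above can be evaluated directly in discrete time. The following sketch computes the ISIH from a given excitability and the autocorrelation from the ISIH; the sigmoidal shape of p_f at the end is an assumption chosen purely for illustration, not a fitted form.

```python
import numpy as np

def isih_from_excitability(p_f):
    """Discrete-time version of p_h(t) = p_f(t) * (1 - integral_0^t p_h(t') dt'):
    the probability of the first spike at time t is the excitability times the
    probability that no spike has occurred before t."""
    p_h = np.zeros_like(p_f)
    survived = 1.0                      # probability that no spike occurred yet
    for t in range(len(p_f)):
        p_h[t] = p_f[t] * survived
        survived -= p_h[t]
    return p_h

def autocorr_from_isih(p_h, n_tau):
    """Renewal equation G(tau) = p_h(tau) + sum_{0<t<tau} p_h(t) * G(tau - t)."""
    G = np.zeros(n_tau)
    for tau in range(n_tau):
        G[tau] = p_h[tau] if tau < len(p_h) else 0.0
        for t in range(1, tau):
            if t < len(p_h):
                G[tau] += p_h[t] * G[tau - t]
    return G

# A single refractory neuron: excitability rises sigmoidally after each spike.
phi = np.arange(100)
p_f = 1.0 / (1.0 + np.exp(-(phi - 20) / 3.0))
p_h = isih_from_excitability(p_f)
G = autocorr_from_isih(p_h, 100)
```

Even for this monotonously increasing p_f, the peaked ISIH makes G(τ) show damped modulations at multiples of the preferred interval, although the model is purely refractory and has no oscillatory state.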
The switching probabilities p_{o→s} and p_{s→o} can be easily calculated from p_f^b. In this way the shape of p_f^b distinguishes a system with an oscillatory state from a system which is purely refractory but which nevertheless can still have strong modulations in the autocorrelogram [13].

3 Hidden States and Stochastic Observables

The single neuron in an assembly, however, need not be strictly coupled to the state of the assembly, i.e. a neuron may also spike for φ_b > 0 and it may not take part in a burst. This stochastic coupling to an underlying process suffices to destroy the equivalence of p_h and the autocorrelogram C(τ) = <s(t) s(t + τ)>_t of the spike train s(t) ∈ {0, 1} (Fig. 1). We therefore include the probability p_obs(φ_b) to observe a spike when the assembly is in the state φ_b into our description (Fig. 2). The unlikely case where the spike represents the burst corresponds to the choice p_obs = δ_{φ_b,0}.

4 Application to Experimental Data

While our approach is quite general, we here concentrate on the measurements of Gray et al. [2] in cat visual cortex. Because our hidden state model has the structure of a hidden Markov model, we can obtain all the parameters p_obs(φ) and

Figure 1: Correlogram of multi unit activities from cat visual cortex (line). Correlograms predicted from the ISIH (Δ) and from the hidden state model (+).

Figure 2: The hidden state model. While p_f^b(φ_b) governs the dynamics of assembly states φ_b, p_obs(φ_b) represents the probability to observe a spike of a single neuron.
Figure 3: Network excitability p_f^b and single neuron contribution p_obs estimated from experimental spike trains (A17, cat).

p_f^b(φ) directly from the multi unit activities using the well known Baum-Welch algorithm [10]. The results can be seen in Fig. 3. The excitability shows a typical peak around the main period at T = 19 ms, which indicates a state of oscillation. For larger phases we see a reduced excitability, which reveals a state of stochastic activity (p_obs(φ_b > T) > 0). The spike observation probability p_obs(φ) is peaked near the burst and is about constant elsewhere. This means that we can characterize the data by a stochastic switching between two dynamical states in the underlying system. Because of the stochastic coupling of the single neuron to the assembly state, this can hardly be observed directly. The switching probabilities between the two states calculated from p_f^b coincide with results from other methods [11].

From the excitability p_f^b and the spike probabilities p_obs we now obtain the autocorrelation function C(τ) = ∫_φ ∫_φ' p_obs(φ') M(φ', φ)^τ p_obs(φ) p(φ) dφ' dφ, with M being the transition matrix of the Markov model (see also below). The result is compared to the true autocorrelation C(τ) in Fig. 1. The excellent agreement confirms our simple Ansatz of a renewal process for the hidden burst dynamics of the assembly.

5 Bistability and Switching in Networks of Spiking Neurons

The above results indicate that the dynamics of a cortical assembly includes bistability rather than a simple Hopf bifurcation. In order to understand how this bistability emerges in a network, we go one step back and derive a model for a neuronal subpopulation on the basis of spiking neurons.
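The autocorrelation formula C(τ) = ∫∫ p_obs(φ') M(φ', φ)^τ p_obs(φ) p(φ) dφ' dφ of section 4 can be evaluated numerically once the hidden-state parameters are given. In the sketch below the shapes of the burst excitability and the observation probability are assumptions that only loosely mimic Fig. 3 (peak of p_f^b at T = 19, p_obs peaked at the burst and roughly constant elsewhere); they are not the estimated values.

```python
import numpy as np

def transition_matrix(p_b):
    """M[j, i]: probability that assembly phase i goes to phase j in one step.
    A burst (probability p_b[i]) resets the phase to 0; otherwise the phase
    advances by one bin, with everything beyond the last bin pooled there."""
    n = len(p_b)
    M = np.zeros((n, n))
    for i in range(n):
        M[0, i] = p_b[i]
        j = min(i + 1, n - 1)
        M[j, i] += 1.0 - p_b[i]
    return M

def autocorr_hsm(p_b, p_obs, n_tau, n_iter=2000):
    M = transition_matrix(p_b)
    p = np.full(len(p_b), 1.0 / len(p_b))
    for _ in range(n_iter):          # power iteration -> stationary phase density
        p = M @ p
    v = p_obs * p                    # weight of a spike observed in each phase
    C = np.empty(n_tau)
    for tau in range(n_tau):
        C[tau] = p_obs @ v           # sum_{phi,phi'} p_obs(phi') [M^tau v](phi')
        v = M @ v
    return C

phi = np.arange(60)
p_b = 0.02 + 0.35 * np.exp(-0.5 * ((phi - 19) / 2.0) ** 2)   # oscillatory peak at T = 19
p_obs = 0.05 + 0.5 * (phi == 0)                               # peaked near the burst
C = autocorr_hsm(p_b, p_obs, 80)
```

The local maximum of C near τ = 19 reflects the oscillatory state; with a constant p_b the return probability to φ_b = 0 is the same at every lag and the modulation disappears, which is exactly the distinction drawn in section 2.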
We assume again that the internal state of the neuron is given by the threshold function θ depending on the time since the last spike event, and that the excitability of the neuron can be described by a firing probability p_f.

Figure 4: Illustration of assembly state representation by a phase density.

In a network, however, the input to the neuron has external contributions i_ext as well as contributions i_int from within, i.e.

p_f(
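The phase-density picture of Fig. 4 can be sketched as a simple update rule: the assembly state is a density over phases, each phase bin fires with probability p_f(φ) = σ(e − θ(φ)), the fired mass is reset to φ = 0, and the rest advances by one bin. All parameter values below (threshold shape, gain and threshold of the sigmoidal synaptic feedback, noise level) are assumptions for illustration only, and whether such a sketch actually switches between an oscillatory and a stochastic state depends on these choices.

```python
import numpy as np

n_phase = 40
theta = 4.0 * np.exp(-np.arange(n_phase) / 5.0)   # threshold decays after a burst

def pdd_step(density, i_ext, burst_prev, gain=8.0):
    """One update of the phase density under total input e = i_ext + i_int."""
    # nonlinear (sigmoidal) synaptic feedback from the previous population burst
    i_int = 1.0 / (1.0 + np.exp(-gain * (burst_prev - 0.15)))
    e = i_ext + i_int
    p_fire = 1.0 / (1.0 + np.exp(-(e - theta) / 0.1))
    fired = density * p_fire
    burst = fired.sum()               # fraction of the assembly firing together
    stay = density - fired
    new = np.empty_like(density)
    new[0] = burst                    # fired mass resets to phase 0
    new[1:] = stay[:-1]               # the rest advances by one phase bin
    new[-1] += stay[-1]               # oldest phases pool in the last bin
    return new / new.sum(), burst

rng = np.random.default_rng(0)
density = np.full(n_phase, 1.0 / n_phase)
bursts, burst = [], 0.0
for _ in range(500):
    density, burst = pdd_step(density, 0.5 + 0.1 * rng.standard_normal(), burst)
    bursts.append(burst)
```

Fluctuations in i_ext play the role of the external noise in the text: they can carry the density between an episode of large recurrent bursts and an episode of low, roughly constant activity.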