{"title": "Resonance in a Stochastic Neuron Model with Delayed Interaction", "book": "Advances in Neural Information Processing Systems", "page_first": 314, "page_last": 320, "abstract": null, "full_text": "Resonance in a Stochastic Neuron Model \n\nwith Delayed Interaction \n\nToru Ohira* \n\nSony Computer Science Laboratory \n\n3-14-13 Higashi-gotanda \n\nShinagawa, Tokyo 141, Japan \n\nohira@csl.sony.co.jp \n\nYuzuru Sato \n\nInstitute of Physics, \n\nGraduate School of Arts and Science, University of Tokyo \n\n3-8-1 Komaba, Meguro, Tokyo 153 Japan \n\nysato@sacral.c.u-tokyo.ac.jp \n\nJack D. Cowan \n\nDepartment of Mathematics \n\nUniversity of Chicago \n\n5734 S. University, Chicago, IL 60637, U.S.A \n\ncowan@math.uchicago.edu \n\nAbstract \n\nWe study here a simple stochastic single neuron model with delayed \nself-feedback capable of generating spike trains. Simulations show \nthat its spike trains exhibit resonant behavior between \"noise\" and \n\"delay\". In order to gain insight into this resonance, we simplify \nthe model and study a stochastic binary element whose transition \nprobability depends on its state at a fixed interval in the past. \nWith this simplified model we can analytically compute interspike \ninterval histograms, and show how the resonance between noise and \ndelay arises. The resonance is also observed when such elements \nare coupled through delayed interaction. \n\n1 \n\nIntrod uction \n\n\"Noise\" and \"delay\" are two elements which are associated with many natural \nand artificial systems and have been studied in diverse fields. Neural networks \nprovide representative examples of information processing systems with noise and \ndelay. Though much research has gone into the investigation of these two factors \nin the community, they have mostly been separately studied (see e.g. [1]). 
* Affiliated also with the Laboratory for Information Synthesis, RIKEN Brain Science Institute, Wako, Saitama, Japan \n\nNeural models incorporating both noise and delay are more realistic [2], but their complex characteristics have yet to be explored both theoretically and numerically. \n\nThe main theme of this paper is the study of a simple stochastic neural model with delayed interaction which can generate spike trains. The most striking feature of this model is that it can show a regular spike pattern with suitably \"tuned\" noise and delay [3]. Stochastic resonance in neural information processing has been investigated by others (see e.g. [4]). This model, however, introduces a different type of such resonance, via delay rather than through an external oscillatory signal. It can be classified with models of stochastic resonance without an external signal [5]. The novelty of this model is the use of delay as the source of its oscillatory dynamics. To gain insight into the resonance, we simplify the model and study a stochastic binary element whose transition probability depends on its state at a fixed interval in the past. With this model, we can analytically compute interspike interval histograms, and show how the resonance between noise and delay arises. We further show that the resonance also occurs when such stochastic binary elements are coupled through delayed interaction. \n\n2 Single Delayed-feedback Stochastic Neuron Model \n\nOur model is described by the following equations: \n\nμ (d/dt) V(t) = -V(t) + W φ(V(t - τ)) + ξ_L(t), \nφ(V(t)) = 2 / (1 + e^{-η(V(t) - θ)}) - 1, (1) \n\nwhere η and θ are constants, and V is the membrane potential of the neuron. The noise term ξ_L has the following probability distribution. 
\n\nP(ξ_L = u) = 1/(2L) (-L ≤ u ≤ L), \n= 0 (u < -L or u > L), (2) \n\ni.e., ξ_L is a time-uncorrelated, uniformly distributed noise in the range (-L, L). It can be interpreted as a fluctuation that is much faster than the membrane relaxation time μ. The model can be interpreted as a stochastic neuron model with delayed self-feedback of weight W, which is an extension of a model with no delay previously studied using the Fokker-Planck equation [6]. \n\nWe numerically study the following discretized version: \n\nV(t + 1) = 2 / (1 + e^{-η(V(t-τ) - θ)}) - 1 + ξ_L. (3) \n\nWe fix η and θ so that this map has two basins of attraction of differing size with no delay, as shown in Figure 1(A). We have simulated the map (3) with various noise widths and delays and find regular spiking behavior, as shown in Figure 1(C), for tuned noise width and delay. When the noise width is too large or too small for a given self-feedback delay, this rhythmic behavior does not emerge, as shown in Figure 1(B) and (D). \n\nWe argue that the delay changes the effective shape of the basins of attraction into an oscillatory one, just like that due to an external oscillating force which, as is well known, leads to stochastic resonance with a tuned noise width. The analysis of the dynamics given by (1) or (3), however, is a non-trivial task, particularly with respect to the spike trains. A previous analysis using the Fokker-Planck equation cannot capture this emergence of regular spiking behavior. This difficulty motivates us to further simplify our model, as described in the next section. \n\nFigure 1: (A) The shape of the sigmoid function φ (b) for η = 4 and θ = 0.1. The straight line (a) is φ = V, and the crossings of the two lines indicate the stationary points of the dynamics. Also, the typical dynamics of V(t) from the map model are shown as we change the noise width L. The values of L are (B) L = 0.2, (C) L = 0.4, (D) L = 0.7. The data is taken with τ = 20, η = 4.0, θ = 0.1, and the initial condition V(t) = 0.0 for t ∈ [-τ, 0]. The plots are shown between t = 0 and t = 1000. (E) Schematic view of the single binary model. Some typical dynamics from the binary model are also shown. The values of the parameters are τ = 10, q = 0.5, and (F) p = 0.005, (G) p = 0.05, and (H) p = 0.2. \n\n3 Delayed Stochastic Binary Neuron Model \n\nThe model we now discuss is an approximation of the dynamics that retains the asymmetric stochastic transition and delay. The state X(t) of the system at time step t is either -1 or 1. With the same noise ξ_L, the model is described as follows: \n\nX(t + 1) = Θ[f(X(t - τ)) + ξ_L], \nf(n) = (1/2)((a + b) + n(a - b)), \nΘ[n] = 1 (0 ≤ n), \nΘ[n] = -1 (n < 0), (4) \n\nwhere a and b are parameters such that |a| ≤ L and |b| ≤ L, and τ is the delay. This model is an approximate discretization of the state space of the map (3) into two states, with a and b controlling the bias of the transition depending on the state of X τ steps earlier. When a ≠ b, the transition between the two states is asymmetric, reflecting the two differing sized basins of attraction. \n\nWe can describe this model more concisely in probability space (Figure 1(E)). 
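The threshold dynamics of equation (4) is straightforward to simulate. Below is a minimal sketch in Python (our own illustration, not code from the paper; the function names and parameter values are our assumptions):

```python
import random

def theta(n):
    # Theta[n] = 1 for 0 <= n, -1 for n < 0 (equation 4)
    return 1 if n >= 0 else -1

def simulate_threshold(a, b, L, tau=10, steps=1000, seed=0):
    # Iterate X(t+1) = Theta[f(X(t - tau)) + xi], xi ~ Uniform(-L, L),
    # with f(n) = ((a + b) + n*(a - b)) / 2, so f(1) = a and f(-1) = b.
    rng = random.Random(seed)
    X = [rng.choice([-1, 1]) for _ in range(tau + 1)]  # random initial history
    for _ in range(steps):
        f = 0.5 * ((a + b) + X[-tau - 1] * (a - b))    # X[-tau-1] is X(t - tau)
        X.append(theta(f + rng.uniform(-L, L)))
    return X
```

With |a| ≤ L and |b| ≤ L both transitions remain stochastic; the biases a and b play the role of the differing basin sizes in the map model.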
The formal definition is given as follows: \n\nP(1, t + 1) = p, X(t - τ) = -1, \n= 1 - q, X(t - τ) = 1, \nP(-1, t + 1) = q, X(t - τ) = 1, \n= 1 - p, X(t - τ) = -1, \np = (1/2)(1 + b/L), \nq = (1/2)(1 - a/L), (5) \n\nwhere P(s, t) is the probability that X(t) = s. Hence, the transition probability of the model depends on its state τ steps in the past, and the model is a special case of a delayed random walk [7]. \n\nWe randomly generate X(t) for the interval t ∈ (-τ, 0). Simulations are performed in which the parameters are varied and X(t) is recorded for up to 10^6 steps. The trajectories appear qualitatively similar to those generated by the map dynamics (Figure 1(F),(G),(H)). From the trajectory X(t), we construct a residence time histogram h(u) for the system to be in the state -1 for u consecutive steps. Some examples of the histograms are shown in Figure 2 (q = 0.5, τ = 10). We note that with p ≪ 0.5, as in Figure 2(A), the model has a tendency to switch, or spike, to the X = 1 state after a time interval of τ. But the spike trains do not last long, resulting in a small peak in the histogram. For the case of Figure 2(C), where p is closer to 0.5, we observe less regular transitions, and the peak height is again small. With appropriate p, as in Figure 2(B), spikes tend to appear at the interval τ more frequently, resulting in higher peaks in the histogram. This is what we mean by stochastic resonance (Figure 2(D)). Choosing an appropriate p is equivalent to \"tuning\" the noise width L with the other parameters appropriately fixed. In this sense, our model exhibits stochastic resonance. \n\nThis model can be treated analytically. The first observation to make is that, given τ, the model consists of τ + 1 statistically independent Markov chains. Each Markov chain has its state appearing once every τ + 1 steps. 
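This chain decomposition can be checked numerically: subsampling a simulated trajectory at stride τ + 1 should yield an ordinary two-state Markov chain whose estimated transition probabilities recover p and q. A small sketch (our own; the function names are ours):

```python
import random

def simulate_binary(p, q, tau=10, steps=110000, seed=1):
    # Delayed binary neuron in probability space (equation 5):
    # P(X(t+1) = 1) is p if X(t - tau) = -1, and 1 - q if X(t - tau) = 1.
    rng = random.Random(seed)
    X = [rng.choice([-1, 1]) for _ in range(tau + 1)]  # random initial history
    for _ in range(steps):
        up = p if X[-tau - 1] == -1 else 1.0 - q
        X.append(1 if rng.random() < up else -1)
    return X

def chain_estimates(X, tau=10, k=0):
    # Keep every (tau+1)-th state: one of the tau + 1 independent chains.
    sub = X[k::tau + 1]
    pairs = list(zip(sub, sub[1:]))
    p_hat = (sum(1 for s, t in pairs if s == -1 and t == 1)
             / sum(1 for s, _ in pairs if s == -1))
    q_hat = (sum(1 for s, t in pairs if s == 1 and t == -1)
             / sum(1 for s, _ in pairs if s == 1))
    return p_hat, q_hat
```

For a long run the estimates should lie close to the nominal p and q, consistent with the independence of the τ + 1 chains.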
With this property of the model, we label the time step t by two integers s and k as follows: \n\nt = s(τ + 1) + k, (0 ≤ s, 0 ≤ k ≤ τ). (6) \n\nLet P_±(t) ≡ P_±(s, k) be the probability for the state to be in the ±1 state at time t, or (s, k). Then it can be shown that \n\nP_+(s, k) = α(1 - γ^s) + γ^s P_+(s = 0, k), \nP_-(s, k) = β(1 - γ^s) + γ^s P_-(s = 0, k), \nα = p/(p + q), β = q/(p + q), γ = 1 - (p + q). (7) \n\nIn the steady state, P_+(s → ∞, k) ≡ P_+ = α and P_-(s → ∞, k) ≡ P_- = β. The steady-state residence time histogram can be obtained by computing the quantity h(u) = P(+; -, u; +), which is the probability that the system takes u consecutive -1 states between two +1 states. With the definition of the model and the statistical independence between Markov chains in the sequence, the following expression can be derived: \n\nP(+; -, u; +) = P_+ (P_-)^u P_+ = (β)^u (α)^2, (1 ≤ u < τ) (8) \n= P_+ (P_-)^τ (1 - q) = (β)^τ (α)(1 - q), (u = τ) (9) \n= P_+ (P_-)^τ (q)(1 - p)^{u-τ-1}(p), (u > τ) (10) \n\nWith appropriate normalization, this expression reflects the shape of the histogram obtained by numerical simulations, as shown in Figure 2. Also, by differentiating equation (9) with respect to p, we derive the resonant condition for the peak to reach its maximum height as \n\nq = pτ, (11) \n\nor, equivalently, \n\nL - a = τ(L + b). (12) \n\nIn Figure 2(D), we see that the maximum peak amplitude is reached by choosing parameters according to equation (11). We note that this analysis of the histogram is exact in the stationary limit, which makes this model unique among those showing stochastic resonance. 
\n\nFigure 2: Residence time histogram and dynamics of X(t) as we change p. The values of p are (A) p = 0.005, (B) p = 0.05, (C) p = 0.2. The solid line in the histogram is from the analytical expression given in equations (8)-(10). Also, in (D) we show a plot of the peak height obtained by varying p. The solid line is from equation (9). The parameters are τ = 10, q = 0.5. \n\n4 Delay Coupled Two Neuron Case \n\nWe now consider a circuit comprising two such stochastic binary neurons coupled with delayed interaction. We observe again that resonance between noise and delay takes place. The coupled two-neuron model is a simple extension of the model in the previous section. The transition probability of each neuron depends on the other neuron's state at a fixed interval in the past. Formally, it can be described in probability space as follows: \n\nP_1(1, t + 1) = p_1, X_2(t - τ_2) = -1, \n= 1 - q_1, X_2(t - τ_2) = 1, \nP_1(-1, t + 1) = q_1, X_2(t - τ_2) = 1, \n= 1 - p_1, X_2(t - τ_2) = -1, \nP_2(1, t + 1) = p_2, X_1(t - τ_1) = -1, \n= 1 - q_2, X_1(t - τ_1) = 1, \nP_2(-1, t + 1) = q_2, X_1(t - τ_1) = 1, \n= 1 - p_2, X_1(t - τ_1) = -1, (13) \n\nwhere P_i(s, t) is the probability that the state of neuron i is X_i(t) = s. We have performed simulation experiments on the model and have again found resonance between noise and delay. Though the model is more intricate than the single-neuron model, we can perform a similar theoretical analysis of the histograms and have obtained approximate results for some cases. 
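The coupled dynamics can also be simulated directly. The sketch below (ours, not from the paper; the synchronous update and the parameter choices, which follow Figure 3(C), are our assumptions) builds the residence-time histogram of X_1, whose peak is expected near τ_1 + τ_2 + 1:

```python
import random

def simulate_pair(p1, q1, p2, q2, tau1=10, tau2=10, steps=200000, seed=2):
    # Two delay-coupled binary neurons (equation 13): each neuron's
    # transition probability depends on the OTHER neuron's past state.
    rng = random.Random(seed)
    horizon = max(tau1, tau2) + 1
    X1 = [rng.choice([-1, 1]) for _ in range(horizon)]  # random initial history
    X2 = [rng.choice([-1, 1]) for _ in range(horizon)]
    for _ in range(steps):
        up1 = p1 if X2[-tau2 - 1] == -1 else 1.0 - q1   # driven by X2(t - tau2)
        up2 = p2 if X1[-tau1 - 1] == -1 else 1.0 - q2   # driven by X1(t - tau1)
        x1 = 1 if rng.random() < up1 else -1            # draw both states, then
        x2 = 1 if rng.random() < up2 else -1            # append synchronously
        X1.append(x1)
        X2.append(x2)
    return X1, X2

def residence_histogram(X):
    # h(u): number of runs of exactly u consecutive -1 states.
    h, run = {}, 0
    for x in X:
        if x == -1:
            run += 1
        else:
            if run:
                h[run] = h.get(run, 0) + 1
            run = 0
    return h
```

With p_1 = p_2 = 0.025 and q_1 = q_2 = 0.5, the histogram of X_1 should peak at u = τ_1 + τ_2 + 1 = 21, the round-trip delay of the loop.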
For example, we obtain the following approximate analytical result for the peak height of the interspike histogram of X_1 for the case τ_1 = τ_2 ≡ τ. (The peak occurs at τ_1 + τ_2 + 1.) \n\nH(p_1, p_2, q_1, q_2) = {μ_3 q_1 + μ_4 (1 - p_1)} × {μ_1 (q_1 q_2 p_1 + q_1 (1 - q_2)(1 - q_1)) + μ_2 ((1 - p_1) q_2 p_1 + (1 - p_1)(1 - q_2)(1 - q_1))}, (14) \n\nμ_1 = f_1 f_2 / S, (15) \nμ_2 = f_1 / S, (16) \nμ_3 = f_2 / S, (17) \nμ_4 = 1 / S, (18) \nf_1 = (p_1 (1 - p_2) + p_2 (1 - q_1)) / (q_1 (1 - q_2) + q_2 (1 - q_1)), (19) \nf_2 = (p_2 + p_1 (1 - p_2 - q_2)) / (q_2 + q_1 (1 - p_2 - q_2)), (20) \nS = f_1 f_2 + f_1 + f_2 + 1, (21) \n\nwhere, for brevity, the arguments (p_1, p_2, q_1, q_2) of μ_i, f_i, and S are suppressed. These analytical results are compared with the simulation experiments, examples of which are shown in Figure 3. A detailed analysis, particularly for the case τ_1 ≠ τ_2, is quite intricate and is left for the future. \n\nFigure 3: A plot of the peak height obtained by varying p_2. The solid line is from equations (14)-(21). The parameters are τ_1 = τ_2 = 10, q_1 = q_2 = 0.5, and (A) p_1 = p_2, (B) p_1 = 0.005, (C) p_1 = 0.025. \n\n5 Discussion \n\nThere are two points to be noted. Firstly, although there are examples which may indicate that stochastic resonance is utilized in biological information processing, it has yet to be explored whether the resonance between noise and delay plays some role in neural information processing. 
Secondly, there are many investigations of spiking neural models and their applications (see e.g. [8]). Our model can be considered as a new mechanism for generating controlled stochastic spike trains. One can envisage its application to weak-signal transmission, analogous to recent research using stochastic resonance with a larger number of units in series [9]. Investigations of the network model with delayed interactions are currently underway. \n\nReferences \n\n[1] Hertz, J. A., Krogh, A., & Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Redwood City: Addison-Wesley. \n\n[2] Foss, J., Longtin, A., Mensour, B., & Milton, J. G. (1996). Multistability and Delayed Recurrent Loops. Physical Review Letters, 76, 708-711; Pham, J., Pakdaman, K., & Vibert, J.-F. (1998). Noise-induced coherent oscillations in randomly connected neural networks. Physical Review E, 58, 3610-3622; Kim, S., Park, S. H., & Pyo, H.-B. (1999). Stochastic Resonance in Coupled Oscillator Systems with Time Delay. Physical Review Letters, 82, 1620-1623; Bressloff, P. C. (1999). Synaptically Generated Wave Propagation in Excitable Neural Media. Physical Review Letters, 82, 2979-2982. \n\n[3] Ohira, T. & Sato, Y. (1999). Resonance with Noise and Delay. Physical Review Letters, 82, 2811-2815. \n\n[4] Gammaitoni, L., Hänggi, P., Jung, P., & Marchesoni, F. (1998). Stochastic Resonance. Reviews of Modern Physics, 70, 223-287. \n\n[5] Gang, H., Ditzinger, T., Ning, C. Z., & Haken, H. (1993). Stochastic Resonance without External Periodic Force. Physical Review Letters, 71, 807-810; Rappel, W.-J. & Strogatz, S. H. (1994). Stochastic resonance in an autonomous system with a nonuniform limit cycle. Physical Review E, 50, 3249-3250; Longtin, A. (1997). Autonomous stochastic resonance in bursting neurons. Physical Review E, 55, 868-876. \n\n[6] Ohira, T. & Cowan, J. D. (1995). Stochastic Single Neurons. Neural Computation, 7, 518-528. \n\n[7] Ohira, T. 
& Milton, J. G. (1995). Delayed Random Walks. Physical Review E, 52, 3277-3280; Ohira, T. (1997). Oscillatory Correlation of Delayed Random Walks. Physical Review E, 55, R1255-R1258. \n\n[8] Maass, W. (1997). Fast Sigmoidal Networks via Spiking Neurons. Neural Computation, 9(2), 279-304; Maass, W. (1996). Lower Bounds for the Computational Power of Networks of Spiking Neurons. Neural Computation, 8(1), 1-40. \n\n[9] Löcher, M., Cigna, D., & Hunt, E. R. (1998). Noise Sustained Propagation of a Signal in Coupled Bistable Electric Elements. Physical Review Letters, 80, 5212-5215. \n", "award": [], "sourceid": 1656, "authors": [{"given_name": "Toru", "family_name": "Ohira", "institution": null}, {"given_name": "Yuzuru", "family_name": "Sato", "institution": null}, {"given_name": "Jack", "family_name": "Cowan", "institution": null}]}