{"title": "Active dendrites: adaptation to spike-based communication", "book": "Advances in Neural Information Processing Systems", "page_first": 1188, "page_last": 1196, "abstract": "Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.", "full_text": "Active dendrites:\n\nadaptation to spike-based communication\n\nBal\u00b4azs B Ujfalussy1,2\nubi@rmki.kfki.hu\n\nM\u00b4at\u00b4e Lengyel1\n\nm.lengyel@eng.cam.ac.uk\n\n1 Computational & Biological Learning Lab, Dept. of Engineering, University of Cambridge, UK\n2 Computational Neuroscience Group, Dept. of Biophysics, MTA KFKI RMKI, Budapest, Hungary\n\nAbstract\n\nComputational analyses of dendritic computations often assume stationary in-\nputs to neurons, ignoring the pulsatile nature of spike-based communication be-\ntween neurons and the moment-to-moment \ufb02uctuations caused by such spiking\ninputs. Conversely, circuit computations with spiking neurons are usually formal-\nized without regard to the rich nonlinear nature of dendritic processing. 
Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.\n\n1 Introduction\n\nThe operation of neural circuits fundamentally depends on the capacity of neurons to perform complex, nonlinear mappings from their inputs to their outputs. Since the vast majority of synaptic inputs impinge on the dendritic membrane, its morphology and its passive as well as active electrical properties play important roles in determining the functional capabilities of a neuron. Indeed, both theoretical and experimental studies suggest that active, nonlinear processing in dendritic trees can significantly enhance the repertoire of single neuron operations [1, 2].\nHowever, previous functional approaches to dendritic processing were limited because they studied dendritic computations in a firing rate-based framework [3, 4], essentially requiring both the inputs and the output of a cell to have stationary firing rates for hundreds of milliseconds. Thus, they ignored the effects and consequences of temporal variations in neural activities at the time scale of inter-spike intervals characteristic of in vivo states [5]. Conversely, studies of spiking network dynamics [6, 7] have ignored the complex and highly nonlinear effects of the dendritic tree.\nHere we develop a computational theory that aims at explaining some of the morphological and electrophysiological properties of dendritic trees as adaptations towards spike-based communication. 
In line with the vast majority of theories about neural network computations, the starting point of our theory is that each neuron needs to compute some function of the membrane potential (or, equivalently, the instantaneous firing rate) of its presynaptic partners. However, as the postsynaptic neuron does not have direct access to the presynaptic membrane potentials, only to the spikes emitted by its presynaptic partners based on those potentials, computing the required function becomes a non-trivial inference problem. That is, neurons need to perform computations on their inputs in the face of significant uncertainty as to what those inputs exactly are, and so as to what their required output might be.\nIn section 2 we formalize the problem of inferring some required output based on incomplete spiking-based information about inputs and derive an optimal online estimator for some simple but tractable cases. In section 3 we show that the optimal estimator exhibits highly nonlinear behavior closely matching aspects of active dendritic processing, even when the function of inputs to be computed is purely linear. We also present predictions about how the statistics of presynaptic inputs should be matched by the clustering patterns of synaptic inputs onto active subunits of the dendritic tree. 
In section 4 we discuss our findings and ways to test our predictions experimentally.\n\n2 Estimation from correlated spike trains\n\n2.1 The need for nonlinear dendritic operations\n\nIdeally, the (subthreshold) dynamics of the somatic membrane potential, v(t), should implement some nonlinear function, f(u(t)), of the presynaptic membrane potentials, u(t).1\n\nτ dv(t)/dt = f(u(t)) − v(t)   (1)\n\nHowever, the presynaptic membrane potentials cannot be observed directly, only the presynaptic spike trains s0:t that are stochastic functions of the presynaptic membrane potential trajectories. Therefore, to minimise squared error, the postsynaptic membrane potential should represent the mean of the posterior over the possible output function values it should be computing based on the input spike trains:\n\nτ dv(t)/dt ≈ ∫ f(u(t)) P(u(t)|s0:t) du(t) − v(t)   (2)\n\nBiophysically, to a first approximation, the somatic membrane potential of the postsynaptic neuron can be described as some function(al), f̃, of the local dendritic membrane potentials, vd(t):\n\nτ dv(t)/dt = f̃(vd(t)) − v(t)   (3)\n\nThis is interesting because Pfister et al. [11, 12] have recently suggested that short-term synaptic plasticity arranges for each local dendritic postsynaptic potential, vd_i, to (approximately) represent the posterior mean of the corresponding presynaptic membrane potential:\n\nvd_i(t) ≈ ∫ ui(t) P(ui(t)|si,0:t) dui   (4)\n\nThus, it would be tempting to say that in order to achieve the computational goal of Eq. 2, the way the dendritic tree (together with the soma) should integrate these local potentials, as given by f̃, should be directly determined by the function that needs to be computed: f̃ = f. However, it is easy to see that in general this is going to be incorrect:\n\nf(∫ u(t) Πi P(ui(t)|si,0:t) du(t)) ≠ ∫ f(u(t)) P(u(t)|s0:t) du(t)   (5)\n\nwhere the l.h.s. 
is what the neuron implements (Eqs. 3-4) and the r.h.s. is what it should compute (Eq. 2). The equality does not hold in general when f is non-linear or P(u(t)|s0:t) does not factorise.\nIn the following, we are going to consider the case when the function, f(u), is a purely linear combination of synaptic inputs, f(u) = Σi ci ui. Such linear transformations seem to suggest linear dendritic operations and, in combination with a single global 'somatic' nonlinearity, they are often assumed in neural network models and descriptive models of neuronal signal processing [10]. However, as we will show below, estimation from the spike trains of multiple correlated presynaptic neurons requires a non-linear integration of inputs even in this case.\n\n1Dynamics of this form are assumed by many neural network models, though the variables u and v are usually interpreted as instantaneous firing rates rather than membrane potentials [10]. However, just as in our case (Eq. 8), the two are often taken to be related through a simple non-linear function, which thus makes the two frameworks essentially isomorphic.\n\n2.2 The mOU-NP model\n\nWe assume that the hidden dynamics of presynaptic membrane potentials are described by a multivariate Ornstein-Uhlenbeck (mOU) process (discretised in time into δt → 0 time bins, thus formally yielding an AR(1) process):\n\nu_t = u_{t−δt} + (δt/τ)(u0 − u_{t−δt}) + √δt q_t   (6)\n    = α u_{t−δt} + (δt/τ) u0 + √δt q_t,   q_t ~ N(0, Q) i.i.d.   (7)\n\nwhere we described all neurons with the same parameters: u0, the resting potential, and τ, the membrane time constant (with α = 1 − δt/τ). 
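For intuition, the discretised mOU prior of Eqs. 6-7 is straightforward to simulate; the sketch below does so for two positively correlated presynaptic cells (all parameter values are our own illustrative assumptions, not taken from the paper).

```python
import numpy as np

# Sketch of the discretised mOU prior (Eqs. 6-7); parameter values
# (dt, tau, u0, Q) are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)

dt, tau, u0 = 0.001, 0.015, -65.0          # s, s, mV
alpha = 1.0 - dt / tau                     # AR(1) coefficient
Q = np.array([[400.0, 240.0],              # mV^2/s; off-diagonal terms
              [240.0, 400.0]])             # make the two cells correlated
L = np.linalg.cholesky(Q)

T = 5000
u = np.full(2, u0)
trace = np.empty((T, 2))
for t in range(T):
    q = L @ rng.standard_normal(2)         # q_t ~ N(0, Q)
    u = alpha * u + (dt / tau) * u0 + np.sqrt(dt) * q
    trace[t] = u

corr = np.corrcoef(trace.T)[0, 1]          # empirical correlation
```

The empirical correlation approaches the value implied by Q (here 0.6), since the stationary covariance of the process is Qτ/2.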
Importantly, Q is the covariance matrix parametrising the correlations between the subthreshold membrane potential fluctuations of presynaptic neurons.\nSpiking is described by a nonlinear-Poisson (NP) process where the instantaneous firing rate, r, is an exponential function of u with exponent β and "baseline rate" g:\n\nr(u) = g e^{βu}   (8)\n\nand the number of spikes emitted in a time bin, s, is Poisson with this rate:\n\nP(s|u) = Poisson(s; δt r(u))   (9)\n\nThe spiking process itself is independent across neurons, i.e., the likelihood is factorised across cells:\n\nP(s|u) = Πi P(si|ui)   (10)\n\n2.3 Assumed density filtering in the mOU-NP model\n\nOur goal is to derive the time evolution of the posterior distribution of the membrane potential, P(ut|s0:t), given a particular observed spiking pattern. Ultimately, we will need to compute some function of u under this distribution. For linear computations (see above), the final quantity of interest depends on Σi ci ui, which in the limit (of many presynaptic cells) is going to be Gaussian-distributed, and as such only dependent on the first two moments of the posterior. This motivates us to perform assumed density filtering, by which we substitute the true posterior with a moment-matched multivariate Gaussian in each time step, P(ut|s0:t) ≈ N(ut; µt, Σt).\nAfter some algebra (see Appendix for details) we obtain the following equations for the time evolution of the mean and covariance of the posterior under the generative process defined by Eqs. 
7-10:\n\nµ̇ = (u0 − µ)/τ + β Σ (s(t) − λ)   (11)\n\nΣ̇ = (2/τ)(ΣOU − Σ) − β² Σ Λ Σ   (12)\n\nwhere si(t) is the spike train of presynaptic neuron i represented as a sum of Dirac-delta functions, λ (Λ) is a vector (diagonal matrix) whose elements λi = Λii = g e^{βµi + β²Σii/2} are the estimated firing rates of the neurons, and ΣOU = Qτ/2 is the prior covariance matrix of the presynaptic membrane potentials in the absence of any observation.\n\n2.4 Modelling correlated up and down states\n\nThe mOU-NP process is a convenient and analytically tractable way to model correlations between presynaptic neurons but it obviously falls short of the dynamical complexity of cortical ensembles in many respects. Following and expanding on [12], here we considered one extension that allowed us to model coordinated changes between more hyper- and depolarised states across presynaptic neurons, such as those brought about by cortical up and down states.\nIn this extension, the 'resting' potential of each presynaptic neuron, u0, could switch between two different values, u_up and u_down, and followed first-order Markovian dynamics. Up and down states in cortical neurons are not independent but occur synchronously [13]. To reproduce these correlations we introduced a global, binary state variable, x, that influenced the Markovian dynamics of the resting potential of individual neurons (see Appendix and Fig. 2A). Unfortunately, an analytical solution to the optimal estimator was out of reach in this case, so we resorted to particle filtering [14] to compute the output of the optimal estimator.\n\nFigure 1: Simulation of the optimal estimator in the case of two presynaptic spikes with different time delays (Δt). A: The posterior means (Aa), variances, Σii, and the covariance, Σ12 (Ab). The dynamics of the postsynaptic membrane potential, v (Ad), is described by Eq. 1, where f(u) = u1 + u2 (Ac). B: The same as A on an extended time scale. C: The nonlinear summation of two EPSPs, characterised by the ratio of the actual EPSP (cyan on Ad) and the linear sum of two individual EPSPs (grey on Ad), is shown for different delays and correlations between the presynaptic neurons. The summation is sublinear if the presynaptic neurons are positively correlated, whereas negative correlations imply supralinear summation.\n\n3 Nonlinear dendrites as near-optimal estimators\n\n3.1 Correlated Ornstein-Uhlenbeck process\n\nFirst, we analysed the estimation problem in the case of mOU dynamics where we could derive an optimal estimator for the membrane potential. Postsynaptic dynamics needed to follow the linear sum of presynaptic membrane potentials. Figure 1 shows the optimal postsynaptic response (Eqs. 11-12) after observing a pair of spikes from two correlated presynaptic neurons with different time delays. When one of the cells (black) emits a spike, this causes an instantaneous increase not only in the membrane potential estimate of the neuron itself but also in those of all correlated neurons (red neuron in Fig. 1Aa and Ba). Consequently, the estimated firing rate, λ, of both cells increases. Albeit indirectly, a spike also influences the uncertainty about the presynaptic membrane potentials, quantified by the posterior covariance matrix. A spike itself does not change this covariance directly, but since it increases estimated firing rates, the absence of even more spikes in the subsequent period becomes more informative. This increased information rate following a spike decreases estimator uncertainty about true membrane potential values for a short period (Fig. 1Ab and Bb). However, as 
However, as\nthe estimated \ufb01ring rate decreases back to its resting value nearly exponentially after the spike, the\nestimated uncertainty also returns back to its steady state.\nImportantly, the instantaneous increase of the posterior means in response to a spike is proportional\nto the estimated uncertainty about the membrane potentials and to the estimator\u2019s current belief\nabout the correlations between the neurons. As each spike in\ufb02uences not only the mean estimate\nof the membrane potentials of other correlated neurons but also the uncertainty of these estimates,\nthe effect of a spike from one cell on the posterior mean depends on the spiking history of all other\ncorrelated neurons (Fig. 1Ac-Ad).\n\n4\n\n\u00b51\u00b52\u21e40.60.6mean0.10.5variance012v (mV)-1.21.2\uf001\uf002\uf003\uf004\uf0050200\uf006\uf007\uf004\uf008\uf003\uf009\uf004\uf005\uf00a\uf009\uf004\uf005\uf00a\uf00b\uf00c\uf00d\uf00c\uf00b\uf00c\uf00d\uf00c\uf00e\uf00f\uf010\uf007\uf011\uf010\uf012\uf011\uf010\uf010\uf008\uf013\uf014\uf006\uf007\uf011\uf015\uf016\uf017\uf018\uf016\uf017\uf019\uf01a\uf01a\uf017\uf01a\uf01a\uf017\uf001\uf001\uf002\uf001\uf016\uf01a\uf016\uf016\uf016\uf017\uf019\uf016\uf01b\uf016\uf017\uf019\uf01c\uf01d\uf01e\uf01c\uf014\uf01c\uf01f\uf01c\uf012\uf01c\uf020\uf01d\uf014\uf01d\uf01f\uf00b\uf00c\uf00d\uf00c\uf00b\uf00c\uf00d\uf00c\uf00e\fIn the example shown in Fig. 1, the postsynaptic dynamics is required to compute a purely linear\nsum of two presynaptic membrane potentials, f (u) = u1 + u2. However, depending on the prior\ncorrelation between the two presynaptic neurons and the time delay between the two spikes, the\namplitude of the postsynaptic membrane potential change evoked by the pair of spikes can be either\nlarger or smaller than the linear sum of the individual excitatory postsynaptic potentials (EPSPs)\n(Fig. 1Ad, C). 
EPSPs from independent neurons are additive, but if the presynaptic neurons are positively correlated then their spikes convey redundant information and they are integrated sublinearly. Conversely, simultaneous spikes from negatively correlated presynaptic neurons are largely unexpected and induce supralinear summation. The deviation from linear summation is proportional to the magnitude of the correlation between the presynaptic neurons (Fig. 1C).\nWe compared the nonlinear integration of the inputs in the optimal estimator with experiments measuring synaptic integration in the dendritic tree of neurons. For a passive membrane, cable theory [15] implies that inputs are integrated linearly only if they are on electrotonically separated dendritic branches, but reduction of the driving force entails a sublinear interaction between co-localised inputs. Moreover, it has been found that active currents, the I_A potassium current in particular, also contribute to the sublinear integration within the dendritic tree [16, 17]. Our model predicts that inputs that are integrated sublinearly are positively correlated (Fig. 1C).\nIn sum, we can already see that correlated inputs imply nonlinear integration in the postsynaptic neuron, and that the form of nonlinearity needs to be matched to the degree and sign of correlations between inputs. However, the finding that supralinear interactions are only expected from anticorrelated inputs defeats biological intuition. Another shortcoming of the mOU model is related to the second-order effects of spikes on the posterior covariance. As the covariance matrix does not change instantaneously after observing a presynaptic spike (Fig. 1B), two spikes arriving simultaneously are summed linearly (not shown). At the other extreme, two spikes separated by long delays again do not influence each other. 
Therefore the nonlinearity of the integration of two spikes has a non-monotonic shape, which again is unlike the monotonic dependence of the degree of nonlinearity on interspike intervals found in experiments [18, 19]. In order to overcome these limitations, we extended the model to incorporate correlated changes in the activity levels of presynaptic neurons [13].\n\n3.2 Correlated up and down states\n\nWhile the statistics of presynaptic membrane potentials exhibit more complex temporal dependencies in the extended model (Fig. 2A), importantly, the task is still assumed to be the same simple linear computation as before: f(u) = u1 + u2.\nHowever, the more complex P(u) distribution means that we need to sum over the possible values of the hidden variables: P(u) = Σ_{u0} P(u|u0) P(u0). The observation of a spike changes both the conditional distributions, P(u|u0), and the probability of being in the up state, P(u0 = u_up), by causing an upward shift in both. A second spike causes a further increase in the membrane potential estimate, and, more importantly, in the probability of being in the up state for both neurons. Since the probability of leaving the up state is low, the membrane potential estimate decays back to its steady state more slowly if the probability of being in the up state is high (Fig. 2B). This causes a supralinear increase in the membrane potential of the postsynaptic neuron which again depends on the interspike interval, but this time supralinearity is predicted for positively correlated presynaptic neurons (Fig. 2C,E). 
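A bootstrap particle filter of the kind used for the switching model can be sketched in a few lines. The version below tracks a single presynaptic neuron whose resting potential switches between an up and a down value; the state space, transition rates, and observation parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal bootstrap particle filter for a switching OU-NP neuron;
# all parameter values are illustrative assumptions.
rng = np.random.default_rng(1)

def particle_filter(spike_steps, n_part=2000, n_steps=200, dt=0.001,
                    tau=0.015, u_up=5.0, u_down=0.0, sigma=1.0,
                    g=2.0, beta=1.0, p_switch=0.01):
    x = rng.random(n_part) < 0.5                     # up/down state
    u = np.where(x, u_up, u_down) + sigma * rng.standard_normal(n_part)
    w = np.full(n_part, 1.0 / n_part)
    p_up = np.empty(n_steps)
    for t in range(n_steps):
        x = np.logical_xor(x, rng.random(n_part) < p_switch)  # Markov switch
        u0 = np.where(x, u_up, u_down)
        u = u + dt / tau * (u0 - u) + np.sqrt(dt) * sigma * rng.standard_normal(n_part)
        lam = g * np.exp(beta * u)                   # NP observation model
        if t in spike_steps:
            w = w * lam * dt                         # spike likelihood
        else:
            w = w * np.exp(-lam * dt)                # no-spike likelihood
        w = w / w.sum()
        if 1.0 / np.sum(w**2) < n_part / 2:          # resample if ESS low
            idx = rng.choice(n_part, n_part, p=w)
            x, u = x[idx], u[idx]
            w = np.full(n_part, 1.0 / n_part)
        p_up[t] = np.sum(w * x)                      # posterior P(up state)
    return p_up

p_up = particle_filter(spike_steps={100, 103, 106, 109})
```

A short burst of spikes sharply raises the inferred probability of the up state; because leaving the up state is unlikely, the membrane potential estimate then decays back more slowly, the mechanism behind the supralinearity described above.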
Note that while in the mOU model supralinear integration arises due to dynamical changes in uncertainty (of membrane potential estimates), in the extended model it is associated with a change in a hypothesis (about hidden up-down states).\nThis is qualitatively similar to what was found in pyramidal neurons in the neocortex [19] and in the hippocampus [18, 20] that are able to switch from (sub)linear to supralinear integration of synaptic inputs through the generation of dendritic spikes [21]. Specifically, in neocortical pyramidal neurons Polsky et al. [19] found that nearly synchronous inputs arriving at the same dendritic branch evoke substantially larger postsynaptic responses than expected from the linear sum of the individual responses (Fig. 2D-E). While there is a good qualitative match between model and experiments, the time scales of integration are off by a factor of 2. Nevertheless, given that we did not perform exhaustive parameter fitting in our model, but simply set parameters to values that produced realistic presynaptic membrane potential trajectories (cf. our Fig. 2A with [13]), we regard the match as acceptable and are confident that with further fine tuning of parameters the match would also improve quantitatively.\n\nFigure 2: A: Example voltage traces and spikes from the modeled presynaptic neurons (black and red) with correlated up and down states. The green line indicates the value of the global up-down state variable. B: Inference in the model: the posterior probability of being in the up state (left) and the posterior mean of Σi ui after observing two spikes (grey) from different neurons with Δt = 8 ms latency. C: Supralinear summation in the switching mOU-NP model. D: Supralinear summation by dendritic spikes in a cortical pyramidal neuron. E: Peak amplitude of the response (red) and the linear sum (black squares) is shown for different delays in experiments (left) and the model (right). 
(D and left panel in E are reproduced from [19]).\n\n3.3 Nonlinear dendritic trees are necessary for purely linear computations\n\nIn the previous sections we demonstrated that optimal inference based on correlated spike trains requires nonlinear interaction within the postsynaptic neuron, and we showed that the dynamics of the optimal estimator is qualitatively similar to the dynamics of the somatic membrane potential of a postsynaptic neuron with nonlinear dendritic processing. In this section we will build a simplified model of dendritic signal processing and compare its performance directly to several alternative models (see below) on a purely linear task, for which the neuron needs to compute the sum of presynaptic membrane potentials: f(u) = Σ_{i=1}^{10} ui.\nWe model the dendritic estimator as a two-layer feed-forward network of simple units (Fig. 3A) that has been proposed to closely mimic the repertoire of input-output transformations achievable by active dendritic trees [22]. In this model, synaptic inputs impinge on units in the first layer, corresponding to dendritic branches, where nonlinear integration of inputs arriving at a dendritic branch is modeled by a sigmoidal input-output function, and the outputs of dendritic branch units are in turn summed linearly in the single (somatic) unit of the second layer. We trained the model to estimate f by changing the connection weights of the two layers corresponding to synaptic weights (wji) and branch coupling strengths (c̃j; see Appendix, Fig. 3A).\nWe compared the performance of the dendritic estimator to four alternative models (Figure 3B):\n\n1. The linear estimator, which is similar to the dendritic estimator except that the dendrites are linear.\n\n2. The independent estimator, in which the individual synapses are independently optimal estimators of the corresponding presynaptic membrane potentials (Eq. 4) [11, 12], and the cell combines these estimates linearly. 
Note that the only difference between the independent estimator and the optimal estimator is the assumption, implicit to the former, that presynaptic cells are independent.\n\n3. The scaled independent estimator still combines the synaptic potentials linearly, but the weights of each synapse are rescaled to partially correct for the wrong assumption of independence.\n\n4. Finally, the optimal estimator is represented by the differential equations 11-12.\n\nThe performance of the different estimators was quantified by the estimation error normalised by the variance of the signal, ⟨(Σi ui − ṽestimator)²⟩ / var[Σi ui]. Figure 3C shows the estimation error of the five different models in the case of 10 uniformly correlated presynaptic neurons.\n\nFigure 3: Performance of 5 different estimators compared in the task of estimating f(u) = Σ_{i=1}^{N} ui. A: Model of the dendritic estimator. B: Different estimators (see text for more details). C: Estimation error, normalised with the variance of the signal. The number of presynaptic neurons was N = 10. Error bars show standard deviations.\n\nIf the presynaptic neurons were independent, all three estimators that used dynamical synapses (ṽind, ṽsind and ṽopt) were optimal, whereas the linear estimator had substantially larger error. Interestingly, the performance of the dendritic estimator (yellow) was nearly optimal even if the individual synapses were not optimal estimators of the corresponding presynaptic membrane potentials. In fact, adding depressing synapses to the dendritic model degraded its performance because the sublinear effect introduced by the saturation of the sigmoidal dendritic nonlinearity interfered with that implied by synaptic depression. When the correlation increased between the presynaptic neurons, the performance of the estimators assuming independence (black and orange) became severely suboptimal, whereas the dendritic estimator (yellow) remained closer to optimal.\nFinally, in order to investigate the synaptic mechanisms underlying the remarkably high performance of the dendritic estimator, we trained a dendritic estimator on a task where the presynaptic neurons formed two groups. Neurons from different groups were independent or negatively correlated with each other, cor(ui, uk) = {−0.6, −0.3, 0}, while there were positive correlations between neurons from the same group, cor(ui, uj) = {0.3, 0.6, 0.9} (Fig. 4A). 
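The grouping effect studied here can be illustrated with the two-layer model of Section 3.3 using hand-set weights (the numbers below are our own illustrative choices, not the trained values): EPSPs landing on the same sigmoidal branch saturate and add sublinearly, while EPSPs split across branches add exactly linearly.

```python
import numpy as np

# Two-layer dendritic estimator sketch: sigmoidal branch subunits
# summed linearly at the soma; weights are illustrative, not trained.
def dendritic_estimator(epsp, w, c, theta=0.0):
    branch = w @ epsp                                   # branch drive
    out = 1.0 / (1.0 + np.exp(-(branch - theta)))       # sigmoidal subunit
    return c @ out                                      # linear soma

w = np.array([[1.0, 1.0, 0.0, 0.0],    # branch 1: synapses of group 1
              [0.0, 0.0, 1.0, 1.0]])   # branch 2: synapses of group 2
c = np.ones(2)                         # branch coupling strengths

def response(epsp):                    # response relative to baseline
    e = np.asarray(epsp, dtype=float)
    return dendritic_estimator(e, w, c) - dendritic_estimator(np.zeros(4), w, c)

single = response([1, 0, 0, 0])        # one EPSP
same = response([1, 1, 0, 0])          # two EPSPs on the same branch
split = response([1, 0, 1, 0])         # two EPSPs on separate branches
```

Here `same < split` and `split` equals `2 * single` up to rounding: co-clustered inputs are integrated sublinearly, matching the prediction that positively correlated neurons should innervate the same branch.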
The postsynaptic neuron had two dendritic branches, each of them initially receiving input from every presynaptic neuron. After tuning synaptic weights and branch coupling strengths to minimize estimation error, and pruning synapses with weights below threshold, the model achieved near-optimal performance as before (Fig. 4C). More importantly, we found that the structure of the presynaptic correlations was reflected in the synaptic connection patterns on the dendritic branches: most neurons developed stable synaptic weights only on one of the two dendritic branches, and synapses originating from neurons within the same group tended to cluster on the same branch (Fig. 4B).\n\n4 Discussion\n\nIn the present paper we introduced a normative framework to describe single neuron computation that sheds new light on nonlinear dendritic information processing. Following [12], we observe that spike-based communication causes information loss in the nervous system, and neurons must infer the variables relevant for the computation [23–25]. As a consequence of this spiking bottleneck, signal processing in single neurons can be conceptually divided into two parts: the inference of the relevant variables and the computation itself. When the presynaptic neurons are independent, synapses with short-term plasticity can optimally solve the inference problem [12] and nonlinear processing in the dendrites is needed only for computation. However, neurons in a population often tend to be correlated [5, 13], and so the postsynaptic neuron should combine spike trains from such correlated neurons in order to find the optimal estimate of its output. 
We demonstrated that the solution of this inference problem requires nonlinear interaction between synaptic inputs in the postsynaptic cell even if the computation itself is purely linear.\n\nFigure 4: Synaptic connectivity reflects the correlation structure of the input. A: The presynaptic covariance matrix is block-diagonal, with two groups (neurons 1–4 and 5–8). Initially, each presynaptic neuron innervates both dendritic branches, and the weights, w, of the static synapses are then tuned to minimize estimation error. B: Synaptic weights after training, and pruning the weakest synapses. Columns correspond to solutions of the error-minimization task with different presynaptic correlations and/or initial conditions, and rows are different synapses. The detailed connectivity patterns differ across solutions, but neurons from the same group usually all innervate the same dendritic branch. Below: fraction of neurons in each solution innervating 0, 1 or 2 branches. The height of the yellow (blue, green) bar indicates the proportion of presynaptic neurons innervating two (one, zero, respectively) branches of the postsynaptic neuron. C: After training, the nonlinear dendritic estimator performs close to optimal and much better than the linear neuron. 
Of course, actual neurons are usually faced with both problems: they will need to compute nonlinear functions of correlated inputs and thus their nonlinearities will serve both estimation and computation. In such cases our approach allows dissecting the respective contributions of active dendritic processing towards estimation and computation.\nWe demonstrated that the optimal estimator of the presynaptic membrane potentials can be closely approximated by a nonlinear dendritic tree where the connectivity from the presynaptic cells to the dendritic branches and the nonlinearities in the dendrites are tuned according to the dependency structure of the input. Our theory predicts that independent neurons will innervate distant dendritic domains, whereas neurons that have correlated membrane potentials will impinge on nearby dendritic locations, preferentially on the same dendritic branches, where synaptic integration is nonlinear [19, 26]. More specifically, the theory predicts sublinear integration between positively correlated neurons and supralinear integration through dendritic spiking between neurons with correlated changes in their activity levels. To directly test this prediction, the membrane potentials of several neurons need to be recorded under naturalistic in vivo conditions [5, 13] and then the subcellular topography of their connectivity with a common postsynaptic target needs to be determined. Similar approaches have been used recently to characterize the connectivity between neurons with different receptive field properties in vivo [27, 28].\nOur model suggests that the postsynaptic neuron should store information about the dependency structure of its presynaptic partners within its dendritic membrane. 
Online learning of this information based on the observed spiking patterns requires new, presumably non-associative, forms of plasticity such as branch strength potentiation [29, 30] or activity-dependent structural plasticity [31].

Acknowledgments

We thank J-P Pfister for valuable insights and comments on earlier versions of the manuscript, and P Dayan, B Gutkin, and Sz Káli for useful discussions. This work has been supported by the Hungarian Scientific Research Fund (OTKA, grant number 84471, BU) and the Wellcome Trust (ML).

References

1. Koch, C. Biophysics of Computation (Oxford University Press, 1999).
2. Stuart, G., Spruston, N. & Hausser, M. Dendrites (Oxford University Press, 2007).
3. Poirazi, P. & Mel, B.W. Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29, 779–96 (2001).
4. Poirazi, P., Brannon, T. & Mel, B.W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–87 (2003).
5. Crochet, S., Poulet, J.F., Kremer, Y. & Petersen, C.C. Synaptic mechanisms underlying sparse coding of active touch. Neuron 69, 1160–75 (2011).
6. Maass, W.
& Bishop, C. Pulsed Neural Networks (MIT Press, 1998).
7. Gerstner, W. & Kistler, W. Spiking Neuron Models (Cambridge University Press, 2002).
8. Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. Spikes (MIT Press, 1996).
9. Deneve, S. Bayesian spiking neurons I: inference. Neural Comput. 20, 91–117 (2008).
10. Dayan, P. & Abbott, L.F. Theoretical Neuroscience (MIT Press, 2001).
11. Pfister, J., Dayan, P. & Lengyel, M. Know thy neighbour: a normative theory of synaptic depression. Adv. Neural Inf. Proc. Sys. 22, 1464–1472 (2009).
12. Pfister, J., Dayan, P. & Lengyel, M. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat. Neurosci. 13, 1271–1275 (2010).
13. Poulet, J.F. & Petersen, C.C. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 454, 881–5 (2008).
14. Doucet, A., De Freitas, N. & Gordon, N. Sequential Monte Carlo Methods in Practice (Springer, New York, 2001).
15. Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491–527 (1959).
16. Hoffman, D.A., Magee, J.C., Colbert, C.M. & Johnston, D. K+ channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. Nature 387, 869–75 (1997).
17. Cash, S. & Yuste, R. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron 22, 383–94 (1999).
18. Gasparini, S., Migliore, M. & Magee, J.C. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. J. Neurosci. 24, 11046–56 (2004).
19. Polsky, A., Mel, B.W. & Schiller, J. Computational subunits in thin dendrites of pyramidal cells. Nat. Neurosci. 7, 621–7 (2004).
20. Margulis, M. & Tang, C.M. Temporal integration can readily switch between sublinear and supralinear summation. J. Neurophysiol. 79, 2809–13 (1998).
21. Hausser, M., Spruston, N.
& Stuart, G.J. Diversity and dynamics of dendritic signaling. Science 290, 739–44 (2000).
22. Poirazi, P., Brannon, T. & Mel, B.W. Pyramidal neuron as two-layer neural network. Neuron 37, 989–99 (2003).
23. Huys, Q.J., Zemel, R.S., Natarajan, R. & Dayan, P. Fast population coding. Neural Comput. 19, 404–41 (2007).
24. Natarajan, R., Huys, Q.J.M., Dayan, P. & Zemel, R.S. Encoding and decoding spikes for dynamic stimuli. Neural Comput. 20, 2325–2360 (2008).
25. Gerwinn, S., Macke, J. & Bethge, M. Bayesian population decoding with spiking neurons. Front. Comput. Neurosci. 3 (2009).
26. Losonczy, A. & Magee, J.C. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 50, 291–307 (2006).
27. Bock, D.D. et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–82 (2011).
28. Ko, H. et al. Functional specificity of local synaptic connections in neocortical networks. Nature (2011).
29. Losonczy, A., Makara, J.K. & Magee, J.C. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature 452, 436–41 (2008).
30. Makara, J.K., Losonczy, A., Wen, Q. & Magee, J.C. Experience-dependent compartmentalized dendritic plasticity in rat hippocampal CA1 pyramidal neurons. Nat. Neurosci. 12, 1485–7 (2009).
31. Butz, M., Worgotter, F. & van Ooyen, A. Activity-dependent structural plasticity. Brain Res. Rev. 60, 287–305 (2009).