{"title": "Continuous Time Particle Filtering for fMRI", "book": "Advances in Neural Information Processing Systems", "page_first": 1049, "page_last": 1056, "abstract": "We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study.", "full_text": "Continuous Time Particle Filtering for fMRI\n\nLawrence Murray\nSchool of Informatics\nUniversity of Edinburgh\nlawrence.murray@ed.ac.uk\n\nAmos Storkey\nSchool of Informatics\nUniversity of Edinburgh\na.storkey@ed.ac.uk\n\nAbstract\n\nWe construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. 
Results demonstrate the tractability of the approach in its application to an effective connectivity study.\n\n1 Introduction\nFunctional Magnetic Resonance Imaging (fMRI) poses a large-scale, noisy and altogether difficult problem for machine learning algorithms. The Blood Oxygen Level Dependent (BOLD) signal, from which fMR images are produced, is a measure of hemodynamic activity in the brain \u2013 only an indirect indicator of the neural processes which are of primary interest in most cases.\nFor studies of higher level patterns of activity, such as effective connectivity [1], it becomes necessary to strip away the hemodynamic activity to reveal the underlying neural interactions. In the first instance, this is because interactions between regions at the neural level are not necessarily evident at the hemodynamic level [2]. In the second, analyses increasingly benefit from the temporal qualities of the data, and the hemodynamic response itself is a form of temporal blurring.\nWe are interested in the application of machine learning techniques to reveal meaningful patterns of neural activity from fMRI. In this paper we construct a model of the processes underlying the BOLD signal that is suitable for use in a filtering framework. The model proposed is close to that of Dynamic Causal Modelling (DCM) [3]. The main innovation over these deterministic models is the incorporation of stochasticity at all levels of the system. This is important; under fixed inputs, DCM reduces to a generative model with steady state equilibrium BOLD activity and independent noise at each time point. 
Incorporating stochasticity allows proper statistical characterisation of the dependence between brain regions, rather than relying on relating decay rates.1\n1 A good analogy is the fundamental difference between modelling time series data yt using an exponentially decaying curve with observational noise, xt = a xt\u22121 + c, yt = xt + \u03b5t, and using a much more flexible Kalman filter, xt = a xt\u22121 + c + \u03c9t, yt = xt + \u03b5t (where xt is a latent variable, a a decay constant, c a constant, and \u03b5 and \u03c9 Gaussian variables).\nOur work has involved applying a number of filtering techniques to estimate the parameters of the model, most notably the Unscented Kalman Filter [4] and various particle filtering techniques. This paper presents the application of a simple particle filter. The authors of [5] take a similar filtering approach, applying a local linearisation filter [6] to a model of individual regions. In contrast, the approach here is applied to multiple regions and their interactions, not single regions in isolation.\nOther approaches to this type of problem are worth noting. Perhaps the most commonly used technique to date is Structural Equation Modelling (SEM) [7; 8] (e.g. [9; 10; 11]). SEM is a multivariate regression technique where each dependent variable may be a linear combination of both independent and other dependent variables. Its major limitation is that it is static, assuming that all observations are temporally independent and that interactions are immediate and wholly evident within each single observation. 
Furthermore, it does not distinguish between neural and hemodynamic activity, and in essence identifies interactions only at the hemodynamic level.\nThe major contributions of this paper are establishing a stochastic model of latent neural and hemodynamic activity, formulating a filtering and smoothing approach for inference in this model, and overcoming the basic practical difficulties associated with this. The estimated neural activity relates to the domain problem and is temporally consistent with the stimulus. The approach is also able to establish connectivity relationships.\nThe ability of this model to establish such connectivity relationships on the basis of stochastic temporal relationships is significant. One problem in using structural equation models for effective connectivity analysis is the statistical equivalence of different causal models. By presuming a temporal causal order, temporal models of this form have no such equivalence problems. Any small amount of temporal connectivity information available in fMRI data is of significant benefit, as it can disambiguate between statically equivalent models.\nSection 2 outlines the basis of the hemodynamic model that is used. This is combined with neural, input and measurement models in Section 3 to give the full framework. Inference and parameter estimation are discussed in Section 4, before experiments and analysis in Sections 5 and 6.\n2 Hemodynamics\nTemporal analysis of fMRI is significantly confounded by the fact that it does not measure brain activity directly, but instead via hemodynamic activity, which (crudely) temporally smooths the activity signal. The quality of temporal analysis therefore depends significantly on the quality of the model used to relate neural and hemodynamic activity.\nThis relationship may be described using the now well established balloon model [12]. This models a venous compartment as a balloon using Windkessel dynamics. 
The state of the compartment is represented by its blood volume normalised to the volume at rest, v = V/V0 (blood volume V, rest volume V0), and deoxyhemoglobin (dHb) content normalised to the content at rest, q = Q/Q0 (dHb content Q, rest content Q0). The compartment receives inflow of fully oxygenated arterial blood fin(t), extracts oxygen from the blood, and expels partially deoxygenated blood fout(t). The full dynamics may be represented by the differential system:\n\ndq/dt = (1/\u03c40) [ fin(t) E(t)/E0 \u2212 fout(v) q/v ]    (1)\ndv/dt = (1/\u03c40) [ fin(t) \u2212 fout(v) ]    (2)\nE(t) \u2248 1 \u2212 (1 \u2212 E0)^(1/fin(t))    (3)\nfout(v) \u2248 v^(1/\u03b1)    (4)\n\nwhere \u03c40 and \u03b1 are constants, and E0 is the oxygen extraction fraction at rest.\nThis base model is driven by the independent input fin(t). It may be further extended to couple in neural activity z(t) via an abstract vasodilatory signal s [13]:\n\ndf/dt = s    (5)\nds/dt = \u03b5z(t) \u2212 s/\u03c4s \u2212 (f \u2212 1)/\u03c4f    (6)\n\nThe complete system defined by Equations 1-6, with fin(t) = f, is now driven by the independent input z(t). From the balloon model, the relative BOLD signal change over the baseline S at any time may be predicted using [12]:\n\n\u0394S/S = V0 [ k1(1 \u2212 q) + k2(1 \u2212 q/v) + k3(1 \u2212 v) ]    (7)\n\nFigure 1 illustrates the system dynamics. Nominal values for constants are given in Table 1.\n\nFigure 1: Response of the balloon model to a 1s burst of neural activity at magnitude 1 (time on x axis, response on y axis).\n\n3 Model\nWe define a model of the neural and hemodynamic interactions between M regions of interest. A region consists of neural tissue and a venous compartment. 
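For intuition, the balloon dynamics of Section 2 can be simulated directly. The sketch below integrates Equations 1-6 with a simple Euler scheme and reads out the BOLD change via Equation 7, using the Table 1 constants; the function names, step size, and the choice of plain Euler integration (the paper uses a Runge-Kutta-Fehlberg method) are assumptions for illustration only:

```python
# Nominal balloon-model constants (Table 1); assumed mapping V0 = 0.018, E0 = 0.4.
TAU0, TAUF, TAUS, ALPHA = 0.98, 1/0.65, 1/0.41, 0.32
EPS, V0, E0 = 0.8, 0.018, 0.4
K1, K2, K3 = 7*E0, 2.0, 2*E0 - 0.2

def derivs(x, z):
    """Right-hand sides of Equations 1-6 for state x = (f, s, q, v), with fin(t) = f."""
    f, s, q, v = x
    E = 1 - (1 - E0)**(1/f)                 # Equation 3: oxygen extraction
    fout = v**(1/ALPHA)                     # Equation 4: outflow
    df = s                                  # Equation 5
    ds = EPS*z - s/TAUS - (f - 1)/TAUF      # Equation 6
    dq = (f*E/E0 - fout*q/v) / TAU0         # Equation 1
    dv = (f - fout) / TAU0                  # Equation 2
    return (df, ds, dq, dv)

def bold(q, v):
    """Relative BOLD signal change, Equation 7."""
    return V0*(K1*(1 - q) + K2*(1 - q/v) + K3*(1 - v))

def simulate(z_of_t, T=30.0, dt=0.001):
    """Euler integration from rest; returns the BOLD time course."""
    f, s, q, v = 1.0, 0.0, 1.0, 1.0         # resting state is an equilibrium
    out = []
    for n in range(int(T/dt)):
        df, ds, dq, dv = derivs((f, s, q, v), z_of_t(n*dt))
        f, s, q, v = f + dt*df, s + dt*ds, q + dt*dq, v + dt*dv
        out.append(bold(q, v))
    return out
```

Driving `simulate` with a 1 s unit burst of neural activity, `simulate(lambda t: 1.0 if t < 1.0 else 0.0)`, qualitatively reproduces the response of Figure 1; at rest (z = 0) the system stays exactly at equilibrium with zero BOLD change.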
The state xi(t) of region i at time t is given by:\n\nxi(t) = (zi(t), fi(t), si(t), qi(t), vi(t))^T, where\nzi(t) is the neural activity,\nfi(t) the normalised blood flow into the venous compartment,\nsi(t) the vasodilatory signal,\nqi(t) the normalised dHb content of the venous compartment, and\nvi(t) the normalised blood volume of the venous compartment.\n\nThe complete state at time t is given by x(t) = (x1(t)^T, . . . , xM(t)^T)^T.\nWe construct a model of the interactions between regions in four parts \u2013 the input model, the neural model, the hemodynamic model and the measurement model.\n3.1 Input model\nThe input model represents the stimulus associated with the experimental task during an fMRI session. In general this is a function u(t) with U dimensions. For a simple block design paradigm a one-dimensional box-car function is sufficient.\n3.2 Neural model\nNeural interactions between the regions are given by:\n\ndz = Az dt + Cu dt + c dt + \u03a3z dW,    (8)\n\nwhere dW is the M-dimensional standard (zero mean, unit variance) Wiener process, A an M \u00d7 M matrix of efficacies between regions, C an M \u00d7 U matrix of efficacies between inputs and regions, c an M-dimensional vector of constant terms and \u03a3z an M \u00d7 M diagonal diffusion matrix with \u03c3z1, . . . , \u03c3zM along the diagonal.\nThis is similar to the deterministic neural model of DCM expressed as a stochastic differential equation, but excludes the bilinear components allowing modulation of connections between seeds. In theory these can be added; we simply limit ourselves to a simpler model for this early work. In addition, and unlike DCM, nonlinear interactions between regions could also be included to account for modulatory activity. 
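As an illustration of the neural model of Equation 8, here is a minimal Euler-Maruyama simulation; the function name and step size are assumptions, as is treating c as a drift term (i.e. contributing c dt per step, which dimensional consistency suggests):

```python
import numpy as np

def simulate_neural(A, C, c, sigma_z, u, dt=0.01, T=10.0, rng=None):
    """Euler-Maruyama integration of dz = Az dt + Cu dt + c dt + Sigma_z dW (Equation 8).

    A: (M, M) efficacies between regions; C: (M, U) efficacies between inputs
    and regions; c: (M,) constant terms; sigma_z: diagonal diffusion (scalar or
    (M,)); u: function t -> (U,) experimental input.
    """
    rng = rng or np.random.default_rng(0)
    M = A.shape[0]
    steps = int(T / dt)
    z = np.zeros(M)
    path = np.empty((steps, M))
    for n in range(steps):
        drift = A @ z + C @ u(n * dt) + c
        # An increment of the Wiener process over dt has standard deviation sqrt(dt)
        z = z + drift * dt + sigma_z * np.sqrt(dt) * rng.standard_normal(M)
        path[n] = z
    return path
```

For example, a single self-inhibiting region (A = [[-1]]) with no input behaves as a mean-reverting (Ornstein-Uhlenbeck) process, the continuous-time counterpart of the flexible Kalman-filter analogy in the introduction's footnote.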
Again it seems sensible to keep the simplest linear case at this stage of the work, but the potential for nonlinear generalisation is one of the longer term benefits of this approach.\n3.3 Hemodynamic model\nWithin each region, the variables fi, si, qi, vi and zi interact according to a stochastic extension of the balloon model (c.f. Equations 1-6). It is assumed that regions are sufficiently separate that their hemodynamic activity is independent given neural activity [14]. Noise in the form of the Wiener process is introduced to si and the log space of fi, qi and vi, in the latter three cases to ensure positivity:\n\nd ln fi = (si/fi) dt + \u03c3fi dW    (9)\ndsi = [ \u03b5zi \u2212 si/\u03c4s \u2212 (fi \u2212 1)/\u03c4f ] dt + \u03c3si dW    (10)\nd ln qi = (1/(qi\u03c40)) [ fi (1 \u2212 (1 \u2212 E0)^(1/fi))/E0 \u2212 vi^(1/\u03b1\u22121) qi ] dt + \u03c3qi dW    (11)\nd ln vi = (1/(vi\u03c40)) [ fi \u2212 vi^(1/\u03b1) ] dt + \u03c3vi dW    (12)\n\nConstants: \u03c40 = 0.98, \u03c4f = 1/0.65, \u03c4s = 1/0.41, \u03b1 = 0.32, \u03b5 = 0.8, V0 = 0.018, E0 = 0.4, k1 = 7E0, k2 = 2, k3 = 2E0 \u2212 0.2.\nTable 1: Nominal values for constants of the balloon model [12; 13].\n\n3.4 Measurement model\nThe relative BOLD signal change at any time for a particular region is given by (c.f. 
Equation 7):\n\n\u0394yi = V0 [ k1(1 \u2212 qi) + k2(1 \u2212 qi/vi) + k3(1 \u2212 vi) ].    (13)\n\nThis may be converted to an absolute measurement y\u2217i for comparison with actual observations by using the baseline signal bi for each seed and an independent noise source \u03be \u223c N(0, 1):\n\ny\u2217i = bi(1 + \u0394yi) + \u03c3yi \u03be.    (14)\n\n4 Estimation\nThe model is completely defined by Equations 8 to 14. This fits nicely into a filtering framework, whereby the input, neural and hemodynamic models define the state transitions, and the measurement model the predicted observations. For i = 1, . . . , M, \u03c3zi, \u03c3fi, \u03c3si, \u03c3qi and \u03c3vi define the system noise and \u03c3yi the measurement noise. Parameters to estimate are the elements of A, C, c and b.\nFor a sequence of time points t1, . . . , tT, we are given observations y(t1), . . . , y(tT), where y(t) = (y1(t), . . . , yM(t))^T. We seek to exploit the data as much as possible by estimating P(x(tn) | y(t1), . . . , y(tT)) for n = 1, . . . , T \u2013 the distribution over the state at each time point given all the data.\nBecause of the non-Gaussianity and nonlinearity of the transitions and measurements, a two-pass particle filter is proposed to solve the problem. The forward pass is performed using a sequential importance resampling technique similar to CONDENSATION [15], obtaining P(x(tn) | y(t1), . . . , y(tn)) for n = 1, . . . , T. Resampling at each step is handled using a deterministic resampling method [16]. The transition of particles through the differential system uses a 4th/5th order Runge-Kutta-Fehlberg method, the adaptive step size maintaining fixed error bounds.\nThe backwards pass is substantially more difficult. Naively, we can simply negate the derivatives of the differential system and step backwards to obtain P(x(tn) | y(tn+1), . . . 
, y(tT)), then fuse these with the results of the forwards pass to obtain the desired posterior. Unfortunately, such a backwards model is divergent in q and v, so that the accumulated numerical errors of the Runge-Kutta can easily cause an explosion to implausible values, forcing a tip-toe adaptive step size to maintain error bounds. This can be mitigated by tightening the error bounds, but the task becomes computationally prohibitive well before the system is tamed.\nAn alternative is a two-pass smoother that reuses particles from the forwards pass [17], reweighting them on the backwards pass so that no explicit backwards dynamics are required. This sidesteps the divergence issue completely, but is computationally and spatially expensive and requires computation of p(x(tn) = s(i)_tn | x(tn\u22121) = s(j)_tn\u22121) for particular particles s(i)_tn and s(j)_tn\u22121. This imposes some limitations, but is nevertheless the method used here.\nThe forwards pass provides a weighted sample set {(s(i)_t, \u03c0(i)_t)} at each time point t = t1, . . . , tT for i = 1, . . . , P. Initialising with \u03c8_tT = \u03c0_tT, the backwards step to calculate weights at time tn is as follows [17]2:\n\n\u03b1(i,j)_tn = p(x(tn+1) = s(i)_tn+1 | x(tn) = s(j)_tn) for i, j = 1, . . . , P\n\u03b3_tn = \u03b1_tn \u03c0_tn\n\u03b4_tn = \u03b1_tn^T (\u03c8_tn+1 \u2298 \u03b3_tn), where \u2298 is element-wise division\n\u03c8_tn = \u03c0_tn \u2297 \u03b4_tn, where \u2297 is element-wise multiplication\n\nThese are then normalised so that \u03a3i \u03c8(i)_tn = 1, and the smoothed result {(s(i)_tn, \u03c8(i)_tn)} for i = 1, . . . , P is stored.\nThere are numerous means of propagating particles through the forwards pass that accommodate the resampling step and propagation of the Wiener noise through the nonlinearity. 
These include various stochastic Runge-Kutta methods, the Unscented Transformation [4] or a simple Euler scheme using fixed time steps and adding an appropriate portion of noise after each step. The requirement to efficiently make P^2 density calculations of p(x(tn+1) = s(i)_tn+1 | x(tn) = s(j)_tn) during the backwards pass is challenging with such approaches, however. To keep things simple, we instead simply propagate particles noiselessly through the transition function, and add noise from the Wiener process only at times t1, . . . , tT as if the transition were linear. This reasonably approximates the noise of the system while keeping the density calculations very simple \u2013 transition s(j)_tn noiselessly to obtain the mean value of a Gaussian with covariance equal to that of the system noise, then calculate the density of this Gaussian at s(i)_tn+1.\nObserve that if system noise is sufficiently tight, \u03b1_tn becomes sparse as negligibly small densities round to zero. Implementing \u03b1_tn as a sparse matrix can provide significant time and space savings.\nPropagation of particles through the transition function and density calculations can be performed in parallel. This applies during both passes. For the backwards pass, each particle at tn need only be transitioned once to produce a Gaussian from which the density of all particles at tn+1 can be calculated, filling in one column of \u03b1_tn.\nFinally, the parameters A, C, c and b may be estimated by adding them to the state with artificial dynamics (c.f. [18]), applying a broad prior and small system noise to suggest that they are generally constant. The same applies to parameters of the balloon model, which may be included to allow variation in the hemodynamic response across the brain.\n\n5 Experiments\nWe apply the model to data collected during a simple finger tapping exercise. 
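To make the backwards reweighting of Section 4 concrete, here is a minimal NumPy sketch of one backwards pass under the linear-Gaussian noise approximation just described; the function name, the generic deterministic transition g, and diagonal system noise are assumptions, and the Gaussian normalising constants are omitted since they cancel in the recursion:

```python
import numpy as np

def smooth(particles, weights, g, noise_std):
    """Backwards reweighting pass of the two-pass smoother.

    particles: list of (P, D) arrays s_t from the forwards pass
    weights:   list of (P,) arrays pi_t of forwards weights
    g:         deterministic transition mapping a state to the mean of its successor
    noise_std: (D,) per-dimension system noise standard deviations
    """
    T = len(particles)
    psi = [None] * T
    psi[T-1] = weights[T-1].copy()      # initialise psi_tT = pi_tT
    var = noise_std**2
    for n in range(T-2, -1, -1):
        mu = np.apply_along_axis(g, 1, particles[n])          # (P, D) predicted means
        # alpha[i, j] ~ p(x_{tn+1} = s_i | x_tn = s_j) under the Gaussian approximation
        diff = particles[n+1][:, None, :] - mu[None, :, :]
        alpha = np.exp(-0.5 * np.sum(diff**2 / var, axis=2))
        gamma = alpha @ weights[n]                            # gamma = alpha pi
        delta = alpha.T @ (psi[n+1] / np.maximum(gamma, 1e-300))
        psi[n] = weights[n] * delta                           # psi = pi (x) delta
        psi[n] /= psi[n].sum()                                # normalise
    return psi
```

Each column j of `alpha` comes from a single noiseless transition of particle s(j)_tn, mirroring the column-wise parallelisation noted above; a sparse representation could replace the dense `alpha` when the system noise is tight.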
Using a Siemens Vision at 2T with a TR of 4.1s, a healthy 23-year-old right-handed male was scanned on 33 separate days over a period of two months. In each session, 80 whole volumes were taken, with the first two discarded to account for T1 saturation effects. The experimental paradigm consists of alternating 6 TR blocks of rest and tapping of the right index finger at 1.5Hz, where tapping frequency is provided by a constant audio cue, present during both rest and tapping phases.\nAll scans across all sessions were realigned using SPM5 [19] and a two-level random effects analysis performed, from which 13 voxels were selected to represent regions of interest. No smoothing or normalisation was applied to the data. Of the 13 voxels, four are selected for use in this experiment \u2013 located in the left posterior parietal cortex, left M1, left S1 and left premotor cortex. The mean of all sessions is used as the measurement y(t), which consists of M = 4 elements, one for each region.\nWe set t1 = 1 TR = 4.1s, . . . , tT = 78 TR = 319.8s as the sequence of times, corresponding to the times at which measurements are taken after realignment. The experimental input function u(t) is plotted in Figure 2, taking a value of 0 at rest and 1 during tapping. The error bounds on the Runge-Kutta are set to 10^\u22124. Measurement noise is set to \u03c3yi = 2 for i = 1, . . . , M and the prior and system noise as in Table 2. With the elements of A, C, c and b included in the state, the state size is 48. P = 10^6 particles are used for the forwards pass, downsampling to 2.5 \u00d7 10^4 particles for the more expensive backwards pass.\n\n2 We have expressed this in matrix notation rather than the original notation in [17].\n\nFigure 2: Experimental input u(t), x axis is time t expressed in TRs.\n\nFigure 3: Number of nonzero elements in \u03b1_tn for n = 1, . . . , 77.\n\nVariable | Prior \u00b5 | Prior \u03c3 | Noise \u03c3\nAi,i (i = 1, . . . , N) | \u22121 | 1/2 | 10^\u22122\nAi,j (i, j = 1, . . . , N, i \u2260 j) | 0 | 1/2 | 10^\u22122\nCi,1 (i = 1, . . . , N) | 0 | 1/2 | 10^\u22122\nzi (i = 1, . . . , N) | 0 | 1/2 | 10^\u22121\nfi, si, qi, vi, ci (i = 1, . . . , N) | 0 | 1/2 | 10^\u22122\nbi (i = 1, . . . , N) | \u00afyi | 10 | 10^\u22122\nTable 2: Prior and system noise.\n\nThe experiment is run on the Eddie cluster of the Edinburgh Compute and Data Facility (ECDF)3 over 200 nodes, taking approximately 10 minutes real time. The particle filter and smoother are distributed across nodes and run in parallel using the dysii Dynamic Systems Library4.\nAfter application of the filter, the predicted neural activity is given in Figure 4 and parameter estimates in Figures 6 and 7. The predicted output obtained from the model is in Figure 5, where it is compared to actual measurements acquired during the experiment to assess model fit.\n6 Discussion\nThe model captures the expected underlying form for neural activity, with all regions distinctly correlated with the experimental stimulus. Parameter estimates are generally constant throughout the length of the experiment and some efficacies are significant enough in magnitude to provide biological insight. The parameters found typically match those expected for this form of finger tapping task. However, as the focus of this paper is the development of the filtering approach we will reserve a real analysis of the results for a future paper, and focus on the issues surrounding the filter and its capabilities and deficiencies. A number of points are worth making in this regard.\nParticles stored during the forwards pass do not necessarily support the distributions obtained during the backwards pass. 
This is particularly obvious towards the extreme left of Figure 4, where the smoothed results appear to become erratic, essentially due to degeneracy in the backwards pass. Furthermore, while the smooth weighting of particles in the forwards pass is informative, that of the backwards pass is often not, potentially relying on heavy weighting of outlying particles and shedding little light on the actual nature of the distributions involved.\nFigure 3 provides empirical results as to the sparseness of \u03b1_tn. At worst, at least 25% of elements are zero, demonstrating the advantages of a sparse matrix implementation in this case.\nThe particle filter is able to establish consistent neural activity and parameter estimates across runs. These estimates also come with distributions in the form of weighted sample sets which enable the uncertainty of the estimates to be understood. This certainly shows the stochastic model and particle filter to be a promising approach for systematic connectivity analysis.\n\n3 http://www.is.ed.ac.uk/ecdf/\n4 http://www.indii.org/software/dysii/\n\nFigure 4: Neural activity predictions z (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2\u03c3 error.\n\nFigure 5: Measurement predictions y\u2217 (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2\u03c3 error, circles actual measurements.\n\nFigure 6: Parameter estimates A (y axis) over time (x axis). 
Forwards pass results as shaded histogram, smoothed results as solid line with 2\u03c3 error.\n\nAcknowledgements\nThe authors would like to thank David McGonigle for helpful discussions and detailed information regarding the data set.\n\nFigure 7: Parameter estimates of C (y axis) over time (x axis). Forwards pass results as shaded histogram, smoothed results as solid line with 2\u03c3 error.\n\nReferences\n[1] Friston, K. and Buchel, C. (2004) Human Brain Function, chap. 49, pp. 999\u20131018. Elsevier.\n[2] Gitelman, D. R., Penny, W. D., Ashburner, J., and Friston, K. J. (2003) Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution. NeuroImage, 19, 200\u2013207.\n[3] Friston, K., Harrison, L., and Penny, W. (2003) Dynamic causal modelling. NeuroImage, 19, 1273\u20131302.\n[4] Julier, S. J. and Uhlmann, J. K. (1997) A new extension of the Kalman filter to nonlinear systems. The Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Multi Sensor Fusion, Tracking and Resource Management.\n[5] Riera, J. J., Watanabe, J., Kazuki, I., Naoki, M., Aubert, E., Ozaki, T., and Kawashima, R. (2004) A state-space model of the hemodynamic approach: nonlinear filtering of BOLD signals. NeuroImage, 21, 547\u2013567.\n[6] Ozaki, T. (1993) A local linearization approach to nonlinear filtering. International Journal of Control, 57, 75\u201396.\n[7] Bentler, P. M. and Weeks, D. G. (1980) Linear structural equations with latent variables. Psychometrika, 45, 289\u2013307.\n[8] McArdle, J. J. and McDonald, R. P. (1984) Some algebraic properties of the reticular action model for moment structures. 
British Journal of Mathematical and Statistical Psychology, 37, 234\u2013251.\n[9] Schlosser, R., Gesierich, T., Kaufmann, B., Vucurevic, G., Hunsche, S., Gawehn, J., and Stoeter, P. (2003) Altered effective connectivity during working memory performance in schizophrenia: a study with fMRI and structural equation modeling. NeuroImage, 19, 751\u2013763.\n[10] Au Duong, M., et al. (2005) Modulation of effective connectivity inside the working memory network in patients at the earliest stage of multiple sclerosis. NeuroImage, 24, 533\u2013538.\n[11] Storkey, A. J., Simonotto, E., Whalley, H., Lawrie, S., Murray, L., and McGonigle, D. (2007) Learning structural equation models for fMRI. Advances in Neural Information Processing Systems, 19.\n[12] Buxton, R. B., Wong, E. C., and Frank, L. R. (1998) Dynamics of blood flow and oxygenation changes during brain activation: The balloon model. Magnetic Resonance in Medicine, 39, 855\u2013864.\n[13] Friston, K. J., Mechelli, A., Turner, R., and Price, C. J. (2000) Nonlinear responses in fMRI: The balloon model, Volterra kernels, and other hemodynamics. NeuroImage, 12, 466\u2013477.\n[14] Zarahn, E. (2001) Spatial localization and resolution of BOLD fMRI. Current Opinion in Neurobiology, 11, 209\u2013212.\n[15] Isard, M. and Blake, A. (1998) Condensation \u2013 conditional density propagation for visual tracking. International Journal of Computer Vision, 29, 5\u201328.\n[16] Kitagawa, G. (1996) Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5, 1\u201325.\n[17] Isard, M. and Blake, A. (1998) A smoothing filter for condensation. Proceedings of the 5th European Conference on Computer Vision, 1, 767\u2013781.\n[18] Kitagawa, G. (1998) A self-organising state-space model. 
Journal of the American Statistical Association, 93, 1203\u20131215.\n[19] Wellcome Department of Imaging Neuroscience (2006), Statistical parametric mapping. Online at www.fil.ion.ucl.ac.uk/spm/.\n", "award": [], "sourceid": 557, "authors": [{"given_name": "Lawrence", "family_name": "Murray", "institution": null}, {"given_name": "Amos", "family_name": "Storkey", "institution": null}]}