{"title": "Modeling Neuronal Interactivity using Dynamic Bayesian Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 1593, "page_last": 1600, "abstract": null, "full_text": "Modeling Neuronal Interactivity using Dynamic\n\nBayesian Networks\n\nLei Zhangy,z, Dimitris Samarasy, Nelly Alia-Kleinz, Nora Volkowz, Rita Goldsteinz\n\ny Computer Science Department, SUNY at Stony Brook, Stony Brook, NY\n\nz Medical Department, Brookhaven National Laboratory, Upton, NY\n\nAbstract\n\nFunctional Magnetic Resonance Imaging (fMRI) has enabled scientists\nto look into the active brain. However, interactivity between functional\nbrain regions, is still little studied. In this paper, we contribute a novel\nframework for modeling the interactions between multiple active brain\nregions, using Dynamic Bayesian Networks (DBNs) as generative mod-\nels for brain activation patterns. This framework is applied to modeling\nof neuronal circuits associated with reward. The novelty of our frame-\nwork from a Machine Learning perspective lies in the use of DBNs to\nreveal the brain connectivity and interactivity. Such interactivity mod-\nels which are derived from fMRI data are then validated through a\ngroup classi\ufb01cation task. We employ and compare four different types\nof DBNs: Parallel Hidden Markov Models, Coupled Hidden Markov\nModels, Fully-linked Hidden Markov Models and Dynamically Multi-\nLinked HMMs (DML-HMM). Moreover, we propose and compare two\nschemes of learning DML-HMMs. Experimental results show that by\nusing DBNs, group classi\ufb01cation can be performed even if the DBNs are\nconstructed from as few as 5 brain regions. We also demonstrate that, by\nusing the proposed learning algorithms, different DBN structures charac-\nterize drug addicted subjects vs. control subjects. 
This \ufb01nding provides\nan independent test for the effect of psychopathology on brain function.\nIn general, we demonstrate that incorporation of computer science prin-\nciples into functional neuroimaging clinical studies provides a novel ap-\nproach for probing human brain function.\n\n1. Introduction\n\nFunctional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the\nactive human brain [1] by providing sequences of 3D brain images with intensities repre-\nsenting blood oxygenation level dependent (BOLD) regional activations. This has revealed\nexciting insights into the spatial and temporal changes underlying a broad range of brain\nfunctions, such as how we see, feel, move, understand each other and lay down memo-\nries. This fMRI technology offers further promise by imaging the dynamic aspects of the\nfunctioning human brain. Indeed, fMRI has encouraged a growing interest in revealing\nbrain connectivity and interactivity within the neuroscience community. It is for exam-\nple understood that a dynamically managed goal directed behavior requires neural con-\ntrol mechanisms orchestrated to select the appropriate and task-relevant responses while\ninhibiting irrelevant or inappropriate processes [12]. To date, the analyses and interpre-\ntation of fMRI data that are most commonly employed by neuroscientists depend on the\n\n\fcognitive-behavioral probes that are developed to tap regional brain function. Thus, brain\nresponses are a-priori labeled based on the putative underlying task condition and are then\nused to separate a priori de\ufb01ned groups of subjects. In recent computer science research\n[18][13][3][19], machine learning methods have been applied for fMRI data analysis. 
How-\never, in these approaches information on the connectivity and interactivity between brain\nvoxels is discarded and brain voxels are assumed to be independent, which is an inaccurate\nassumption (see use of statistical maps [3][19] or the mean of each fMRI time interval[13]).\nIn this paper, we exploit Dynamic Bayesian Networks for modeling dynamic (i.e., con-\nnecting and interacting) neuronal circuits from fMRI sequences. We suggest that through\nincorporation of graphical models into functional neuroimaging studies we will be able\nto identify neuronal patterns of connectivity and interactivity that will provide invaluable\ninsights into basic emotional and cognitive neuroscience constructs. We further propose\nthat this interscienti\ufb01c incorporation may provide a valid tool where objective brain imag-\ning data are used for the clinical purpose of diagnosis of psychopathology. Speci\ufb01cally, in\nour case study we will model neuronal circuits associated with reward processing in drug\naddiction. We have previously shown loss of sensitivity to the relative value of money in\ncocaine users [9]. It has also been previously highlighted that the complex mechanism of\ndrug addiction requires the connectivity and interactivity between regions comprising the\nmesocorticolimbic circuit [12][8]. However, although advancements have been made in\nstudying this circuit\u2019s role in inhibitory control and reward processing, inference about the\nconnectivity and interactivity of these regions is at best indirect. Dynamical causal models\nhave been compared in [16]. 
Compared with dynamic causal models, DBNs admit a class\nof nonlinear continuous-time interactions among the hidden states and model both causal\nrelationships between brain regions and temporal correlations among multiple processes,\nuseful for both classi\ufb01cation and prediction purposes.\n\nProbabilistic graphical models [14][11] are graphs in which nodes represent random vari-\nables, and the (lack of) arcs represent conditional independence assumptions. In our case,\ninterconnected brain regions can be considered as nodes of a probabilistic graphical model\nand interactivity relationships between regions are modeled by probability values on the\narcs (or the lack of) between these nodes. However, the major challenge in such a ma-\nchine learning approach is the choice of a particular structure that models connectivity\nand interactivity between brain regions in an accurate and ef\ufb01cient manner. In this work,\nwe contribute a framework of exploiting Dynamic Bayesian Networks to model such a\nstructure for the fMRI data. More speci\ufb01cally, instead of modeling each brain region in\nisolation, we aim to model the interactive pattern of multiple brain regions. Furthermore,\nthe revealed functional information is validated through a group classi\ufb01cation case study:\nseparating drug addicted subjects from healthy non-drug-using controls based on trained\nDynamic Bayesian Networks. Both conventional BBNs and HMMs are unsuitable for\nmodeling activities underpinned not only by causal but also by clear temporal correlations\namong multiple processes [10], and Dynamic Bayesian Networks [5][7] are required. Since\nthe state of each brain region is not known (only observations of activation exist), it can be\nthought of as a hidden variable[15]. An intuitive way to construct a DBN is to extend a\nstandard HMM to a set of interconnected multiple HMMs. For example, Vogler et al. 
[17]\nproposed Parallel Hidden Markov Models (PaHMMs) that factorize state space into mul-\ntiple independent temporal processes without causal connections in-between. Brand et al.\n[2] exploited Coupled Hidden Markov Models (CHMMs) for complex action recognitions.\nGong et al. [10] developed a Dynamically Multi-Linked Hidden Markov Model (DML-\nHMM) for the recognition of group activities involving multiple different object events in\na noisy outdoor scene. This model is the only one of those models that learns both the\nstructure and parameters of the graphical model, instead of presuming a structure (possibly\ninaccurate) given the lack of knowledge of human brain connectivity. In order to model\nthe dynamic neuronal circuits underlying reward processing in the human brains, we ex-\nplore and compare the above DBNs. We propose and compare two learning schemes of\n\n\fDML-HMMs, one is greedy structure search (Hill-Climbing) and the other is Structural\nExpectation-Maximization (SEM).\n\nTo our knowledge, this is the \ufb01rst time that Dynamic Bayesian Networks are exploited in\nmodeling the connectivity and interactivity among brain regions activated during a fMRI\nstudy. Our current experimental classi\ufb01cation results show that by using DBNs, group\nclassi\ufb01cation can be performed even if the DBNs are constructed from as few as 5 brain\nregions. We also demonstrate that, by using the proposed learning algorithms, different\nDBN structures characterize drug addicted subjects vs. control subjects which provides\nan independent test for the effects of psychopathology on brain function. From the ma-\nchine learning point of view, this paper provides an innovative application of Dynamic\nBayesian Networks in modeling dynamic neuronal circuits. 
Furthermore, since the struc-\ntures to be explored are exclusively represented by hidden (cannot be observed directly)\nstates and their interconnecting arcs, the structure learning of DML-HMMs poses a greater\nchallenge than other DBNs [5]. From the neuroscienti\ufb01c point of view, drug addiction is a\ncomplex disorder characterized by compromised inhibitory control and reward processing.\nHowever, individuals with compromised mechanisms of control and reward are dif\ufb01cult to\nidentify unless they are directly subjected to challenging conditions. Modeling the inter-\nactive brain patterns is therefore essential since such patterns may be unique to a certain\npsychopathology and could hence be used for improving diagnosis and prevention efforts\n(e.g., diagnosis of drug addiction, prevention of relapse or craving). In addition, the de-\nvelopment of this framework can be applied to further our understanding of other human\ndisorders and states such as those impacting insight and awareness, that similarly to drug\naddiction are currently identi\ufb01ed based mostly on subjective criteria and self-report.\n\nFigure 1: Four types of Dynamic Bayesian Networks: PaHMM, CHMM, FHMM and\nDML-HMM.\n2. Dynamic Bayesian Networks\n\nIn this section, we will brie\ufb02y describe the general framework of Dynamic Bayesian Net-\nworks. DBNs are Bayesian Belief Networks that have been extended to model the stochas-\ntic evolution of a set of random variables over time [5][7]. As described in [10], a DBN\nB can be represented by two sets of parameters (m; \u00a3) where the \ufb01rst set m represents\nthe structure of the DBN including the number of hidden state variables S and observation\nvariables O per time instance, the number of states for each hidden state variable and the\ntopology of the network (set of directed arcs connecting the nodes). 
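As a concrete illustration (not the authors' code), the two parameter sets (m, Θ) of such a DBN could be represented as follows; all variable names, shapes and values here are assumptions for the sketch only:

```python
import numpy as np

# Illustrative sketch of the two DBN parameter sets (m, Theta).
N_HIDDEN = 5   # one hidden state variable per brain region
N_STATES = 2   # each region's hidden state: activated or not
OBS_DIM = 3    # hypothetical k-D observation vector per region

# Structure m: inter-slice adjacency; arcs[i, j] = 1 means an arc from
# hidden node i at time t-1 to hidden node j at time t.
arcs = np.zeros((N_HIDDEN, N_HIDDEN), dtype=int)
arcs[0, 1] = 1  # hypothetical arc: region 0 influences region 1

# Parameters Theta: initial distribution pi, transition model A (shown
# here for a single parent) and Gaussian observation parameters for B.
pi = np.full((N_HIDDEN, N_STATES), 1.0 / N_STATES)
A = np.full((N_HIDDEN, N_STATES, N_STATES), 1.0 / N_STATES)
B_means = np.zeros((N_HIDDEN, N_STATES, OBS_DIM))
B_covs = np.tile(np.eye(OBS_DIM), (N_HIDDEN, N_STATES, 1, 1))
```

In this representation the topology (the set of directed arcs) is just the adjacency matrix, which is what the structure-learning schemes described later search over.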
More specifically, the ith hidden state variable and the jth observation variable at time instance t are denoted as S(i)_t and O(j)_t, with i ∈ {1, ..., Nh} and j ∈ {1, ..., No}, where Nh and No are the numbers of hidden state variables and observation variables respectively. The second set of parameters Θ includes the state transition matrix A, the observation matrix B and a matrix π modeling the initial state distribution P(S(i)_1). More specifically, A and B quantify the transition models P(S(i)_t | Pa(S(i)_t)) and the observation models P(O(i)_t | Pa(O(i)_t)) respectively, where Pa(S(i)_t) are the parents of S(i)_t (similarly Pa(O(i)_t) for observations). In this paper, we will examine four types of DBNs: Parallel Hidden Markov Models (PaHMM) [17], Coupled Hidden Markov Models (CHMM) [2], Fully Connected Hidden Markov Models (FHMM) and Dynamically Multi-Linked Hidden Markov Models (DML-HMM) [10], as shown in Fig. 1, where observation nodes are shown as shaded circles, hidden nodes as clear circles, and the causal relationships among hidden state variables are represented by the arcs between hidden nodes. Notice that the first three DBNs are essentially special cases of the DML-HMM.\n\n2.1. Learning of DBNs\nGiven the form of DBNs in the previous section, two learning problems must be solved for real-world applications: 1) Parameter Learning: assuming a fixed structure, given the training sequences of observations O, how do we adjust the model parameters B = (m, Θ) to maximize P(O|B); 2) Structure Learning: for DBNs with unknown structure (i.e., DML-HMMs), how do we learn the structure from the observations O? Parameter learning has been well studied in [17][2]. Given a fixed structure, parameters can be learned iteratively using Expectation-Maximization (EM). 
The E step, which involves the inference of the hidden states given the parameters, can be implemented using an exact inference algorithm such as the junction tree algorithm. The parameters and the maximal likelihood L(Θ) are then updated in the M step, and the two steps are iterated.\n\nIn [10], the DML-HMM was selected from a set of candidate structures; however, the selection of candidate structures is non-trivial for most applications, including brain region connectivity. For a DML-HMM with N hidden nodes, the total number of different structures is 2^(N^2-N), thus it is impossible to conduct an exhaustive search in most cases. The learning of DBNs involving both parameter learning and structure learning has been discussed in [5], where the scoring rules for standard probabilistic networks were extended to the dynamic case and the Structural EM (SEM) algorithm was developed for structure learning when some of the variables are hidden. The structure learning of DML-HMMs is more challenging since the structures to be explored are exclusively represented by the hidden states, none of which can be directly observed. In the following, we explain two learning schemes for DML-HMMs. One standard way is to perform parametric EM within an outer-loop structural search. Thus, our first scheme is to use an outer loop of the Hill-Climbing algorithm (DML-HMM-HC). In each step of the algorithm, starting from the current DBN, we first compute a neighbor list by adding, deleting, or reversing one arc. We then perform parameter learning for each of the neighbors and move to the neighbor with the minimum score, until no neighbor scores lower than the current DBN. Our second learning scheme is similar to the Structural EM algorithm [5] in the sense that the structural and parametric modifications are performed within a single EM process. 
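Under the BIC convention used here (lower is better), the hill-climbing outer loop can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `learn_parameters_bic` stands in for the parametric EM step that returns a BIC score, and `toy_bic` is a toy scorer used only to exercise the search loop:

```python
import itertools
import numpy as np

def neighbors(structure):
    """All structures reachable by adding or deleting one inter-slice arc
    (a reversal is a delete of i->j plus an add of j->i)."""
    result = []
    n = structure.shape[0]
    for i, j in itertools.product(range(n), range(n)):
        flipped = structure.copy()
        flipped[i, j] = 1 - flipped[i, j]   # toggle arc i -> j
        result.append(flipped)
    return result

def hill_climb(initial, learn_parameters_bic, max_steps=100):
    """Greedy structure search: move to the lowest-scoring neighbor until
    no neighbor improves on the current structure (BIC: lower is better)."""
    current, current_score = initial, learn_parameters_bic(initial)
    for _ in range(max_steps):
        scored = [(learn_parameters_bic(s), s) for s in neighbors(current)]
        best_score, best = min(scored, key=lambda pair: pair[0])
        if best_score >= current_score:
            return current, current_score    # local optimum reached
        current, current_score = best, best_score
    return current, current_score

# Toy scorer standing in for EM + BIC: it simply prefers structures close
# to a fixed sparse target, so the loop terminates at that target.
def toy_bic(structure):
    target = np.zeros_like(structure)
    target[0, 1] = 1
    return float(np.abs(structure - target).sum())

best, score = hill_climb(np.zeros((3, 3), dtype=int), toy_bic)
```

In the real DML-HMM-HC scheme, each call to the scorer runs a full parametric EM on the candidate structure, which is why the outer-loop search is expensive compared to SEM.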
As described in [5][4], a structural search can be performed efficiently given complete observation data. However, as described above, the structure of a DML-HMM is represented by the hidden states, which cannot be observed directly. Hence, we develop the DML-HMM-SEM algorithm as follows: given the current structure, we first perform parameter learning and then, for each training sequence, we compute the Most Probable Explanation (MPE), i.e., the most likely value of each hidden node (similar to Viterbi decoding in a standard HMM). The MPE thus provides a complete estimation of the hidden states, and a complete-data structural search [4] is then performed to find the best structure. We iterate until the structure converges. In this scheme, the structural search is performed in the inner loop, making the learning more efficient. Pseudo-code for both learning schemes is given in Table 1. In this paper, we use Schwarz's Bayesian Information Criterion (BIC), BIC = -2 log L(Θ_B) + K_B log N, as our score function, where for a DBN B, L(Θ_B) is the maximal likelihood under B, K_B is the dimension of the parameters of B and N is the size of the training data. Theoretically, the DML-HMM-SEM algorithm is not guaranteed to converge since, for the same training data, the most probable explanations (S_i, S_j) of two DML-HMMs B_i, B_j might be different. In the worst case, oscillation between two structures is possible. To guarantee halting of the algorithm, a loop detector can be added so that, once any structure is selected a second time, we stop the learning and select the structure with the minimum score visited during the search. 
However, in our experiments, the learning algorithm always converged in a few steps.\n\nProcedure DML-HMM-HC:\n  B_0 = Initial_Model();\n  Loop i = 0, 1, ... until convergence:\n    [B'_i, score'_i] = Learn_Parameter(B_i);\n    B^(1..J)_i = Generate_Neighbors(B_i);\n    for j = 1..J:\n      [B'^j_i, score^j_i] = Learn_Parameter(B^j_i);\n    j = Find_Minscore(score^(1..J)_i);\n    if score^j_i > score'_i: return B'_i;\n    else: B_(i+1) = B^j_i;\n\nProcedure DML-HMM-SEM:\n  B_0 = Initial_Model();\n  Loop i = 0, 1, ... until convergence:\n    [B'_i, score'_i] = Learn_Parameter(B_i);\n    S = Most_Prob_Expl(B'_i, O);\n    B^max_i = Find_Best_Struct(S);\n    if B^max_i == B'_i: return B'_i;\n    else: B_(i+1) = B^max_i;\n\nTable 1: Two schemes of learning DML-HMMs: the first procedure is the DML-HMM-HC scheme and the second is the DML-HMM-SEM scheme.\n\n3. Modeling Reward Neuronal Circuits: A Case Study\n\nIn this section, we describe our case study of modeling reward neuronal circuits: by using DBNs, we aim to model the interactive pattern of multiple brain regions for the neuropsychological problem of sensitivity to the relative value of money. Furthermore, we examine the functional information encapsulated in the trained DBNs through a group classification study: separating drug addicted subjects from healthy non-drug-using controls based on the trained DBNs.\n\n3.1. Data Collection and Preprocessing\nIn our experiments, data were collected to study the neuropsychological problem of loss of sensitivity to the relative value of money in cocaine users [9]. MRI studies were performed on a 4T Varian scanner and all stimuli were presented using LCD goggles connected to a PC. Human participants pressed a button or refrained from pressing based on a picture shown to them. They received a monetary reward if they performed correctly. 
Specifically, three runs were repeated twice (T1, T2, T3; and T1R, T2R, T3R) and in each run there were three monetary conditions (high money, low money, no money) and a baseline condition; the order of the monetary conditions was pseudo-randomized and identical for all participants. Participants were informed about the monetary condition by a 3-sec instruction slide presenting the stimuli: $0.45, $0.01 or $0.00. Feedback in each condition consisted of the respective numeral designating the amount of money the subject had earned if correct, or the symbol (X) otherwise. To simulate real-life motivational salience, subjects could gain up to $50 depending on their performance on this task. Sixteen cocaine dependent individuals, 18–55 years of age and in good health, were matched with 12 non-drug-using controls on sex, race, education and general intellectual functioning. Statistical Parametric Mapping (SPM) [6] was used for fMRI data preprocessing (realignment, normalization/registration and smoothing) and statistical analyses.\n\n3.2. Feature Selection and Neuronal Circuit Modeling\nThe fMRI data are extremely high dimensional (i.e., 53 × 63 × 46 voxels per scan). Prior to training the DBN, we selected 5 brain regions: Left Inferior Frontal Gyrus (Left IFG), Prefrontal Cortex (PFC, including lateral and medial dorsolateral PFC and the anterior cingulate), Midbrain (including substantia nigra), Thalamus and Cerebellum. These regions were selected based on prior SPM random-effects analyses (ANOVA) whose goal was to differentiate the effect of money (high, low, no) from the effect of group (cocaine, control) on all regions that were activated by monetary reward in all subjects.\n\nFigure 2: Learning processes and learned structures from the two algorithms. The leftmost column demonstrates two (superimposed) learned structures where light gray dashed arcs (long dash) are learned from DML-HMM-HC, dark gray dashed arcs (short dash) from DML-HMM-SEM and black solid arcs from both. The right columns show the transient structures of the learning processes of the two algorithms, where black represents the existence of an arc and white represents no arc.\n\nIn all these five regions, the monetary main effect was significant as evidenced by region-of-interest follow-up analyses. Of note is the fact that these five regions are part of the mesocorticolimbic reward circuit, previously implicated in addiction. Each of the above brain regions is represented by a k-D feature vector, where k is the number of brain voxels selected in this brain region (i.e., k = 3 for Left IFG and k = 8 for PFC). After feature selection, a DML-HMM with 5 hidden nodes can be learned from the training data as described in Sec. 2. The leftmost image in Fig. 2 shows two superimposed possible structures of such DML-HMMs. The causal relationships discovered among the different brain regions are embodied in the topology of the DML-HMM. Each of the five hidden variables has two states (activated or not) and each continuous observation variable (given by a k-D feature vector) represents the observed activation of one brain region. The probability distribution function (PDF) of each observation variable is a mixture of Gaussians conditioned on the state of its discrete parent node.\n\nFigure 3: The left three images show the structures learned from the 3 subsets of Group C and the right three images show those learned from the subsets of Group S. The figure shows that some arcs consistently appeared in Group C but not consistently in Group S (marked in dark gray) and vice versa (marked in light gray), which implies that such group differences in the interactive brain patterns may correspond to the loss of sensitivity to the relative value of money in cocaine users.\n\n4. 
Experiments and Results\n\nWe collected fMRI data from 16 drug addicted subjects and 12 control subjects, 6 runs per participant. Due to head motion, some data could not be used. In our experiments, we used a total of 152 fMRI sequences (87 scans per sequence), with 86 sequences for the drug addicted subjects (Group S) and 66 for the control subjects (Group C).\n\nFirst, we compare the two learning schemes for DML-HMMs proposed in Sec. 2. Fig. 2 demonstrates the learning process (initialized with the FHMM) for the drug addicted subjects. The leftmost column shows the two learned structures, where light gray dashed arcs are learned from DML-HMM-HC, dark gray dashed arcs from DML-HMM-SEM and black solid arcs from both. The right columns show the learning processes of DML-HMM-SEM (top) and DML-HMM-HC (bottom), with black representing the existence of an arc and white representing no arc. Since in DML-HMM-SEM structure learning is in the inner loop, the learning process is much faster than that of DML-HMM-HC. We also compared the BIC scores of the learned structures and found that DML-HMM-SEM selected better structures than DML-HMM-HC.\n\nIt is also very interesting to examine the structure learning processes using different training data. For each participant group, we randomly separated the data set into three subsets; the trained DBNs are reported in Fig. 3, where the left three images show the structures learned from the 3 subsets of Group C and the right three images show those learned from the subsets of Group S. In Fig. 3, we found the learned structures within each group to be similar. We also found that some arcs consistently appeared in Group C but not consistently in Group S (marked in dark gray) and vice versa (marked in light gray), which implies that such group differences in the interactive brain patterns may correspond to the loss of sensitivity to the relative value of money in cocaine users. More specifically, in Fig. 
3, the average intra-group similarity scores were 80% and 78.3%, while the cross-group similarity was 56.7%.\n\nFigure 4: Classification results: all DBN methods significantly improved classification rates compared to K-Nearest Neighbor, with DML-HMM performing best.\n\nThe second set of experiments applied the trained DBNs to group classification. In our data collection, there were 6 runs of fMRI collection: T1, T2, T3, T1R, T2R and T3R, with the latter three repeating the former three, grouped into 4 data sets {T1, T2, T3, ALL}, with ALL containing all the data. We performed classification experiments on each of the 4 data sets, where the data were randomly divided into a training set and a testing set of equal size. During training, the four described DBN types were employed on the training set; during the learning of DML-HMMs, different initial structures (PaHMM, CHMM, FHMM) were used and the structure with the minimum BIC score was selected from the three learned DML-HMMs. For each model, two DBNs {B_c, B_s} were trained on the training data of Group C and Group S respectively. During testing, for each testing fMRI sequence O_test, we computed two likelihoods P^test_c = P(O_test | B_c) and P^test_s = P(O_test | B_s) using the two trained DBNs. Since the two DBNs may have different structures, instead of directly comparing the two likelihoods, we used their ratio for classification. More specifically, during training, for each training sequence TR_i, we computed the ratio of the two likelihoods R^TR_i = P^i_c / P^i_s, where P^i_c = P(TR_i | B_c) and P^i_s = P(TR_i | B_s). As expected, the ratios of the Group C training data were generally significantly greater than those of Group S. During testing, the ratio R_test = P^test_c / P^test_s for each test sequence was also computed and compared to the ratios of the training data for classification. Fig. 
4 reports the classification rates of the different DBNs on each data set. For comparison, the k-Nearest Neighbor (KNN) algorithm was applied to the fMRI sequences directly; Fig. 4 shows that by using DBNs, classification rates are significantly better, with DML-HMM outperforming all other models.\n\n5. Conclusions and Future Work\n\nIn this work, we contributed a framework exploiting Dynamic Bayesian Networks to model the functional information of fMRI data. We explored four types of DBNs: a Parallel Hidden Markov Model (PaHMM), a Coupled Hidden Markov Model (CHMM), a Fully-linked Hidden Markov Model (FHMM) and a Dynamically Multi-linked Hidden Markov Model (DML-HMM). Furthermore, we proposed and compared two structural learning schemes for DML-HMMs and applied the DBNs to a group classification problem. To our knowledge, this is the first time that Dynamic Bayesian Networks have been exploited in modeling the connectivity and interactivity among brain regions from fMRI data. This framework for exploring the functional information of fMRI data provides a novel approach to revealing brain connectivity and interactivity, and provides an independent test for the effect of psychopathology on brain function.\n\nCurrently, the DBNs use independently pre-selected brain regions, thus some other important interactivity information may have been discarded in the feature selection step. Our future work will focus on developing a dynamic neuronal circuit modeling framework that performs feature selection and DBN learning simultaneously. Due to computational limits and for clarity purposes, we explored only 5 brain regions; thus, another direction of future work is to develop a hierarchical DBN topology to comprehensively and efficiently model all implicated brain regions.\n\nReferences\n[1] S. Anders, M. Lotze, M. Erb, W. Grodd, and N. Birbaumer. 
Brain activity underlying emotional valence and arousal: A response-related fMRI study. Human Brain Mapping.\n[2] M. Brand, N. Oliver, and A. Pentland. Coupled hidden Markov models for complex action recognition. In CVPR, pages 994–999, 1996.\n[3] J. Ford, H. Farid, F. Makedon, L.A. Flashman, T.W. McAllister, V. Megalooikonomou, and A.J. Saykin. Patient classification of fMRI activation maps. In MICCAI, 2003.\n[4] N. Friedman. The Bayesian structural EM algorithm. In UAI, 1998.\n[5] N. Friedman, K. Murphy, and S. Russell. Learning the structure of dynamic probabilistic networks. In UAI, pages 139–147, 1998.\n[6] K. Friston, A. Holmes, K. Worsley, et al. Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping, 2:189–210, 1995.\n[7] Z. Ghahramani. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences and Data Structures, Lecture Notes in AI, pages 168–197, 1998.\n[8] R.Z. Goldstein and N.D. Volkow. Drug addiction and its underlying neurobiological basis: Neuroimaging evidence for the involvement of the frontal cortex. American Journal of Psychiatry, 159(10):1642–1652, 2002.\n[9] R.Z. Goldstein et al. A modified role for the orbitofrontal cortex in attribution of salience to monetary reward in cocaine addiction: an fMRI study at 4T. In Human Brain Mapping Conference, 2004.\n[10] S. Gong and T. Xiang. Recognition of group activities using dynamic probabilistic networks. In ICCV, 2003.\n[11] M.I. Jordan and Y. Weiss. Graphical models: probabilistic inference. In M. Arbib (ed.), Handbook of Neural Networks and Brain Theory. MIT Press, 2002.\n[12] A.W. MacDonald et al. Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control. Science, 288(5472):1835–1838, 2000.\n[13] T.M. Mitchell, R. Hutchinson, R. Niculescu, F. Pereira, X. Wang, M. Just, and S. 
Newman. Learning to decode cognitive states from brain images. Machine Learning, 57:145–175, 2004.\n[14] K.P. Murphy. An introduction to graphical models. 2001.\n[15] P. Hojen-Sorensen, L.K. Hansen, and C.E. Rasmussen. Bayesian modelling of fMRI time series. In NIPS, 1999.\n[16] W.D. Penny, K.E. Stephan, A. Mechelli, and K.J. Friston. Comparing dynamic causal models. NeuroImage, 22(3):1157–1172, 2004.\n[17] C. Vogler and D. Metaxas. A framework for recognizing the simultaneous aspects of American Sign Language. CVIU, 81:358–384, 2001.\n[18] X. Wang, R. Hutchinson, and T.M. Mitchell. Training fMRI classifiers to detect cognitive states across multiple human subjects. In NIPS, 2003.\n[19] L. Zhang, D. Samaras, D. Tomasi, N. Volkow, and R. Goldstein. Machine learning for clinical diagnosis from functional magnetic resonance imaging. In CVPR, 2005.\n", "award": [], "sourceid": 2855, "authors": [{"given_name": "Lei", "family_name": "Zhang", "institution": null}, {"given_name": "Dimitris", "family_name": "Samaras", "institution": null}, {"given_name": "Nelly", "family_name": "Alia-klein", "institution": null}, {"given_name": "Nora", "family_name": "Volkow", "institution": null}, {"given_name": "Rita", "family_name": "Goldstein", "institution": null}]}