{"title": "Optimal Response Initiation: Why Recent Experience Matters", "book": "Advances in Neural Information Processing Systems", "page_first": 785, "page_last": 792, "abstract": "In most cognitive and motor tasks, speed-accuracy tradeoffs are observed: Individuals can respond slowly and accurately, or quickly yet be prone to errors. Control mechanisms governing the initiation of behavioral responses are sensitive not only to task instructions and the stimulus being processed, but also to the recent stimulus history. When stimuli can be characterized on an easy-hard dimension (e.g., word frequency in a naming task), items preceded by easy trials are responded to more quickly, and with more errors, than items preceded by hard trials. We propose a rationally motivated mathematical model of this sequential adaptation of control, based on a diffusion model of the decision process in which difficulty corresponds to the drift rate for the correct response. The model assumes that responding is based on the posterior distribution over which response is correct, conditioned on the accumulated evidence. We derive this posterior as a function of the drift rate, and show that higher estimates of the drift rate lead to (normatively) faster responding. Trial-by-trial tracking of difficulty thus leads to sequential effects in speed and accuracy. Simulations show the model explains a variety of phenomena in human speeded decision making. We argue this passive statistical mechanism provides a more elegant and parsimonious account than extant theories based on elaborate control structures.", "full_text": "Optimal Response Initiation:\n\nWhy Recent Experience Matters\n\nMatt Jones\n\nMichael C. Mozer\n\nDept. of Psychology &\n\nInstitute of Cognitive Science\n\nDept. of Computer Science &\nInstitute of Cognitive Science\n\nUniversity of Colorado\n\nmcj@colorado.edu\n\nUniversity of Colorado\nmozer@colorado.edu\n\nSachiko Kinoshita\n\nMACCS &\n\nDept. 
of Psychology\nMacquarie University\nskinoshi@maccs.mq.edu.au\n\nAbstract\n\nIn most cognitive and motor tasks, speed-accuracy tradeoffs are observed: Individuals can respond slowly and accurately, or quickly yet be prone to errors. Control mechanisms governing the initiation of behavioral responses are sensitive not only to task instructions and the stimulus being processed, but also to the recent stimulus history. When stimuli can be characterized on an easy-hard dimension (e.g., word frequency in a naming task), items preceded by easy trials are responded to more quickly, and with more errors, than items preceded by hard trials. We propose a rationally motivated mathematical model of this sequential adaptation of control, based on a diffusion model of the decision process in which difficulty corresponds to the drift rate for the correct response. The model assumes that responding is based on the posterior distribution over which response is correct, conditioned on the accumulated evidence. We derive this posterior as a function of the drift rate, and show that higher estimates of the drift rate lead to (normatively) faster responding. Trial-by-trial tracking of difficulty thus leads to sequential effects in speed and accuracy. Simulations show the model explains a variety of phenomena in human speeded decision making. We argue this passive statistical mechanism provides a more elegant and parsimonious account than extant theories based on elaborate control structures.\n\n1 Introduction\n\nConsider the task of naming the sum of two numbers, e.g., 14+8. Given sufficient time, individuals will presumably produce the correct answer. However, under speed pressure, mistakes occur. In most cognitive and motor tasks, speed-accuracy tradeoffs are observed: Individuals can respond accurately but slowly, or quickly but be prone to errors. 
Speed-accuracy tradeoffs are due to the fact that evidence supporting the correct response accumulates gradually over time (Rabbitt & Vyas, 1970; Gold & Shadlen, 2002). Responses initiated earlier in time will be based on lower-quality information, and hence less likely to be correct.\n\nOn what basis do motor systems make the decision to initiate a response? Recent theories have cast response initiation in terms of optimality (Bogacz et al., 2006), where optimality might be defined as maximizing reward per unit time, or minimizing a linear combination of latency and error rate. Although optimality might be defined in various ways, all definitions require an estimate of the probability that each candidate response will be correct. We argue that this estimate in turn requires knowledge of the task difficulty, or specifically, the rate at which evidence supporting the correct response accumulates over time. If a task is performed repeatedly, task difficulty can be estimated over a series of trials, suggesting that optimal decision processes should show sequential effects, in which performance on one trial depends on the difficulty of recent trials. We describe an experimental paradigm that offers behavioral evidence of sequential effects in response initiation.\n\nFigure 1: An illustration of the MDM. Left panel: evidence accumulation for a 20-AFC task as a function of time, with \u00b5R\u2217 = .04, \u00b5i\u2260R\u2217 = 0, \u03c3 = .15. Middle panel: the posterior over responses, P (R\u2217|X), with a = .04 and b = 0, based on the diffusion trace in the left panel. Right panel: the posterior over responses, P ( \u02c6R\u2217|X), assuming \u02c6a = .07 and \u02c6b = .02 for the same diffusion trace.\n\nWe summarize key phenomena from this paradigm, and show that these phenomena are predicted by a model of response initiation. 
Our work achieves two goals: (1) offering a better understanding of and a computational characterization of control processes involved in response initiation, and (2) offering a rational basis for sequential effects in simple stimulus-response tasks.\n\n2 Models of Decision Making\n\nNeurophysiological and psychological data (e.g., Gold & Shadlen, 2002; Ratcliff, Cherian, & Segraves, 2003) have provided converging evidence for a theory of cortical decision making, known as the diffusion decision model or DDM (see recent review by Ratcliff & McKoon, 2007). The DDM is formulated for two-alternative forced choice (2AFC) decisions. A noisy neural integrator accumulates evidence over time; positive evidence supports one response, negative evidence the other. The model\u2019s dynamics are represented by a differential equation, dx = \u00b5dt + w, where x is the accumulated evidence over time t, \u00b5 is the relative rate of evidence supporting one response over the other (positive or negative, depending on the balance of evidence), and w is white noise, w \u223c N (0, \u03c32dt). The variables \u00b5 and \u03c3 are called the drift and diffusion rates. A response is initiated when the accumulated evidence reaches a positive or negative threshold, i.e., x > \u03b8+ or x < \u03b8\u2212. The DDM implements the optimal decision strategy under various criteria of optimality (Bogacz et al., 2006).\n\nTasks involving n alternative responses (nAFC) can be modeled by generalizing the DDM to have one integrator per possible response (Bogacz & Gurney, 2007; Vickers, 1970). We refer to this generalized class of models as multiresponse diffusion models or MDM. Consider one example of an nAFC task: naming the color of a visually presented color patch. The visual system produces a trickle of evidence for the correct or target response, R\u2217. 
This evidence supports the target response via a positive drift rate, \u00b5R\u2217, whereas the drift rates of the other possible color names, {\u00b5i | i \u2260 R\u2217}, are zero. (We assume no similarity among the stimuli, e.g., an aqua patch provides no evidence for the response \u2019blue\u2019, although our model could be extended in this way.) The left panel of Figure 1 illustrates typical dynamics of the MDM. The abscissa represents processing time relative to the onset of the color patch, and each curve represents one integrator (color name).\n\n2.1 A Decision Rule for the Multiresponse Diffusion Model\n\nAlthough the DDM decision rule is optimal, no unique optimal decision rule exists for the multiple-response case (Bogacz & Gurney, 2007; Dragelin et al., 1999). Rules based on an evidence criterion\u2014analogous to the DDM decision rule\u2014turn out to be inadequate. Instead, candidate rules are based on the posterior probability that a particular response is correct given the observed evidence up to the current time, P (R\u2217 = r|X). In our notation, R\u2217 is the random variable denoting the target response, r is a candidate response among the n alternatives, and X = {xi(j\u03c4) | i = 1...n, j = 0...T/\u03c4} is a collection of discrete samples of the multivariate diffusion process observed up to the current time T. The simulations reported here use a decision rule that initiates responding when the accuracy of the response is above a threshold, \u03b8:\n\nIf \u2203r such that P (R\u2217 = r|X) \u2265 \u03b8, then initiate response r.\n\n(1)\n\nThis rule has been shown to minimize decision latency in the limit of \u03b8 \u2192 1 (Dragelin et al., 1999). However, our model\u2019s predictions are not tied to this particular rule. 
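To make the decision rule concrete, here is a minimal simulation sketch (our own illustration, not code from the paper) of an MDM governed by Equation 1. The parameter values echo Figure 1 (20 alternatives, \u00b5R\u2217 = .04, \u03c3 = .15), and the posterior uses the simple softmax form that the derivation in this section yields when the target drift rate is known exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n=20, mu_tgt=0.04, sigma=0.15, a_hat=0.04,
                   theta=0.95, dt=1.0, max_t=2000):
    """Multiresponse diffusion with the posterior-threshold rule (Eq. 1).

    a_hat is the decision stage's *assumed* target drift; with known drift,
    the posterior reduces to P(R*=r|X) ~ exp(a_hat * dx_r(T) / sigma^2).
    Returns (chosen response, decision time, correct?).
    """
    drift = np.zeros(n)
    drift[0] = mu_tgt                      # response 0 is the target R*
    x = np.zeros(n)                        # one noisy integrator per response
    for step in range(1, max_t + 1):
        x += drift * dt + rng.normal(0.0, sigma * np.sqrt(dt), n)
        logits = a_hat * x / sigma**2
        post = np.exp(logits - logits.max())
        post /= post.sum()                 # posterior over responses
        if post.max() >= theta:            # Equation 1: initiate response
            r = int(post.argmax())
            return r, step, r == 0
    return int(x.argmax()), max_t, bool(x.argmax() == 0)

trials = [simulate_trial() for _ in range(200)]
acc = np.mean([c for _, _, c in trials])
rt = np.median([t for _, t, c in trials])
print(f"accuracy = {acc:.2f}, median decision time = {rt:.0f} steps")
```

Raising theta trades speed for accuracy, as in the behavioral data; lowering a_hat below the true drift delays the posterior's approach to threshold.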
We emphasize that any sensible rule requires estimation of P (R\u2217 = r|X), and we focus on how the phenomena explained by our model derive from the properties of this posterior distribution.\n\nBaum and Veeravalli (1994; see also Bogacz & Gurney, 2007) derive P (R\u2217 = r|X) for the case where all nontargets have the same drift rate, \u00b5nontgt, the target has drift rate \u00b5tgt, and \u00b5nontgt, \u00b5tgt, and \u03c3 are known. (We introduce the \u00b5tgt and \u00b5nontgt notation to refer to these drift rates even in the absence of information about R\u2217.) We extend the Baum and Veeravalli result to the case where \u00b5tgt is an unknown random variable that must be estimated by the observer. The diffusion rate of a random walk, \u03c32, can be determined with arbitrary precision from a single observed trajectory, but the drift rate cannot (see Supplementary Material \u2013 available at http://matt.colorado.edu/papers.htm). Therefore, estimating statistics of \u00b5tgt is critical to achieving optimal performance.\n\nGiven a sequence of discrete observations from a diffusion process, x = {x(j\u03c4) | j = 0...T/\u03c4}, we can use the independence of increments to a diffusion process with known drift and diffusion rates, x(t2) \u2212 x(t1) \u223c N((t2 \u2212 t1)\u00b5, (t2 \u2212 t1)\u03c32), to calculate the likelihood of x:\n\nP (x|\u00b5, \u03c3) \u221d exp[(\u2206x(T)\u00b5 \u2212 \u00b52T/2)/\u03c32],\n\n(2)\n\nwhere \u2206x(T) = x(T) \u2212 x(0) is a sufficient statistic for estimating \u00b5.\n\nConsider the case where the drift rate of the target is a random variable, \u00b5tgt \u223c N (a, b2), and the drift rate of all nontargets, \u00b5nontgt, is zero. Using Equation 2 and integrating out \u00b5tgt, the posterior over response alternatives can be determined (see Supplementary Material):\n\nP (R\u2217 = r|X, a, b) \u221d exp[(b2\u2206xr(T)2 + 2a\u03c32\u2206xr(T)) / (2\u03c32(\u03c32 + Tb2))].\n\n(3)\n\nThe middle panel of Figure 1 shows P (R\u2217|X, a, b), as a function of processing time for the diffusion trace in the left panel, when the true drift rate is known (a = \u00b5tgt and b = 0).\n\n2.2 Estimating Drift\n\nTo recap, we have argued that optimal response initiation in nAFC tasks requires calculation of the posterior response distribution, which in turn depends on assumptions about the drift rate of the target response. We proposed a decision rule based on a probabilistic framework (Equations 1 and 3) that permits uncertainty in the drift rate, but requires a characterization of the prior distribution of this variable.\n\nWe assume that the parameters of this distribution, a and b, are unknown. Consequently, the observer cannot compute P (R\u2217|X), but must use an approximation, P ( \u02c6R\u2217|X), based on estimates \u02c6a and \u02c6b. When \u00b5tgt is not representative of the assumed distribution N (\u02c6a, \u02c6b2), performance of the model will be impaired, as illustrated by a comparison of the center and right panels of Figure 1. In the center panel, \u00b5tgt = .04 is known; in the right panel, \u00b5tgt is not representative of the assumed distribution. The consequence of this mismatch is that\u2014for the criterion indicated by the dashed horizontal line\u2014the model chooses the wrong response.\n\nWe turn now to the estimation of the model\u2019s drift distribution parameters, \u02c6a and \u02c6b. Consider a sequence of trials, k = 1...K, in which the same decision task is performed with different stimuli, and the drift rate of the target response on trial k is \u00b5(k). 
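For intuition, Equation 3 is cheap to evaluate numerically. A small sketch (our own, with made-up evidence values) compares the posterior when the drift is effectively known (b \u2192 0) against the posterior under drift uncertainty:

```python
import numpy as np

def posterior(dx, T, a, b, sigma):
    """Equation 3: P(R*=r | X, a, b) given accumulated evidence dx[r] at time T."""
    logits = (b**2 * dx**2 + 2 * a * sigma**2 * dx) / (2 * sigma**2 * (sigma**2 + T * b**2))
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Hypothetical evidence totals; response 0 has accrued the most evidence.
dx = np.array([3.0, 0.5, -0.2, 0.1, -1.0])
p_known = posterior(dx, T=100, a=0.04, b=1e-9, sigma=0.15)  # b -> 0: known drift
p_unc   = posterior(dx, T=100, a=0.04, b=0.02, sigma=0.15)  # uncertain drift
print(p_known.round(3))
print(p_unc.round(3))
```

With these particular values, the uncertain-drift posterior is noticeably less extreme than the known-drift one, so under Equation 1 the same evidence would support a later response.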
Following each trial, the drift rate can also be estimated: \u02c6\u00b5tgt(k) = \u2206xR\u2217(Tk)/Tk, where Tk is the time taken to respond on trial k. If the task environment changes slowly, the drift rates over trials will be autocorrelated, and the drift distribution parameters on trial k can be estimated from past trial history, {\u02c6\u00b5tgt(1)...\u02c6\u00b5tgt(k \u2212 1)}. The weighting of past history should be based on the strength of the autocorrelation. Using maximum likelihood estimation of a and b with an exponential weighting on past history, one obtains\n\n\u02c6a(k) = v1(k)/v0(k), and \u02c6b(k) = [v2(k)/v0(k) \u2212 \u02c6a(k)2]0.5,\n\n(4)\n\nwhere k is an index over trials, and the {vi(k)} are moment statistics of the drift distribution, updated following each trial using an exponential weighting constant, \u03bb \u2208 [0, 1]:\n\nvi(k) = \u03bbvi(k \u2212 1) + \u02c6\u00b5tgt(k \u2212 1)i.\n\n(5)\n\nThis update rule is an efficient approximation to full hierarchical Bayesian inference of a and b. When combined with Equations 1 and 3 it determines the model\u2019s response on the current trial.\n\n3 The Blocking Effect\n\nThe optimal decision framework we have proposed naturally leads to the prediction that performance on the current trial is influenced by drift rates observed on recent trials. Because drift rates determine the signal-to-noise ratio of the diffusion process, they reflect the difficulty of the task at hand. Thus, the framework predicts that an optimal decision maker should show sequential effects based on recent trial difficulty. We now turn to behavioral data consistent with this prediction.\n\nIn any behavioral task, some items are intrinsically easier than others, e.g., 10+3 is easier than 5+8, whether due to practice or the number of cognitive operations required to determine the sum. By definition, individuals have faster response times (RTs) and lower error rates to easy items. 
However, the RTs and error rates are modulated by the composition of a trial block. Consider an experimental paradigm consisting of three blocks: just easy items (pure easy), just hard items (pure hard), and a mixture of both in random order (mixed). When presented in a mixed block, easy items slow down relative to a pure block and hard items speed up. This phenomenon, known as the blocking effect (not to be confused with blocking in associative learning), suggests that the response-initiation processes use information not only from the current stimulus, but also from the stimulus environment in which it is operating. Table 1 shows a typical blocking result for a word-reading task, where word frequency is used to manipulate difficulty. We summarize the central, robust phenomena of the blocking-effect literature (e.g., Kiger & Glass, 1981; Lupker, Brown & Columbo, 1997; Lupker, Kinoshita, Coltheart, & Taylor, 2000; Taylor & Lupker, 2001).\n\nP1. Blocking effects occur across diverse paradigms, including naming, arithmetic verification and calculation, target search, and lexical decision. They are obtained when stimulus or response characteristics alternate from trial to trial. Thus, the blocking effect is not associated with a specific stimulus or response pathway, but rather is a general phenomenon of response initiation.\n\nP2. A signature of the effect concerns the relative magnitudes of easy-item slowdown and hard-item speedup. Typically, slowdown and speedup are of equal magnitude. Significantly more speedup than slowdown is never observed. However, in some paradigms (e.g., lexical decision, priming) significantly more slowdown than speedup can be observed.\n\nP3. The RT difference between easy and hard items does not fully disappear in mixed blocks. Thus, RT depends on both the stimulus type and the composition of the block.\n\nP4. 
Speed-accuracy tradeoffs are observed: A drop in error rate accompanies easy-item slowdown, and a rise in error rate accompanies hard-item speedup.\n\nP5. The effects of stimulus history are local, i.e., the variability in RT on trial k due to trial k \u2212 l decreases rapidly with l. Dependencies for l > 2 are not statistically reliable (Taylor & Lupker, 2001), although the experiments may not have had sufficient power to detect weak dependencies.\n\nP6. Overt responses are necessary for obtaining blocking effects, but overt errors are not.\n\nTable 1: RTs and Error Rates for Blocking study of Lupker, Brown, & Columbo (1997, Expt. 3)\n\n       Pure Block       Mixed Block      Difference\nEasy   488 ms (3.6%)    513 ms (1.8%)    +25 ms (-1.8%)\nHard   583 ms (12.0%)   559 ms (12.2%)   -24 ms (+0.2%)\n\n4 Explanations for the Blocking Effect\n\nThe blocking effect demonstrates that the response time depends not only on information accruing from the current stimulus, but also on recent stimuli in the trial history. Therefore, any explanation of the blocking effect must specify the manner by which response initiation processes are sensitive to the composition of a block. Various mechanisms of control adaptation have been proposed.\n\nDomain-specific mechanisms. Many of the proposed mechanisms are domain-specific. For example, Rastle and Coltheart (1999) describe a model with two routes to naming, one lexical and one nonlexical, and posit that the composition of a block affects the emphasis that is placed on the output of one route versus the other. Because of the ubiquity of blocking effects across tasks, domain-specific accounts are not compelling. Parsimony is achieved only if the adaptation mechanism is localized to a stage of response initiation common across stimulus-response tasks.\n\nRate of convergence. Kello and Plaut (2003) have proposed that control processes adjust a gain parameter on units in a dynamical connectionist model. 
Increasing the gain results in more rapid convergence, but also a higher error rate. Simulations of this model have explained the basic blocking effect, but not the complete set of phenomena we listed previously. Of greater concern is the fact that the model predicts the time taken to utter the response (when the response mode is verbal) decreases with increased speed pressure, which does not appear to be true (Damian, 2003).\n\nEvidence criterion. A candidate mechanism with intuitive appeal is the trial-to-trial adjustment of an evidence criterion in the MDM, such that the easier the previous trials are, the lower the criterion is set. This strategy results in the lowest criterion in a pure-easy block, intermediate in a mixed block, and highest in a pure-hard block. Because a higher criterion produces slower RTs and lower error rates, this leads to slowdown of easy items and speedup of hard items in a mixed block. Nonetheless, there are four reasons for being skeptical about an account of the blocking effect based on adjustment of an evidence criterion. (1) From a purely computational perspective, the optimality\u2014or even the behavioral robustness\u2014of an MDM with an evidence criterion has not been established. (2) Taylor and Lupker (2001) illustrate that adaptation of an evidence criterion can\u2014at least in some models\u2014yield incorrect predictions concerning the blocking effect. (3) Strayer and Kramer (1994) attempted to model the blocking effect for a 2AFC task using an adaptive response criterion in the DDM. Their account fit data, but had a critical shortcoming: They needed to allow different criteria for easy and hard items in a mixed block, which makes no sense because the trial type was not known in advance, and setting differential criteria depends on knowing the trial type. (4) On logical grounds, the relative importance of speed versus accuracy should be determined by task instructions and payoffs. 
Item difficulty is an independent and unrelated factor. Consistent with this logical argument is the finding that manipulating instructions to emphasize speed versus accuracy does not produce the same pattern of effects as altering the composition of a block (Dorfman & Glanzer, 1988).\n\n5 Our Account: Sequential Estimation of Task Difficulty\n\nHaving argued that existing accounts of the blocking effect are inadequate, we return to our analysis of nAFC tasks, and show that it provides a parsimonious account of blocking effects. Our account is premised on the assumption that response initiation processes are in some sense optimal. Regardless of the specific optimality criterion, optimal response initiation requires an estimate of accuracy, specifically, the probability that a response will be correct conditioned on the evidence accumulated thus far, P (R\u2217 = r|X). As we argue above, estimation of this probability requires knowledge of the difficulty (drift) of the correct response, and recent trial history can provide this information.\n\nThe response posterior, P (R\u2217 = r|X), under our generative model of the task environment (Equation 3) predicts a blocking effect. To see this clearly, consider the special case where uncertainty in \u00b5tgt is negligible, i.e., b \u2192 0, which simplifies Equation 3 to P (R\u2217 = r|X) \u221d exp[a\u2206xr(T)/\u03c32]. This expression is a Gibbs distribution with temperature \u03c32/a. As the temperature is lowered, the entropy drops, and the probabilities become more extreme. Thus, larger values of a lead to faster responses, because the greater expected signal-to-noise ratio makes evidence more reliable. How does this fact relate to the blocking effect? Easy items have, by definition, a higher mean drift than hard items; therefore, the estimated drift in the easy condition will be greater than in the hard condition, E[\u02c6aE] > E[\u02c6aH]. 
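The predicted ordering of drift estimates across block types can be checked directly with the moment estimator of Equations 4 and 5. A sketch (our own; the drift means and the decay \u03bb are arbitrary illustrative choices, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

def a_hat_series(drifts, lam=0.8):
    """Track a_hat over a trial sequence via the moment updates of Eqs. 4-5."""
    v = np.zeros(3)
    out = []
    for mu in drifts:
        v = lam * v + mu ** np.arange(3)   # Eq. 5: v_i <- lambda*v_i + mu^i
        out.append(v[1] / v[0])            # Eq. 4: a_hat = v_1 / v_0
    return np.array(out)

n = 500
easy = rng.normal(0.05, 0.005, n)   # hypothetical easy-item target drifts
hard = rng.normal(0.03, 0.005, n)   # hypothetical hard-item target drifts
mixed = np.where(rng.random(n) < 0.5,
                 rng.normal(0.05, 0.005, n),
                 rng.normal(0.03, 0.005, n))

a_E = a_hat_series(easy).mean()
a_M = a_hat_series(mixed).mean()
a_H = a_hat_series(hard).mean()
print(f"E[a_E] = {a_E:.3f} > E[a_M] = {a_M:.3f} > E[a_H] = {a_H:.3f}")
```

With response time decreasing in the estimated drift, this ordering produces easy-item slowdown and hard-item speedup in mixed blocks.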
Any learning rule for \u02c6a based on recent history will yield an estimated drift in the mixed condition between those of the easy and hard conditions, i.e., E[\u02c6aE] > E[\u02c6aM ] > E[\u02c6aH]. With response times related to \u02c6a, an easy item will slow down in the mixed condition relative to the pure, and a hard item will speed up.\n\nAlthough we could fit behavioral data (e.g., Table 1) quantitatively, such fits add no support for the model beyond a qualitative fit. The reason lies in the mapping of model decision times to human response latencies. An affine transform must be allowed, scaling time in the model to real-world time, and also allowing for a fixed-duration stage of perceptual processing. A blocking effect of any magnitude in the model could therefore be transformed to fit any pattern of data that had the right qualitative features. We thus focus on qualitative performance of the model.\n\nFigure 2: Simulation of the blocking paradigm with random parameter settings. (a) Scatterplot of hard speedup vs. easy slowdown, where coloring of a cell reflects the log(frequency) with which a given simulation outcome is obtained. (b) Histogram of percentage reduction in the difference between easy and hard RTs as a result of intermixing. (c) Scatterplot of change in error rate between pure and mixed conditions for easy and hard items.\n\nThe model has four internal parameters: \u03c3 (diffusion rate), \u03bb (history decay), \u03b8 (accuracy criterion), and n (number of response alternatives). In addition, to simulate the blocking effect, we must specify the true drift distributions for easy and hard items, i.e., aE, bE, aH, and bH. (We might also allow for nonzero drift rates for some or all of the distractor responses.) To explore the robustness of the model, we performed 1200 replications of a blocking simulation, each with randomly drawn values for the eight free parameters. 
Parameters were drawn as follows: \u03c3 \u223c U(.05, .25), \u03bb \u223c 1 \u2212 1/(1 + U(1, 20)) (these values are uniform in the half-life of the exponential memory decay), n \u223c \u230aU(2, 100)\u230b, \u03b8 \u223c U(.95, .995), aH \u223c U(.01, .05), aE \u223c aH + U(.002, .02), bH \u223c (aE \u2212 aH)/U(3, 10), and bE = bH. Each replication involved simulating three conditions: pure easy, pure hard, and mixed. The pure conditions were run for 5000 trials and the mixed condition for 10000 trials. Each condition began with an additional 25 practice trials which were discarded from our analysis but were useful to eliminate the effects of initialization of \u02c6a and \u02c6b. The model parameters were not adapted following error trials. For each replication and each condition, the median response time (RT) and mean error rate were computed. We discarded from our analysis simulations in which the error rates were grossly unlike those obtained in experimental studies, specifically, where the mean error rate in any condition was above 20%, and where the error rates for easy and hard items differed by more than a factor of 10.\n\nFigure 2a shows a scatterplot comparing the speedup of hard items (from pure to mixed conditions) to the slowdown of easy items. Units are in simulation time steps. The dashed diagonal line indicates speedup comparable in magnitude to slowdown. Much of the scatter is due to sampling noise in the median RTs. The model obtains a remarkably symmetric effect: 41% of replications yield speedup > slowdown, 40% yield slowdown > speedup, and the remaining 19% yield exactly equal-sized effects. The slope of the regression line through the origin is 0.97. Thus, the model shows a key signature of the behavioral data\u2014symmetric blocking effects (Phenomenon P2).\n\nFigure 2b shows a histogram of the percentage reduction in the difference between easy and hard RTs as a result of intermixing. 
This percentage is 100 if easy RTs slow down and hard RTs speed up to become equal; the percentage is 0 if there is no slowdown of easy RTs or speedup of hard RTs. The simulation runs show a 10\u201330% reduction as a result of the blocking manipulation. This percentage is unaffected by the affine transformation required to convert simulation RTs to human RTs, and is thus directly comparable. Behavioral studies (e.g., Table 1) typically show 20\u201360% effects. Thus, the model\u2014with random parameter settings\u2014tends to underpredict human results. Nonetheless, the model shows the key property that easy RTs are still faster than hard RTs in the mixed condition (Phenomenon P3).\n\nFigure 2c shows a scatterplot of the change in error rate for easy items (from pure to mixed conditions) versus change in error rate for hard items. Consistent with the behavioral data (Phenomenon P4), a speed-accuracy tradeoff is observed: When easy items slow down in the mixed versus pure conditions, error rates drop; when hard items speed up, error rates rise. This tradeoff is expected, because block composition affects only the stopping point of the model and not the model dynamics. Thus, any speedup should yield a higher error rate, and vice versa.\n\nFigure 3: Human (black) and simulation (white) RTs for easy and hard items in a mixed block, conditional on the 0, 1, and 2 previous items (Taylor & Lupker, 2001). Last letter in the trial sequence indicates the current trial and trial order is left to right.\n\nInterestingly, the accuracy criterion is fixed across conditions in the model; the differences in error rates arise because of a mismatch between the parameters a and b used to generate trials, and the parameters \u02c6a and \u02c6b estimated from the trial sequence. 
Thus, although the criterion does not change across conditions, and the criterion is expressed in terms of accuracy (Equation 1), the block composition nonetheless affects the speed-accuracy tradeoff.\n\nAlthough the blocking effect is typically characterized by comparing performance of an item type across blocks, sequential effects within a block have also been examined. Taylor and Lupker (2001, Experiment 1) instructed participants to name high-frequency words (easy items) and nonwords (hard items). Focusing on the mixed block, Taylor and Lupker analyzed RTs conditional on the context\u2014the 0, 1, and 2 preceding items. The black bars in Figure 3 show the RTs conditional on the context. Trial k is most influenced by trial k \u2212 1, but trial k \u2212 2 modulates RTs as well. This decreasing influence of previous trials (Phenomenon P5) is well characterized by the model via the exponential-decay parameter, \u03bb (Equation 5). To model the Taylor and Lupker data, we ran a simulation with generic parameters which were not tuned to the data: aE = .05, aH = .04, bE = bH = .002, \u03c3 = .15, \u03b8 = .99, \u03bb = .5, and n = 5. We then scaled simulation RTs to human RTs with an affine transform whose two free parameters were fit to the data. The result, shown by the white bars in Figure 3, captures the important properties of the data.\n\nWe have addressed all of the key phenomena of the blocking effect except two. Phenomenon P1 concerns the fact that the effect occurs across a variety of tasks and difficulty manipulations. The ubiquity of the effect is completely consistent with our focus on general mechanisms of response initiation. The model does not make any claims about the specific domain or the cause of variation in drift rates. 
Phenomenon P6 states that overt responses are required to obtain the blocking effect. Although the model cannot lay claim to distinctions between overt and covert responses, it does require that a drift estimate, \u02c6\u00b5tgt, be obtained on each trial in order to adjust \u02c6a and \u02c6b, which leads to blocking effects. In turn, \u02c6\u00b5tgt is determined at the point in the diffusion process when a response would be initiated. Thus, the model claims that selecting a response on trial k is key to influencing performance on trial k + 1.\n\n6 Conclusions\n\nWe have argued that optimal response initiation in speeded choice tasks requires advance knowledge about the difficulty of the current decision. Difficulty corresponds to the expected rate of evidence accumulation for the target response relative to distractors. When difficulty is high, the signal-to-noise ratio of the evidence-accumulation process is low, and a rational observer will wait for more evidence before initiating a response.\n\nOur model assumes that difficulty in the current task environment is estimated from the difficulty of recent trials, under an assumption of temporal autocorrelation. This is consistent with the empirically observed blocking effect, whereby responses are slower to easy items and faster to hard items when those items are interleaved, compared to when item types are presented in separate blocks. According to our model, mixed blocks induce estimates of local difficulty that are intermediate between those in pure easy and pure hard blocks. The resultant overestimation of difficulty for easy items leads to increased decision times, while an opposite effect occurs for hard items.\n\nWe formalize these ideas in a multiresponse diffusion model of decision making. Evidence for each response accrues in a random walk, with positive drift rate \u00b5tgt for the correct response and zero drift for distractors. 
[Figure: response times for human and simulation across a sequence of easy (E) and hard (H) trials.]

Analytical derivations show that conversion of evidence to a posterior distribution over responses depends on µtgt, which acts as an inverse temperature in a Gibbs distribution. When this parameter is uncertain, with a prior estimated from recent context, error in the estimate leads to systematic bias in the response time. Underestimation of the drift rate, as with easy trials in a mixed block, leads to damping of the computed posterior and response slowdown. Overestimation, as with hard trials in a mixed block, leads to exaggeration of the posterior and response speedup.
The model successfully explains the full range of phenomena associated with the blocking effect, including the effects on both RTs and errors, the patterns of slowdown of easy items and speedup of hard items, and the detailed sequential effects of recent trials. Moreover, the model is robust to parameter settings, as our random-replication simulation shows. The model is robust in other respects as well: Its qualitative behavior does not depend on the number of response alternatives (we have tried up to 1000), the decision rule (we have also tried a criterion based on the posterior ratio between the most and next most probable responses), the estimation algorithm for â and b̂ (we have also tried a Kalman filter), and violations of assumptions of the generative model (e.g., nonzero drift rates for some of the distractors, reflecting the similarity structure of perceptual representations).
The tradeoff between speed and accuracy in decision making is a paradigmatic problem of cognitive control. Theories in cognitive science often hand the problem of control to a homunculus. When control processes are specified, they generally involve explicit, active, and sophisticated mechanisms (e.g., conflict detection; A.D. Jones et al., 2002).
Our model achieves sequential adaptation of control via a statistical mechanism that is passive and in a sense dumb; it essentially reestimates the statistical structure of the environment by updating an expectation of task difficulty. Our belief is that many aspects of cognitive control can be explained away by such passive statistical mechanisms, eventually eliminating the homunculus from cognitive science.

Acknowledgments

This research was supported by NSF grants BCS-0339103, BCS-720375, SBE-0518699, and SBE-0542013, and ARC Discovery Grant DP0556805. We thank the students in CSCI7222/CSCI4830/PSYC7782 for interesting discussions that led to this work.

References

Baum, C. W., & Veeravalli, V. (1994). A sequential procedure for multi-hypothesis testing. IEEE Trans. Inf. Theory, 40, 1994–2007.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced choice tasks. Psych. Rev., 113, 700–765.
Bogacz, R., & Gurney, K. (2007). The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Computation, 19, 442–477.
Damian, M. F. (2003). Articulatory duration in single word speech production. JEP: LMC, 29, 416–431.
Dorfman, D., & Glanzer, M. (1988). List composition effects in lexical decision and recognition memory. J. Mem. & Lang., 27, 633–648.
Gold, J. I., & Shadlen, M. N. (2002). Banburismus and the brain: Decoding the relationship between sensory stimuli, decisions and reward. Neuron, 36, 299–308.
Jones, A. D., Cho, R. Y., Nystrom, L. E., Cohen, J. D., & Braver, T. S. (2002). A computational model of anterior cingulate function in speeded response tasks: Effects of frequency, sequence, and conflict. Cogn., Aff., & Beh. Neuro., 2, 300–317.
Kello, C. T., & Plaut, D. C. (2003).
Strategic control over rate of processing in word reading: A computational investigation. J. Mem. & Lang., 48, 207–232.
Kiger, J. I., & Glass, A. L. (1981). Context effects in sentence verification. JEP: HPP, 7, 688–700.
Lupker, S. J., Brown, P., & Colombo, L. (1997). Strategic control in a naming task: Changing routes or changing deadlines? JEP: LMC, 23, 570–590.
Rabbitt, P. M. A., & Vyas, S. M. (1970). An elementary preliminary taxonomy for some errors in laboratory choice RT tasks. Acta Psych., 33, 56–76.
Rastle, K., & Coltheart, M. (1999). Serial and strategic effects in reading aloud. JEP: HPP, 25, 482–503.
Ratcliff, R., & McKoon, G. (2007). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20, 873–922.
Ratcliff, R., Cherian, A., & Segraves, M. (2003). A comparison of macaque behavior and superior colliculus neuronal activity to predictions from models of two-choice decisions. J. Neurophys., 90, 1392–1407.
Taylor, T. E., & Lupker, S. J. (2001). Sequential effects in naming: A time-criterion account. JEP: LMC, 27, 117–138.