{"title": "On the Accuracy of Bounded Rationality: How Far from Optimal Is Fast and Frugal?", "book": "Advances in Neural Information Processing Systems", "page_first": 1177, "page_last": 1184, "abstract": null, "full_text": "On the Accuracy of Bounded Rationality: How Far from Optimal Is Fast and Frugal?\n\nMichael Schmitt Ludwig-Marum-Gymnasium Schlossgartenstrae 11 76327 Pfinztal, Germany mschmittm@googlemail.com\n\nLaura Martignon Institut fur Mathematik und Informatik Padagogische Hochschule Ludwigsburg Reuteallee 46, 71634 Ludwigsburg, Germany martignon@ph-ludwigsburg.de\n\nAbstract\nFast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision making with limited resources. Take-thebest searches for a sufficiently good ordering of cues (features) in a task where objects are to be compared lexicographically. We investigate the complexity of the problem of approximating optimal cue permutations for lexicographic strategies. We show that no efficient algorithm can approximate the optimum to within any constant factor, if P = NP. We further consider a greedy approach for building lexicographic strategies and derive tight bounds for the performance ratio of a new and simple algorithm. This algorithm is proven to perform better than take-the-best.\n\n1\n\nIntroduction\n\nIn many circumstances the human mind has to make decisions when time and knowledge are limited. Cognitive psychology categorizes human judgments made under such constraints as being boundedly rational if they are \"satisficing\" (Simon, 1982) or, more generally, if they do not fall too far behind the rational standards. A class of models for human reasoning studied in the context of bounded rationality consists of simple algorithms termed \"fast and frugal heuristics\". These were the topic of major psychological research (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999). 
Great efforts have been put into testing these heuristics by empirical means in experiments with human subjects (Bröder, 2000; Bröder and Schiffer, 2003; Lee and Cummins, 2004; Newell and Shanks, 2003; Newell et al., 2003; Slegers et al., 2000) or in simulations on computers (Bröder, 2002; Hogarth and Karelaia, 2003; Nellen, 2003; Todd and Dieckmann, 2005). (See also the discussion and controversies documented in the open peer commentaries on Todd and Gigerenzer, 2000.) Among the fast and frugal heuristics there is an algorithm called \"take-the-best\" (TTB) that is considered a process model for human judgments based on one-reason decision making. Which of the two cities has a larger population: (a) Düsseldorf (b) Hamburg? This is the task originally studied by Gigerenzer and Goldstein (1996), where German cities with a population of more than 100,000 inhabitants had to be compared. The available information on each city consists of the values of nine binary cues, or attributes, indicating presence or absence of a feature.

                  Hamburg   Essen   Düsseldorf   Validity
  Soccer Team        1        0         0           1
  State Capital      1        0         1          1/2
  License Plate      0        1         1           0

Table 1: Part of the German cities task of Gigerenzer and Goldstein (1996). Shown are profiles and validities of three cues for three cities. Cue validities are computed from the data as given here. The original data has different validities but the same cue ranking.

The cues being used are, for instance, whether the city is a state capital, whether it is indicated on car license plates by a single letter, or whether it has a soccer team in the national league. The judgment which city is larger is made on the basis of the two binary vectors, or cue profiles, representing the two cities. TTB performs a lexicographic strategy, comparing the cues one after the other and using the first cue that discriminates as the one reason to yield the final decision. 
For instance, if one city has a university and the other does not, TTB would infer that the first city is larger than the second. If the cue values of both cities are equal, the algorithm passes on to the next cue. TTB examines the cues in a certain order. Gigerenzer and Goldstein (1996) introduced ecological validity as a numerical measure for ranking the cues. The validity of a cue is a real number in the interval [0, 1] that is computed in terms of the known outcomes of paired comparisons. It is defined as the number of pairs the cue discriminates correctly (i.e., where it makes a correct inference) divided by the number of pairs it discriminates (i.e., where it makes an inference, be it right or wrong). TTB always chooses a cue with the highest validity, that is, it \"takes the best\" among those cues not yet considered. Table 1 shows cue profiles and validities for three cities. The ordering defined by the size of their population is given by {⟨Düsseldorf, Essen⟩, ⟨Düsseldorf, Hamburg⟩, ⟨Essen, Hamburg⟩}, where a pair ⟨a, b⟩ indicates that a has fewer inhabitants than b. As an example for calculating the validity, the state-capital cue distinguishes the first and the third pair but is correct only on the latter. Hence, its validity has value 1/2. The order in which the cues are ranked is crucial for the success or failure of TTB. In the example of Düsseldorf and Hamburg, the car-license-plate cue would yield that Düsseldorf (D) is larger than Hamburg (HH), whereas the soccer-team cue would correctly favor Hamburg. Thus, how successful a lexicographic strategy is in a comparison task consisting of a partial ordering of cue profiles depends on how well the cue ranking minimizes the number of incorrect comparisons. Specifically, the accuracy of TTB relies on the degree of optimality achieved by the ranking according to decreasing cue validities. 
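The validity computation on the Table 1 data can be sketched as follows (a minimal Python sketch of ours, not code from the paper; the cue values are transcribed from Table 1 and all helper names are our own):

```python
# Minimal sketch (ours): cue validities and the TTB ranking on the Table 1 data.
# Profiles list cue values in the order: soccer team, state capital, license plate.
profile = {
    'Hamburg': (1, 1, 0),
    'Essen': (0, 0, 1),
    'Duesseldorf': (0, 1, 1),
}
# Each pair (a, b) states that city a has fewer inhabitants than city b.
pairs = [('Duesseldorf', 'Essen'), ('Duesseldorf', 'Hamburg'), ('Essen', 'Hamburg')]

def validity(cue):
    # correctly discriminated pairs divided by all discriminated pairs
    correct = discriminated = 0
    for a, b in pairs:
        va, vb = profile[a][cue], profile[b][cue]
        if va != vb:
            discriminated += 1
            correct += vb > va  # the cue infers that the city with value 1 is larger
    return correct / discriminated if discriminated else None

vals = [validity(c) for c in range(3)]  # [1.0, 0.5, 0.0]
# TTB examines the cues in order of decreasing validity:
ttb_ranking = sorted(range(3), key=lambda c: -vals[c])  # [0, 1, 2]
```

On this data the ranking agrees with the column order of Table 1: soccer team first, then state capital, then license plate.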
For TTB and the German cities task, computer simulations have shown that TTB discriminates at least as accurately as other models (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999; Todd and Dieckmann, 2005). TTB made as many correct inferences as standard algorithms proposed by cognitive psychology and even outperformed some of them. Partial results concerning the accuracy of TTB compared to the accuracy of other strategies have been obtained analytically by Martignon and Hoffrage (2002). Here we subject the problem of finding optimal cue orderings to a rigorous theoretical analysis employing methods from the theory of computational complexity (Ausiello et al., 1999). Obviously, TTB runs in polynomial time: given a list of ordered pairs, it computes all cue validities in polynomially many computing steps in terms of the size of the list. We define the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY as the task of minimizing the number of incorrect inferences for the lexicographic strategy on a given list of pairs. We show that, unless P = NP, there is no polynomial-time approximation algorithm that computes solutions for MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY that are only a constant factor worse than the optimum. This means that the approximation factor, or performance ratio, must grow with the size of the problem. As an extension of TTB we consider an algorithm for finding cue orderings that has been called \"TTB by Conditional Validity\" in the context of bounded rationality. It is based on the greedy method, a principle widely used in algorithm design. This greedy algorithm runs in polynomial time, and we derive tight bounds for it, showing that it approximates the optimum with a performance ratio proportional to the number of cues. 
An important consequence of this result is a guarantee that for those instances that have a solution discriminating all pairs correctly, the greedy algorithm always finds a permutation attaining this minimum. We are not aware that this quality has been established for any of the previously studied heuristics for paired comparison. In addition, we show that TTB does not have this property, concluding that the greedy method of constructing cue permutations performs provably better than TTB. For a more detailed account and further results we refer to the complete version of this work (Schmitt and Martignon, 2006).

2 Lexicographic Strategies

A lexicographic strategy is a method for comparing elements of a set B ⊆ {0, 1}^n. Each component 1, ..., n of these vectors is referred to as a cue. Given a, b ∈ B, where a = (a_1, ..., a_n) and b = (b_1, ..., b_n), the lexicographic strategy searches for the smallest cue index i ∈ {1, ..., n} such that a_i and b_i are different. The strategy then outputs one of \"<\" or \">\" according to whether a_i < b_i or a_i > b_i, assuming the usual order 0 < 1 of the truth values. If no such cue exists, the strategy returns \"=\". Formally, let diff : B × B → {1, ..., n + 1} be the function where diff(a, b) is the smallest cue index on which a and b differ, or n + 1 if they are equal, that is,

    diff(a, b) = min({i : a_i ≠ b_i} ∪ {n + 1}).

Lexicographic strategies may take into account that the cues come in an order that is different from 1, ..., n. Let π : {1, ..., n} → {1, ..., n} be a permutation of the cues. It gives rise to a mapping π̄ : {0, 1}^n → {0, 1}^n that permutes the components of Boolean vectors by π̄(a_1, ..., a_n) = (a_π(1), ..., a_π(n)). As π̄ is uniquely defined given π, we simplify the notation and write π also for π̄. The lexicographic strategy under cue permutation π passes through the cues in the order π(1), . . .
, π(n), that is, it computes the function S_π : B × B → {\"<\", \"=\", \">\"} defined as

    S_π(a, b) = S(π(a), π(b)).

Here, S : B × B → {\"<\", \"=\", \">\"} is the function computed by the lexicographic strategy without permutation:

    S(a, b) = \"<\"  if diff(a, b) ≤ n and a_{diff(a,b)} < b_{diff(a,b)},
    S(a, b) = \">\"  if diff(a, b) ≤ n and a_{diff(a,b)} > b_{diff(a,b)},
    S(a, b) = \"=\"  otherwise.

The problem we study is that of finding a cue permutation that minimizes the number of incorrect comparisons in a given list of element pairs using the lexicographic strategy. An instance of this problem consists of a set B of elements and a set of pairs L ⊆ B × B. Each pair ⟨a, b⟩ ∈ L represents an inequality a ≤ b. Given a cue permutation π, we say that the lexicographic strategy under π infers the pair ⟨a, b⟩ correctly if S_π(a, b) ∈ {\"<\", \"=\"}; otherwise the inference is incorrect. The task is to find a permutation π such that the number of incorrect inferences in L using S_π is minimal, that is, a permutation π that minimizes

    INCORRECT(π, L) = |{⟨a, b⟩ ∈ L : S_π(a, b) = \">\"}|.

3 Approximability of Optimal Cue Permutations

A large class of optimization problems, denoted APX, can be solved efficiently if the solution is required to be only a constant factor worse than the optimum (see, e.g., Ausiello et al., 1999). Here, we prove that, if P ≠ NP, there is no polynomial-time algorithm whose solutions yield a number of incorrect comparisons that is at most a constant factor larger than the minimal number possible. It follows that the problem of approximating the optimal cue permutation is even harder than any problem in APX. The optimization problem is formally stated as follows.

MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY
Instance: A set B ⊆ {0, 1}^n and a set L ⊆ B × B.
Solution: A permutation π of the cues of B.
Measure: The number of incorrect inferences in L for the lexicographic strategy under cue permutation π, that is, INCORRECT(π, L). 
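The functions diff, S, S_π, and the objective INCORRECT can be sketched directly (a minimal Python sketch of ours, not code from the paper; we use 0-based cue indices for convenience):

```python
# Minimal sketch (ours) of the lexicographic strategy and its error count.
def diff(a, b):
    # smallest (1-based) cue index where a and b differ, or n+1 if equal
    n = len(a)
    return min([i + 1 for i in range(n) if a[i] != b[i]] + [n + 1])

def S(a, b):
    # lexicographic comparison in the given cue order
    i = diff(a, b)
    if i > len(a):
        return '='
    return '<' if a[i - 1] < b[i - 1] else '>'

def S_pi(pi, a, b):
    # lexicographic comparison under cue permutation pi (0-based)
    pa = tuple(a[pi[i]] for i in range(len(a)))
    pb = tuple(b[pi[i]] for i in range(len(b)))
    return S(pa, pb)

def incorrect(pi, L):
    # each pair (a, b) in L asserts a <= b; '>' is an incorrect inference
    return sum(1 for a, b in L if S_pi(pi, a, b) == '>')
```

For the single pair ⟨(0, 1), (1, 0)⟩, the identity permutation infers it correctly while the reversed permutation does not, illustrating how the objective depends on the cue order.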
Given a real number r > 0, an algorithm is said to approximate MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r if for every instance (B, L) the algorithm returns a permutation π such that

    INCORRECT(π, L) ≤ r · opt(L),

where opt(L) is the minimal number of incorrect comparisons achievable on L by any permutation. The factor r is also known as the performance ratio of the algorithm. The following optimization problem plays a crucial role in the derivation of the lower bound for the approximability of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY.

MINIMUM HITTING SET
Instance: A collection C of subsets of a finite set U.
Solution: A hitting set for C, that is, a subset U' ⊆ U such that U' contains at least one element from each subset in C.
Measure: The cardinality of the hitting set, that is, |U'|.

MINIMUM HITTING SET is equivalent to MINIMUM SET COVER. Bellare et al. (1993) have shown that MINIMUM SET COVER cannot be approximated in polynomial time to within any constant factor, unless P = NP. Thus, if P ≠ NP, MINIMUM HITTING SET cannot be approximated in polynomial time to within any constant factor either.

Theorem 1. For every r, there is no polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r, unless P = NP.

Proof. We show that the existence of a polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within some constant factor implies the existence of a polynomial-time algorithm that approximates MINIMUM HITTING SET to within the same factor. Then the statement follows from the equivalence of MINIMUM HITTING SET with MINIMUM SET COVER and the nonapproximability of the latter (Bellare et al., 1993). 
The main part of the proof consists in establishing a specific approximation-preserving reduction, or AP-reduction, from MINIMUM HITTING SET to MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. (See Ausiello et al., 1999, for a definition of the AP-reduction.) We first define a function f that is computable in polynomial time and maps each instance of MINIMUM HITTING SET to an instance of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. Let 1 denote the n-bit vector with a 1 everywhere and 1_{i_1,...,i_ℓ} the vector with 0 in positions i_1, ..., i_ℓ and 1 elsewhere. Given the collection C of subsets of the set U = {u_1, ..., u_n}, the function f maps C to (B, L), where B ⊆ {0, 1}^{n+1} is defined as follows:

1. Let (1, 0) ∈ B.
2. For i = 1, ..., n, let (1_i, 1) ∈ B.
3. For every {u_{i_1}, ..., u_{i_ℓ}} ∈ C, let (1_{i_1,...,i_ℓ}, 1) ∈ B.

Further, the set L is constructed as

    L = {⟨(1, 0), (1_i, 1)⟩ : i = 1, ..., n} ∪ {⟨(1_{i_1,...,i_ℓ}, 1), (1, 0)⟩ : {u_{i_1}, ..., u_{i_ℓ}} ∈ C}.   (1)

In the following, a pair from the first and second set on the right-hand side of equation (1) is referred to as an element pair and a subset pair, respectively. Obviously, the function f is computable in polynomial time. It has the following property.

Claim 1. Let f(C) = (B, L). If C has a hitting set of cardinality k or less, then f(C) has a cue permutation π where INCORRECT(π, L) ≤ k.

To prove this, assume without loss of generality that C has a hitting set U' of cardinality exactly k, say U' = {u_{j_1}, ..., u_{j_k}}, and let U \ U' = {u_{j_{k+1}}, ..., u_{j_n}}. Then the cue permutation

    j_1, ..., j_k, n + 1, j_{k+1}, ..., j_n

results in no more than k incorrect inferences in L. Indeed, consider an arbitrary subset pair ⟨(1_{i_1,...,i_ℓ}, 1), (1, 0)⟩. As U' is a hitting set, one of i_1, ..., i_ℓ must occur among j_1, ..., j_k. 
Hence, the first cue that distinguishes this pair has value 0 in (1_{i_1,...,i_ℓ}, 1) and value 1 in (1, 0), resulting in a correct comparison. Further, let ⟨(1, 0), (1_i, 1)⟩ be an element pair with u_i ∉ U'. This pair is distinguished correctly by cue n + 1. Finally, each element pair ⟨(1, 0), (1_i, 1)⟩ with u_i ∈ U' is distinguished by cue i with a result that disagrees with the ordering given by L. Thus, only element pairs with u_i ∈ U' yield incorrect comparisons, and no subset pair does. Hence, the number of incorrect inferences is not larger than |U'| = k.

Next, we define a polynomial-time computable function g that maps each collection C of subsets of a finite set U and each cue permutation π for f(C) to a subset of U. Given that f(C) = (B, L), the set g(C, π) ⊆ U is defined as follows:

1. For every element pair ⟨(1, 0), (1_i, 1)⟩ ∈ L that is compared incorrectly by π, let u_i ∈ g(C, π).
2. For every subset pair ⟨(1_{i_1,...,i_ℓ}, 1), (1, 0)⟩ ∈ L that is compared incorrectly by π, let one of the elements u_{i_1}, ..., u_{i_ℓ} be in g(C, π).

Clearly, the function g is computable in polynomial time. It satisfies the following condition.

Claim 2. Let f(C) = (B, L). If INCORRECT(π, L) ≤ k, then g(C, π) is a hitting set of cardinality k or less for C.

Obviously, if INCORRECT(π, L) ≤ k, then g(C, π) has cardinality at most k. To show that it is a hitting set, assume the subset {u_{i_1}, ..., u_{i_ℓ}} ∈ C is not hit by g(C, π). Then none of u_{i_1}, ..., u_{i_ℓ} is in g(C, π). Hence, we have correct comparisons for the element pairs corresponding to u_{i_1}, ..., u_{i_ℓ} and for the subset pair corresponding to {u_{i_1}, ..., u_{i_ℓ}}. As the subset pair is distinguished correctly, one of the cues i_1, ..., i_ℓ must be ranked before cue n + 1. But then at least one of the element pairs for u_{i_1}, ..., u_{i_ℓ} yields an incorrect comparison. This contradicts the assertion that the comparisons for these element pairs are all correct. Thus, g(C, π) is a hitting set, and the claim is established. 
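The reduction can be made concrete in a few lines (a self-contained sketch of ours, not code from the paper; function and variable names are our own, cues are 0-based Python indices with cue n + 1 stored as the last component):

```python
# Sketch (ours) of the reduction f and the back-mapping g from the proof.
def ones_except(n, zeros):
    # the vector 1_{zeros}: 0 in the listed (1-based) positions, 1 elsewhere
    return tuple(0 if i + 1 in zeros else 1 for i in range(n))

def f(n, C):
    # C: list of subsets of {1..n}. Returns the pair list L of the instance f(C).
    one = ones_except(n, set()) + (0,)                       # the vector (1, 0)
    elem = {i: ones_except(n, {i}) + (1,) for i in range(1, n + 1)}
    L = [(one, elem[i]) for i in range(1, n + 1)]            # element pairs
    L += [(ones_except(n, set(s)) + (1,), one) for s in C]   # subset pairs
    return L

def lex_incorrect(pi, a, b):
    # True iff the pair (a, b) is inferred incorrectly under permutation pi
    for cue in pi:
        if a[cue] != b[cue]:
            return a[cue] > b[cue]
    return False

def g(n, C, pi, L):
    # build a hitting set from the incorrectly compared pairs
    hit = set()
    for a, b in L[:n]:               # element pair for u_i has b = (1_i, 1)
        if lex_incorrect(pi, a, b):
            hit.add(b.index(0) + 1)
    for (a, b), s in zip(L[n:], C):  # subset pairs, in the order of C
        if lex_incorrect(pi, a, b):
            hit.add(min(s))          # any element of the subset will do
    return hit
```

For example, with U = {1, 2, 3} and C = [{1, 2}, {2, 3}], the hitting set {2} yields (via Claim 1) the cue order 2, n + 1, 1, 3, which makes exactly one incorrect inference, and g recovers {2} from it.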
Assume now that there exists a polynomial-time algorithm A that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r. Consider the algorithm that, for a given instance C of MINIMUM HITTING SET as input, calls algorithm A with input (B, L) = f(C), and returns g(C, π), where π is the output provided by A. Clearly, this new algorithm runs in polynomial time. We show that it approximates MINIMUM HITTING SET to within a factor of r. By the assumed approximation property of algorithm A, we have INCORRECT(π, L) ≤ r · opt(L). Together with Claim 2, this implies that g(C, π) is a hitting set for C satisfying |g(C, π)| ≤ r · opt(L). From Claim 1 we obtain opt(L) ≤ opt(C) and, thus, |g(C, π)| ≤ r · opt(C). Thus, the proposed algorithm for MINIMUM HITTING SET violates the approximation lower bound that holds for this problem under the assumption P ≠ NP. This proves the statement of the theorem.

4 Greedy Approximation of Optimal Cue Permutations

The so-called greedy approach to the solution of an approximation problem is helpful when it is not known which algorithm performs best. It is a simple heuristic that in practice often provides satisfactory solutions. The algorithm GREEDYCUEPERMUTATION that we introduce here is based on the greedy method.

Algorithm 1 GREEDYCUEPERMUTATION
Input: a set B ⊆ {0, 1}^n and a set L ⊆ B × B
Output: a cue permutation π for n cues
  I := {1, ..., n};
  for i = 1, ..., n do
    let j ∈ I be a cue where INCORRECT(j, L) = min_{j' ∈ I} INCORRECT(j', L);
    π(i) := j;
    I := I \ {j};
    L := L \ {⟨a, b⟩ : a_j ≠ b_j}
  end for.

The idea is to select the first cue according to which single cue makes a minimum number of incorrect inferences (choosing one arbitrarily if there are two or more). 
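Algorithm 1 can be sketched as a runnable procedure (a minimal Python sketch of ours, not code from the paper; cues are 0-based and the function name is our own):

```python
# Sketch (ours) of Algorithm 1, GREEDYCUEPERMUTATION.
# Pairs (a, b) in L assert that a comes before b in the target order.
def greedy_cue_permutation(n, L):
    remaining = list(L)
    cues = set(range(n))
    pi = []
    for _ in range(n):
        # INCORRECT(j, L): pairs the single cue j would infer incorrectly
        def errors(j):
            return sum(1 for a, b in remaining if a[j] > b[j])
        j = min(cues, key=errors)  # ties broken arbitrarily
        pi.append(j)
        cues.remove(j)
        # drop pairs distinguished by j: their outcome is now settled
        # and cannot be undone by cues ranked later
        remaining = [(a, b) for a, b in remaining if a[j] == b[j]]
    return pi
```

On the single pair ⟨(0, 1), (1, 0)⟩, for instance, cue 0 makes no error while cue 1 does, so the greedy permutation starts with cue 0 and achieves zero incorrect inferences.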
After that, the algorithm removes those pairs that are distinguished by the selected cue, which is reasonable as the distinctions drawn by this cue cannot be undone by later cues. This procedure is then repeated on the set of pairs left. The description of GREEDYCUEPERMUTATION is given as Algorithm 1. It employs an extension of the function INCORRECT applicable to single cues, such that for a cue i we have

    INCORRECT(i, L) = |{⟨a, b⟩ ∈ L : a_i > b_i}|.

It is evident that Algorithm 1 runs in polynomial time, but how good is it? The least one should demand from a good heuristic is that, whenever a minimum of zero is attainable, it finds such a solution. This is indeed the case with GREEDYCUEPERMUTATION, as we show in the following result. Moreover, the result asserts a general performance ratio for the approximation of the optimum.

Theorem 2. The algorithm GREEDYCUEPERMUTATION approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of n, where n is the number of cues. In particular, it always finds a cue permutation with no incorrect inferences if one exists.

Figure 1: A set of four lexicographically ordered pairs with cue validities (1, 1/2, and 2/3) that are not in decreasing order. The cue ordering of TTB (1, 3, 2) causes an incorrect inference on the first pair. By Theorem 2, GREEDYCUEPERMUTATION finds the lexicographic ordering. [The four cue profiles shown in the figure are not legible in this transcription.]

Proof. We show by induction on n that the permutation returned by the algorithm makes a number of incorrect inferences no larger than n · opt(L). If n = 1, the optimal cue permutation is certainly found. Let n > 1. Clearly, as the incorrect inferences of a cue cannot be reversed by other cues, there is a cue j with INCORRECT(j, L) ≤ opt(L). The algorithm selects such a cue in the first round of the loop. During the remaining rounds, a permutation of n − 1 cues is constructed for the set of remaining pairs. 
Let j be the cue that is chosen in the first round, I' = {1, ..., j − 1, j + 1, ..., n}, and L' = L \ {⟨a, b⟩ : a_j ≠ b_j}. Further, let opt_{I'}(L') denote the minimum number of incorrect inferences taken over the permutations of I' on the set L'. Then, we observe that

    opt(L) ≥ opt(L') = opt_{I'}(L').

The inequality is valid because of L' ⊆ L. (Note that opt(L') refers to the minimum taken over the permutations of all cues.) The equality holds as cue j does not distinguish any pair in L'. By the induction hypothesis, rounds 2 to n of the loop determine a cue permutation π' with

    INCORRECT(π', L') ≤ (n − 1) · opt_{I'}(L').

Thus, the number of incorrect inferences made by the permutation π finally returned by the algorithm satisfies

    INCORRECT(π, L) ≤ INCORRECT(j, L) + (n − 1) · opt_{I'}(L'),

which is, by the inequalities derived above, not larger than opt(L) + (n − 1) · opt(L) = n · opt(L), as stated.

Corollary 3. On inputs that have a cue ordering without incorrect comparisons under the lexicographic strategy, GREEDYCUEPERMUTATION can be better than TTB.

Proof. Figure 1 shows a set of four lexicographically ordered pairs. According to Theorem 2, GREEDYCUEPERMUTATION comes up with the given permutation of the cues. The validities are 1, 1/2, and 2/3. Thus, TTB ranks the cues as 1, 3, 2, whereupon the first pair is inferred incorrectly.

Finally, we consider lower bounds on the performance ratio of GREEDYCUEPERMUTATION. The proof of the following claim is omitted here.

Theorem 4. The performance ratio of GREEDYCUEPERMUTATION is at least max{n/2, |L|/2}.

5 Conclusions

The result that the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY cannot be approximated in polynomial time to within any constant factor answers a long-standing question of psychological research into models of bounded rationality: How accurate are fast and frugal heuristics? 
It follows that no fast, that is, polynomial-time, algorithm can approximate the optimum well, under the widely accepted assumption that P ≠ NP. A further question is concerned with a specific fast and frugal heuristic: How accurate is TTB? The new algorithm GREEDYCUEPERMUTATION has been shown to perform provably better than TTB. In particular, it always finds perfectly accurate solutions when they exist, in contrast to TTB. With this contribution we pose a challenge to cognitive psychology: to study the relevance of the greedy method as a model for bounded rationality.

Acknowledgment. The first author has been supported in part by the Deutsche Forschungsgemeinschaft (DFG).

References

Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., and Protasi, M. (1999). Complexity and Approximation: Combinatorial Problems and Their Approximability Properties. Springer-Verlag, Berlin.

Bellare, M., Goldwasser, S., Lund, C., and Russell, A. (1993). Efficient probabilistically checkable proofs and applications to approximation. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 294-304. ACM Press, New York, NY.

Bröder, A. (2000). Assessing the empirical validity of the \"take-the-best\" heuristic as a model of human probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26:1332-1346.

Bröder, A. (2002). Take the best, Dawes' rule, and compensatory decision strategies: A regression-based classification method. Quality & Quantity, 36:219-238.

Bröder, A. and Schiffer, S. (2003). Take the best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132:277-293.

Gigerenzer, G. and Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103:650-669.

Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999). 
Simple Heuristics That Make Us Smart. Oxford University Press, New York, NY.

Hogarth, R. M. and Karelaia, N. (2003). \"Take-the-best\" and other simple strategies: Why and when they work \"well\" in binary choice. DEE Working Paper 709, Universitat Pompeu Fabra, Barcelona.

Lee, M. D. and Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the \"take the best\" and the \"rational\" models. Psychonomic Bulletin & Review, 11:343-352.

Martignon, L. and Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparison. Theory and Decision, 52:29-71.

Nellen, S. (2003). The use of the \"take the best\" heuristic under different conditions, modeled with ACT-R. In Detje, F., Dörner, D., and Schaub, H., editors, Proceedings of the Fifth International Conference on Cognitive Modeling, pages 171-176. Universitätsverlag Bamberg, Bamberg.

Newell, B. R. and Shanks, D. R. (2003). Take the best or look at the rest? Factors influencing \"one-reason\" decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:53-65.

Newell, B. R., Weston, N. J., and Shanks, D. R. (2003). Empirical tests of a fast-and-frugal heuristic: Not everyone \"takes-the-best\". Organizational Behavior and Human Decision Processes, 91:82-96.

Schmitt, M. and Martignon, L. (2006). On the complexity of learning lexicographic strategies. Journal of Machine Learning Research, 7(Jan):55-83.

Simon, H. A. (1982). Models of Bounded Rationality, Volume 2. MIT Press, Cambridge, MA.

Slegers, D. W., Brake, G. L., and Doherty, M. E. (2000). Probabilistic mental models with continuous predictors. Organizational Behavior and Human Decision Processes, 81:98-114.

Todd, P. M. and Dieckmann, A. (2005). Heuristics for ordering cue search in decision making. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 1393-1400. MIT Press, Cambridge, MA.

Todd, P. M. and Gigerenzer, G. (2000). 
Precis of \"Simple Heuristics That Make Us Smart\". Behavioral and Brain Sciences, 23:727741.\n\n\f\n", "award": [], "sourceid": 2813, "authors": [{"given_name": "Michael", "family_name": "Schmitt", "institution": null}, {"given_name": "Laura", "family_name": "Martignon", "institution": null}]}