{"title": "Generalization to Unseen Cases", "book": "Advances in Neural Information Processing Systems", "page_first": 1129, "page_last": 1136, "abstract": null, "full_text": "Generalization to Unseen Cases\nTeemu Roos Helsinki Institute for Information Technology P.O.Box 68, 00014 Univ. of Helsinki, Finland\nteemu.roos@cs.helsinki.fi\n\n Peter Grunwald CWI, P.O.Box 94079, 1090 GB, Amsterdam, The Netherlands\npdg@cwi.nl\n\n Petri Myllymaki Helsinki Institute for Information Technology P.O.Box 68, 00014 Univ of Helsinki, Finland\npetri.myllymaki@cs.helsinki.fi\n\nHenry Tirri Nokia Research Center P.O.Box 407 Nokia Group, Finland\nhenry.tirri@nokia.com\n\nAbstract\nWe analyze classification error on unseen cases, i.e. cases that are different from those in the training set. Unlike standard generalization error, this off-training-set error may differ significantly from the empirical error with high probability even with large sample sizes. We derive a datadependent bound on the difference between off-training-set and standard generalization error. Our result is based on a new bound on the missing mass, which for small samples is stronger than existing bounds based on Good-Turing estimators. As we demonstrate on UCI data-sets, our bound gives nontrivial generalization guarantees in many practical cases. In light of these results, we show that certain claims made in the No Free Lunch literature are overly pessimistic.\n\n1\n\nIntroduction\n\nA large part of learning theory deals with methods that bound the generalization error of hypotheses in terms of their empirical errors. The standard definition of generalization error allows overlap between the training sample and test cases. When such overlap is not allowed, i.e., when considering off-training-set error [1][5] defined in terms of only previously unseen cases, usual generalization bounds do not apply. 
The off-training-set error and the empirical error sometimes differ significantly with high probability even for large sample sizes. In this paper, we show that in many practical cases, one can nevertheless bound this difference. In particular, we show that with high probability, in the realistic situation where the number of repeated cases, or duplicates, relative to the total sample size is small, the difference between the off-training-set error and the standard generalization error is also small. In this case any standard generalization error bound, no matter how it is arrived at, transforms into a similar bound on the off-training-set error. Our Contribution We show that with probability at least 1 - \delta, if there are r repetitions in the training sample, then the difference between the off-training-set error and the standard generalization error is at most of order O(\sqrt{(\log(4/\delta) + r \log n)/n}) (Thm. 2). Our main result (Corollary 1 of Thm. 1) gives a stronger non-asymptotic bound that can be evaluated numerically. The proof of Thms. 1 and 2 is based on Lemma 2, which is of independent interest, giving a new lower bound on the so-called missing mass, the total probability of as yet unseen cases. For small samples and few repetitions, this bound is significantly stronger than existing bounds based on Good-Turing estimators [6]-[8]. Properties of Our Bounds Our bounds (1) hold uniformly, (2) are distribution-free, and (3) are data-dependent, yet (4) are relevant for data-sets encountered in practice. Let us consider these properties in turn. Our bounds hold uniformly in that they hold for all hypotheses (functions from features to labels) at the same time.
Thus, unlike many bounds on standard generalization error, our bounds do not depend in any way on the richness of the hypothesis class under consideration, measured in terms of, for instance, its VC dimension, or the margin of the selected hypothesis on the training sample, or any other property of the mechanism with which the hypothesis is chosen. Our bounds are distribution-free in that they hold no matter what the (unknown) data-generating distribution is. Our bounds depend on the data: they are useful only if the number of repetitions in the training set is very small compared to the training set size. However, in machine learning practice this is often the case, as demonstrated in Sec. 3 with several UCI data-sets. Relevance Why are our results interesting? There are at least three reasons, the first two of which we discuss extensively in Sec. 4: (1) The use of off-training-set error is an essential ingredient of the No Free Lunch (NFL) theorems [1]-[5]. Our results counter-balance some of the overly pessimistic conclusions of this work. This is all the more relevant since the NFL theorems have been quite influential in shaping the thinking of both theoretical and practical machine learning researchers (see, e.g., Sec. 9.2 of the well-known textbook [5]). (2) The off-training-set error is an intuitive measure of generalization performance. Yet in practice it differs from standard generalization error (even with continuous feature spaces). Thus, we feel, it is worth studying. (3) Technically, we establish a surprising connection between off-training-set error (a concept from classification) and missing mass (a concept mostly applied in language modeling), and give a new lower bound on the missing mass. The paper is organized as follows: In Sec. 2 we fix notation, including the various error functionals considered, and state some preliminary results. In Sec. 3 we state our bounds, and we demonstrate their use on data-sets from the UCI machine learning repository.
We discuss the implications of our results in Sec. 4. Postponed proofs are in Appendix A.

2

Preliminaries and Notation

Let X be an arbitrary space of inputs, and let Y be a discrete space of labels. A learner observes a random training sample, D, of size n, consisting of the values of a sequence of input-label pairs ((X_1, Y_1), ..., (X_n, Y_n)), where (X_i, Y_i) \in X \times Y. Based on the sample, the learner outputs a hypothesis h : X \to Y that gives, for each possible input value, a prediction of the corresponding label. The learner is successful if the produced hypothesis has high probability of making a correct prediction when applied to a test case (X_{n+1}, Y_{n+1}). Both the training sample and the test case are independently drawn from a common generating distribution P. We use the following error functionals:

Definition 1 (errors). Given a training sample D of size n, the i.i.d., off-training-set, and empirical error of a hypothesis h are given by

E_iid(h) := Pr[Y \neq h(X)]   (i.i.d. error),
E_ots(h, D) := Pr[Y \neq h(X) | X \notin X_D]   (off-training-set error),
E_emp(h, D) := (1/n) \sum_{i=1}^{n} I_{\{h(X_i) \neq Y_i\}}   (empirical error),

where X_D is the set of X-values occurring in sample D, and the indicator function I_{\{\cdot\}} takes value one if its argument is true and zero otherwise.

The first one of these is just the standard generalization error of learning theory. Following [2], we call it i.i.d. error. For general input spaces and generating distributions, E_ots(h, D) may be undefined for some D. In either case, this is not a problem. First, if X_D has measure one, the off-training-set error is undefined and we need not concern ourselves with it; the relevant error measure is E_iid(h) and standard results apply^1. If, on the other hand, X_D has measure zero, the off-training-set error and the i.i.d. error are equivalent and our results (in Sec. 3 below) hold trivially. Thus, if off-training-set error is relevant, our results hold. Definition 2.
Given a training sample D, the sample coverage p(X_D) is the probability that a new X-value also occurs in the sample D: p(X_D) := Pr[X \in X_D], where X_D is as in Def. 1. The remaining probability, 1 - p(X_D), is called the missing mass.

Lemma 1. For any training set D such that E_ots(h, D) is defined, we have

a) |E_ots(h, D) - E_iid(h)| \le p(X_D),
b) E_ots(h, D) - E_iid(h) \le (p(X_D) / (1 - p(X_D))) E_iid(h).

Proof. Both bounds follow essentially from the following inequalities^2:

E_ots(h, D) = Pr[Y \neq h(X), X \notin X_D] / Pr[X \notin X_D] \le (E_iid(h) / (1 - p(X_D))) \wedge 1 \le E_iid(h) + p(X_D),

where \wedge denotes the minimum (the last inequality can be checked separately in the cases E_iid(h) \le 1 - p(X_D) and E_iid(h) > 1 - p(X_D)). This gives one direction of Lemma 1.a (an upper bound on E_ots(h, D)); the other direction is obtained by using analogous inequalities for the quantity 1 - E_ots(h, D), with Y \neq h(X) replaced by Y = h(X), which gives the upper bound 1 - E_ots(h, D) \le 1 - E_iid(h) + p(X_D). Lemma 1.b follows from the first line by ignoring the upper bound 1, and subtracting E_iid(h) from both sides. Given the value of (or an upper bound on) E_iid(h), the upper bound of Lemma 1.b may be significantly stronger than that of Lemma 1.a. However, in this work we only use Lemma 1.a for simplicity, since it depends on p(X_D) alone. The lemma would be of little use without a good enough upper bound on the sample coverage p(X_D), or equivalently, a lower bound on the missing mass. In the next section we obtain such a bound.

3

An Off-training-set Error Bound

Good-Turing estimators [6], named after Irving J. Good and Alan Turing, are widely used in language modeling to estimate the missing mass. The known small bias of such estimators, together with a rate of convergence, can be used to obtain lower and upper bounds for the missing mass [7, 8]. Unfortunately, for the sample sizes we are interested in, the lower bounds are not quite tight enough (see Fig. 1 below).
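To make the error functionals and Lemma 1 concrete, here is a small numerical illustration (our own, not from the paper; the distribution P, hypothesis h, and sample D below are invented for the example). It computes E_iid, E_ots, E_emp, and the sample coverage exactly for a fully known discrete distribution, and checks both inequalities of Lemma 1:

```python
# Toy illustration (not from the paper): exact computation of E_iid, E_ots,
# E_emp and the sample coverage p(X_D) for a small known distribution.
# P is a joint distribution on X x Y with X = {0, 1, 2} and Y = {0, 1}.
P = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.2, (2, 0): 0.1, (2, 1): 0.2}
h = {0: 0, 1: 1, 2: 1}          # a hypothesis h : X -> Y
D = [(0, 0), (1, 1), (1, 0)]    # a training sample of size n = 3

X_D = {x for x, _ in D}                                   # X-values seen in D
p_XD = sum(p for (x, _), p in P.items() if x in X_D)      # sample coverage

e_iid = sum(p for (x, y), p in P.items() if h[x] != y)    # Pr[Y != h(X)]
e_emp = sum(1 for x, y in D if h[x] != y) / len(D)        # training error
off_mass = 1.0 - p_XD                                     # missing mass
e_ots = sum(p for (x, y), p in P.items()
            if x not in X_D and h[x] != y) / off_mass     # conditional error

# Lemma 1.a: |E_ots - E_iid| <= p(X_D)
assert abs(e_ots - e_iid) <= p_XD
# Lemma 1.b: E_ots - E_iid <= p(X_D)/(1 - p(X_D)) * E_iid
assert e_ots - e_iid <= p_XD / (1.0 - p_XD) * e_iid
print(e_iid, e_ots, e_emp, p_XD)
```

Here p(X_D) = 0.7 is large because the toy sample covers most of the probability mass, so Lemma 1.a is loose; with small coverage (few repetitions, large input space) it becomes informative.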
In this section we state a new lower bound, not based on Good-Turing estimators, that is practically useful in our context. We compare this bound to the existing ones after Thm. 2. Let X_n \subseteq X be the set consisting of the n most probable individual values of X. In case there are several such subsets, any one of them will do. In case X has fewer than n elements, X_n := X. Denote for short p_n := Pr[X \in X_n]. No assumptions are made regarding the value of p_n; it may or may not be zero. The reason for us being interested in p_n is that it gives us an upper bound p(X_D) \le p_n on the sample coverage that holds for all D. We prove that when p_n is large, it is likely that a sample of size n will have several repeated X-values, so that the number of distinct X-values is less than n. This implies that if a sample with a small number of repeated X-values is observed, it is safe to assume that p_n is small, and therefore the sample coverage p(X_D) must also be small.

^1 Note, however, that a continuous feature space does not necessarily imply this; see Sec. 4.
^2 This neat proof is due to Gilles Blanchard (personal communication).

Lemma 2. The probability of obtaining a sample of size n \ge 1 with at most 0 \le r < n repeated X-values is upper-bounded by Pr[\"at most r repetitions\"] \le \Delta(n, r, p_n), where

\Delta(n, r, p_n) := \sum_{k=0}^{n} \binom{n}{k} p_n^k (1 - p_n)^{n-k} f(n, r, k),   (1)

and f(n, r, k) is given by

f(n, r, k) := 1 if k < r;  min{ \binom{k}{r} (n! / (n-k+r)!) n^{-(k-r)}, 1 } if k \ge r.

\Delta(n, r, p_n) is a non-increasing function of p_n. For a proof, see Appendix A. Given a fixed confidence level 1 - \delta, we can now define a data-dependent upper bound on the sample coverage

B(\delta, D) := min{ p : \Delta(n, r, p) \le \delta },   (2)

where r is the number of repeated X-values in D, and \Delta(n, r, p) is given by Eq. (1). Theorem 1. For any 0 \le \delta \le 1, the upper bound B(\delta, D) on the sample coverage given by Eq. (2) holds with probability at least 1 - \delta: Pr[p(X_D) \le B(\delta, D)] \ge 1 - \delta. Proof.
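The quantities in Eqs. (1)-(2) are directly computable. The following sketch (our own code; function and variable names are ours) evaluates \Delta(n, r, p) in the log domain and finds B(\delta, D) by bisection, using the monotonicity of \Delta in p stated in Lemma 2:

```python
import math

def log_f(n, r, k):
    """log of f(n,r,k) = min( C(k,r) * n!/(n-k+r)! * n^-(k-r), 1 ) for k >= r,
    and f = 1 (log f = 0) for k < r, as in Lemma 2."""
    if k < r:
        return 0.0
    log_choose = (math.lgamma(k + 1) - math.lgamma(r + 1)
                  - math.lgamma(k - r + 1))                 # log C(k, r)
    log_falling = sum(math.log(n - j) for j in range(k - r))  # log n!/(n-k+r)!
    return min(log_choose + log_falling - (k - r) * math.log(n), 0.0)

def Delta(n, r, p):
    """Eq. (1): upper bound on Pr["at most r repetitions"] when Pr[X in X_n] = p."""
    if p <= 0.0:
        return 1.0
    p = min(p, 1.0 - 1e-12)
    total = 0.0
    for k in range(n + 1):
        log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                     - math.lgamma(n - k + 1)
                     + k * math.log(p) + (n - k) * math.log(1.0 - p))
        total += math.exp(log_binom + log_f(n, r, k))
    return min(total, 1.0)

def B(delta, n, r, tol=1e-4):
    """Eq. (2): smallest p with Delta(n, r, p) <= delta, found by bisection.
    The sample D enters only through n and its repetition count r."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Delta(n, mid if False else r, mid) if False else (Delta(n, r, mid) <= delta):
            hi = mid
        else:
            lo = mid
    return hi
```

For instance, B(0.05, 1000, 0) is on the order of 0.1, matching the r = 0 curve of Fig. 1 at n = 1000; increasing r weakens (raises) the bound, as the figure shows.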
Consider fixed values of the confidence level 1 - \delta, sample size n, and probability p_n. Let R be the largest integer for which \Delta(n, R, p_n) \le \delta. By Lemma 2, the probability of obtaining at most R repetitions is upper-bounded by \delta. Thus, it is sufficient that the bound holds whenever the number of repetitions is greater than R. For any such r > R, we have \Delta(n, r, p_n) > \delta. By Lemma 2 the function \Delta(n, r, p_n) is non-increasing in p_n, and hence it must be that p_n < min{ p : \Delta(n, r, p) \le \delta } = B(\delta, D). Since p(X_D) \le p_n, the bound then holds for all r > R. Rather than the sample coverage p(X_D), the real interest is often in the off-training-set error. Using the relation between the two quantities, one gets the following corollary, which follows directly from Lemma 1.a and Thm. 1. Corollary 1 (main result: off-training-set error bound). For any 0 \le \delta \le 1, the difference between the i.i.d. error and the off-training-set error is bounded by Pr[\forall h : |E_ots(h, D) - E_iid(h)| \le B(\delta, D)] \ge 1 - \delta. Corollary 1 implies that the off-training-set error and the i.i.d. error are entangled, thus transforming all distribution-free bounds on the i.i.d. error into similar bounds on the off-training-set error. Since the probabilistic part of the result (Thm. 1) does not involve a specific hypothesis, Corollary 1 holds for all hypotheses at the same time, and does not depend on the richness of the hypothesis class in terms of, for instance, its VC dimension. Figure 1 illustrates the behavior of the bound (2) as the sample size grows. It can be seen that for a small number of repetitions the bound is nontrivial already at moderate sample sizes. Moreover, the effect of repetitions is tolerable, and it diminishes as the sample size grows. Table 1 lists values of the bound for a number of data-sets from the UCI machine learning repository [9]. In many cases the bound is about 0.10-0.20 or less.
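As a sketch of how entries like those in Table 1 arise (our own illustration, not the paper's code): one counts the repetitions r in the sample and plugs n and r into a bound. The helper below uses one natural reading of "repeated X-values" (a value occurring m times contributes m - 1 repetitions) and evaluates our reading of the closed-form bound of Thm. 2; the exact bound of Eq. (2) is tighter.

```python
import math

def count_repetitions(xs):
    """r = n - number of distinct X-values. Our reading of 'repeated X-values':
    a value occurring m times contributes m - 1 repetitions."""
    return len(xs) - len(set(xs))

def closed_form_bound(delta, n, r):
    """Closed-form bound in the style of Thm. 2:
    3 * sqrt((log(4/delta) + 2*r*log(n)) / (2*n)).
    The numerically evaluated bound of Eq. (2) is tighter than this."""
    return 3.0 * math.sqrt(
        (math.log(4.0 / delta) + 2.0 * r * math.log(n)) / (2.0 * n))

# A sample of feature vectors in which (1.0, 2.0) occurs three times:
sample = [(1.0, 2.0), (0.5, 1.5), (1.0, 2.0), (3.0, 0.0), (1.0, 2.0)]
r = count_repetitions(sample)                 # 5 - 3 = 2
print(r, closed_form_bound(0.05, 10000, 0))   # closed form is about 0.044
```

Even the loose closed form already gives a nontrivial guarantee (about 0.044 at n = 10000, r = 0, \delta = 0.05), consistent with the order of magnitude seen in Fig. 1 and Table 1.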
Theorem 2 gives an upper bound on the rate with which the bound decreases as n grows.

Figure 1: Upper bound B(\delta, D) given by Eq. (2) for samples with zero (r = 0) to ten (r = 10) repeated X-values on the 95% confidence level (\delta = 0.05). The dotted curve is an asymptotic version for r = 0 given by Thm. 2. The curve labeled 'G-T' (for r = 0) is based on Good-Turing estimators (Thm. 3 in [7]). Asymptotically, it exceeds our r = 0 bound by a factor O(log n). Bounds for the UCI data-sets in Table 1 are marked with small triangles. Note the log-scale for sample size.

Theorem 2 (a weaker bound in closed form). For all n, all p_n, and all r < n, the function B(\delta, D) has the upper bound B(\delta, D) \le 3 \sqrt{(1/(2n)) (\log(4/\delta) + 2 r \log n)}.

For a proof, see Appendix A. Let us compare Thm. 2 to the existing bounds on B(\delta, D) based on Good-Turing estimators [7, 8]. For fixed \delta, Thm. 3 in [7] gives an upper bound, as a function of n, of O(r/n + \log n / \sqrt{n}). The exact bound is drawn as the G-T curve in Fig. 1. In contrast, our bound gives O(\sqrt{(C + r \log n)/n}) for a known constant C > 0. For fixed r and increasing n, this gives an improvement over the G-T bound of order O(\log n) if r = 0, and O(\sqrt{\log n}) if r > 0. For r growing faster than O(\log n), asymptotically our bound becomes uncompetitive^3. The real advantage of our bound is that, in contrast to G-T, it gives nontrivial bounds for sample sizes and numbers of repetitions that typically occur in classification problems. For practical applications in language modeling (large samples, many repetitions), the existing G-T bound of [7] is probably preferable.

The developments in [8] are also relevant, albeit in a more indirect manner. In Thm. 10 of that paper, it is shown that the probability that the missing mass is larger than its expected value by an amount \epsilon is bounded by e^{-(e/2) n \epsilon^2}. In [7], Sec.
4, some techniques are developed to bound the expected missing mass in terms of the number of repetitions in the sample. One might conjecture that, combined with Thm. 10 of [8], these techniques can be extended to yield an upper bound on B(\delta, D) of order O(r/n + 1/\sqrt{n}) that would be asymptotically stronger than the current bound. We plan to investigate this and other potential ways to improve the bounds in future work. Any advance in this direction makes the implications of our bounds even more compelling.

^3 If data are i.i.d. according to a fixed P, then, as follows from the strong law of large numbers, r, considered as a function of n, will either remain zero forever or will be larger than cn for some c > 0, for all n larger than some n_0. In practice, our bound is still relevant because typical data-sets often have r very small compared to n (see Table 1). This is possible because apparently n \ll n_0.

Table 1: Bounds on the difference between the i.i.d. error and the off-training-set error given by Eq. (2) on confidence level 95% (\delta = 0.05). A dash (-) indicates no repetitions. Bounds greater than 0.5 are in parentheses.

DATA                             SAMPLE SIZE   REPETITIONS   BOUND
Abalone                          4177          -             0.0383
Adult                            32562         25            0.0959
Annealing                        798           8             0.3149
Artificial Characters            1000          34            (0.5112)
Breast Cancer (Diagnostic)       569           -             0.1057
Breast Cancer (Original)         699           236           (1.0)
Credit Approval                  690           -             0.0958
Cylinder Bands                   542           -             0.1084
Housing                          506           -             0.1123
Internet Advertisement           2385          441           (0.9865)
Isolated Letter Speech Recogn.   1332          -             0.0685
Letter Recognition               20000         1332          (0.6503)
Multiple Features                2000          4             0.1563
Musk                             6598          17            0.1671
Page Blocks                      5473          80            0.3509
Water Treatment Plant            527           -             0.1099
Waveform                         5000          -             0.0350

4

Discussion

Implications of Our Results

The use of off-training-set error is an essential ingredient of the influential No Free Lunch theorems [1]-[5].
Our results imply that, while the NFL theorems themselves are valid, some of the conclusions drawn from them are overly pessimistic and should be reconsidered. For instance, it has been suggested that the tools of conventional learning theory (dealing with standard generalization error) are \"ill-suited for investigating off-training-set error\" [3]. With the help of the little add-on we provide in this paper (Corollary 1), any bound on standard generalization error can be converted into a bound on off-training-set error. Our empirical results on UCI data-sets show that the resulting bound is often not essentially weaker than the original one. Thus, the conventional tools turn out not to be so 'ill-suited' after all. Secondly, contrary to what is sometimes suggested^4, we show that one can relate performance on the training sample to performance on as yet unseen cases. On the other side of the debate, it has sometimes been claimed that the off-training-set error is irrelevant to much of modern learning theory, where often the feature space is continuous. This may seem to imply that off-training-set error coincides with standard generalization error (see the remark after Def. 1). However, this is true only if the associated distribution is continuous: then the probability of observing the same X-value twice is zero. In practice, even when the feature space has continuous components, data-sets sometimes contain repetitions (e.g., Adult, see Table 1), if only for the reason that continuous features may be discretized or truncated. Repetitions occur in many data-sets, implying that off-training-set error can differ from the standard i.i.d. error. Thus, off-training-set error is relevant. Also, it measures a quantity that is in some ways close to the meaning of 'inductive generalization' in dictionaries: the words 'induction' and 'generalization' frequently refer to 'unseen instances'.
Thus, off-training-set error is not just relevant but also intuitive. This makes it all the more interesting that standard generalization bounds transfer to off-training-set error; that is the central implication of this paper.

^4 For instance, \"if we are interested in the error for [unseen cases], the NFL theorems tell us that (in the absence of prior assumptions) [empirical error] is meaningless\" [2].

Acknowledgments We thank Gilles Blanchard for useful discussions. Part of this work was carried out while the first author was visiting CWI. This work was supported in part by the Academy of Finland (Minos, Prima), Nuffic, and the IST Programme of the European Community under the PASCAL Network, IST-2002-506778. This publication only reflects the authors' views.

References
[1] Wolpert, D.H.: On the connection between in-sample testing and generalization error. Complex Systems 6 (1992) 47-94
[2] Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8 (1996) 1341-1390
[3] Wolpert, D.H.: The supervised learning no-free-lunch theorems. In: Proc. 6th Online World Conf. on Soft Computing in Industrial Applications (2001)
[4] Schaffer, C.: A conservation law for generalization performance. In: Proc. 11th Int. Conf. on Machine Learning (1994) 259-265
[5] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd Edition. Wiley, 2001
[6] Good, I.J.: The population frequencies of species and the estimation of population parameters. Biometrika 40 (1953) 237-264
[7] McAllester, D.A., Schapire, R.E.: On the convergence rate of Good-Turing estimators. In: Proc. 13th Ann. Conf. on Computational Learning Theory (2000) 1-6
[8] McAllester, D.A., Ortiz, L.: Concentration inequalities for the missing mass and for histogram rule error. Journal of Machine Learning Research 4 (2003) 895-911
[9] Blake, C., Merz, C.: UCI repository of machine learning databases. Univ. of California, Dept.
of Information and Computer Science (1998)

A Postponed Proofs

We first state two propositions that are useful in the proof of Lemma 2.

Proposition 1. Let X_m be a domain of size m, and let P_{X_m} be an associated probability distribution. The probability of getting no repetitions when sampling 1 \le k \le m items with replacement from distribution P_{X_m} is upper-bounded by Pr[\"no repetitions\" | k] \le m! / ((m-k)! m^k).

Proof Sketch of Proposition 1. By way of contradiction it is possible to show that the probability of obtaining no repetitions is maximized when P_{X_m} is uniform. After this, it is easily seen that the maximal probability equals the right-hand side of the inequality.

Proposition 2. Let X_m be a domain of size m, and let P_{X_m} be an associated probability distribution. The probability of getting at most r \ge 0 repeated values when sampling 1 \le k \le m items with replacement from distribution P_{X_m} is upper-bounded by

Pr[\"at most r repetitions\" | k] \le 1 if k < r;  min{ \binom{k}{r} (m! / (m-k+r)!) m^{-(k-r)}, 1 } if k \ge r.

Proof of Proposition 2. The case k < r is trivial. For k \ge r, the event \"at most r repetitions in k draws\" is equivalent to the event that there is at least one subset of size k - r of the X-variables {X_1, ..., X_k} such that all variables in the subset take distinct values. For a subset of size k - r, Proposition 1 implies that the probability that all values are distinct is at most (m! / (m-k+r)!) m^{-(k-r)}. Since there are \binom{k}{r} subsets of the X-variables of size k - r, the union bound implies that multiplying this by \binom{k}{r} gives the required result.

Proof of Lemma 2. The probability of getting at most r repeated X-values can be upper-bounded by considering repetitions in the maximally probable set X_n only. The probability of at most r repetitions in X_n can be broken into n + 1 mutually exclusive cases depending on how many X-values fall into the set X_n.
Thus we get

Pr[\"at most r repetitions in X_n\"] = \sum_{k=0}^{n} Pr[\"at most r repetitions in X_n\" | k] Pr[k],

where Pr[\cdot | k] denotes probability under the condition that k of the n cases fall into X_n, and Pr[k] denotes the probability of the latter occurring. Proposition 2 gives an upper bound on the conditional probability. The probability Pr[k] is given by the binomial distribution with parameter p_n: Pr[k] = Bin(k; n, p_n) = \binom{n}{k} p_n^k (1 - p_n)^{n-k}. Combining these gives the formula for \Delta(n, r, p_n). Showing that \Delta(n, r, p_n) is non-increasing in p_n is tedious but uninteresting and we only sketch the proof: It can be checked that the conditional probability given by Proposition 2 is non-increasing in k (the min operator is essential for this). From this the claim follows, since for increasing p_n the binomial distribution puts more weight on terms with large k, thus not increasing the sum.

Proof of Thm. 2. The first three factors in the definition (1) of \Delta(n, r, p_n) are equal to a binomial probability Bin(k; n, p_n), and the expectation of k is thus n p_n. By the Hoeffding bound, for all \epsilon > 0, the probability of k < n(p_n - \epsilon) is bounded by exp(-2 n \epsilon^2). Applying this bound with \epsilon = p_n / 3, we get that the probability of k < (2/3) n p_n is bounded by exp(-(2/9) n p_n^2). Combined with (1) this gives the following upper bound on \Delta(n, r, p_n):

\Delta(n, r, p_n) \le exp(-(2/9) n p_n^2) max_{k < (2/3) n p_n} f(n, r, k) + max_{k \ge (2/3) n p_n} f(n, r, k) \le exp(-(2/9) n p_n^2) + max_{k \ge (2/3) n p_n} f(n, r, k),   (3)

where the maxima are taken over integer-valued k. In the last inequality we used the fact that for all n, r, k, it holds that f(n, r, k) \le 1. Now note that for k \ge r, we can bound f(n, r, k) \le \binom{k}{r} \prod_{j=0}^{k-r-1} (n-j)/n
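Proposition 1 is also easy to check numerically. The following sketch (our own, for illustration only) compares the exact probability of drawing k distinct values, computed by brute-force enumeration over a tiny domain, against the bound m!/((m-k)! m^k), which is attained by the uniform distribution:

```python
import math
from itertools import product

def no_rep_prob(probs, k):
    """Exact Pr[all k draws distinct] for a distribution on range(len(probs)),
    by brute-force enumeration (only feasible for tiny domains)."""
    m = len(probs)
    total = 0.0
    for draw in product(range(m), repeat=k):
        if len(set(draw)) == k:       # all k drawn values distinct
            term = 1.0
            for i in draw:
                term *= probs[i]
            total += term
    return total

def prop1_bound(m, k):
    """Proposition 1: m!/((m-k)! m^k), the uniform-distribution value."""
    return math.factorial(m) / (math.factorial(m - k) * m ** k)

m, k = 4, 3
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.5, 0.3, 0.1, 0.1]
print(no_rep_prob(uniform, k))   # 0.375, equals prop1_bound(4, 3)
print(no_rep_prob(skewed, k))    # 0.228, strictly below the bound
```

Skewing the distribution makes collisions more likely, so the no-repetition probability drops below the uniform case, exactly the monotonicity that the proof sketch of Proposition 1 relies on.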