{"title": "Convex Repeated Games and Fenchel Duality", "book": "Advances in Neural Information Processing Systems", "page_first": 1265, "page_last": 1272, "abstract": null, "full_text": "Convex Repeated Games and Fenchel Duality\n\n1\n\nShai Shalev-Shwartz1 and Yoram Singer1,2 School of Computer Sci. & Eng., The Hebrew University, Jerusalem 91904, Israel 2 Google Inc. 1600 Amphitheater Parkway, Mountain View, CA 94043, USA\n\nAbstract\nWe describe an algorithmic framework for an abstract game which we term a convex repeated game. We show that various online learning and boosting algorithms can be all derived as special cases of our algorithmic framework. This unified view explains the properties of existing algorithms and also enables us to derive several new interesting algorithms. Our algorithmic framework stems from a connection that we build between the notions of regret in game theory and weak duality in convex optimization.\n\n1\n\nIntroduction and Problem Setting\n\nSeveral problems arising in machine learning can be modeled as a convex repeated game. Convex repeated games are closely related to online convex programming (see [19, 9] and the discussion in the last section). A convex repeated game is a two players game that is performed in a sequence of consecutive rounds. On round t of the repeated game, the first player chooses a vector wt from a convex set S . Next, the second player responds with a convex function gt : S  R. Finally, the first player suffers an instantaneous loss gt (wt ). We study the game from the viewpoint of the first t player. The goal of the first player is to minimize its cumulative loss, gt (wt ). To motivate this rather abstract setting let us first cast the more familiar setting of online learning as a convex repeated game. Online learning is performed in a sequence of consecutive rounds. On round t, the learner first receives a question, cast as a vector xt , and is required to provide an answer for this question. For example, xt can be an encoding of an email message and the question is whether the email is spam or not. The prediction of the learner is performed based on an hypothesis, ht : X  Y , where X is the set of questions and Y is the set of possible answers. In the aforementioned example, Y would be {+1, -1} where +1 stands for a spam email and -1 stands for a benign one. After predicting an answer, the learner receives the correct answer for the question, denoted yt , and suffers loss according to a loss function (ht , (xt , yt )). In most cases, the hypotheses used for prediction come from a parameterized set of hypotheses, H = {hw : w  S }. For example, the set of linear classifiers, which is used for answering yes/no questions, is defined as H = {hw (x) = sign( w, x ) : w  Rn }. Thus, rather than saying that on round t the learner chooses a hypothesis, we can say that the learner chooses a vector wt and its hypothesis is hwt . Next, we note that once the environment chooses a question-answer pair (xt , yt ), the loss function becomes a function over the hypotheses space or equivalently over the set of parameter vectors S . We can therefore redefine the online learning process as follows. On round t, the learner chooses a vector wt  S , which defines a hypothesis hwt to be used for prediction. Then, the environment chooses a questionanswer pair (xt , yt ), which induces the following loss function over the set of parameter vectors, gt (w) = (hw , (xt , yt )). Finally, the learner suffers the loss gt (wt ) = (hwt , (xt , yt )). 
We have therefore described the process of online learning as a convex repeated game. In this paper we assess the performance of the first player using the notion of regret. Given a number of rounds $T$ and a fixed vector $\mathbf{u} \in S$, we define the regret of the first player as the excess loss for not consistently playing the vector $\mathbf{u}$,
$$\frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) \;-\; \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{u}) ~.$$
Our main result is an algorithmic framework for the first player which guarantees low regret with respect to any vector $\mathbf{u} \in S$. Specifically, we derive regret bounds that take the following form
$$\forall \mathbf{u} \in S, \qquad \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) \;-\; \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{u}) \;\le\; \frac{f(\mathbf{u}) + L}{\sqrt{T}} ~, \qquad (1)$$
where $f : S \to \mathbb{R}$ and $L \in \mathbb{R}_+$. Informally, the function $f$ measures the "complexity" of vectors in $S$ and the scalar $L$ is related to some generalized Lipschitz property of the functions $g_1, \ldots, g_T$. We defer the exact requirements we impose on $f$ and $L$ to later sections. Our algorithmic framework emerges from a representation of the regret bound given in Eq. (1) as an optimization problem. Specifically, we rewrite Eq. (1) as follows
$$\frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) \;\le\; \inf_{\mathbf{u} \in S}\; \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{u}) + \frac{f(\mathbf{u}) + L}{\sqrt{T}} ~. \qquad (2)$$
That is, the average loss of the first player should be bounded above by the minimum value of an optimization problem in which we jointly minimize the average loss of $\mathbf{u}$ and the "complexity" of $\mathbf{u}$ as measured by the function $f$. Note that the optimization problem on the right-hand side of Eq. (2) can only be solved in hindsight, after observing the entire sequence of loss functions. Nevertheless, writing the regret bound as in Eq. (2) implies that the average loss of the first player forms a lower bound for a minimization problem. The notion of duality, commonly used in convex optimization theory, plays an important role in obtaining lower bounds for the minimal value of a minimization problem (see for example [14]). By generalizing the notion of Fenchel duality, we are able to derive a dual optimization problem which can be optimized incrementally as the game progresses. In order to derive explicit quantitative regret bounds we make immediate use of the fact that the dual objective lower bounds the primal objective. We therefore reduce the process of playing convex repeated games to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress. By doing so we are able to tie together the primal objective value, the average loss of the first player, and the increase in the dual.

The rest of this paper is organized as follows. In Sec. 2 we establish our notation and point to a few mathematical tools that we use throughout the paper. Our main tool for deriving algorithms for playing convex repeated games is a generalization of Fenchel duality, described in Sec. 3. Our algorithmic framework is given in Sec. 4 and analyzed in Sec. 5. The generality of our framework allows us to utilize it in different problems arising in machine learning. Specifically, in Sec. 6 we underscore the applicability of our framework to online learning, and in Sec. 7 we outline and analyze boosting algorithms based on our framework. We conclude with a discussion and point to related work in Sec. 8. Due to the lack of space, some of the details are omitted from the paper and can be found in [16].

2 Mathematical Background

We denote scalars with lower case letters (e.g. $x$ and $w$), and vectors with bold face letters (e.g. $\mathbf{x}$ and $\mathbf{w}$). The inner product between vectors $\mathbf{x}$ and $\mathbf{w}$ is denoted by $\langle \mathbf{x}, \mathbf{w} \rangle$.
Sets are designated by upper case letters (e.g. $S$). The set of non-negative real numbers is denoted by $\mathbb{R}_+$. For any $k \ge 1$, the set of integers $\{1, \ldots, k\}$ is denoted by $[k]$. A norm of a vector $\mathbf{x}$ is denoted by $\|\mathbf{x}\|$. The dual norm is defined as $\|\boldsymbol{\lambda}\|_\star = \sup\{\langle \mathbf{x}, \boldsymbol{\lambda} \rangle : \|\mathbf{x}\| \le 1\}$. For example, the Euclidean norm, $\|\mathbf{x}\|_2 = (\langle \mathbf{x}, \mathbf{x} \rangle)^{1/2}$, is dual to itself, and the $\ell_1$ norm, $\|\mathbf{x}\|_1 = \sum_i |x_i|$, is dual to the $\ell_\infty$ norm, $\|\mathbf{x}\|_\infty = \max_i |x_i|$.

We next recall a few definitions from convex analysis. The reader familiar with convex analysis may proceed to Lemma 1; for a more thorough introduction see for example [1]. A set $S$ is convex if for any two vectors $\mathbf{w}_1, \mathbf{w}_2$ in $S$, the whole line segment between $\mathbf{w}_1$ and $\mathbf{w}_2$ is also within $S$. That is, for any $\alpha \in [0, 1]$ we have that $\alpha \mathbf{w}_1 + (1 - \alpha) \mathbf{w}_2 \in S$. A set $S$ is open if every point in $S$ has a neighborhood lying in $S$. A set $S$ is closed if its complement is an open set. A function $f : S \to \mathbb{R}$ is closed and convex if for any scalar $\alpha \in \mathbb{R}$, the level set $\{\mathbf{w} : f(\mathbf{w}) \le \alpha\}$ is closed and convex. The Fenchel conjugate of a function $f : S \to \mathbb{R}$ is defined as $f^\star(\boldsymbol{\theta}) = \sup_{\mathbf{w} \in S} \langle \mathbf{w}, \boldsymbol{\theta} \rangle - f(\mathbf{w})$. If $f$ is closed and convex then the Fenchel conjugate of $f^\star$ is $f$ itself. The Fenchel-Young inequality states that for any $\mathbf{w}$ and $\boldsymbol{\theta}$ we have that $f(\mathbf{w}) + f^\star(\boldsymbol{\theta}) \ge \langle \mathbf{w}, \boldsymbol{\theta} \rangle$. A vector $\boldsymbol{\lambda}$ is a sub-gradient of a function $f$ at $\mathbf{w}$ if for all $\mathbf{w}' \in S$ we have that $f(\mathbf{w}') - f(\mathbf{w}) \ge \langle \mathbf{w}' - \mathbf{w}, \boldsymbol{\lambda} \rangle$. The differential set of $f$ at $\mathbf{w}$, denoted $\partial f(\mathbf{w})$, is the set of all sub-gradients of $f$ at $\mathbf{w}$. If $f$ is differentiable at $\mathbf{w}$ then $\partial f(\mathbf{w})$ consists of a single vector which amounts to the gradient of $f$ at $\mathbf{w}$ and is denoted by $\nabla f(\mathbf{w})$. Sub-gradients play an important role in the definition of the Fenchel conjugate. In particular, the following lemma states that if $\boldsymbol{\lambda} \in \partial f(\mathbf{w})$ then the Fenchel-Young inequality holds with equality.

Lemma 1 Let $f$ be a closed and convex function and let $\partial f(\mathbf{w}')$ be its differential set at $\mathbf{w}'$. Then, for all $\boldsymbol{\lambda}' \in \partial f(\mathbf{w}')$ we have, $f(\mathbf{w}') + f^\star(\boldsymbol{\lambda}') = \langle \boldsymbol{\lambda}', \mathbf{w}' \rangle$.

A continuous function $f$ is $\sigma$-strongly convex over a convex set $S$ with respect to a norm $\|\cdot\|$ if $S$ is contained in the domain of $f$ and for all $\mathbf{v}, \mathbf{u} \in S$ and $\alpha \in [0, 1]$ we have
$$f(\alpha \mathbf{v} + (1 - \alpha)\mathbf{u}) \;\le\; \alpha f(\mathbf{v}) + (1 - \alpha) f(\mathbf{u}) - \frac{\sigma}{2}\,\alpha (1 - \alpha)\, \|\mathbf{v} - \mathbf{u}\|^2 ~. \qquad (3)$$
Strongly convex functions play an important role in our analysis primarily due to the following lemma.

Lemma 2 Let $\|\cdot\|$ be a norm over $\mathbb{R}^n$ and let $\|\cdot\|_\star$ be its dual norm. Let $f$ be a $\sigma$-strongly convex function on $S$ and let $f^\star$ be its Fenchel conjugate. Then, $f^\star$ is differentiable with $\nabla f^\star(\boldsymbol{\theta}) = \arg\max_{\mathbf{x} \in S} \langle \boldsymbol{\theta}, \mathbf{x} \rangle - f(\mathbf{x})$. Furthermore, for any $\boldsymbol{\theta}, \boldsymbol{\lambda} \in \mathbb{R}^n$ we have
$$f^\star(\boldsymbol{\theta} + \boldsymbol{\lambda}) - f^\star(\boldsymbol{\theta}) \;\le\; \langle \nabla f^\star(\boldsymbol{\theta}), \boldsymbol{\lambda} \rangle + \frac{1}{2\sigma} \|\boldsymbol{\lambda}\|_\star^2 ~.$$
Two notable examples of strongly convex functions which we use are as follows.

Example 1 The function $f(\mathbf{w}) = \frac{1}{2}\|\mathbf{w}\|_2^2$ is 1-strongly convex over $S = \mathbb{R}^n$ with respect to the $\ell_2$ norm. Its conjugate function is $f^\star(\boldsymbol{\theta}) = \frac{1}{2}\|\boldsymbol{\theta}\|_2^2$.

Example 2 The function $f(\mathbf{w}) = \sum_{i=1}^{n} w_i \log(w_i / \frac{1}{n})$ is 1-strongly convex over the probabilistic simplex, $S = \{\mathbf{w} \in \mathbb{R}_+^n : \|\mathbf{w}\|_1 = 1\}$, with respect to the $\ell_1$ norm. Its conjugate function is $f^\star(\boldsymbol{\theta}) = \log(\frac{1}{n}\sum_{i=1}^{n} \exp(\theta_i))$.
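As a small numerical sanity check of Example 2 and Lemma 2 (our own illustration, not from the paper; the constants below follow from $\sigma = 1$ and the $\ell_\infty$ dual norm), the conjugate of the relative entropy is the log of the mean exponential, its gradient is the softmax distribution, Fenchel-Young holds with equality at that gradient, and the smoothness inequality of Lemma 2 can be verified on random vectors.

import numpy as np

def f_entropy(w):
    # relative entropy to the uniform distribution (Example 2)
    return float(np.sum(w * np.log(w * len(w))))

def f_star(theta):
    # its Fenchel conjugate: log of the mean of exp(theta_i)
    return float(np.log(np.mean(np.exp(theta))))

def grad_f_star(theta):
    # argmax over the simplex of <theta, x> - f(x): the softmax distribution
    e = np.exp(theta - np.max(theta))
    return e / e.sum()

rng = np.random.default_rng(0)
theta, lam = rng.normal(size=5), rng.normal(size=5)
w = grad_f_star(theta)

# Fenchel-Young holds with equality at w = grad f*(theta) (Lemma 1)
assert abs(f_entropy(w) + f_star(theta) - np.dot(w, theta)) < 1e-9

# Lemma 2 with sigma = 1 and the l_infinity dual norm:
# f*(theta + lam) - f*(theta) <= <grad f*(theta), lam> + (1/2) ||lam||_inf^2
lhs = f_star(theta + lam) - f_star(theta)
rhs = np.dot(grad_f_star(theta), lam) + 0.5 * np.max(np.abs(lam)) ** 2
assert lhs <= rhs + 1e-9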
3 Generalized Fenchel Duality

In this section we derive our main analysis tool. We start by considering the following optimization problem,
$$\inf_{\mathbf{w} \in S} \; c\, f(\mathbf{w}) + \sum_{t=1}^{T} g_t(\mathbf{w}) ~,$$
where $c$ is a non-negative scalar. An equivalent problem is
$$\inf_{\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_T} \; c\, f(\mathbf{w}_0) + \sum_{t=1}^{T} g_t(\mathbf{w}_t) \quad \text{s.t.} \quad \mathbf{w}_0 \in S \ \text{ and } \ \forall t \in [T],\ \mathbf{w}_t = \mathbf{w}_0 ~.$$
Introducing $T$ vectors $\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T$, where each $\boldsymbol{\lambda}_t \in \mathbb{R}^n$ is a vector of Lagrange multipliers for the equality constraint $\mathbf{w}_t = \mathbf{w}_0$, we obtain the following Lagrangian
$$\mathcal{L}(\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_T, \boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T) \;=\; c\, f(\mathbf{w}_0) + \sum_{t=1}^{T} g_t(\mathbf{w}_t) + \sum_{t=1}^{T} \langle \boldsymbol{\lambda}_t, \mathbf{w}_0 - \mathbf{w}_t \rangle ~.$$
The dual problem is the task of maximizing the following dual objective value,
$$\begin{aligned}
\mathcal{D}(\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T) \;&=\; \inf_{\mathbf{w}_0 \in S,\ \mathbf{w}_1, \ldots, \mathbf{w}_T} \mathcal{L}(\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_T, \boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T) \\
&=\; -\, c \sup_{\mathbf{w}_0 \in S} \Big( \big\langle \mathbf{w}_0, -\tfrac{1}{c} \textstyle\sum_{t=1}^{T} \boldsymbol{\lambda}_t \big\rangle - f(\mathbf{w}_0) \Big) \;-\; \sum_{t=1}^{T} \sup_{\mathbf{w}_t} \big( \langle \mathbf{w}_t, \boldsymbol{\lambda}_t \rangle - g_t(\mathbf{w}_t) \big) \\
&=\; -\, c\, f^\star\!\Big( -\tfrac{1}{c} \textstyle\sum_{t=1}^{T} \boldsymbol{\lambda}_t \Big) \;-\; \sum_{t=1}^{T} g_t^\star(\boldsymbol{\lambda}_t) ~,
\end{aligned}$$
where, following the exposition of Sec. 2, $f^\star, g_1^\star, \ldots, g_T^\star$ are the Fenchel conjugate functions of $f, g_1, \ldots, g_T$. Therefore, the generalized Fenchel dual problem is
$$\sup_{\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T} \; -\, c\, f^\star\!\Big( -\tfrac{1}{c} \textstyle\sum_{t=1}^{T} \boldsymbol{\lambda}_t \Big) \;-\; \sum_{t=1}^{T} g_t^\star(\boldsymbol{\lambda}_t) ~. \qquad (4)$$
Note that when $T = 1$ and $c = 1$, the above duality is the so called Fenchel duality.

4 A Template Learning Algorithm for Convex Repeated Games

In this section we describe a template learning algorithm for playing convex repeated games. As mentioned before, we study convex repeated games from the viewpoint of the first player, which we shortly denote as P1. Recall that we would like our learning algorithm to achieve a regret bound of the form given in Eq. (2). We start by rewriting Eq. (2) as follows
$$\sum_{t=1}^{T} g_t(\mathbf{w}_t) - c\, L \;\le\; \inf_{\mathbf{u} \in S} \; c\, f(\mathbf{u}) + \sum_{t=1}^{T} g_t(\mathbf{u}) ~, \qquad (5)$$
where $c = \sqrt{T}$. Thus, up to the sublinear term $c\, L$, the cumulative loss of P1 lower bounds the optimum of the minimization problem on the right-hand side of Eq. (5). In the previous section we derived the generalized Fenchel dual of the right-hand side of Eq. (5). Our construction is based on the weak duality theorem, stating that any value of the dual problem is smaller than the optimal value of the primal problem. The algorithmic framework we propose is therefore derived by incrementally ascending the dual objective function. Intuitively, by ascending the dual objective we move closer to the optimal primal value, and therefore our performance becomes similar to the performance of the best fixed weight vector which minimizes the right-hand side of Eq. (5).

Initially, we use the elementary dual solution $\boldsymbol{\lambda}_t^1 = \mathbf{0}$ for all $t$. We assume that $\inf_{\mathbf{w}} f(\mathbf{w}) = 0$ and that for all $t$, $\inf_{\mathbf{w}} g_t(\mathbf{w}) = 0$, which imply that $\mathcal{D}(\boldsymbol{\lambda}_1^1, \ldots, \boldsymbol{\lambda}_T^1) = 0$. We assume in addition that $f$ is $\sigma$-strongly convex. Therefore, based on Lemma 2, the function $f^\star$ is differentiable. At trial $t$, P1 uses for prediction the vector
$$\mathbf{w}_t \;=\; \nabla f^\star\!\Big( -\tfrac{1}{c} \textstyle\sum_{i=1}^{T} \boldsymbol{\lambda}_i^t \Big) ~. \qquad (6)$$
After predicting $\mathbf{w}_t$, P1 receives the function $g_t$ and suffers the loss $g_t(\mathbf{w}_t)$. Then, P1 updates the dual variables as follows. Denote by $\partial_t$ the differential set of $g_t$ at $\mathbf{w}_t$, that is,
$$\partial_t \;=\; \{ \boldsymbol{\lambda} : \forall \mathbf{w} \in S,\ g_t(\mathbf{w}) - g_t(\mathbf{w}_t) \ge \langle \boldsymbol{\lambda}, \mathbf{w} - \mathbf{w}_t \rangle \} ~. \qquad (7)$$
The new dual variables $(\boldsymbol{\lambda}_1^{t+1}, \ldots, \boldsymbol{\lambda}_T^{t+1})$ are set to be any set of vectors which satisfy the following two conditions:
$$\text{(i)}\ \exists \boldsymbol{\lambda}' \in \partial_t \ \text{ s.t. } \ \mathcal{D}(\boldsymbol{\lambda}_1^{t+1}, \ldots, \boldsymbol{\lambda}_T^{t+1}) \ge \mathcal{D}(\boldsymbol{\lambda}_1^{t}, \ldots, \boldsymbol{\lambda}_{t-1}^{t}, \boldsymbol{\lambda}', \boldsymbol{\lambda}_{t+1}^{t}, \ldots, \boldsymbol{\lambda}_T^{t}) \qquad \text{(ii)}\ \forall i > t,\ \boldsymbol{\lambda}_i^{t+1} = \mathbf{0} ~. \qquad (8)$$
In the next section we show that condition (i) ensures that the increase of the dual at trial $t$ is proportional to the loss $g_t(\mathbf{w}_t)$. The second condition ensures that we can actually calculate the dual at trial $t$ without any knowledge of the yet to be seen loss functions $g_{t+1}, \ldots, g_T$. We conclude this section with two update rules that trivially satisfy the above two conditions. The first update scheme simply finds some $\boldsymbol{\lambda}' \in \partial_t$ and sets
$$\boldsymbol{\lambda}_i^{t+1} \;=\; \begin{cases} \boldsymbol{\lambda}' & \text{if } i = t \\ \boldsymbol{\lambda}_i^{t} & \text{if } i \ne t \end{cases} ~. \qquad (9)$$
The second update defines
$$(\boldsymbol{\lambda}_1^{t+1}, \ldots, \boldsymbol{\lambda}_T^{t+1}) \;=\; \operatorname*{argmax}_{\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T} \; \mathcal{D}(\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T) \quad \text{s.t.} \quad \forall i \ne t,\ \boldsymbol{\lambda}_i = \boldsymbol{\lambda}_i^{t} ~. \qquad (10)$$
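As an illustration of the template (our own instantiation, not taken verbatim from the paper), consider $f(\mathbf{w}) = \frac{1}{2}\|\mathbf{w}\|_2^2$ on $S = \mathbb{R}^n$, for which $\nabla f^\star(\boldsymbol{\theta}) = \boldsymbol{\theta}$, so Eq. (6) and the conservative update of Eq. (9) reduce to accumulating a sub-gradient of $g_t$ at $\mathbf{w}_t$; the resulting procedure coincides with online sub-gradient descent with step size $1/c$. The hinge losses below are an arbitrary example of the $g_t$'s.

import numpy as np

def play_convex_game(rounds, n, c):
    """Template algorithm of Sec. 4 instantiated with f(w) = 0.5*||w||_2^2 on S = R^n.
    `rounds` yields, per trial, a pair (g_t, subgrad_t) of callables."""
    sum_lam = np.zeros(n)            # running sum of the dual variables chosen so far
    losses = []
    for g_t, subgrad_t in rounds:
        w_t = -sum_lam / c           # Eq. (6) with grad f*(theta) = theta
        losses.append(g_t(w_t))      # suffer the instantaneous loss g_t(w_t)
        lam_t = subgrad_t(w_t)       # some lambda' in the differential set of g_t at w_t
        sum_lam += lam_t             # conservative dual update of Eq. (9)
    return losses

# Example rounds: hinge losses g_t(w) = [1 - y_t <w, x_t>]_+ and their sub-gradients.
def make_round(x_t, y_t):
    g = lambda w: max(0.0, 1.0 - y_t * np.dot(w, x_t))
    sg = lambda w: -y_t * x_t if 1.0 - y_t * np.dot(w, x_t) > 0 else np.zeros_like(x_t)
    return g, sg

rng = np.random.default_rng(1)
T, n = 100, 5
rounds = [make_round(rng.normal(size=n), rng.choice([-1.0, 1.0])) for _ in range(T)]
losses = play_convex_game(rounds, n, c=np.sqrt(T))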
5 Analysis

In this section we analyze the performance of the template algorithm given in the previous section. Our proof technique is based on monitoring the value of the dual objective function. The main result is the following lemma which gives upper and lower bounds for the final value of the dual objective function.

Lemma 3 Let $f$ be a $\sigma$-strongly convex function with respect to a norm $\|\cdot\|$ over a set $S$ and assume that $\min_{\mathbf{w} \in S} f(\mathbf{w}) = 0$. Let $g_1, \ldots, g_T$ be a sequence of convex and closed functions such that $\inf_{\mathbf{w}} g_t(\mathbf{w}) = 0$ for all $t \in [T]$. Suppose that a dual-incrementing algorithm which satisfies the conditions of Eq. (8) is run with $f$ as a complexity function on the sequence $g_1, \ldots, g_T$. Let $\mathbf{w}_1, \ldots, \mathbf{w}_T$ be the sequence of primal vectors that the algorithm generates and $\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1}$ be its final sequence of dual variables. Then, there exists a sequence of sub-gradients $\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T$, where $\boldsymbol{\lambda}_t \in \partial_t$ for all $t$, such that
$$\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{2\sigma c}\sum_{t=1}^{T} \|\boldsymbol{\lambda}_t\|_\star^2 \;\le\; \mathcal{D}(\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1}) \;\le\; \inf_{\mathbf{w} \in S} \; c\, f(\mathbf{w}) + \sum_{t=1}^{T} g_t(\mathbf{w}) ~.$$

Proof The second inequality follows directly from the weak duality theorem. Turning to the left-most inequality, denote $\Delta_t = \mathcal{D}(\boldsymbol{\lambda}_1^{t+1}, \ldots, \boldsymbol{\lambda}_T^{t+1}) - \mathcal{D}(\boldsymbol{\lambda}_1^{t}, \ldots, \boldsymbol{\lambda}_T^{t})$ and note that $\mathcal{D}(\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1})$ can be rewritten as
$$\mathcal{D}(\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1}) \;=\; \sum_{t=1}^{T} \Delta_t + \mathcal{D}(\boldsymbol{\lambda}_1^{1}, \ldots, \boldsymbol{\lambda}_T^{1}) \;=\; \sum_{t=1}^{T} \Delta_t ~, \qquad (11)$$
where the last equality follows from the fact that $f^\star(\mathbf{0}) = g_1^\star(\mathbf{0}) = \ldots = g_T^\star(\mathbf{0}) = 0$. The definition of the update implies that $\Delta_t \ge \mathcal{D}(\boldsymbol{\lambda}_1^{t}, \ldots, \boldsymbol{\lambda}_{t-1}^{t}, \boldsymbol{\lambda}_t, \mathbf{0}, \ldots, \mathbf{0}) - \mathcal{D}(\boldsymbol{\lambda}_1^{t}, \ldots, \boldsymbol{\lambda}_{t-1}^{t}, \mathbf{0}, \mathbf{0}, \ldots, \mathbf{0})$ for some sub-gradient $\boldsymbol{\lambda}_t \in \partial_t$. Denoting $\boldsymbol{\theta}_t = -\frac{1}{c}\sum_{j=1}^{t-1} \boldsymbol{\lambda}_j$, we now rewrite the lower bound on $\Delta_t$ as, $\Delta_t \ge -c\,(f^\star(\boldsymbol{\theta}_t - \boldsymbol{\lambda}_t/c) - f^\star(\boldsymbol{\theta}_t)) - g_t^\star(\boldsymbol{\lambda}_t)$. Using Lemma 2 and the definition of $\mathbf{w}_t$ we get that
$$\Delta_t \;\ge\; \langle \mathbf{w}_t, \boldsymbol{\lambda}_t \rangle - g_t^\star(\boldsymbol{\lambda}_t) - \frac{1}{2\sigma c}\|\boldsymbol{\lambda}_t\|_\star^2 ~. \qquad (12)$$
Since $\boldsymbol{\lambda}_t \in \partial_t$ and since we assume that $g_t$ is closed and convex, we can apply Lemma 1 to get that $\langle \mathbf{w}_t, \boldsymbol{\lambda}_t \rangle - g_t^\star(\boldsymbol{\lambda}_t) = g_t(\mathbf{w}_t)$. Plugging this equality into Eq. (12) and summing over $t$ we obtain that $\sum_{t=1}^{T} \Delta_t \ge \sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{2\sigma c}\sum_{t=1}^{T} \|\boldsymbol{\lambda}_t\|_\star^2$. Combining the above inequality with Eq. (11) concludes our proof.

The following regret bound follows as a direct corollary of Lemma 3.

Theorem 1 Under the same conditions of Lemma 3, denote $L = \frac{1}{T}\sum_{t=1}^{T} \|\boldsymbol{\lambda}_t\|_\star^2$. Then, for all $\mathbf{w} \in S$ we have,
$$\frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}) \;\le\; \frac{c\, f(\mathbf{w})}{T} + \frac{L}{2\sigma c} ~.$$
In particular, if $c = \sqrt{T}$, we obtain the bound,
$$\frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}) \;\le\; \frac{f(\mathbf{w}) + L/(2\sigma)}{\sqrt{T}} ~.$$
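To spell out the corollary step, chaining the two inequalities of Lemma 3 gives, for every $\mathbf{w} \in S$,
$$\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{2\sigma c}\sum_{t=1}^{T}\|\boldsymbol{\lambda}_t\|_\star^2 \;\le\; c\, f(\mathbf{w}) + \sum_{t=1}^{T} g_t(\mathbf{w}) ~,$$
so moving $\sum_t g_t(\mathbf{w})$ to the left-hand side, dividing by $T$, and substituting $L = \frac{1}{T}\sum_t \|\boldsymbol{\lambda}_t\|_\star^2$ yields the first bound of Thm. 1; setting $c = \sqrt{T}$ gives the second.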
6 Application to Online Learning

In Sec. 1 we cast the task of online learning as a convex repeated game. We now demonstrate the applicability of our algorithmic framework for the problem of instance ranking. We analyze this setting since several prediction problems, including binary classification, multiclass prediction, multilabel prediction, and label ranking, can be cast as special cases of the instance ranking problem. Recall that on each online round, the learner receives a question-answer pair. In instance ranking, the question is encoded by a matrix $X_t$ of dimension $k_t \times n$ and the answer is a vector $\mathbf{y}_t \in \mathbb{R}^{k_t}$. The semantics of $\mathbf{y}_t$ are as follows. For any pair $(i, j)$, if $y_{t,i} > y_{t,j}$ then we say that $\mathbf{y}_t$ ranks the $i$'th row of $X_t$ ahead of the $j$'th row of $X_t$. We also interpret $y_{t,i} - y_{t,j}$ as the confidence with which the $i$'th row should be ranked ahead of the $j$'th row. For example, each row of $X_t$ encompasses a representation of a movie while $y_{t,i}$ is the movie's rating, expressed as the number of stars this movie has received by a movie reviewer. The predictions of the learner are determined based on a weight vector $\mathbf{w}_t \in \mathbb{R}^n$ and are defined to be $\hat{\mathbf{y}}_t = X_t \mathbf{w}_t$.

Finally, let us define two loss functions for ranking, both of which generalize the hinge-loss used in binary classification problems. Denote by $E_t$ the set $\{(i, j) : y_{t,i} > y_{t,j}\}$. For all $(i, j) \in E_t$ we define a pair-based hinge-loss $\ell_{i,j}(\mathbf{w}; (X_t, \mathbf{y}_t)) = [(y_{t,i} - y_{t,j}) - \langle \mathbf{w}, \mathbf{x}_{t,i} - \mathbf{x}_{t,j} \rangle]_+$, where $[a]_+ = \max\{a, 0\}$ and $\mathbf{x}_{t,i}, \mathbf{x}_{t,j}$ are respectively the $i$'th and $j$'th rows of $X_t$. Note that $\ell_{i,j}$ is zero if $\mathbf{w}$ ranks $\mathbf{x}_{t,i}$ higher than $\mathbf{x}_{t,j}$ with sufficient confidence. Ideally, we would like $\ell_{i,j}(\mathbf{w}_t; (X_t, \mathbf{y}_t))$ to be zero for all $(i, j) \in E_t$. If this is not the case, we are penalized according to some combination of the pair-based losses $\ell_{i,j}$. For example, we can set $\ell(\mathbf{w}; (X_t, \mathbf{y}_t))$ to be the average over the pair losses,
$$\ell_{\mathrm{avg}}(\mathbf{w}; (X_t, \mathbf{y}_t)) \;=\; \frac{1}{|E_t|} \sum_{(i,j) \in E_t} \ell_{i,j}(\mathbf{w}; (X_t, \mathbf{y}_t)) ~.$$
This loss was suggested by several authors (see for example [18]). Another popular approach (see for example [5]) penalizes according to the maximal loss over the individual pairs,
$$\ell_{\max}(\mathbf{w}; (X_t, \mathbf{y}_t)) \;=\; \max_{(i,j) \in E_t} \ell_{i,j}(\mathbf{w}; (X_t, \mathbf{y}_t)) ~.$$
We can apply our algorithmic framework given in Sec. 4 for ranking, using for $g_t(\mathbf{w})$ either $\ell_{\mathrm{avg}}(\mathbf{w}; (X_t, \mathbf{y}_t))$ or $\ell_{\max}(\mathbf{w}; (X_t, \mathbf{y}_t))$. The following theorem provides us with a sufficient condition under which the regret bound from Thm. 1 holds for ranking as well.

Theorem 2 Let $f$ be a $\sigma$-strongly convex function over $S$ with respect to a norm $\|\cdot\|$. Denote by $L_t$ the maximum over $(i, j) \in E_t$ of $\|\mathbf{x}_{t,i} - \mathbf{x}_{t,j}\|_\star^2$. Then, for both $g_t(\mathbf{w}) = \ell_{\mathrm{avg}}(\mathbf{w}; (X_t, \mathbf{y}_t))$ and $g_t(\mathbf{w}) = \ell_{\max}(\mathbf{w}; (X_t, \mathbf{y}_t))$, the following regret bound holds
$$\forall \mathbf{u} \in S, \qquad \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{T}\sum_{t=1}^{T} g_t(\mathbf{u}) \;\le\; \frac{f(\mathbf{u}) + \frac{1}{T}\sum_{t=1}^{T} L_t / (2\sigma)}{\sqrt{T}} ~.$$
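For concreteness, here is a short sketch (our own illustration; the toy matrix and star ratings are made up) of the two ranking losses that can serve as $g_t$ in the framework above.

import numpy as np

def pair_losses(w, X_t, y_t):
    """All pair-based hinge losses l_{i,j}(w; (X_t, y_t)) over E_t = {(i,j): y_i > y_j}."""
    losses = []
    for i in range(len(y_t)):
        for j in range(len(y_t)):
            if y_t[i] > y_t[j]:
                margin = np.dot(w, X_t[i] - X_t[j])
                losses.append(max(0.0, (y_t[i] - y_t[j]) - margin))
    return losses

def loss_avg(w, X_t, y_t):
    ls = pair_losses(w, X_t, y_t)
    return sum(ls) / len(ls) if ls else 0.0

def loss_max(w, X_t, y_t):
    ls = pair_losses(w, X_t, y_t)
    return max(ls) if ls else 0.0

# A toy round: three items described by four features, rated 3, 1 and 2 stars.
X_t = np.array([[1.0, 0.0, 2.0, -1.0],
                [0.5, 1.0, 0.0,  0.0],
                [0.0, 2.0, 1.0,  1.0]])
y_t = np.array([3.0, 1.0, 2.0])
w = np.zeros(4)
g_avg, g_max = loss_avg(w, X_t, y_t), loss_max(w, X_t, y_t)   # either can serve as g_t(w)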
7 The Boosting Game

In this section we describe the applicability of our algorithmic framework to the analysis of boosting algorithms. A boosting algorithm uses a weak learning algorithm that generates weak hypotheses, whose performances are just slightly better than random guessing, to build a strong hypothesis which can attain an arbitrarily low error. The AdaBoost algorithm, proposed by Freund and Schapire [6], receives as input a training set of examples $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$ where for all $i \in [m]$, $\mathbf{x}_i$ is taken from an instance domain $\mathcal{X}$, and $y_i$ is a binary label, $y_i \in \{+1, -1\}$. The boosting process proceeds in a sequence of consecutive trials. At trial $t$, the booster first defines a distribution, denoted $\mathbf{w}_t$, over the set of examples. Then, the booster passes the training set along with the distribution $\mathbf{w}_t$ to the weak learner. The weak learner is assumed to return a hypothesis $h_t : \mathcal{X} \to \{+1, -1\}$ whose average error is slightly smaller than $\frac{1}{2}$. That is, there exists a constant $\gamma > 0$ such that,
$$\epsilon_t \;\stackrel{\mathrm{def}}{=}\; \sum_{i=1}^{m} w_{t,i}\, \frac{1 - y_i h_t(\mathbf{x}_i)}{2} \;\le\; \frac{1}{2} - \gamma ~. \qquad (13)$$
The goal of the boosting algorithm is to invoke the weak learner several times with different distributions, and to combine the hypotheses returned by the weak learner into a final, so called strong, hypothesis whose error is small. The final hypothesis combines linearly the $T$ hypotheses returned by the weak learner with coefficients $\alpha_1, \ldots, \alpha_T$, and is defined to be the sign of $h_f(\mathbf{x})$ where $h_f(\mathbf{x}) = \sum_{t=1}^{T} \alpha_t h_t(\mathbf{x})$. The coefficients $\alpha_1, \ldots, \alpha_T$ are determined by the booster. In AdaBoost, the initial distribution is set to be the uniform distribution, $\mathbf{w}_1 = (\frac{1}{m}, \ldots, \frac{1}{m})$. At iteration $t$, the value of $\alpha_t$ is set to be $\frac{1}{2}\log((1 - \epsilon_t)/\epsilon_t)$. The distribution is updated by the rule $w_{t+1,i} = w_{t,i} \exp(-\alpha_t y_i h_t(\mathbf{x}_i)) / Z_t$, where $Z_t$ is a normalization factor. Freund and Schapire [6] have shown that under the assumption given in Eq. (13), the error of the final strong hypothesis is at most $\exp(-2 \gamma^2 T)$.

Several authors [15, 13, 8, 4] have proposed to view boosting as a coordinate-wise greedy optimization process. To do so, note first that $h_f$ errs on an example $(\mathbf{x}, y)$ iff $y\, h_f(\mathbf{x}) \le 0$. Therefore, the exp-loss function, defined as $\exp(-y\, h_f(\mathbf{x}))$, is a smooth upper bound of the zero-one error, which equals 1 if $y\, h_f(\mathbf{x}) \le 0$ and 0 otherwise. Thus, we can restate the goal of boosting as minimizing the average exp-loss of $h_f$ over the training set with respect to the variables $\alpha_1, \ldots, \alpha_T$. To simplify our derivation in the sequel, we prefer to say that boosting maximizes the negation of the loss, that is,
$$\max_{\alpha_1, \ldots, \alpha_T} \; -\frac{1}{m}\sum_{i=1}^{m} \exp\!\Big( -y_i \sum_{t=1}^{T} \alpha_t h_t(\mathbf{x}_i) \Big) ~. \qquad (14)$$
In this view, boosting is an optimization procedure which iteratively maximizes Eq. (14) with respect to the variables $\alpha_1, \ldots, \alpha_T$. This view of boosting enables the hypotheses returned by the weak learner to be general functions into the reals, $h_t : \mathcal{X} \to \mathbb{R}$ (see for instance [15]).

In this paper we view boosting as a convex repeated game between a booster and a weak learner. To motivate our construction, we would like to note that boosting algorithms define weights in two different domains: the vectors $\mathbf{w}_t \in \mathbb{R}^m$ which assign weights to examples, and the weights $\{\alpha_t : t \in [T]\}$ over weak hypotheses. In the terminology used throughout this paper, the weights $\mathbf{w}_t \in \mathbb{R}^m$ are primal vectors while (as we show in the sequel) each weight $\alpha_t$ of the hypothesis $h_t$ is related to a dual vector $\boldsymbol{\lambda}_t$. In particular, we show that Eq. (14) is exactly the Fenchel dual of a primal problem for a convex repeated game; thus the algorithmic framework described thus far for playing games naturally fits the problem of iteratively solving Eq. (14).

To derive the primal problem whose Fenchel dual is the problem given in Eq. (14), let us first denote by $\mathbf{v}_t$ the vector in $\mathbb{R}^m$ whose $i$'th element is $v_{t,i} = y_i h_t(\mathbf{x}_i)$. For all $t$, we set $g_t$ to be the function $g_t(\mathbf{w}) = [\langle \mathbf{w}, \mathbf{v}_t \rangle]_+$. Intuitively, $g_t$ penalizes vectors $\mathbf{w}$ which assign large weights to examples which are predicted accurately, that is $y_i h_t(\mathbf{x}_i) > 0$. In particular, if $h_t(\mathbf{x}_i) \in \{+1, -1\}$ and $\mathbf{w}_t$ is a distribution over the $m$ examples (as is the case in AdaBoost), $g_t(\mathbf{w}_t)$ reduces to $1 - 2\epsilon_t$ (see Eq. (13)). In this case, minimizing $g_t$ is equivalent to maximizing the error of the individual hypothesis $h_t$ over the examples. Consider the problem of minimizing $c\, f(\mathbf{w}) + \sum_{t=1}^{T} g_t(\mathbf{w})$ where $f(\mathbf{w})$ is the relative entropy given in Example 2 and $c = 1/(2\gamma)$ (see Eq. (13)). To derive its Fenchel dual, we note that $g_t^\star(\boldsymbol{\lambda}_t) = 0$ if there exists $\tilde{\alpha}_t \in [0, 1]$ such that $\boldsymbol{\lambda}_t = \tilde{\alpha}_t \mathbf{v}_t$, and otherwise $g_t^\star(\boldsymbol{\lambda}_t) = \infty$ (see [16]). In addition, let us define $\alpha_t = 2\gamma\, \tilde{\alpha}_t$. Since our goal is to maximize the dual, we can restrict $\boldsymbol{\lambda}_t$ to take the form $\boldsymbol{\lambda}_t = \tilde{\alpha}_t \mathbf{v}_t = \frac{\alpha_t}{2\gamma} \mathbf{v}_t$, and get that
$$\mathcal{D}(\boldsymbol{\lambda}_1, \ldots, \boldsymbol{\lambda}_T) \;=\; -\, c\, f^\star\!\Big( -\frac{1}{c}\sum_{t=1}^{T} \boldsymbol{\lambda}_t \Big) \;=\; -\frac{1}{2\gamma}\, \log\!\Big( \frac{1}{m}\sum_{i=1}^{m} e^{-\sum_{t=1}^{T} \alpha_t y_i h_t(\mathbf{x}_i)} \Big) ~. \qquad (15)$$
Minimizing the exp-loss of the strong hypothesis is therefore the dual problem of the following primal minimization problem: find a distribution over the examples whose relative entropy to the uniform distribution is as small as possible, while the correlation of the distribution with each $\mathbf{v}_t$ is as small as possible. Since the correlation of $\mathbf{w}$ with $\mathbf{v}_t$ is inversely proportional to the error of $h_t$ with respect to $\mathbf{w}$, we obtain that in the primal problem we are trying to maximize the error of each individual hypothesis, while in the dual problem we minimize the global error of the strong hypothesis. The intuition of finding distributions which in retrospect result in large error rates of individual hypotheses was also alluded to in [15, 8].

We can now apply our algorithmic framework from Sec. 4 to boosting. We describe the game with the parameters $\alpha_t$, where $\alpha_t \in [0, 2\gamma]$, and underscore that in our case, $\boldsymbol{\lambda}_t = \frac{\alpha_t}{2\gamma} \mathbf{v}_t$. At the beginning of the game the booster sets all dual variables to be zero, $\forall t,\ \alpha_t = 0$. At trial $t$ of the boosting game, the booster first constructs a primal weight vector $\mathbf{w}_t \in \mathbb{R}^m$, which assigns importance weights to the examples in the training set. The primal vector $\mathbf{w}_t$ is constructed as in Eq. (6), that is, $\mathbf{w}_t = \nabla f^\star(\boldsymbol{\theta}_t)$, where $\boldsymbol{\theta}_t = -\sum_{i < t} \alpha_i \mathbf{v}_i$. Then, the weak learner responds by presenting the loss function $g_t(\mathbf{w}) = [\langle \mathbf{w}, \mathbf{v}_t \rangle]_+$. Finally, the booster updates the dual variables so as to increase the dual objective function. It is possible to show that if the range of $h_t$ is $\{+1, -1\}$ then the update given in Eq. (10) is equivalent to the update $\alpha_t = \min\{2\gamma,\ \frac{1}{2}\log((1 - \epsilon_t)/\epsilon_t)\}$. We have thus obtained a variant of AdaBoost in which the weights $\alpha_t$ are capped above by $2\gamma$. A disadvantage of this variant is that we need to know the parameter $\gamma$. We would like to note in passing that this limitation can be lifted by a different definition of the functions $g_t$. We omit the details due to the lack of space.
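The following is a small, self-contained sketch of the capped variant just described (our own rendering, not the paper's code; the decision-stump weak learner, the edge parameter gamma, and the toy data are our additions). The example weights are the softmax of $\boldsymbol{\theta}_t = -\sum_{i<t} \alpha_i \mathbf{v}_i$, as dictated by $\nabla f^\star$ for the entropy of Example 2, and $\alpha_t$ is clipped at $2\gamma$.

import numpy as np

def capped_boosting(X, y, weak_learner, T, gamma):
    """Boosting as a convex repeated game (Sec. 7): the booster plays the primal
    distribution w_t = grad f*(theta_t) with theta_t = -sum_{i<t} alpha_i * v_i,
    and the dual update caps alpha_t at 2*gamma."""
    theta = np.zeros(len(y))
    hypotheses, alphas = [], []
    for _ in range(T):
        # primal vector: softmax of theta (gradient of the conjugate of the entropy)
        w = np.exp(theta - theta.max())
        w /= w.sum()
        h = weak_learner(X, y, w)                     # callable mapping X -> {+1, -1}
        v = y * h(X)                                  # v_{t,i} = y_i h_t(x_i)
        eps = float(np.dot(w, (1.0 - v) / 2.0))       # weighted error, Eq. (13)
        alpha = min(2.0 * gamma, 0.5 * np.log((1.0 - eps) / max(eps, 1e-12)))
        theta -= alpha * v                            # theta_{t+1} = theta_t - alpha_t v_t
        hypotheses.append(h); alphas.append(alpha)
    return hypotheses, alphas

def stump_learner(X, y, w):
    """A naive weak learner: best threshold on the best single feature."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (+1.0, -1.0):
                pred = s * np.sign(X[:, j] - thr + 1e-12)
                err = float(np.dot(w, (pred != y).astype(float)))
                if best is None or err < best[0]:
                    best = (err, j, thr, s)
    _, j, thr, s = best
    return lambda Z: s * np.sign(Z[:, j] - thr + 1e-12)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
hyps, alphas = capped_boosting(X, y, stump_learner, T=10, gamma=0.05)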
To analyze our game of boosting, we note that the conditions given in Lemma 3 hold and therefore the left-hand side inequality given in Lemma 3 tells us that
$$\sum_{t=1}^{T} g_t(\mathbf{w}_t) - \frac{1}{2c}\sum_{t=1}^{T} \|\boldsymbol{\lambda}_t\|_\infty^2 \;\le\; \mathcal{D}(\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1}) ~.$$
The definition of $g_t$ and the weak learnability assumption given in Eq. (13) imply that $\langle \mathbf{w}_t, \mathbf{v}_t \rangle \ge 2\gamma$ for all $t$. Thus, $g_t(\mathbf{w}_t) = \langle \mathbf{w}_t, \mathbf{v}_t \rangle \ge 2\gamma$, which also implies that $\boldsymbol{\lambda}_t = \mathbf{v}_t$. Recall that $v_{t,i} = y_i h_t(\mathbf{x}_i)$. Assuming that the range of $h_t$ is $[-1, +1]$ we get that $\|\boldsymbol{\lambda}_t\|_\infty \le 1$. Combining all the above with the left-hand side inequality given in Lemma 3 we get that
$$2\gamma T - \frac{T}{2c} \;\le\; \mathcal{D}(\boldsymbol{\lambda}_1^{T+1}, \ldots, \boldsymbol{\lambda}_T^{T+1}) ~.$$
Using the definition of $\mathcal{D}$ (see Eq. (15)), the value $c = 1/(2\gamma)$, and rearranging terms, we recover the original bound for AdaBoost,
$$\frac{1}{m}\sum_{i=1}^{m} e^{-y_i \sum_{t=1}^{T} \alpha_t h_t(\mathbf{x}_i)} \;\le\; e^{-2\gamma^2 T} ~.$$

8 Related Work and Discussion

We presented a new framework for designing and analyzing algorithms for playing convex repeated games. Our framework was used for the analysis of known algorithms for both online learning and boosting settings. The framework also paves the way to new algorithms. In a previous paper [17], we suggested the use of duality for the design of online algorithms in the context of mistake bound analysis. The contribution of this paper over [17] is threefold, as we now briefly discuss. First, we generalize the applicability of the framework beyond the specific setting of online learning with the hinge-loss to the general setting of convex repeated games. The setting of convex repeated games was formally termed "online convex programming" by Zinkevich [19] and was first presented by Gordon in [9]. There is a voluminous amount of work on unifying approaches for deriving online learning algorithms. We refer the reader to [11, 12, 3] for work closely related to the content of this paper. By generalizing our previously studied algorithmic framework [17] beyond online learning, we can automatically utilize well known online learning algorithms, such as the EG and p-norm algorithms [12, 11], in the setting of online convex programming.
We would like to note that the algorithms presented in [19] can be derived as special cases of our algorithmic framework by setting $f(\mathbf{w}) = \frac{1}{2}\|\mathbf{w}\|_2^2$. Parallel and independent to this work, Gordon [10] described another algorithmic framework for online convex programming that is closely related to the potential based algorithms described by Cesa-Bianchi and Lugosi [3]. Gordon also considered the problem of defining appropriate potential functions. Our work generalizes some of the theorems in [10] while providing a somewhat simpler analysis. Second, the usage of generalized Fenchel duality rather than the Lagrange duality given in [17] enables us to analyze boosting algorithms based on the framework. Many authors have derived unifying frameworks for boosting algorithms [13, 8, 4]. Nonetheless, our general framework and the connection between game playing and Fenchel duality underscore an interesting perspective on both online learning and boosting. We believe that this viewpoint has the potential of yielding new algorithms in both domains. Last, despite the generality of the framework introduced in this paper, the resulting analysis is more distilled than the earlier analysis given in [17] for two reasons. (i) The usage of Lagrange duality in [17] is somewhat restricted, while the notion of generalized Fenchel duality is more appropriate for the general and broader problems we consider in this paper. (ii) The strong convexity property we employ both simplifies the analysis and enables more intuitive conditions in our theorems. There are various possible extensions of the work that we did not pursue here due to the lack of space. For instance, our framework can naturally be used for the analysis of other settings such as repeated games (see [7, 19]). The applicability of our framework to online learning can also be extended to other prediction problems such as regression and sequence prediction. Last, we conjecture that our primal-dual view of boosting will lead to new methods for regularizing boosting algorithms, thus improving their generalization capabilities.

References

[1] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] M. Collins, R.E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 2002.
[5] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. JMLR, 7, Mar 2006.
[6] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, 1995.
[7] Y. Freund and R.E. Schapire. Game theory, on-line prediction and boosting. In COLT, 1996.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2), 2000.
[9] G. Gordon. Regret bounds for prediction problems. In COLT, 1999.
[10] G. Gordon. No-regret algorithms for online convex programs. In NIPS, 2006.
[11] A. J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. Machine Learning, 43(3), 2001.
[12] J. Kivinen and M. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3), 2001.
[13] L. Mason, J. Baxter, P. Bartlett, and M. Frean.
Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 1999.
[14] Y. Nesterov. Primal-dual subgradient methods for convex problems. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 2005.
[15] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3), 1999.
[16] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Technical report, The Hebrew University, 2006.
[17] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In COLT, 2006.
[18] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In ESANN, April 1999.
[19] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
", "award": [], "sourceid": 3107, "authors": [{"given_name": "Shai", "family_name": "Shalev-shwartz", "institution": null}, {"given_name": "Yoram", "family_name": "Singer", "institution": null}]}