This is a learning-theory paper set in a situation where the usual mean-loss objective is replaced by a risk-sensitive objective that assigns different weights to data points depending on their loss. This setting is of high importance in robust learning, where, for example, only the fraction of the sample with the smallest losses is considered. The paper provides an analysis of this setting via Rademacher bounds, suggests a connection to Sample-Variance-Penalization (SVP), and concludes with some experimental results. The appendix also contains a robustness analysis.

Robust learning is an issue of rising importance for our community, and I think this work sheds some interesting new light on it. Only one reviewer scored slightly below acceptance (with a 5); they nevertheless agreed to poster acceptance in the post-rebuttal discussion. The main weakness they raised was: “I believe that the main weakness of the paper is that on a technical level, the results (lemma 2 and theorem 3) are just direct extensions of classic uniform convergence arguments based on Rademacher complexity.” I, along with at least one other reviewer, think that this should not be considered a weakness: “To me the reduction of the nonlinear objective to a linear one is simple and elegant. There is a wealth of methods to bound Rademacher averages, which can be combined with the results in the paper in a simple and efficient way.”

Hence, and especially because our community needs a better understanding of robust learning and of out-of-distribution learning, I recommend poster acceptance.
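To make the reviewed setting concrete, the following is a minimal illustrative sketch (not the paper's actual formulation) of such a risk-sensitive objective: averaging only the fraction alpha of the sample with the smallest losses. The function name and the parameter `alpha` are my own notation, not taken from the paper.

```python
def trimmed_mean_loss(losses, alpha):
    """Average the fraction `alpha` of the sample with the smallest losses.

    Illustrative only: a special case of a risk-sensitive objective in which
    the k smallest losses get weight 1/k and the rest get weight 0.
    """
    s = sorted(losses)                      # order the per-example losses
    k = max(1, int(alpha * len(s)))         # number of examples kept
    return sum(s[:k]) / k                   # mean over the kept fraction


# With alpha = 2/3, the outlier loss 100.0 is discarded:
# trimmed_mean_loss([1.0, 2.0, 100.0], alpha=2/3) -> 1.5
```

Setting `alpha = 1` recovers the usual mean loss, which is why bounds for the mean-loss case can plausibly be adapted, as the paper does via Rademacher complexity.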