{ "File Number": "1186", "Title": "Learning from Label Proportions by Learning with Label Noise", "Limitation": "A limitation of our approach is that the theory makes an assumption that may not be verifiable in practice. Future research directions include optimizing the grouping of bags and adapting LLPFC to other objectives beyond accuracy. Acknowledgement: The authors were supported in part by the National Science Foundation under awards 1838179 and 2008074, and by the Department of Defense, Defense Threat Reduction Agency under award HDTRA1-20-2-0002. https://github.com/liujiabin008/LLP-GAN", "Reviewer Comment": "Reviewer_2: While the idea of interpreting learning from label proportions as a noisy-label-learning problem is not original to this paper, the multiclass application and use of forward correction instead of unbiased loss functions constitute a non-trivial step over the existing literature. I agree with the authors that the additional results for forward correction may be of independent interest, though whether people working on noisy-label learning will find these results in this paper I don't know. Maybe these results should, in addition to being stated here, be presented at some workshop or similar.\nOverall, I found the paper to be well written and easy enough to follow in the main argument. The notation could perhaps be improved a bit, e.g. the symbol γ^k_{i,c_1} has 3(!!) levels of subscripts. Similarly, the acronyms can get rather long: LLPFC-..., LLPGAN, LLPVAT, LMNTM. None of these are actually pronounceable. Given that the context of the paper is LLP, I think it would be enough to mark the algorithms as FC-..., GAN, VAT, for example.\nSo far, I've only gone through the math in appendices C1 and C2. Aside from the comments below, this looks good. I will update the review once I've managed to go through the rest of the proofs.\nQuestions:\nIn equation (1), shouldn't α_i(c) also be an empirical/estimated quantity?\nl. 713: How do we know A(ε) is a closed interval, as opposed to being open?\nProof of Thm. 17: Where does the first line, C_2 < δ(C_1) ⟹ …, come from?\nLemma 22 should require that the matrix be invertible, I think.\nLimitations:\nNot a limitation per se, but as far as I can see this work leaves open quite trivial improvements of the proposed algorithm, i.e. choosing the labels that are assigned to the bags in a way that minimizes the amount of noise that needs to be assumed.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_3: pros:\n-The paper studies the problem of learning from label proportions in a theoretically grounded way. They provide a calibration analysis of the forward correction loss and then apply it to their problem setting. They also derive a generalization error bound for the proposed method under some assumptions.\ncons:\n-The title of the paper is a bit misleading, as the problem settings of learning from label proportions (LLP) and learning with label noise (LLN) are not mutually exclusive. To be specific, both of them can be seen as learning from corrupted distributions, where the class-conditional densities p(x|y) are corrupted in the former setting (LLP), while the class posterior probabilities p(y|x) are corrupted in the latter setting (LLN). In the binary case, the former is also called mutually contaminated (MC) learning, and the latter is also called class-conditional label noise (CCN) learning. It is proved that CCN is a special case of MC; see sections 2.2-2.3 of the following paper for details:\nMenon et al. "Learning from corrupted binary labels via class-probability estimation." International Conference on Machine Learning. 
PMLR, 2015.\nTherefore, my feeling is that, from a general viewpoint, LLP can also be seen as a kind of label noise problem, and the title may cause confusion by suggesting that one label noise problem is solved by means of another.\n-The writing of the paper is unclear and some parts are hard to follow. For example, in lines 40-42, the authors claim that Patrini et al. 2017 proposed the forward correction loss to remedy the backward correction loss, since the latter is poorly suited for multi-class deep learning settings; however, in the original Patrini et al. 2017 paper, they seem to only mention that forward correction is an alternative method that corrects the model predictions (see Section 4.2 of Patrini et al. 2017), without any discussion of how it remedies the backward correction. Also, in lines 50-52, it seems that the mutual contamination model is not that different from the noise transition matrix used in this paper, which can be seen as a 2-by-2 noise matrix corrupting the class-conditionals.\n-Some sections seem incomplete. Especially for the algorithm section: how to estimate the noise matrix is an important step of the proposed method, but the details of this step are not included in the main paper. And there are no experimental results provided in the main paper. I strongly recommend simplifying Section 4.3, as the definitions are mainly borrowed from existing papers and may be moved to the appendix, and using the saved space to discuss the noise matrix estimation algorithm and report experimental results.\nQuestions:\n-The proposed method randomly partitions the given bags into groups and then formulates the problem as learning with multiple noise transition matrices. This random partition step seems problematic to me, as different partitions may lead to different noise transition matrices, which determine the difficulty of each LLN problem and will significantly affect the performance of the learned classifier. 
In Scott and Zhang 2020, they have discussed that the optimal pairing of bags can be obtained by solving an integer programming problem. In the current paper, could the authors provide a similar analysis of the optimal partition strategy?\n-In Theorem 7, the weights w_i are expected to reweight noisier or larger subsets of data differently. But it is still unclear to me how to choose them in practice.\n-In Section 5.3, the authors discuss a general case of LLP with NC bags. What if we are given m bags and m is not divisible by C? How should such a general case be handled in the current learning framework?\nLimitations:\n-The paper has some unclear parts that need to be carefully revised, and the organization of the paper can be improved.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 2 fair\nContribution: 3 good\n\nReviewer_4: Strengths\nThe connection between LLP and LLN is reasonable and novel.\nThey provide theoretical analysis, and this would be useful for both the LLN and LLP communities.\nWeaknesses\nLack of experimental results\nThe paper does not provide any empirical results in the main paper.\nAlso, there is little explanation of the main experimental results in either the main or the supplementary paper.\nGap between theory and practice\nThe theoretical analysis has many assumptions, but the authors do not give theoretical or empirical explanations to show that the assumptions are reasonable in practice.\nQuestions:\nIt would be nice to provide an explanation of the experimental results and what they support.\nIs it easy to estimate the clean prior σ(c)?\nAre Assumptions 13 and 14 empirically reasonable in LLP?\nLimitations:\nThey addressed the limitations in the paper.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 2 fair\nContribution: 3 good\n\nReviewer_5: Strengths:\nA novel extension of the FC technique is proposed that removes the limitation on the number of transition matrices.\nA clear connection between LLP and LLN is constructed via a strategy of converting label proportions to noisy labels and noise transition matrices. With this conversion, the problem of multi-class LLP can be solved with a consistency guarantee.\nRigorous theoretical analyses are conducted to support all the claims made in this paper. The provided generalization error bound clearly depicts the factors that influence the proposed method.\nWeaknesses:\nThe presentation of this paper should be improved for better readability. LLPFC, the main result of this paper, appears in the latter part of the paper, which makes its key point ambiguous. I think this problem can be mitigated by exchanging the order of Section 4.3 and Section 5.\nQuestions:\nThere can be quite a number of assignments of labels w.r.t. different bags, and thus the construction of transition matrices can be flexible. For example, in the scenario mentioned in Section 5.2, there can be C! different assignments. Does there exist an optimal assignment of pseudo noisy labels?\nLimitations:\nThe authors have addressed the limitation of the assumptions made in Section 5.1.\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 2 fair\nContribution: 4 excellent", "abstractText": "Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels. The task is to learn a classifier to predict the labels of future individual instances. Prior work on LLP for multi-class data has yet to develop a theoretically grounded algorithm. In this work, we propose an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of Patrini et al. [30]. We establish an excess risk bound and generalization error analysis for our approach, while also extending the theory of the FC loss which may be of independent interest. 
Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures, compared to the leading methods.", "1 Introduction": "In the weakly supervised problem of learning from label proportions (LLP), the learner is presented with bags of instances, where each bag is annotated with the proportions of the different classes in the bag. The learner’s objective is to produce a classifier that accurately assigns labels to individual instances in the future. LLP arises in various applications including high energy physics [7], election prediction [45], computer vision [4, 20], medical image analysis [2], remote sensing [8], activity recognition [32], and reproductive medicine [12].\nTo date, most methods for LLP have addressed the setting of binary classification [50, 36, 39, 34, 41, 44, 24, 37, 38], although multiclass methods have also recently been investigated [9, 22, 46]. The dominant approach to LLP in the literature is “label proportion matching”: train a classifier to accurately reproduce the observed label proportions on the training data, perhaps with additional regularization. In the multiclass setting, the Kullback-Leibler (KL) divergence between the observed and predicted label proportions is adopted by the leading approaches to assess proportion matching. Unfortunately, while matching the observed label proportions is intuitive and can work well in some settings, it has little theoretical basis [50, 38], especially in the multiclass setting, and there are natural settings where it fails [50, 39].\nRecently, Scott and Zhang [39] demonstrated a principled approach to LLP with performance guarantees based on a reduction to learning with label noise (LLN) in the binary setting. Their basic strategy was to pair bags, and view each pair of bags as an LLN problem, where the observed label proportions are related to the “label flipping” or “noise transition” probabilities. 
Using an existing technique for LLN based on loss correction, which allows the learner to train directly on the noisy data, they formulated an overall objective based on a (weighted) sum of objectives for each pair of bags. They established generalization error analysis and consistency for the method, and also showed that in the context of kernel methods, their approach outperformed the leading kernel methods.\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nThe objective of the present paper is to develop a theoretically grounded and practical approach to multiclass LLP, drawing inspiration from Scott and Zhang [39]. The primary challenge stems from the fact that Scott and Zhang [39] employed the so-called “backward correction” loss, which solves LLN by scaling the output of a loss function of interest according to the noise transition probabilities [28, 30, 35]. While this loss correction was demonstrated to work well for kernel methods in a binary setting, Patrini et al. [30] introduced an alternative loss correction that performs better empirically in deep learning settings (see also [53]). They proposed the “forward correction” loss, which scales the inputs to a loss function of interest according to the noise transition probabilities. Patrini et al. [30] find that backward correction “does not seem to work well in the low noise regime,” and is “a linear combination of losses” with “coefficients that can be far [apart] by orders of magnitude ” which “makes the learning harder”.\nThe present work is thus inspired by Scott and Zhang [39] but uses the forward correction (FC) loss in a multiclass setting. This requires a number of technical modifications to the arguments of Scott and Zhang [39]. Most notably, it now becomes necessary to demonstrate that the FC loss is calibrated with respect to the 0-1 loss, a critical property needed for showing consistency. 
Such analysis is inherently not needed when using the backward correction, where the target excess risk is proportional to the surrogate excess risk (from which calibration follows trivially). Furthermore, Scott and Zhang [39] does not require analysis of proper composite losses, which are needed in the FC framework. Finally, the multiclass setting involves new estimation challenges not present in the binary case. These factors mean that our work is not a straightforward extension of Scott and Zhang [39]. Indeed, the authors of a recent report acknowledge that it is “difficult to extend [the method of Scott and Zhang [39]] to multiclass classification\" [16].\nAdditional related work: Much work on LLP has focused on learning specific types of models, including support vector machines [36, 50, 47, 33, 5, 19, 40], probabilistic models [18, 13, 45, 32, 12], random forests [41], neural networks [21, 1, 9, 22, 46], and clustering-based models, [3, 44]. Many of these works develop learning criteria that are specific to the model being learned.\nOn the theoretical front, Quadrianto et al. [34] and Patrini et al. [30] initiated the learning theoretic study of LLP, introducing Rademacher style bounds for linear methods, but they do not address consistency w.r.t. a classification performance measure. Yu et al. [51] provides support for label proportion matching but only under the assumption that the bags are very pure. Saket [37] studies learnability of linear threshold functions. Recently Saket et al. [38] introduced a condition under which label proportion matching does provably well w.r.t. a squared error loss in the binary setting, and developed an associated algorithm. This method does not scale easily to large datasets, and further requires knowledge of how bags are grouped according to different bag-generating distributions.\nA handful of recent papers have studied multiclass LLP in deep learning scenarios. Dulac-Arnold et al. 
[9] study the KL loss for label proportion matching, and a variant based on optimal transport. Liu et al. [22, 23] examine an approach based on generative adversarial models. Tsai and Lin [46] study the use of a regularizer derived from semi-supervised learning. One challenge common to these approaches is that their implementations employ mini-batches of bags, which becomes computationally prohibitive for large bag sizes when the batch size is still very small, e.g., 2 or 3 bags. In contrast, our approach avoids this issue. Finally, a recent technical report presents a risk analysis for multiclass LLP under the assumption of fixed bag size, which we do not require [16]. Their method is not tractable for large bag sizes in which case they approximate their objective “using the bag-level loss proposed in the existing research.\"\nContributions and Outline: Our contributions and the paper structure are summarized as follows. In Section 2, we review the FC loss as a solution to LLN. In Section 3, we extend the theory of the FC loss for LLN. In particular, we show that the FC loss is “uniformly calibrated” with respect to the 0-1 loss using the framework of Steinwart [43], establish an excess risk bound, and determine an explicit lower bound on the calibration function in terms of the noise transition matrix. In Section 4, we extend the results of Section 3 to the setting with multiple noise transition matrices, which form the basis of our approach to LLP. In particular, we establish an excess risk bound and generalization error analysis for learning with multiple noise transition matrices, which in turn enables proofs of consistency. In Section 5, we state the probabilistic model for reducing LLP to LLN with multiple different noise transition matrices and present the LLPFC algorithms. Experiments with deep neural networks are presented in Section 6, where we observe that our approach outperforms competing methods by a substantial margin. 
Proofs appear in the supplemental material.", "2 Learning with Label Noise and the Forward Correction Loss": "This section sets notation and introduces the FC loss as a solution to learning with label noise. Let X be the feature space and Y = {1, 2, . . . , C} be the label space, C ∈ ℕ. We define the C-simplex as Δ_C = {p ∈ ℝ^C : p_i ≥ 0 ∀i = 1, 2, . . . , C, Σ_{i=1}^C p_i = 1} and denote its interior by Δ̊_C. Let P be a probability measure on the space X × Y.\nViewing P as the “clean” probability measure, a noisy probability measure with label-dependent label noise can be constructed from P in terms of a C × C column-stochastic matrix T, referred to as the noise transition matrix. Formally, we define a measure P̄_T on X × Y × Y by requiring, ∀ events A ⊂ X, P̄_T(A × {i} × {j}) = P(A × {i}) t_{j,i}, where t_{j,i} is the element in the j-th row and i-th column of T. Let (X, Y, Ỹ) have joint distribution P̄_T, where X is the feature vector, Y is the “clean” label, and Ỹ is the “noisy” label. Thus the element of T at row i and column j is t_{i,j} = P̄_T(Ỹ = i | Y = j). In addition, P is the marginal distribution of (X, Y). Define P_T to be the marginal distribution of (X, Ỹ). Let F be the collection of all measurable functions from X to Δ_C.\nThe existence of a regular conditional distribution is guaranteed by the Disintegration Theorem (e.g. Theorem 6.4 in Kallenberg [14]) under suitable conditions (e.g. when X is a Radon space). While the existence of regular conditional probabilities is beyond the scope of this paper, we assume fixed regular conditional distributions for Y and Ỹ given X exist, denoted P(· | ·) : Y × X → [0, 1] and P_T(· | ·) : Y × X → [0, 1], respectively. Given x ∈ X, we define the probability vectors η(x) = [P(1 | x), . . . , P(C | x)]^tr and η_T(x) = [P_T(1 | x), . . . , P_T(C | x)]^tr, where we use tr to denote transposition. It directly follows that η_T(x) = T η(x).\nWe use ℝ_+ to denote the positive real numbers. 
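The identity η_T(x) = T η(x) can be checked by simulation; a minimal numpy sketch (not from the paper — the posterior η and the matrix T below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
C = 3
# illustrative clean posterior eta(x) at a fixed x (an assumption for the demo)
eta = np.array([0.7, 0.2, 0.1])
# column-stochastic noise transition matrix: T[i, j] = P(noisy label = i | clean label = j)
T = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.7]])
assert np.allclose(T.sum(axis=0), 1.0)  # columns sum to one

# Monte Carlo check of eta_T(x) = T eta(x): draw clean labels, corrupt them via T
n = 200_000
clean = rng.choice(C, size=n, p=eta)
u = rng.random(n)
cumT = np.cumsum(T, axis=0)                           # column-wise CDFs of T
noisy = (u[:, None] >= cumT[:, clean].T).sum(axis=1)  # inverse-CDF sampling
noisy = np.minimum(noisy, C - 1)                      # guard against floating-point edge
empirical = np.bincount(noisy, minlength=C) / n

assert np.allclose(empirical, T @ eta, atol=0.01)     # noisy posterior matches T @ eta
```

The final assertion confirms that corrupting clean labels through T yields exactly the noisy posterior T η, which is the relation the FC loss exploits.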
The goal of LLN is to learn a classifier that optimizes a performance measure defined w.r.t. P, given access to corrupted training data (X_i, Ỹ_i) i.i.d. ∼ P_T. In this work we assume T is known or can be estimated, as is the case when we apply LLN techniques to LLP (see Section 5). A more formal formulation of LLP is given in Section 5.\nWhen attempting to minimize the risk associated with the 0-1 loss and the clean distribution P, it is common to employ a smooth or convex surrogate loss. For LLN problems, the idea of a loss correction is to modify the surrogate loss so that, when optimized using the noisy data, it still achieves the desired goal. Below we introduce the forward correction loss, for which we first need to define the inner risk and proper losses. For this purpose we focus on loss functions of the form L : Δ_C × Y → ℝ.\nDefinition 1. Let L : Δ_C × Y → ℝ be a loss function. The inner L-risk at x with probability measure P is C_{L,P,x} : Δ_C → ℝ, C_{L,P,x}(q) := E_{Y∼P(·|x)} L(q, Y). The minimal inner L-risk at x with probability measure P is C*_{L,P,x} := inf_{q∈Δ_C} C_{L,P,x}(q).\nDefinition 2. ℓ : Δ_C × Y → ℝ is a proper loss if ∀ probability measures P on X × Y and ∀x ∈ X, C*_{ℓ,P,x} = C_{ℓ,P,x}(η(x)); a proper loss is called strictly proper if the minimizer of C_{ℓ,P,x} is unique for all x ∈ X.\nCommonly used proper losses include the log loss ℓ_log(q, c) = −log q_c, the square loss ℓ_sq(q, c) = Σ_{c′=1}^C (𝟙_{c=c′} − q_{c′})², and the 0-1 loss ℓ_01(q, c) = 𝟙_{c ≠ min{argmax_j q_j}}, among which only the log loss and the square loss are strictly proper [49]. Here 𝟙 denotes the indicator function. Note that it is common to compose proper losses with inverted link functions, leading to familiar losses like the cross-entropy loss. Such losses are discussed further in Section 4.\nWe are now ready to introduce the forward correction loss.\nDefinition 3. Let ℓ be a strictly proper loss and let T be a noise transition matrix. Define the forward correction loss of ℓ as ℓ_T : Δ_C × Y → ℝ, ℓ_T(q, c) := ℓ(Tq, c).\nIt follows from the definition that, if T is invertible, then the inner ℓ_T-risk under the distribution P_T has the unique minimizer η(x). Next we introduce the L-risk and L-Bayes risk associated with a loss L.\nDefinition 4. Let L : Δ_C × Y → ℝ and let P be a probability measure. Define the L-risk of f with distribution P to be R_{L,P} : F → ℝ, R_{L,P}(f) := E_P[L(f(X), Y)], and the L-Bayes risk to be R*_{L,P} := inf_{f∈F} R_{L,P}(f).\nWe call R_{L,P}(f) − R*_{L,P} the excess L-risk of f under distribution P. Given a proper loss ℓ, Theorem 2 of Patrini et al. [30] establishes Fisher consistency of the FC loss, meaning the minimizer of the ℓ-risk under the clean distribution P is the same as the minimizer of the ℓ_T-risk under the noisy distribution P_T: argmin_{f∈F} R_{ℓ,P}(f) = argmin_{f∈F} R_{ℓ_T,P_T}(f). Next, we present a stronger result relating the excess ℓ_T-risk under the noisy distribution P_T to the excess 0-1 risk under the clean distribution P.", "3 Calibration Analysis for the Forward Correction Loss": "Our objective in this section is to show that when L is the 0-1 loss and ℓ is a continuous strictly proper surrogate loss, there exists a strictly increasing, invertible function θ with θ(0) = 0 such that ∀f ∈ F and ∀ distributions P, θ(R_{L,P}(f) − R*_{L,P}) ≤ R_{ℓ_T,P_T}(f) − R*_{ℓ_T,P_T}. Given such a bound, it follows that consistency w.r.t. the surrogate risk implies consistency w.r.t. the target risk. The results in this section are standalone results for the FC loss that may be of independent interest, and will be extended in the next section in relation to LLP. The following theorem guarantees the existence of such a function θ, given that T is invertible.\nTheorem 5. Let ℓ be a continuous strictly proper loss and T an invertible column-stochastic matrix. Let L be the 0-1 loss. Assume R*_{ℓ_T,P_T} < ∞. Then ∃θ : [0, 1] → [0, ∞) strictly increasing and continuous, satisfying θ(0) = 0, such that ∀f ∈ F, R_{L,P}(f) − R*_{L,P} ≤ θ^{−1}(R_{ℓ_T,P_T}(f) − R*_{ℓ_T,P_T}).\nThe function θ in Theorem 5 depends on ℓ and T. The following proposition provides a convex lower bound on θ for the commonly used log loss ℓ_log(q, c) = −log q_c. Let M ∈ ℝ^{C×C} be a matrix and let ‖·‖ be a norm on ℝ^C. The subordinate matrix norm induced by ‖·‖ is ‖M‖ := sup_{x∈ℝ^C, x≠0} ‖Mx‖/‖x‖. When ‖·‖ is the 1-norm on ℝ^C, the induced norm is denoted ‖M‖_1, referred to as the matrix 1-norm, and can be computed as ‖M‖_1 = max_{1≤j≤C} Σ_{i=1}^C |M(i, j)| [10].\nProposition 6. Let T ∈ ℝ^{C×C} be an invertible, column-stochastic matrix. Define θ_T : [0, ∞) → [0, ∞) by θ_T(ε) = ε² / (2‖T^{−1}‖_1²). If L is the 0-1 loss and ℓ is the log loss, then for all f ∈ F and distributions P, R_{L,P}(f) − R*_{L,P} ≤ θ_T^{−1}(R_{ℓ_T,P_T}(f) − R*_{ℓ_T,P_T}) = √2 ‖T^{−1}‖_1 √(R_{ℓ_T,P_T}(f) − R*_{ℓ_T,P_T}).\nThe factor ‖T^{−1}‖_1 may be viewed as a constant that captures the overall amount of label noise: the more noise, the larger the constant. For example, let I and N be the identity and the all-1/C's matrices, respectively. Let α ∈ [0, 1] and T = (1 − α)I + αN. Thus, α = 0 represents the noise-free case and α = 1 the noise-only case. It is easy to verify that T^{−1} = (1 − α)^{−1}(I − αN) and ‖T^{−1}‖_1 = (1 − α)^{−1}(1 + (1 − 2/C)α).", "4 Learning with Multiple Noise Transition Matrices": "Our algorithms for LLP, formally stated in Section 5.4, reduce the problem of LLP to LLN by partitioning bags into groups and modeling each group as an LLN problem. Since each group has its own noise transition matrix, this leads to a new problem that we refer to as learning with multiple noise transition matrices (LMNTM). In this section, we show how to extend the calibration analysis of Section 3 to this setting. 
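The closed forms for the noise constant in the Section 3 example, T^{−1} = (1 − α)^{−1}(I − αN) and ‖T^{−1}‖_1 = (1 − α)^{−1}(1 + (1 − 2/C)α), are easy to verify numerically; a minimal numpy sketch (the value of C and the α grid are illustrative choices):

```python
import numpy as np

def matrix_1_norm(M):
    # subordinate 1-norm: maximum absolute column sum
    return np.abs(M).sum(axis=0).max()

C = 5
N = np.full((C, C), 1.0 / C)  # the all-1/C's matrix; note N @ N == N
I = np.eye(C)

for alpha in [0.0, 0.3, 0.6, 0.9]:
    T = (1 - alpha) * I + alpha * N
    # closed forms from the text
    T_inv_closed = (I - alpha * N) / (1 - alpha)
    norm_closed = (1 + (1 - 2 / C) * alpha) / (1 - alpha)
    assert np.allclose(np.linalg.inv(T), T_inv_closed)
    assert np.isclose(matrix_1_norm(np.linalg.inv(T)), norm_closed)
```

The check relies on N being idempotent, which is what makes the inverse of (1 − α)I + αN available in closed form; the norm grows without bound as α → 1, matching the intuition that more noise weakens the calibration bound.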
In addition, we offer a generalization error bound that justifies an empirical risk minimization learning procedure based on a weighted sum of FC losses.", "4.1 Learning with Multiple Noise Transition Matrices": "We first define the LMNTM problem formally. For all n ∈ ℕ, denote ℕ_n = {1, 2, . . . , n}. Consider a clean distribution P on X × Y and noise transition matrices T_1, T_2, . . . , T_N. For each i we denote the noisy prior by α_i ∈ Δ̊_C where, ∀c ∈ Y, α_i(c) = P_{T_i}(Ỹ = c). We assume the α_i's are known for the theoretical analysis. In practice, α_i is estimable as discussed below. In LMNTM, we observe data points S = {X_{i,c,j} : i ∈ ℕ_N, c ∈ Y, j ∈ ℕ_{n_{i,c}}} where X_{i,c,j} i.i.d. ∼ P_{T_i}(· | c), and n_{i,c} ∈ ℕ is the number of data points drawn from the class-conditional distribution P_{T_i}(· | c). Assume all the X_{i,c,j}'s are mutually independent. We make additional remarks on this setting in Section C.1 in the appendix.", "4.2 A Risk for LMNTM": "The following result extends Theorem 5 to LMNTM. It establishes that the risk R̃_{ℓ,P,T}, which can be estimated from LMNTM training data, is a valid surrogate risk. This type of result is not needed for the backward correction approach of Scott and Zhang [39].\nTheorem 7. Let L be the 0-1 loss and N ∈ ℕ. Consider a sequence of invertible column-stochastic matrices T = {T_i}_{i=1}^N and a continuous strictly proper loss function ℓ. Let w = (w_i)_{i=1}^N ∈ Δ_N. Define R̃_{ℓ,P,T} : F → ℝ by R̃_{ℓ,P,T}(f) := Σ_{i=1}^N w_i R_{ℓ_{T_i},P_{T_i}}(f) and R̃*_{ℓ,P,T} := inf_{f∈F} R̃_{ℓ,P,T}(f). Assume ∀i ∈ {1, 2, . . . , N}, R*_{ℓ_{T_i},P_{T_i}} < ∞. Then ∃ a strictly increasing continuous function θ : [0, 1] → [0, ∞) with θ(0) = 0 s.t. for all P and ∀f ∈ F, θ(R_{L,P}(f) − R*_{L,P}) ≤ R̃_{ℓ,P,T}(f) − R̃*_{ℓ,P,T}.\nThe weights w_i allow the user flexibility, for example, to place different weights on noisier or larger subsets of data. Unlike Scott and Zhang [39], however, because the weights appear in both our excess risk bound and our generalization error bound, it is not straightforward to optimize them a priori. 
We discuss weight optimization in detail in Section F in the appendix.", "4.3 Generalization Error Bound": "The aggregate risk R̃_{ℓ,P,T} is desirable because it can naturally be estimated from the given data. We propose the empirical risk\nR̂_{w,S}(f) = Σ_{i=1}^N w_i Σ_{c=1}^C (α_i(c)/n_{i,c}) Σ_{j=1}^{n_{i,c}} ℓ_{T_i}(f(X_{i,c,j}), c). (1)\nIt should be noted that R̂_{w,S}(f) is an unbiased estimate of R̃_{ℓ,P,T}(f). Here we establish a generalization error bound for this estimate which builds on Rademacher complexity analysis.\nTo state the bound, we must first introduce the notion of a proper composite loss [49]. This stems from the fact that in practice, a function f outputting values in Δ_C is typically obtained by composing an ℝ^C-valued function (such as a neural network with C output-layer nodes) with a function ℝ^C → Δ_C such as the softmax function. Thus, let ψ : U ⊂ Δ_C → V be an invertible function, where V is a subset of a normed space, referred to as an invertible link function. Consider G ⊂ ψ∘F := {ψ∘f : f ∈ F}, and observe that ∀g ∈ G, ψ^{−1}∘g ∈ F. In practice, ψ is fixed and we seek to learn a g ∈ G that leads to an f ∈ F with a risk close to the Bayes risk. An example of ψ^{−1} is the softmax function, so that ψ : U → V, ψ_i(p) = log p_i − (1/C) Σ_{k=1}^C log p_k, (ψ^{−1})_i(s) = e^{s_i} / Σ_{k=1}^C e^{s_k}, where U is the interior of Δ_C and V = {s ∈ ℝ^C : Σ_{i=1}^C s_i = 0}. This motivates the following definition.\nDefinition 8. Given an invertible link function ψ : U ⊂ Δ_C → V, we define the proper composite loss ℓ^ψ of a proper loss ℓ : Δ_C × Y → ℝ to be ℓ^ψ : V × Y → ℝ, ℓ^ψ(v, c) = ℓ(ψ^{−1}(v), c).\nFor example, when ℓ is the log loss and ψ^{−1} is the softmax function, ℓ^ψ is the cross-entropy (or multinomial logistic) loss. With this notation, we are now able to state our generalization error bound for LMNTM. We study two popular choices of function classes, the reproducing kernel Hilbert space (RKHS) and the multilayer perceptron (MLP). We use G_1 to denote the Cartesian product of C balls of radius R in the RKHS and G_2 to denote a multilayer perceptron with C outputs.\nDefinition 9. Let k be a symmetric positive definite (SPD) kernel, and let H be the associated reproducing kernel Hilbert space (RKHS). Assume k is bounded by K, meaning ∀x, ‖k(·, x)‖_H ≤ K. Let G^k_{K,R} denote the ball of radius R in H. Define G_1 = G^k_{K,R} × G^k_{K,R} × · · · × G^k_{K,R} (C copies).\nWe follow Zhang et al. [54] and define real-valued MLPs inductively:\nDefinition 10. Define N_1 = {x ↦ ⟨x, v⟩ : v ∈ ℝ^d, ‖v‖_2 ≤ Λ}, and for m ≥ 2, inductively define N_m = {x ↦ Σ_{j=1}^d v_j μ(f_j(x)) : v ∈ ℝ^d, ‖v‖_1 ≤ Λ, f_j ∈ N_{m−1}}, where Λ ∈ ℝ_+ and μ is a 1-Lipschitz activation function. Define an MLP which outputs a vector in ℝ^C by G_2 = N_m × N_m × · · · × N_m (C copies). We additionally assume that the choice of μ satisfies ∀m ∈ ℕ, 0 ∈ μ∘N_m.\nTheorem 11. Let T_1, T_2, . . . , T_N be invertible column-stochastic matrices. Let ℓ be a proper loss such that ∀i, c the function ℓ^ψ_{T_i}(·, c) is Lipschitz continuous w.r.t. the 2-norm. Let S be the set of data points as defined in Section 4.1. Assume sup_{x∈X, g∈G_q} ‖g(x)‖_2 ≤ A_q for some constant A_q, ∀q ∈ {1, 2}. Let R̂_{w,S} be as defined in equation (1) and define R̃(g) := R̃_{ℓ,P,T}(ψ^{−1}∘g) = E_S[R̂_{w,S}(g)]. Then for each q ∈ {1, 2}, ∀δ ∈ (0, 1], with probability at least 1 − δ,\nsup_{g∈G_q} |R̂_{w,S}(g) − R̃(g)| ≤ ( max_i (|ℓ^ψ_{T_i}|_Lip A_q + |ℓ^ψ_{T_i}|_0) √(2 log(2/δ)) + C B_q max_i |ℓ^ψ_{T_i}|_Lip ) √( Σ_{i=1}^N Σ_{c=1}^C w_i²/n_{i,c} ),\nwhere B_q is a constant depending on G_q, |ℓ^ψ_{T_i}|_0 = max_c |ℓ^ψ_{T_i}(0, c)|, and |ℓ^ψ_{T_i}|_Lip is the smallest real number that is a Lipschitz constant of ℓ^ψ_{T_i}(·, c) for all c.\nTheorem 11 is a special case of Lemma 26, which extends the notion of Rademacher complexity to the LMNTM setting and applies to arbitrary function classes. Lemma 26 is presented in the appendix.\nLet HM_i denote the harmonic mean of n_{i,1}, . . . , n_{i,C}, i.e., HM_i = C / (Σ_{c=1}^C 1/n_{i,c}). The term √(Σ_{i=1}^N Σ_{c=1}^C w_i²/n_{i,c}) can be written as √(C Σ_{i=1}^N w_i²/HM_i) and is optimized by w_i = HM_i / Σ_{m=1}^N HM_m, leading to √(Σ_{i=1}^N Σ_{c=1}^C w_i²/n_{i,c}) = √(C / Σ_{i=1}^N HM_i). The term √(C / Σ_{i=1}^N HM_i) vanishes (needed to establish consistency) when N goes to infinity, or when ∃i s.t. ∀c, n_{i,c} goes to infinity. For the special case where all bags have the same size n and all weights w_i are 1/N, √(Σ_{i=1}^N Σ_{c=1}^C w_i²/n_{i,c}) = √(C/(Nn)). Thus, consistency is possible even if bag size remains bounded. Assuming ℓ is the log loss and ψ^{−1} is the softmax function, we next study the constants |ℓ^ψ_{T_i}|_Lip and |ℓ^ψ_{T_i}|_0.\nProposition 12. Let ℓ be the log loss, ψ^{−1} be the softmax function, and T be a column-stochastic matrix. Then |ℓ^ψ_T|_Lip ≤ √2.\nThe constant |ℓ^ψ_T|_0 = max_c |ℓ^ψ_T(0, c)| = max_c −log((1/C) Σ_{j=1}^C t_{c,j}). The invertibility of T guarantees that Σ_{j=1}^C t_{c,j} is positive, and hence that |ℓ^ψ_T|_0 is finite. However, for a “bad” T, Σ_{j=1}^C t_{c,j} could be arbitrarily close to 0, leading to a large |ℓ^ψ_T|_0.\nFollowing Theorem 11, if the function class G has a universal approximation property, such as an RKHS associated to a universal kernel, or an MLP with an increasing number of nodes, consistency for LMNTM via (regularized) minimization of R̂_{w,S}(g) can be shown by leveraging standard techniques, provided N → ∞ (bag size may remain bounded). The excess risk bound in Theorem 7 then automatically implies consistency with respect to the 0-1 loss.", "5 The LLPFC algorithms": "In this section, we define a probabilistic model for LLP, show how LLP reduces to LMNTM, and introduce algorithms that we refer to as the LLPFC algorithms.", "5.1 Probabilistic Model for LLP": "Given a measure P on the space X × Y, let {P_c : c ∈ Y} denote the class-conditional distributions of X, i.e., ∀ events A ⊂ X, P_c(A) = P(A | Y = c). Let π(c) = P(Y = c), ∀c ∈ Y, and call π = (π(1), . . . , π(C)) the clean prior. Assume ∀c ∈ Y, π(c) ≠ 0. Given z = (z(1), . . . 
, z(C)) 2 C , let Pz be the probability measure on X ⇥ Y s.t. 8 events A ⇢ X , 8i 2 Y, Pz(A⇥ {i}) = z(i)Pi(A). Thus Pz has the same class-conditional distributions as P but a variable prior z.\nWe first define a model for a single bag. Given z 2 C , we say that bag b is governed by z 2 C if b is a collection of feature vectors\nXj : j 2 N|b| annotated by label proportion\nẑ = (ẑ(1), ẑ(2), . . . , ẑ(C)), where |b| denotes the cardinality of the bag, each Xj is paired with an unobserved label Yj s.t. (Xj , Yj) iid ⇠ Pz , and ẑ(c) = 1|b| P |b|\nj=1 Yj=c. Note EPz [ẑ] = z and Pz(Yj = c) = z(c). We think of z as the true label proportion and ẑ as the empirical label proportion.\nUsing this model for individual bags, we now formally state a model for LLP. Given bags {bk}, let each bk be governed by k. Each bk is a collection of feature vectors X k j : j 2 N|bk| where\n(Xk j , Y k j ) i.i.d. ⇠ P k and Y kj is unknown. Further assume the Xkj ’s are independent for all k and j. In practice, k is unknown and we observe ̂k with ̂k(c) = 1|bk| P |bk| j=1 Y kj =c instead. The goal is learn an f that minimizes the risk RL,P = E(X,Y )⇠P [L(f(X), Y )] where L is the 0-1 loss, given access to the training data {(bk, ̂k)}.", "5.2 The Case of C Bags: Reduction to LLN": "To explain our reduction of LLP to LLN, we first consider the case of exactly C bags b1, b2, . . . , bC , governed by respective (unobserved) 1, . . . , C 2 C , and annotated with label proportions ̂1, . . . ̂C . Define 2 RC⇥C by (i, j) = i(j), and let tr denote the transpose of . Recall that is the class prior associated to P . To model LLP with C bags as an LLN problem, we make the following assumption on and :\nAssumption 13. 9 unique ↵ 2 ̊C s.t. tr↵ = .\nWe write ↵ = (↵(1), . . . ,↵(C)). Assumption 13 is equivalent to: { 1, . . . , C} is a linearly independent set and is in the interior of the convex hull of { 1, . . . , C}. 
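To illustrate the single-bag model, here is a minimal simulation (synthetic $z$ and labels only; draws of the features from the class-conditionals $P_c$ are omitted) showing that the empirical proportion $\hat{z}$ concentrates around the governing $z$:

```python
import numpy as np

rng = np.random.default_rng(1)
C, bag_size = 4, 2000
z = rng.dirichlet(np.ones(C))               # true label proportion governing the bag

# Draw labels Y_j iid with P(Y_j = c) = z(c); each X_j would be drawn from P_{Y_j}.
labels = rng.choice(C, size=bag_size, p=z)
z_hat = np.bincount(labels, minlength=C) / bag_size   # empirical label proportion

assert np.isclose(z_hat.sum(), 1.0)
assert np.abs(z_hat - z).max() < 0.05       # E[z_hat] = z, so they agree for a large bag
```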
Ternary plots in Figure 5.2 visualize examples where Assumption 13 holds and fails when $C = 3$. Intuitively, Assumption 13 is more likely to hold when $\{\gamma_i : i \in \mathbb{N}_C\}$ are more "spread out" in $\Delta^C$, in which case it is more likely for $\pi$ to reside in the convex hull of $\{\gamma_i : i \in \mathbb{N}_C\}$.

To reduce LLP with $C$ bags to LLN, we simply propose to assign the "noisy label" $\tilde{Y} = i$ to all elements of bag $b_i$ and to construct a noise transition matrix $T$ with $T(i, j) = \gamma_i(j) \alpha(i) / \pi(j)$. Assumption 13 ensures $T$ is indeed a column-stochastic matrix. Thus, the probability measure $\bar{P}_T$ on $\mathcal{X} \times \mathcal{Y} \times \mathcal{Y}$ satisfies $\alpha(i) = \bar{P}_T(\tilde{Y} = i)$ and $P_{\gamma_i}(\cdot) = \bar{P}_T(\cdot \mid \tilde{Y} = i)$, which further implies $\gamma_i(c) = \bar{P}_T(Y = c \mid \tilde{Y} = i)$. We confirm these facts in Section E in the appendix. This construction transforms LLP with $C$ bags into LLN with an estimable noise transition matrix $T$. Each element of a bag can then be viewed as a triplet $(X, Y, \tilde{Y})$, with $Y$ unobserved, such that $(X, Y)$ is drawn from $P_{\gamma_{\tilde{Y}}}$. After assigning the noisy labels, we have a dataset $\bigcup_{c=1}^C \{(X^c_j, c) : j \in \mathbb{N}_{|b_c|}\}$ along with the noise transition matrix $T$. This allows us to leverage the forward correction loss $\ell_T$ to minimize the objective $R_{\ell_T, \bar{P}_T}(f) = \mathbb{E}_{\bar{P}_T}[\ell_T(f(X), \tilde{Y})]$, which can be estimated by the empirical risk $\sum_{c=1}^C \frac{\alpha(c)}{|b_c|} \sum_{j=1}^{|b_c|} \ell_T(f(X^c_j), c)$.", "5.3 The General Case: Reduction to LMNTM": "More generally, consider LLP with $NC$ bags, $N \in \mathbb{N}$. We propose to randomly partition the bags into $N$ groups, each with $C$ bags indexed from 1 to $C$. Let $k_{i,c}$ denote the index of the $c$-th bag in the $i$-th group. Thus, $b_{k_{i,c}}$ is the $c$-th bag in the $i$-th group and it is governed by $\gamma_{k_{i,c}}$. For $i \in \mathbb{N}_N$, define the matrix $\Gamma_i \in \mathbb{R}^{C \times C}$ by $\Gamma_i(c_1, c_2) = \gamma_{k_{i,c_1}}(c_2)$, $\forall c_1, c_2 \in \mathcal{Y}$. We make the following assumption on the $\Gamma_i$'s and $\pi$:

Assumption 14. For each $i \in \mathbb{N}_N$, $\exists$ unique $\alpha_i \in \mathring{\Delta}^C$ s.t. $\Gamma_i^{tr} \alpha_i = \pi$.

Thus, every group $i$ can be modeled as above as an LLN problem with noise transition matrix $T_i$, where $T_i(c_1, c_2) = \gamma_{k_{i,c_1}}(c_2) \alpha_i(c_1) / \pi(c_2)$.
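To make the construction concrete, here is a minimal numerical sketch (synthetic $\gamma_i$ and $\alpha$, not the authors' code) that builds $T$, checks it is column-stochastic, and evaluates the forward-corrected log loss in the form $\ell_T(g(x), c) = -\log((T\,\mathrm{softmax}(g(x)))_c)$, which is an assumption on our part but is consistent with the constant $|\ell_T|_0 = \max_c |\log(\frac{1}{C}\sum_j t_{c,j})|$ studied in Section 4:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 3
Gamma = rng.dirichlet(np.ones(C), size=C)  # row i is gamma_i, the prior governing bag i
alpha = rng.dirichlet(np.ones(C))          # hypothetical alpha in the interior of the simplex
pi = Gamma.T @ alpha                       # clean prior satisfying Gamma^tr alpha = pi

# T(i, j) = gamma_i(j) * alpha(i) / pi(j)
T = Gamma * alpha[:, None] / pi[None, :]
assert np.allclose(T.sum(axis=0), 1.0)     # column-stochastic, as Assumption 13 guarantees

def forward_corrected_log_loss(T, logits, noisy_label):
    # ell_T(g(x), c) = -log((T @ softmax(g(x)))_c); at logits = 0 this reduces to
    # -log((1/C) * sum_j t_{c,j}), matching the constant |ell_T|_0 of Section 4.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log((T @ p)[noisy_label])

loss = forward_corrected_log_loss(T, np.zeros(C), noisy_label=1)
assert np.isclose(loss, -np.log(T[1].sum() / C))
```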
Data points in the bag assigned noisy label $c$ in the $i$-th group can be viewed as drawn i.i.d. from the class-conditional distribution $\bar{P}_{T_i}(\cdot \mid c)$. This problem now maps directly to LMNTM as described in Section 4.1, and satisfies the associated performance guarantees. In the next subsection, we spell out the associated algorithm.

Algorithm 1 LLPFC-ideal
1: Input: $\{(b_k, \gamma_k)\}_{k=1}^{NC}$ and $w \in \Delta^N$, where $b_k = \{X^k_j : j \in \mathbb{N}_{|b_k|}\}$.
2: Randomly partition the bags into $N$ groups $\{G_i\}_{i=1}^N$ s.t. $G_i = \{(b_{k_{i,c}}, \gamma_{k_{i,c}}) : c \in \mathcal{Y}\}$
3: for $i = 1 : N$ do
4:   $\Gamma_i \leftarrow [\gamma_{k_{i,1}}, \gamma_{k_{i,2}}, \ldots, \gamma_{k_{i,C}}]^{tr}$
5:   $\alpha_i \leftarrow \Gamma_i^{-tr} \pi$
6:   for $c_1 = 1 : C$, $c_2 = 1 : C$ do
7:     $T_i(c_1, c_2) \leftarrow \gamma_{k_{i,c_1}}(c_2) \alpha_i(c_1) / \pi(c_2)$
8:   end for
9: end for
10: Train $f$ with the empirical objective (1)

Algorithm 2 LLPFC-uniform
1: Input: $\{(b_k, \hat{\gamma}_k)\}_{k=1}^{NC}$ and $w \in \Delta^N$, where $b_k = \{X^k_j : j \in \mathbb{N}_{|b_k|}\}$.
2: Partition the bags as in step 2 of Algorithm 1.
3: for $i = 1 : N$ do
4:   $\hat{\Gamma}_i \leftarrow [\hat{\gamma}_{k_{i,1}}, \hat{\gamma}_{k_{i,2}}, \ldots, \hat{\gamma}_{k_{i,C}}]^{tr}$
5:   $n_i \leftarrow \sum_{c=1}^C |b_{k_{i,c}}|$
6:   $\hat{\alpha}_i(c) \leftarrow |b_{k_{i,c}}| / n_i$ for each $c = 1 : C$
7:   $\hat{\pi}_i \leftarrow \hat{\Gamma}_i^{tr} \hat{\alpha}_i$
8:   for $c_1 = 1 : C$, $c_2 = 1 : C$ do
9:     $\hat{T}_i(c_1, c_2) \leftarrow \hat{\gamma}_{k_{i,c_1}}(c_2) \hat{\alpha}_i(c_1) / \hat{\pi}_i(c_2)$
10:  end for
11: end for
12: Train with $\sum_{i,c} \frac{w_i}{n_i} \sum_j \ell_{\hat{T}_i}(f(X^{k_{i,c}}_j), c)$.", "5.4 Algorithms": "As above, assume we have $NC$ bags, where $N \in \mathbb{N}$. Let each bag $b_k$ be governed by $\gamma_k \in \Delta^C$ and be annotated by label proportion $\hat{\gamma}_k$. We first present the LLPFC-ideal algorithm in an ideal setting where $\pi$, the $\gamma_k$'s, and the $\alpha_i$'s are known precisely and Assumption 14 holds. We then present the real-world adaptations LLPFC-uniform and LLPFC-approx for practical settings.

The LLPFC-ideal algorithm is presented in Algorithm 1. We follow the idea in Section 5.3 to partition the bags into $N$ groups of $C$ bags and model each group as an LLN problem. In Algorithm 1, we assume the $\gamma_k$'s and $\pi$ are known and Assumption 14 holds. The theoretical analysis in Section 4 is immediately applicable to the LLPFC-ideal algorithm.
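A minimal sketch of the core of LLPFC-ideal (steps 2-9), using synthetic bag priors and a clean prior chosen to lie in their convex hull; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
C, N = 3, 4

# Hypothetical bag priors gamma_k for k = 1..NC, and a clean prior pi in their convex hull.
gammas = rng.dirichlet(np.ones(C), size=N * C)
pi = gammas.mean(axis=0)

# Step 2: uniformly random partition of the NC bag indices into N groups of C bags.
perm = rng.permutation(N * C).reshape(N, C)

alphas = []
for i in range(N):
    Gamma_i = gammas[perm[i]]                  # row c is gamma_{k_{i,c}}
    alpha_i = np.linalg.solve(Gamma_i.T, pi)   # step 5: alpha_i = Gamma_i^{-tr} pi
    alphas.append(alpha_i)
    # Rows of Gamma_i sum to 1, so alpha_i automatically sums to 1.
    assert np.isclose(alpha_i.sum(), 1.0)
    if (alpha_i > 0).all():                    # Assumption 14 holds for this group
        T_i = Gamma_i * alpha_i[:, None] / pi[None, :]   # steps 6-8
        assert np.allclose(T_i.sum(axis=0), 1.0)
```

Note that a random group may violate Assumption 14 (some entries of $\alpha_i$ negative), which is exactly the failure mode the practical variants below must handle.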
We partition the bags by uniformly randomly partitioning the set of indices $\mathbb{N}_{NC}$ into disjoint subsets $\{k_{i,c} : c \in \mathcal{Y}\}$, $i \in \mathbb{N}_N$, where $k_{i,c}$ denotes the index of the $c$-th bag in the $i$-th group. We denote the inverse transpose of $\Gamma_i$ by $\Gamma_i^{-tr}$.

In practice, when $\gamma_k$ is unknown, we replace $\gamma_k$ with $\hat{\gamma}_k$ as a plug-in method. Hence, we work with $\hat{\pi} = \frac{\sum_{k=1}^{NC} |b_k| \hat{\gamma}_k}{\sum_{k=1}^{NC} |b_k|}$ and $\hat{\Gamma}_i = [\hat{\gamma}_{k_{i,1}}, \hat{\gamma}_{k_{i,2}}, \ldots, \hat{\gamma}_{k_{i,C}}]^{tr}$ instead of $\pi$ and $\Gamma_i$ in Algorithm 1, respectively. Here $\hat{\pi}$ is the label proportion of all training data points, and we use it as an estimate of the clean prior $\pi$. Likewise, $\alpha_i = \Gamma_i^{-tr} \pi$ in Algorithm 1 should be replaced with $\hat{\alpha}_i = \hat{\Gamma}_i^{-tr} \hat{\pi}$, and we would like to use $\hat{\Gamma}_i$, $\hat{\pi}$, and $\hat{\alpha}_i$ to calculate $\hat{T}_i$ as an estimate of $T_i$. For this to make sense, we need $\hat{\alpha}_i = \hat{\Gamma}_i^{-tr} \hat{\pi} \in \mathring{\Delta}^C$, which is equivalent to $\hat{\pi}$ being in the interior of the convex hull of $\{\hat{\gamma}_{k_{i,c}} : c \in \mathcal{Y}\}$ for all $i$. However, this may not be the case in practice. Thus, we consider two heuristics to estimate $\hat{T}_i$ as real-world adaptations of the LLPFC-ideal algorithm.

Algorithm 3 LLPFC-approx
1: Input: $\{(b_k, \hat{\gamma}_k)\}_{k=1}^{NC}$ and $w \in \Delta^N$, where $b_k = \{X^k_j : j \in \mathbb{N}_{|b_k|}\}$.
2: $\hat{\pi} \leftarrow \frac{\sum_{k=1}^{NC} |b_k| \hat{\gamma}_k}{\sum_{k=1}^{NC} |b_k|}$
3: Partition the bags as in step 2 of Algorithm 1.
4: for $i = 1 : N$ do
5:   $\hat{\Gamma}_i \leftarrow [\hat{\gamma}_{k_{i,1}}, \hat{\gamma}_{k_{i,2}}, \ldots, \hat{\gamma}_{k_{i,C}}]^{tr}$
6:   $\hat{\alpha}_i \leftarrow \operatorname{argmin}_{\alpha \in \Delta^C} \|\hat{\pi} - \hat{\Gamma}_i^{tr} \alpha\|_2^2$
7:   $\hat{\pi}_i \leftarrow \hat{\Gamma}_i^{tr} \hat{\alpha}_i$
8:   for $c_1 = 1 : C$, $c_2 = 1 : C$ do
9:     $\hat{T}_i(c_1, c_2) \leftarrow \hat{\gamma}_{k_{i,c_1}}(c_2) \hat{\alpha}_i(c_1) / \hat{\pi}_i(c_2)$
10:  end for
11: end for
12: Train with $\sum_{i,c} \frac{w_i \hat{\alpha}_i(c)}{|b_{k_{i,c}}|} \sum_j \ell_{\hat{T}_i}(f(X^{k_{i,c}}_j), c)$

The first heuristic, called LLPFC-uniform, is presented in Algorithm 2, which sets $\hat{\alpha}_i$ by counting the occurrences of the noisy labels. This is motivated by our model, wherein $\alpha_i$ is the noisy class prior for the $i$-th group. The second, called LLPFC-approx, is presented in Algorithm 3 and sets $\hat{\alpha}_i$ to be the solution of $\operatorname{argmin}_{\alpha \in \Delta^C} \|\hat{\pi} - \hat{\Gamma}_i^{tr} \alpha\|_2^2$. It should be noted that in both practical algorithms, we use a different $\hat{\pi}_i$ as an estimate of $\pi$ for each group, to ensure that each $\hat{T}_i$ is a column-stochastic matrix.
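The text does not specify a solver for the simplex-constrained least squares in step 6 of Algorithm 3; one simple option (our assumption, not the authors' implementation) is projected gradient descent with the standard sort-based Euclidean projection onto $\Delta^C$:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based algorithm)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def llpfc_approx_alpha(Gamma_hat, pi_hat, steps=2000, lr=0.1):
    # argmin_{alpha in simplex} ||pi_hat - Gamma_hat^tr alpha||_2^2 via projected gradient
    alpha = np.full(len(pi_hat), 1.0 / len(pi_hat))
    for _ in range(steps):
        grad = 2.0 * Gamma_hat @ (Gamma_hat.T @ alpha - pi_hat)
        alpha = project_simplex(alpha - lr * grad)
    return alpha

rng = np.random.default_rng(4)
C = 3
Gamma_hat = rng.dirichlet(np.ones(C), size=C)  # rows are empirical bag proportions
pi_hat = rng.dirichlet(np.ones(C))             # may fall outside their convex hull

alpha_hat = llpfc_approx_alpha(Gamma_hat, pi_hat)
assert np.isclose(alpha_hat.sum(), 1.0) and (alpha_hat >= 0).all()
pi_i = Gamma_hat.T @ alpha_hat                 # per-group prior used to build T_hat_i
assert np.isclose(pi_i.sum(), 1.0)
```

Because $\hat{\alpha}_i$ lands on the simplex and the rows of $\hat{\Gamma}_i$ sum to 1, the resulting $\hat{\pi}_i$ is a valid probability vector, which is what makes each $\hat{T}_i$ column-stochastic.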
In experiments where we have $NC + k$ bags with $0 < k < C$, we can randomly resample $NC$ bags and regroup them every few epochs. Both real-world adaptations perform reasonably well in experiments.", "6 Experiments": "We compare against three previous works that have studied LLP by applying deep learning to image data:1 Dulac-Arnold et al. [9] study the KL loss described in the introduction and a novel loss based on optimal transport; they find that KL performs just as well as the novel loss. Liu et al. [22] employ the KL loss within a generative adversarial framework (LLPGAN). Tsai and Lin [46] propose augmenting the KL loss with a regularizer from semi-supervised learning and show improved performance (LLPVAT). We compare both LLPFC-uniform and LLPFC-approx against the KL loss, LLPGAN, and LLPVAT to clearly establish which empirical objective is better. Recent papers on multiclass LLP for which code is not available were not included [23, 16].

1 Code is available at https://github.com/Z-Jianxin/LLPFC

Table 4 (comparison against LLPGAN): mean test accuracy ± standard deviation by bag size.

Data set | Method        | 32          | 64          | 128         | 256         | 512         | 1024        | 2048
CIFAR10  | LLPGAN        | .3630 ± .01 | .3133 ± .02 | .3328 ± .03 | .3363 ± .03 | .3460 ± .03 | .2824 ± .05 | .2236 ± .08
CIFAR10  | LLPFC-uniform | .6145 ± .01 | .5826 ± .01 | .5565 ± .03 | .5452 ± .01 | .5511 ± .02 | .5358 ± .01 | .5438 ± .03
CIFAR10  | LLPFC-approx  | .6169 ± .01 | .5875 ± .01 | .5642 ± .02 | .5687 ± .02 | .5621 ± .03 | .5610 ± .01 | .5567 ± .02
SVHN     | LLPGAN        | .2378 ± .24 | .7135 ± .06 | .7680 ± .04 | .6058 ± .29 | .4863 ± .22 | .1725 ± .06 | .1382 ± .04
SVHN     | LLPFC-uniform | .8800 ± .00 | .8581 ± .01 | .8480 ± .01 | .8393 ± .01 | .8347 ± .01 | .8258 ± .01 | .8327 ± .01
SVHN     | LLPFC-approx  | .8779 ± .01 | .8519 ± .01 | .7061 ± .33 | .8453 ± .02 | .8423 ± .01 | .8386 ± .01 | .8527 ± .01

We generate bags with fixed, equal sizes in {32, 64, 128, 256, 512, 1024, 2048}. To generate each bag, we first sample a label proportion $\gamma$ from the uniform distribution on $\Delta^C$. Then we sample data points from a benchmark dataset without replacement using a multinomial distribution with parameter $\gamma$.
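The bag-generation protocol described above can be sketched as follows (a hypothetical per-class index pool stands in for the benchmark dataset):

```python
import numpy as np

rng = np.random.default_rng(5)
C, bag_size = 10, 64

# Hypothetical per-class pools of example indices, standing in for a benchmark dataset.
pools = {c: list(rng.permutation(5000)) for c in range(C)}

def sample_bag(bag_size):
    gamma = rng.dirichlet(np.ones(C))          # uniform distribution on the simplex
    counts = rng.multinomial(bag_size, gamma)  # per-class counts for this bag
    bag = []
    for c in range(C):
        for _ in range(counts[c]):
            bag.append((c, pools[c].pop()))    # draw without replacement
    label_prop = np.bincount([c for c, _ in bag], minlength=C) / len(bag)
    return bag, label_prop

bag, prop = sample_bag(bag_size)
assert len(bag) == bag_size
assert np.isclose(prop.sum(), 1.0)
```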
It should be noted that Tsai and Lin [46], Dulac-Arnold et al. [9], and Liu et al. [22] generate bags by shuffling all data points and making every $B$ data points a bag, where $B$ is a fixed bag size. Their method is equivalent to sampling data points without replacement using a multinomial distribution with the fixed parameter $\gamma = (\frac{1}{C}, \frac{1}{C}, \ldots, \frac{1}{C})$. As noted by Scott and Zhang [39], this leads to bags with very similar label proportions, which makes the learning task much more challenging.

We repeat each experiment 5 times and report the mean test accuracy and standard deviation. All models are trained on a single Nvidia Tesla V100 GPU with 16GB RAM. In our implementation of the LLPFC algorithms, the weight $w$ is set to $(\frac{1}{N}, \ldots, \frac{1}{N}) \in \Delta^N$, and our choice of proper composite loss is the cross-entropy loss.

For the comparison against KL and LLPVAT, we perform experiments on three benchmark image datasets: the "letter" split of EMNIST [6], SVHN [29], and CIFAR10 [17]. To show that our approach is robust to the choice of architecture, we experiment with three different networks: Wide ResNet-16-4 [52], ResNet18 [11], and VGG16 [42]. We train these networks with the parameters suggested in the original papers. The test accuracies are reported in Tables 1, 2, and 3. Since convergence in the GAN framework is sensitive to the choice of architecture and hyperparameters, we compare LLPFC against LLPGAN using the architecture proposed in the original paper along with the hyperparameters suggested in their code2. It should be noted that for LLPFC we only use the discriminator for classification and do not use the generator to augment data. Since Liu et al. [22] only provide hyperparameters for colored images, we perform experiments on SVHN and CIFAR10 only. The test accuracies are reported in Table 4.

LLPFC-uniform and LLPFC-approx substantially outperform the competitors in a clear majority of settings.
The experimental results clearly establish our methods as the state of the art by a substantial margin. All three competitors perform gradient descent with minibatches of bags, and the GPU at times runs out of memory when the bag size is large. Our implementation, which also uses stochastic optimization, does not suffer from this problem. Full experimental details are in the appendix.", "7 Conclusions and Future Work": "We propose a theoretically supported approach to LLP by reducing it to learning with label noise and using the forward correction (FC) loss. An excess risk bound and a generalization error analysis are established. Our approach outperforms leading existing methods in deep learning scenarios across multiple datasets and architectures. A limitation of our approach is that the theory makes an assumption that may not be verifiable in practice. Future research directions include optimizing the grouping of bags and adapting LLPFC to other objectives beyond accuracy.

Acknowledgement. The authors were supported in part by the National Science Foundation under awards 1838179 and 2008074, and by the Department of Defense, Defense Threat Reduction Agency under award HDTRA1-20-2-0002.

2 https://github.com/liujiabin008/LLP-GAN", "Reviewer Summary": "Reviewer_2: The paper investigates multiclass learning from label proportions, i.e. the situation in which the training data is only available in "bags". For each bag, the fraction of samples belonging to a certain class is known, but the exact assignment of instances to classes is not. The idea is to assign an arbitrary label to each instance in a bag, and then consider the bag to be a noisy learning problem with known transition matrix. For learning with noisy labels, they employ the forward-correction approach, proving new calibration-style generalization bounds for this, and then extending these to the multiple-noise matrix (i.e.
multiple bags -- learning from label proportions case).\n\nReviewer_3: -This paper considers a conventional weakly supervised learning problem called learning from label proportions, where the training data are given in the form of bags and the proportions of each class within each bag are also given; the goal is to train a classifier predicting instance labels from such bag data. To solve the problem, they partition the bags into groups and model each group as a standard label noise problem. Next, they apply the forward correction loss, which was proposed for dealing with class-conditional label noise, to each group and study the generalization error bound of the proposed method. They also conduct some experiments to verify the effectiveness of the proposed method on some benchmark datasets.\n\nReviewer_4: The paper focused on learning from label proportions (LLP) task. First, they show that background theory of forward correction loss in learning with label noise (LLN), and extend the results with multiple noise transition matrices. Next, they propose LLPFC algorithm with probabilistic model for reducing LLP to LLN with multiple noise transition matrices.\n\nReviewer_5: Multi-class label proportion learning (LLP) is a weakly supervised learning problem that aims at obtaining a multi-class classifier with only bags (each bag is a collection of data points) and the proportions of different classes contained in the bags. In this paper, it is shown that the problem of label proportion learning can be solved efficiently with the techniques in the field of noisy label learning (LLN). The authors first give the consistency results on the Forward Correction (FC) technique, which is one of the prevailing methods in the field of LLN, and then extend it to enable the use of multiple transition matrices. Under some mild assumptions, the authors show that the problem of LLP can be solved with the proposed extension of FC. 
Experiments on benchmark datasets are conducted to show the efficiency of the proposed method.", "Cited in": [], "Cited By": [], "_matched_df_index": 153, "df_title": "Learning from Label Proportions by Learning with Label Noise" }