{ "File Number": "1048", "Title": "Mask Matching Transformer for Few-Shot Segmentation", "Limitation": "Limitations and societal impact. Our MM-Former introduces the paradigm of decompose first and then blend to the research of few-shot segmentation, which is a totally new perspective and may inspire future researchers to develop more advanced versions. However, there is still a large gap between the current results and the oracle (≈ 20% mIoU). How to further narrow this gap is our future research focus. Acknowledgment. This work was supported in part by the National Key R & D Program of China (No.2021ZD0112100), the National NSF of China (No.U1936212, No.62120106009), the Fundamental Research Funds for the Central Universities (No. K22RC00010). Yao Zhao and Yunchao Wei are the corresponding authors.", "Reviewer Comment": "Reviewer_4: Strengths\nThis paper is well-organized and can be easily understood by readers. The technical details are introduced clearly.\nThe authors conducted extensive experiments on multiple benchmarks to investigate the effectiveness of different modules and designs in this paper.\nWeakness\nIn the potential objects segmenter section, the authors just borrow the ideas from Mask2Former to generate object proposals. But in few-shot learning, the number of meaningful object parts is unknown both in the base training set and the novel test set. How does the network learn these object segmenter without any guidance? If the object segmenter is inaccurate, will it affect the subsequent query-related part merge?\nThe proposed MM-Former exploits transformers as decoders for few-shot semantic segmentation. As is known a transformer may need a lot of learnable parameters. 
It will be better to list the parameters of different modules in MM-Former to see how the number of learnable parameters of transformers affects the final performance.\nIn Table 2, we can find that the performance of MM-Former on PASCAL is not good as other state-of-the-art algorithms. I want to see a thorough analysis of such a phenomenon. Maybe we can know more about the characteristics of MM-Former.\nQuestions:\nI am curious about whether the number of meaningful object proposals affects the final performance. There are also many other issues discussed in the weakness section.\nLimitations:\nI didn't see any obvious negative societal limitations of this work.\nEthics Flag: No\nEthics Review Area: I don’t know\nSoundness: 3 good\nPresentation: 3 good\nContribution: 2 fair\n\nReviewer_5: The writing of this paper is redundant and ambiguous, and the logic is not clear.\nThere are lots of grammatical errors which need to be corrected.\nLack of innovation and low novelty.\nPoor performance on the Pascal dataset even compared with the methods proposed years ago.\nProposed two-stage strategy introduces a new perspective focusing on mask-level segmentation.\nQuestions:\nIn #40, \" Concretely, pixel-level relationships are modeled between the support and query feature maps either by attention mechanism or 4D convolutions.\" Using \"either...or...\" is too arbitrary. There are also other methods to deal with the relationship between the query and support sets, like HSNet and PGNet.\nPoor performance on the Pascal dataset even compared with the methods in the community proposed years ago. In #85, the author attributed the reason to the limited scale of the dataset, which does not convince the reviewer.\nAdding more visual illustrations will make it clearer while introducing the network structure in subsection 2.2.\nIn the Introduction and Abstract sections, the author emphasizes the advantages of the algorithm in complexity. 
Relevant comparative experiments and explanations need to be supplemented to support the claims.\nIn #115, \" We use the outputs from the last three layers in the following modules\", \"the last three layers\" is ambiguous. Please further elaborate on the details.\nThe novelty is marginal since the core idea is borrowed from MaskFormer.\nThe impacts of different components brought to the model efficiency should be discussed, including the overall final model efficiency in terms of fps and model size.\nThere are quite a lot of grammatical errors and ambiguous expressions. The reviewer did not try to point them all out together. Please check your manuscript carefully and correct all mistakes before submission. Typos include but are not limited to:\na. In #22, \"one of a fundamental tasks\" -> \"one of the fundamental tasks\"\nb. In #22, “ has achieved grand success” -> \" has achieved a grand success\"\nc. In #29, \" segment any objects\" remove \"any\"\nd. A large number of articles are missing or used incorrectly. For example, \"We refer this kind of method as 'few-to-many' matching approach.\" -> \"We refer this kind of method as a 'few-to-many' matching approach.\" I won't list all since there are too many.\ne. In #37, \"While acceptable results obtained\" -> \"While acceptable results were obtained\". Lack of the predicate.\nf. In #51, \"Rather than matching\" -> \"Rather than being matched\"\ng. 
In #300, \", it is not suitable for the matching problem and impair the matching performance.\" -> \"impairs\"\nLimitations:\nThe stated limitation, i.e., a large gap between the oracle case, indicates room for future improvements, the reviewer still believes that broader negative impacts and limitations should be discussed.\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 2 fair\nContribution: 2 fair\n\nReviewer_6: Strengths\nThe structure of this paper is well-organized and easy to follow the ideas of the authors.\nThe proposed method achieves significant improvements on the two datasets compared with other recent methods.\nWeaknesses\nThe only novel contribution of this paper is the Feature Alignment Block (FAB) in Mask Matching Module and it significantly improves the results, the other blocks are incremental. The authors simply adapt it to Mask2Former architecture. It would be better to investigate more about the FAB in different methods (CyCTR, HSNet) to prove its effectiveness is generalizable enough.\nThe “few-to-few” matching (L53) is not new. Previous approaches ([2], [3]) have applied it to the few-shot instance segmentation task. The proposed method (the mask2former architecture) tends to overfit small datasets (L225), then it is not suitable for the few-shot setting.\nRelated work is not well presented. They do not provide any analysis or comparison of previous work.\nQuestions:\nCan the author add the running time, training time, and the memory consumed by the model? As the Mask Matching Module is implemented by standard transformer decoder layers, does it requires large memory and time?\nWhat is the detailed implementation of the Mask Matching Module? In Fig. 2, the final mask has the same resolution as the F2 feature map, how to upsample the mask to the original resolution of the input image?\nExplain the self-alignment module and intuition of this. 
Can we replace it with the self-attention module (compare the performance and runtime)\nExplain the use of the MLP in Eq.(4): describe the input, and output range, and how it can correct the cosine similarity vector of N potential masks. Visualize the M and M^ before and after. It helps the reader what is happening inside the block. Why do not use dice loss in L_m (L173)\nL180: “Meanwhile, only constrain on Sˆ pos may lead all outputs of Cross Alignment block to tend to be same”: why? explain in more detail?\nL182: how to find the lowest IoU (ex: 2 potential masks with no overlap with GT)\nCan the authors add more visualization of the failure cases?\nIn Eq. (4), redundant closing parenthesis ')'.\nI will increase my score if the authors address all of my concerns.\nLimitations:\nNo\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 3 good\nContribution: 2 fair", "abstractText": "In this paper, we aim to tackle the challenging few-shot segmentation task from a new perspective. Typical methods follow the paradigm of first learning prototypical features from support images and then matching query features at the pixel level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm needs to couple the learning of the matching operations with heavy segmentation modules, limiting the flexibility of design and increasing the learning complexity. To alleviate this issue, we propose the Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask guided by the support images. The advantages of our MM-Former are two-fold. 
First, the MM-Former follows the paradigm of decompose first and then blend, allowing our method to benefit from an advanced potential objects segmenter to produce high-quality mask proposals for query images. Second, the mission of the prototypical features is relaxed to learning coefficients to fuse the correct ones within a proposal pool, making the MM-Former generalize well to complex scenarios and cases. We conduct extensive experiments on the popular COCO-20i and Pascal-5i benchmarks. Competitive results well demonstrate the effectiveness and the generalization ability of our MM-Former. Code is available at github.com/Picsart-AI-Research/Mask-Matching-Transformer.", "1 Introduction": "Semantic segmentation, one of the fundamental tasks in computer vision, has achieved great success [3, 5, 37, 12] in recent years with the advances of deep learning techniques [11] and large-scale annotated datasets [14, 6]. However, data samples naturally follow a long-tailed distribution in which the overwhelming majority of categories have very few samples. Therefore, few-shot segmentation [21, 27, 35] is introduced to segment objects of the tail categories using only a minimal number of labels.\nMainstream few-shot segmentation approaches typically follow the learning-to-learn fashion, where a network is trained with episodic training to segment objects conditioned on a handful of labeled samples. The fundamental question behind it is how to effectively use the information provided by the labeled samples (called the support) to segment the test (referred to as the query) image. 
∗Work done during an internship at Picsart AI Research (PAIR).\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nEarly works [27, 36, 33, 24, 30] achieve this by first extracting one or a few semantic-level prototypes from the support image features; pixels in the query feature map are then matched (activated) by the support prototypes to obtain the segmentation results. We refer to this kind of method as the “few-to-many” matching paradigm, since the number of support prototypes is typically much smaller (e.g., by two to three orders of magnitude) than the number of pixels in the query feature map. While acceptable results were obtained, this few-to-many matching paradigm turns out to be restricted in segmentation performance due to the information loss incurred when extracting prototypes. Therefore, recent advances [34, 26, 19] proceed to a “many-to-many” matching fashion. Concretely, pixel-level relationships are modeled between the support and query feature maps either by attention mechanisms [34, 26] or 4D convolutions [19]. Benefiting from these advanced techniques, the many-to-many matching approaches exhibit excellent performance over their few-to-many counterparts. Overall, the aforementioned approaches construct modules that combine the matching operation with segmentation modules and optimize them jointly. To improve the segmentation quality, context-modeling techniques such as atrous spatial pyramid pooling [3], self-attention [39], or multi-scale feature fusion [24] are integrated with the matching operations [33] and then learned simultaneously via episodic training [30, 34, 24]. 
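As a concrete illustration of the prototype extraction mentioned above, the masked global average pooling used by prototype-based methods [27] can be sketched as follows (a minimal NumPy sketch with toy shapes; real methods pool deep backbone features, and the helper name is our own):

```python
import numpy as np

def masked_gap(feature, mask, eps=1e-8):
    """Masked global average pooling: average the support feature map
    over the pixels covered by the (resized) binary support mask.

    feature: (c, h, w) support feature map
    mask:    (h, w) binary mask, 1 = foreground
    returns: (c,) class prototype
    """
    f = feature.reshape(feature.shape[0], -1)   # (c, hw)
    m = mask.reshape(1, -1).astype(f.dtype)     # (1, hw)
    return (f * m).sum(axis=1) / (m.sum() + eps)

# toy check: channel c is constant c+1, so the prototype should be [1, 2, 3]
feat = np.stack([np.full((2, 2), c + 1.0) for c in range(3)])
proto = masked_gap(feat, np.array([[1, 0], [1, 0]]))
```

The resulting prototype then activates (or, in this paper, is matched against) query features.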
However, this joint learning fashion not only vastly increases the learning complexity, but also makes it hard to distinguish the effect of matching modules in few-shot segmentation.\nTherefore, in this work, we steer toward a different perspective for few-shot segmentation: decoupling the learning of the segmentation and matching modules, as illustrated in Fig. 1. Rather than being matched with the pixel-level query features, the support samples are directly matched with a few class-agnostic query mask proposals, forming a “few-to-few” matching paradigm. Performing the matching at the mask level provides several advantages: 1) Such a few-to-few matching paradigm releases matching from the segmentation module and focuses on the matching problem itself. 2) It reduces the training complexity, so a simple few-to-few matching suffices to solve the few-shot segmentation problem. 3) While previous works turned out to overfit when using high-level features for matching and predicting the segmentation mask [24, 27, 34], the learning of our matching and segmentation modules does not affect each other and hence avoids this daunting problem.\nTo achieve this few-to-few matching paradigm, we introduce a two-stage framework, named Mask Matching Transformer (dubbed MM-Former), that generates mask proposals for the query image in the first stage and then matches the support samples with the mask proposals in the second stage. Recently, MaskFormer [4, 5] formulated semantic segmentation as a mask classification problem, which obtains semantic segmentation results by combining the predictions of binary masks and the corresponding classification scores, where the masks and the scores are both obtained with a transformer decoder. It provides the flexibility to segment an image at high quality without knowing the categories of the objects in advance. 
Inspired by this, we also use the same transformer decoder as in [4] to predict a set of class-agnostic masks based on the query image only. To further determine the target objects indicated by the support annotation, a simple Mask Matching Module is constructed. Given the features extracted from both support and query samples, the Mask Matching Module obtains prototypes from both support and query features through masked global average pooling [27]. Further, a matching operation is applied to match the supports with all query proposals and produces a set of coefficients for each query candidate. The final segmentation result for a given support(s)-query pair is acquired by combining the mask proposals according to the coefficients. In addition, to resolve the representation misalignment caused by the differences between query and support images, a Feature Alignment Block is integrated into the Mask Matching Module. Concretely, since the features for all images are extracted with a fixed network, they may not be well aligned in the feature space, especially for testing images with novel classes. The Feature Alignment Block consists of a Self-Alignment block and a Cross-Alignment block, which align the query and support samples in the feature space so that the matching operation can be safely applied to the extracted features.\nWe evaluate our MM-Former on two commonly used few-shot segmentation benchmarks: COCO-20i and Pascal-5i. Our model stands out from previous works on the challenging COCO-20i dataset. While our MM-Former only performs comparably with previous state-of-the-art methods due to the limited scale of the Pascal dataset, it exhibits strong transferability across different datasets (i.e., COCO-20i → Pascal-5i), owing to our superior mask-matching design. 
In a nutshell, our contributions can be summarized as follows: (1) We put forward a new perspective for few-shot segmentation, which decouples the learning of the matching and segmentation modules, allowing more flexibility and lower training complexity. (2) We introduce a simple two-stage framework named MM-Former that efficiently matches the support samples with a set of query mask proposals to obtain segmentation results. (3) Extensive evaluations on COCO-20i and Pascal-5i demonstrate the potential of the method to be a robust baseline in the few-to-few matching paradigm.", "2 Methodology": "Problem Setting: Few-shot segmentation aims at training a segmentation model that can segment novel objects with very few labeled samples. Specifically, we are given two image sets Dtrain and Dtest with category sets Ctrain and Ctest respectively, which are disjoint in terms of object categories (Ctrain ∩ Ctest = ∅). The model trained on Dtrain is directly tested on Dtest. The episodic paradigm was adopted in [24, 38] to train and evaluate few-shot models. A k-shot episode {{Is}k, Iq} is composed of k support images Is and a query image Iq, where all {Is}k and Iq contain objects of the same category. Denoting the numbers of training and testing episodes as Ntrain and Ntest, the training and test sets can be represented by Dtrain = {{Is}k, Iq}Ntrain and Dtest = {{Is}k, Iq}Ntest. Note that both the support masks Ms and the query masks Mq are available for training, while only the support masks Ms are accessible during testing.\nOverview: The proposed architecture can be divided into three parts, i.e., the Backbone Network, the Potential Objects Segmenter, and the Mask Matching Module. Specifically, the Backbone Network is used only to extract features, and its parameters are fixed during training. The Potential Objects Segmenter (dubbed POS) produces multiple mask proposals that may contain potential object regions within the given image. 
The Mask Matching Module (dubbed MM module) takes support cues as guidance to choose the most likely ones from the mask proposals. The selected masks are finally merged into the target output. The complete diagram of the architecture is shown in Fig. 2. Each of the modules is explained in detail in the following subsections.", "2.1 Feature Extraction Module": "We adopt a ResNet [11] to extract features for the input images. Unlike previous few-shot segmentation methods [24, 38, 31], which use atrous convolutions in place of strided convolutions to keep larger feature resolutions, we keep the original structure of ResNet following [5]. We use the outputs of the last three layers in the following modules and name them FS and FQ for IS and IQ, where F = {F^i}, i ∈ [3, 4, 5], F denotes the features of IS or IQ, and i is the layer index of the backbone. We further extract the output of the query layer2 to obtain the segmentation mask (named F^2_Q). F^2, F^3, F^4 and F^5 have strides of {4, 8, 16, 32} with respect to the input image.", "2.2 Potential Objects Segmenter": "The POS aims to segment all the objects in an image. Following Mask2Former [4], a standard transformer decoder [25] is used to compute cross attention between FQ and N learnable embeddings. The transformer decoder consists of 3 consecutive transformer layers, each of which takes the corresponding F^i as an input. Each layer in the transformer decoder can be formulated as\nE^{l+1} = TLayer^l(E^l, F^i), (1)\nwhere E^l and E^{l+1} represent the N learnable embeddings before and after applying the transformer layer, respectively. TLayer denotes a transformer decoder layer. We simplify the formulation of the transformer decoder, whereas we conduct the same pipeline proposed by Mask2Former. The output of the transformer decoder is multiplied with F^2_Q to get N mask proposals M ∈ R^{N×H/4×W/4}. Note that a Sigmoid is applied to normalize all mask proposals to [0, 1]. 
Besides, our POS abandons the classifier of Mask2Former since we don’t need to classify the mask proposals.", "2.3 Mask Matching (MM) Module": "In MM, our goal is to use support cues as guidance to match the relevant masks. The building blocks of MM are a Feature Alignment block and a Learnable Matching block. We first apply the Feature Alignment block to align FQ and FS at the pixel level. Then, the Learnable Matching block matches the query masks that correspond to the support images.\nFeature Alignment Block: We achieve the alignment using two types of building blocks: a Self-Alignment block and a Cross-Alignment block. The complete architecture is shown in Fig. 3.\nWe adopt the Self-Alignment block to align features in each channel. Inspired by Polarized Self-Attention [16], we design a non-parametric block to normalize representations. Specifically, the input feature map F ∈ R^{c×hw} is first averaged along the channel dimension to obtain F_avg ∈ R^{1×hw}. F_avg is regarded as an anchor to obtain the attention weight A ∈ R^{c×1} by matrix multiplication: A = F F_avg^T, which represents the weights of the different channels. A is used to activate the feature by position-wise multiplication (i.e., it is expanded along the spatial dimension and point-wise multiplied with the feature). In this way, the input feature is adjusted across the channel dimension, and outliers are expected to be smoothed. Note that the Self-Alignment block processes FS and FQ individually and does not involve interactions across images.\nThe Cross-Alignment block is introduced to mitigate divergence across different images. FS and FQ are fed into two weight-shared transformer decoders in parallel. We take the i-th layer as an example in Fig. 3, which can be formulated as\nF̂^i_Q = MLP(MHAtten(F^i_Q, F^i_S, F^i_S)),\nF̂^i_S = MLP(MHAtten(F^i_S, F^i_Q, F^i_Q)), (2)\nwhere F^i_Q and F^i_S represent the i-th layer features in FQ and FS, and F̂^i represents the aligned features. 
(F̂ = {F̂^i}, i ∈ [3, 4, 5]). MLP denotes a multilayer perceptron and MHAtten represents multi-head attention [25]:\nMHAtten(q, k, v) = softmax(qk^T / √d_k)v, (3)\nwhere q, k, v are three matrices and d_k is the dimension of the q and k elements. k and v are downsampled to 1/32 of the original resolution to save computation. To distinguish the phrase “Query Image” in few-shot segmentation from the “matrix Q” in the Transformer, we denote the “Query Image” as Q and the “matrix Q” as q. We simplify the formulation of MHAtten and omit some shortcut connections and layer normalizations in the transformer decoder, whereas we conduct the same pipeline as the standard transformer.\nLearnable Matching Block: After acquiring F̂S and F̂Q, we first apply masked global average pooling (GAP) [38, 27, 24, 31] on each F̂^i and concatenate the results to generate prototypes for the support ground-truth and the N query mask proposals, denoted P^gt_S and {P^n_Q}, where P^gt_S, P^n_Q ∈ R^{3d} and n ∈ [1, 2...N]. Here d represents the dimension of F̂^i, and 3d results from concatenating {F̂^3, F̂^4, F̂^5} together. We use the cosine distance to measure the similarity between the prototypes P^gt_S and P^n_Q.\nIn some cases, the mask corresponding to the prototype with the highest similarity may not be complete (e.g., the support image contains only parts of the object). So, we further use an MLP layer to merge the corresponding masks, which can be formulated as\nS = cos(P^gt_S, P^n_Q), n ∈ [1, 2...N],\nM̂ = M × MLP(S), S ∈ R^{1×N}, (4)\nwhere M̂ is our final result, and MLP and cos indicate the fully connected operation and the cosine similarity, respectively. We take the N similarities S as the input of the MLP, and use the output to perform a weighted average of the N mask proposals. 
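The matching-and-blending step of Eq. (4) can be sketched as follows (NumPy; the one-hidden-layer MLP with externally supplied weights is a hypothetical stand-in for the learned layer, and all dimensions are illustrative):

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    """Cosine similarity between two 1-D prototype vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def learnable_matching(p_s, p_qs, masks, w1, w2):
    """Blend N mask proposals using MLP-corrected cosine similarities (Eq. 4).

    p_s:    (3d,) support ground-truth prototype P^gt_S
    p_qs:   (N, 3d) query proposal prototypes P^n_Q
    masks:  (N, H, W) mask proposals M
    w1, w2: stand-in MLP weights of shapes (hidden, N) and (N, hidden)
    """
    s = np.array([cosine_sim(p_s, p) for p in p_qs])  # (N,) similarities S
    coeff = w2 @ np.maximum(w1 @ s, 0.0)              # MLP(S): blend coefficients
    return np.tensordot(coeff, masks, axes=1)         # M_hat: weighted sum, (H, W)

# toy run: N = 100 proposals, 3d = 768, 24x24 masks, hidden width 32
rng = np.random.default_rng(0)
m_hat = learnable_matching(rng.normal(size=(768,)),
                           rng.normal(size=(100, 768)),
                           rng.random(size=(100, 24, 24)),
                           rng.normal(size=(32, 100)) * 0.1,
                           rng.normal(size=(100, 32)) * 0.1)
```

The weighted average lets several complementary proposals contribute to the final mask instead of a hard argmax over similarities.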
Note that we do not select the mask with the highest similarity directly; our ablation studies show that using this block helps improve the performance.", "2.4 Objective": "In the POS, we adopt the segmentation loss functions proposed by Mask2Former (denoted LP). We apply the Hungarian algorithm to match mask proposals with the ground-truth and only apply a Dice Loss to supervise the best-matching masks.\nIn the MM module, we apply a Dice Loss on M̂ (denoted LM) and design a contrastive loss to constrain the cross-alignment module. Our goal is to make prototypes of the same class more similar and those of different classes less similar by constraining S. We first normalize S to [0, 1] by min-max normalization: Ŝ = (S − min(S)) / (max(S) − min(S) + ε). Then we calculate the IoU between the N mask proposals and the query ground-truth. We assume that the mask proposals contain various objects in the query image. It is unrealistic to constrain the corresponding prototypes across different categories, since it is hard to determine the proper similarity among them. Therefore, we apply a criterion at the location of max(IoU) and denote this point as the positive point Ŝpos. Constraining only Ŝpos may lead all outputs of the Cross-Alignment block to become the same. Thus, we add a constraint on the point in Ŝ corresponding to the lowest IoU, and denote it as the negative point Ŝneg. We assign ypos = 1 and yneg = 0 to Ŝpos and Ŝneg during optimization, respectively. Therefore, the cross-alignment loss Lco can be defined as\nLco = −(1/2)(ypos log Ŝpos + (1 − yneg) log(1 − Ŝneg)). (5)\nThus, the final loss function can be formulated as L = LP + λ1LM + λ2Lco, where λ1 and λ2 are constants and are set to 10 and 6 in our experiments.", "2.5 Training Strategy": "To avoid the mutual influence of the POS and MM modules during training, we propose a two-stage training strategy: first train the POS, then train the MM. 
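Under the definitions of Sec. 2.4 (Ŝ via min-max normalization, positive/negative points chosen by IoU against the query ground-truth), the cross-alignment loss of Eq. (5) can be sketched as follows (NumPy; the small eps guards added for numerical safety are our own assumption):

```python
import numpy as np

def cross_alignment_loss(S, ious, eps=1e-6):
    """Eq. (5) with y_pos = 1 and y_neg = 0.

    S:    (N,) cosine similarities between support and proposal prototypes
    ious: (N,) IoU of each mask proposal with the query ground-truth
    """
    S_hat = (S - S.min()) / (S.max() - S.min() + eps)    # min-max normalize to [0, 1]
    pos, neg = int(np.argmax(ious)), int(np.argmin(ious))
    # with y_pos = 1, y_neg = 0 the loss reduces to the two log terms below
    return -0.5 * (np.log(S_hat[pos] + eps) + np.log(1.0 - S_hat[neg] + eps))

# when the highest similarity falls on the highest-IoU proposal, the loss is tiny
loss_aligned = cross_alignment_loss(np.array([0.9, 0.1, 0.5]),
                                    np.array([0.8, 0.0, 0.3]))
```

If the similarity ranking disagrees with the IoU ranking, the loss grows, pushing the Cross-Alignment block toward prototypes whose similarities track mask quality.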
In addition, by decoupling the POS and MM, the network can share the same POS under the 1-shot and K-shot settings, which greatly improves the training efficiency.\nK-shot Setting: Based on the two-stage training strategy, MM-Former can easily be extended to the K-shot setting by averaging the knowledge from the K samples, i.e., P^gt_S. Note that after pre-training the POS, MM-Former can be applied to 1-shot/K-shot tasks with only a very small amount of training.", "3.1 Dataset and Evaluation Metric": "We conduct experiments on two popular few-shot segmentation benchmarks, Pascal-5i [9] and COCO-20i [14], to evaluate our method. Pascal-5i, with extra mask annotations from SBD [10], consists of 20 classes separated into 4 splits. For each split, 15 classes are used for training and 5 classes for testing. COCO-20i consists of annotated images from 80 classes. We follow the common data split settings in [20, 38, 19] to divide the 80 classes evenly into 4 splits, with 60 classes for training and 20 classes for testing. 1,000 episodes from the testing split are randomly sampled for evaluation. To quantitatively evaluate the performance, we follow common practice [24, 27, 36, 19, 38] and adopt the mean intersection-over-union (mIoU) as the evaluation metric for all experiments.", "3.2 Implementation details": "The training process of our MM-Former is divided into two stages. For the first stage, we freeze the ImageNet [6] pre-trained backbone. The POS is trained for 20,000 iterations on Pascal-5i and 60,000 iterations on COCO-20i, respectively. The learning rate is set to 1e−4 and the batch size to 8. For the second stage, we freeze the parameters of the backbone and the POS, and only train the MM module for 10,000/20,000 iterations on Pascal-5i/COCO-20i, respectively. The learning rate is set to 1e−4 and the batch size to 4. For both stages, we use the AdamW [17] optimizer with a weight decay of 5e−2. The learning rate is decreased using the poly schedule with a factor of 0.9. 
All images are resized and cropped to 480 × 480 for training. We also employ random horizontal flipping and random cropping for data augmentation. All the experiments are conducted on a single RTX A6000 GPU. The standard ResNet-50 [11] is adopted as the backbone network.", "3.3 Comparison with State-of-the-art Methods": "We compare the proposed approach with state-of-the-art methods [24, 31, 13, 28, 19, 18, 1, 15, 38] on the Pascal-5i and COCO-20i datasets. The results are shown in Tab. 1 and Tab. 2.\nResults on COCO-20i. In Tab. 1, our MM-Former performs remarkably well on COCO for both the 1-shot and 5-shot settings. Specifically, we achieve a 3.9% improvement on 1-shot compared with CyCTR [34] and outperform HSNet [19] by 1.5% mIoU.\nResults on Pascal-5i. Due to the limited number of training samples in the Pascal dataset, the POS may easily overfit during the first training stage. Therefore, following recent works [24, 1, 19], we include the transfer results, i.e., the COCO-trained model tested on Pascal. Note that when training on the COCO dataset, the testing classes shared with the Pascal dataset are removed, so as to avoid category information leakage.\nAccording to Tab. 2, although our MM-Former is slightly inferior to some competitive results when trained on the Pascal dataset, we find that MM-Former exhibits remarkable transferability when trained on COCO and tested on Pascal. Specifically, the previous state-of-the-art method HSNet shows powerful results on Pascal→Pascal but degrades when transferring from COCO to Pascal. Instead, our MM-Former further improves the 1-shot and 5-shot performance by 4.4% and 5.5%, outperforming HSNet by 6.1% and 1.7%, respectively.\nOracle Analysis. We also explore the room for further improvement of this new few-shot segmentation paradigm by using the query ground truth (GT) during inference. The results are in the last rows of Tab. 1 and 2. 
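Such a GT-based oracle amounts to replacing the matching module with a selection of the proposal that best overlaps the query ground truth; a minimal sketch (NumPy; the 0.5 threshold for binarizing soft masks is our assumption):

```python
import numpy as np

def iou(a, b, eps=1e-8):
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / ((a | b).sum() + eps)

def oracle_select(proposals, gt, thresh=0.5):
    """Pick the single proposal with the highest IoU against the query GT mask.

    proposals: (N, H, W) soft masks in [0, 1]; gt: (H, W) binary mask.
    Returns the index of the best proposal and its IoU.
    """
    binary = proposals >= thresh
    scores = np.array([iou(p, gt) for p in binary])
    best = int(np.argmax(scores))
    return best, float(scores[best])

# toy check: proposal 0 matches the GT, proposal 1 is its complement
gt = np.array([[1, 1], [0, 0]])
props = np.stack([np.array([[0.9, 0.8], [0.1, 0.2]]),
                  np.array([[0.1, 0.2], [0.9, 0.8]])])
best, score = oracle_select(props, gt)
```

This single-proposal selection is deliberately simple; as noted below, complementary proposals could in principle push the oracle even higher.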
In detail, after the POS generates the N mask proposals, we use the GT mask to select the proposal with the highest IoU and regard it as the segmentation result. Note that this natural selection is not the optimal solution, because there may be other masks complementary to the selected one, but it is still a good oracle to show the potential of the new learning paradigm. According to the results, there is still a large gap between the current performance and the oracle (≈ 20% mIoU), which suggests that our model has enormous potential for improvement even though we have achieved state-of-the-art performance.", "3.4 Ablation Studies": "We conduct ablation studies on various design choices of our MM-Former to show their contribution to the final results. Component-wise ablations, including the MM Module and the POS, are shown in Sec. 3.4.1. The experiments are performed with the 1-shot setting on Pascal-5i and COCO-20i. We further experimentally demonstrate the benefits of the two-stage training strategy in Sec. 3.4.2.", "3.4.1 Component-wise Ablations": "To understand the effect of each component in the MM module, we start the ablations with a heuristic matching baseline and progressively add each building block.\nBaseline. The result of the heuristic matching baseline is shown in the first row of Tab. 3a; it directly selects the mask with the highest cosine similarity to the support prototype. Note that when the learnable matching block is not used, the results in Tab. 3a are all obtained in the same way as the heuristic matching baseline. We observe that the heuristic matching strategy does not provide strong performance, which we attribute to the feature misalignment problem and the inability to fuse multiple mask proposals.\nSelf-Alignment Block. With the self-alignment block, the performance is improved by 4.3% on Pascal and 1.2% on COCO, as shown by the 2nd result in Tab. 
3a, demonstrating that channel-wise attention does help normalize the features for comparison. However, the performance is still inferior, encouraging us to further align the support and query features with the cross-alignment block.\nCross-Alignment Block. In the third result of Tab. 3a, we experiment with a non-parametric variant of the cross-alignment block that removes all of its learnable parameters. A significant performance drop is observed. This is not surprising, because the attention in the cross-alignment block cannot attend to the proper areas due to the feature misalignment. When the cross-alignment block is learned, as indicated by the 4th result of Tab. 3a, the performance is remarkably improved by 14% on Pascal and 12.9% on COCO, manifesting the necessity of learning the feature alignment for further matching.\nLearnable Matching Block. Surprisingly, simply using our learnable matching block can already achieve decent performance (the 5th result in Tab. 3a) compared with the baseline, thanks to its capability to adaptively match and merge multiple mask proposals.\nMask Matching Module. By applying all components, our MM-Former pushes the state-of-the-art performance on COCO to 43.2% (the 7th result in Tab. 3a). In addition, to encourage the alignment of query and support in the feature space, we add the auxiliary loss Lco to the output of the cross-alignment block, which additionally enhances the performance by more than 1%.\nPotential Objects Segmenter. Although we follow Mask2Former to build our POS, several differences are made, and we evaluate these design choices as follows. Mask Classification. In Mask2Former [4], a linear classifier is trained with a cross-entropy loss to categorize each mask proposal, while in our MM-Former for few-shot segmentation, we remove it to avoid learning class-specific representations in the first training stage. The result in Tab. 
4a shows that the classifier harms the performance because it drives the network to fit the “seen” classes in the training set. Since this change only affects the first stage, we use the oracle results to demonstrate the effect.\nNumbers of Proposals. In Tab. 4b, we vary the number of mask proposals N. Increasing N significantly improves both the oracle result and our result. Thus we choose 100 as the default value in all other experiments. It is worth noting that, when varying N from 10 to 100, our result is improved by 5.4% while the oracle result is improved by 12.1%, indicating the large room for improvement under our new mask matching paradigm.\nEffect of Different Feature Extraction. Previous few-shot segmentation works [34, 24, 30] typically integrate matching operations with segmentation-specific techniques [39, 24, 3]. Following Mask2Former, our POS also includes a multi-scale deformable attention (MSDeformAttn) [39]. In Tab. 3b, we investigate using the features from the MSDeformAttn instead of the backbone features for the MM module. Interestingly, although the feature after context modeling is essential for segmentation, it is not suitable for the matching problem and impairs the matching performance.", "3.4.2 Analysis of Training Strategy": "Effect of Two-stage Training. One may wonder what would happen if we coupled the training of the POS and MM modules. Tab. 3c examines this point: joint optimization is inferior to the two-stage training strategy, since POS and MM have different convergence rates.\nEfficiency of Two-stage Training. We analyze the efficiency of our method and compare training time and training memory consumption (Tab. 5). All models are based on the ResNet-50 backbone and tested on the COCO benchmark. All models are tested with RTX A6000 GPU(s) for a fair comparison. Training times for CyCTR and HSNet are reported according to the official implementations.
We report the training time for the first and second stages separately. It is worth noting that, for the same test split, our method can share the same stage-1 model across the 1-shot and 5-shot settings. The stage-1 training time for 5-shot can therefore be ignored if 1-shot models already exist.", "3.5 Analysis of model transferability": "Our MM-Former shows better transfer performance when trained on COCO but relatively lower performance when trained on Pascal. We make an in-depth study of this phenomenon.\nEffect of the number of training samples: We use all training samples belonging to the 15 Pascal training classes from COCO to train MM-Former. In this case, the number of training samples is 9 times that of Pascal while the categories are the same, dubbed COCO-15 in Tab. 6. When the number of classes is limited, more training data worsens the matching performance (60.7% vs. 63.3% for 1-shot and 64.8% vs. 64.9% for 5-shot), even though a better POS is obtained, as indicated by the oracle result (86.3% vs. 82.5%).\nEffect of the number of training classes: We randomly sample an equal number of training images (6000 images averaged across the 4 splits) as in the Pascal training set from 75 classes (excluding test classes) in COCO to train our MM-Former, dubbed COCO-75-sub in Tab. 6. When training with the same amount of data, more classes lead to better matching performance (66.8% vs. 63.3% for 1-shot and 68.9% vs. 64.9% for 5-shot).\nIn short, the number of classes determines the quality of the matching module. This finding is reasonable and in line with the motivation of few-shot segmentation and meta-learning: learning to learn by observing a wide range of tasks and quickly adapting to new tasks. When the number of classes is limited, the variety of tasks and the meta-knowledge are restricted, which in turn limits the learning of the matching module.", "3.6 Qualitative Analysis": "Visual examples: We show some visual examples in Fig. 4.
Notably, some support images may contain only part of the object (e.g., the 2nd row). Directly selecting the mask with the highest cosine similarity cannot produce the anticipated result. Using a learnable mask matching block to fuse multiple masks solves the problem to a large extent, and the proposed feature alignment blocks further improve our model by alleviating the misalignment problem, e.g., the results in the last row.\nRobustness analysis: We also provide a robustness analysis in Fig. 6, which uses an anomalous support sample for segmenting the query image. Compared with the previous state-of-the-art method, HSNet, which tends to segment the salient object in the image, our model is more robust to anomalous inputs.\nExplanation of SA: SA is proposed to align the features along the channel dimension, so that outliers in the channel dimension are smoothed and the features become more robust by aligning with the attended global (specifically, channel-wise weighted average) features. Fig. 5 supports this point: Favg (2nd row) responds to general foreground regions, important channels (3rd row) are emphasized, and outliers are suppressed (4th row).", "4 Related Work": "Few-Shot Segmentation [21] aims to perform segmentation with very few labeled images. Many recent approaches formulate few-shot segmentation from the perspective of metric learning [23, 7, 27]. PrototypicalNet [22] is the first to perform metric learning on few-shot segmentation. PFENet [24] further designs a feature pyramid module to extract multi-level features. Many recent methods point out that a single support prototype is insufficient to represent a given category. To address this problem, [32] attempts to obtain multiple prototypes via the EM algorithm, while [15] utilizes a super-pixel segmentation technique to generate multiple prototypes. Another way to address this problem is to apply a pixel-level attention mechanism.
[32, 26] attempt to use graph attention networks to exploit all foreground support pixel features. HSNet [19] proposes to learn dense matching through 4D convolutions. CyCTR [38] points out that not all foreground pixels are conducive to segmentation and adopts a cycle-consistency technique to filter out proper pixels to guide segmentation.\nTransformers, originally proposed for NLP [25], are being rapidly adopted in computer vision tasks [8, 2, 29, 5, 4]. The major benefit of transformers is their ability to capture global information with the self-attention module. DETR [2] is the first work applying Transformers to the object detection task. Mask2Former [4] uses Transformers to unify semantic segmentation and instance segmentation. Motivated by the design of MaskFormer, we apply transformers to segment all potential objects in one image and to align support and query features at the pixel level within our MM-Former.", "5 Conclusion": "In this paper, we present the Mask Matching Transformer (MM-Former), a new perspective for tackling the challenging few-shot segmentation task. Different from previous practice, MM-Former is a two-stage framework, which adopts a Potential Objects Segmenter and a Mask Matching Module to first produce high-quality mask proposals and then blend them into the final segmentation result. Extensive experiments on COCO-20i and Pascal-5i demonstrate the effectiveness and the generalization advantage of the proposed MM-Former. We hope our MM-Former can serve as a solid baseline and help advance future research on few-shot segmentation.\nLimitations and societal impact. Our MM-Former introduces the decompose-first-then-blend paradigm to few-shot segmentation research, which is a new perspective and may inspire future researchers to develop more advanced versions. However, there is still a large gap between the current results and the oracle (≈ 20% mIoU). How to further narrow this gap is our future research focus.\nAcknowledgment.
This work was supported in part by the National Key R & D Program of China (No. 2021ZD0112100), the National NSF of China (No. U1936212, No. 62120106009), and the Fundamental Research Funds for the Central Universities (No. K22RC00010). Yao Zhao and Yunchao Wei are the corresponding authors.", "Reviewer Summary": "Reviewer_4: This paper tackles the few-shot segmentation task with a proposed Mask Matching Transformer (MM-Former). The MM-Former contains two parts. The first part decomposes query images into multiple segmentation proposals with a class-agnostic segmenter. The second part merges related segment proposals into final masks guided by support images. With the ResNet-50 backbone, the proposed method gets competitive results on the popular COCO and PASCAL benchmarks.\n\nReviewer_5: This paper proposes a new two-stage framework that decouples the matching and segmentation modules for few-shot segmentation. Extensive experiments and ablation studies on the COCO and Pascal datasets also verify the algorithm's effectiveness.\n\nReviewer_6: This paper proposes a few-shot semantic segmentation (FSSS) network based on Mask2Former [1]. By formulating semantic segmentation as a mask classification problem like [1], the authors introduce a new mask-level matching mechanism instead of the pixel-level matching in previous FSSS methods. Additionally, they propose a feature-alignment block based on the attention mechanism to align the support and query features individually and to cross-align between them.
They conducted experiments on two FSSS datasets (COCO-20i and PASCAL-5i) and achieved SOTA results.", "Cited in": [ { "title": "Few-shot segmentation without meta-learning: A good transductive inference is all you need?", "year": 2021.0, "authors": "Malik Boudiaf; Hoel Kervadec; Ziko Imtiaz Masud; Pablo Piantanida; Ismail Ben Ayed; Jose Dolz", "arxiv_di": "2012.06166", "Introduction": "Few-shot learning, which aims at classifying instances from unseen classes given only a handful of training examples, has witnessed rapid progress in recent years. To quickly adapt to novel classes, there has been a substantial focus on the meta-learning (or learning-to-learn) paradigm [27,31,35]. Meta-learning approaches popularized the need to structure the training data into episodes, thereby simulating the tasks that will be presented at inference. Nevertheless, despite the achieved improvements, several recent image classification works [2,4,6,12,32,44] observed that meta-learning might have limited generalization capacity beyond the standard 1- or 5-shot classification benchmarks. For instance, in more realistic settings with domain shifts, simple classification baselines may outperform much more complex meta-learning methods [4,12].\nDeep-learning-based semantic segmentation has generally been nurtured by the methodological advances in image classification. Few-shot segmentation, which has gained popularity recently [10,17,19,23,25,33,36,37,38,39,41,42], is no exception. In this setting, a deep segmentation model is first pre-trained on base classes. Then, model generalization is assessed over few-shot tasks and novel classes unseen during base training. Each task includes an unlabeled test image, referred to as the query, along with a few labeled images (the support set).
The recent literature in few-shot segmentation follows the learning-to-learn paradigm, and substantial research efforts have focused on the design of specialized architectures and episodic-training schemes for base training. However, i) episodic training itself implicitly assumes that testing tasks have a structure (e.g., the number of support shots) similar to the tasks used at the meta-training stage; and ii) base and novel classes are often assumed to be sampled from the same dataset.\nIn practice, those assumptions may limit the applicability of the existing few-shot segmentation methods in realistic scenarios [3,4]. In fact, our experiments proved consistent with findings in few-shot classification when going beyond the standard settings and benchmarks. Particularly, we observed among state-of-the-art methods a saturation in performance [3] when increasing the number of labeled samples (see Table 3). Also, in line with very recent observations in image classification [4], existing meta-learning methods prove less competitive in cross-domain scenarios (see Table 4). This casts doubt on the viability of the current few-shot segmentation benchmarks and datasets, and motivates re-considering the relevance of the meta-learning paradigm, which has become the de facto choice in the few-shot segmentation literature.", "Related_Work": "Few-Shot Learning for classification. Meta-learning has become the de facto solution to learn novel tasks from a few labeled samples. Even though the idea is not new [28], it has been revived recently by several popular works in few-shot classification [9,26,27,31,35]. These works can be categorized into gradient- or metric-learning-based methods. Gradient approaches resort to stochastic gradient descent (SGD) to learn the commonalities among different tasks [26,9]. Metric-learning approaches [35,31] adopt deep networks as feature-embedding functions, and compare the distances between the embeddings.
Furthermore, in a recent line of works, the transductive setting has been investigated for few-shot classification [6,2,14,16,20,24,31,44], and yielded performance improvements over inductive inference. These results are in line with established facts in classical transductive inference [34,15,5], well known to outperform its inductive counterpart on small training sets.\nTo a large extent, these transductive classification works follow well-known concepts in semi-supervised learning, such as graph-based label propagation [20], entropy minimization [6] or Laplacian regularization [44]. While the entropy is a part of our transductive loss, we show that it is not sufficient for segmentation tasks, typically yielding trivial solutions.\nFew-shot segmentation. Segmentation can be viewed as classification at the pixel level, and recent efforts have mostly gone into the design of specialized architectures. Typically, the existing methods use a two-branch comparison framework, inspired by the very popular prototypical networks for few-shot classification [31]. Particularly, the support images are employed to generate class prototypes, which are later used to segment the query images via a prototype-query comparison module. Early frameworks followed a dual-branch architecture, with two independent branches [29,7,25], one generating the prototypes from the support images and the other segmenting the query images with the learned prototypes. More recently, the dual-branch setting has been unified into a single branch, employing the same embedding function for both the support and query sets [42,30,37,38,21]. These approaches mainly aim at exploiting better guidance for the segmentation of query images [42,23,36,40], by learning better class-specific representations [37,19,21,38,30] or iteratively refining these [41]. Graph CNNs have also been employed to establish more robust correspondences between the support and query images, enhancing the learned prototypes [36].
Alternative solutions to learn better class representations include: imprinting the weights for novel classes [30], decomposing the holistic class representation into a set of part-aware prototypes [21], or mixing several prototypes, each corresponding to diverse image regions [38]. Let $\Omega \subset \mathbb{R}^2$ be an image space, $x_n : \Omega \to \mathbb{R}^3$ an input image, and $y_n : \Omega \to \{0,1\}^{|\mathcal{Y}_{base}|}$ its corresponding pixel-wise one-hot annotation. At inference, we test our model through a series of K-shot tasks. Each K-shot task consists of a support set $\mathcal{S} = \{(x_k, y_k)\}_{k=1}^{K}$, i.e., K fully annotated images, and one unlabelled query image $x_Q$, all from the same novel class. This class is randomly sampled from a set of novel classes $\mathcal{Y}_{novel}$ such that $\mathcal{Y}_{base} \cap \mathcal{Y}_{novel} = \emptyset$. The goal is to leverage the supervision provided by the support set in order to properly segment the object of interest in the query image.", "Conclusion": "Without resorting to the popular meta-learning paradigm, our proposed RePRI achieves new state-of-the-art results on standard 5-shot segmentation benchmarks, while being close to the best-performing approaches in the 1-shot setting. RePRI is modular and can, therefore, be used in conjunction with any feature extractor regardless of how the base training was performed. Supported by the findings in this work, we believe that the relevance of episodic training should be re-considered in the context of few-shot segmentation, and we provide a strong baseline to stimulate future research on this topic. Our results indicate that current state-of-the-art methods may have difficulty with more challenging settings, when dealing with domain shift or conducting inference on tasks whose structures differ from those seen in training, scenarios that have been overlooked in the literature. These findings align with recent observations in few-shot classification [4,3].
Furthermore, embedding more accurate foreground-background proportion estimates appears to be a very promising way of constraining the inference, as demonstrated by the significantly improved results obtained by the oracle. Our implementation is publicly available online: https://github.com/mboudiaf/RePRI-for-Few-Shot-Segmentation.", "Experiment_and_Results": "Datasets. We resort to two public few-shot segmentation benchmarks, PASCAL-5 i and COCO-20 i, to evaluate our method. PASCAL-5 i is built from PASCAL-VOC 2012 [8], and contains 20 object categories split into 4 folds. For each fold, 15 classes are used for training and the remaining 5 categories for testing. COCO-20 i is built from MS-COCO [18] and is more challenging, as it contains more samples, more classes and more instances per image. Similar to PASCAL-5 i, the COCO-20 i dataset is divided into 4 folds, with 60 base classes and 20 test classes in each fold.\nTraining. We build our model based on PSPNet [43] with ResNet-50 and ResNet-101 [13] as backbones. We train the feature extractor with standard cross-entropy over the base classes for 100 epochs on PASCAL-5 i and 20 epochs on COCO-20 i, with the batch size set to 12. We use SGD as the optimizer with the initial learning rate set to 2.5e-3 and cosine decay. Momentum is set to 0.9, and weight decay to 1e-4. Label smoothing is used with the smoothing parameter set to 0.1. We use neither multi-scaling nor deep supervision, unlike the original PSPNet paper [43]. As for data augmentation, we only use random mirror flipping.\nInference. At inference, following previous works [21,37], all images are resized to a fixed 417 × 417 resolution. For each task, the classifier θ is built on top of the features from the penultimate layer of the trained network. For our model with ResNet-50 as the backbone, this results in a 53 × 53 × 512 feature map. The SGD optimizer is used to train θ, with a learning rate of 0.025. For each task, a total of 50 iterations are performed.
The parameter $t_\pi$ is set to 10. For the main method, the weights $\lambda_H$ and $\lambda_{KL}$ are both initially set to $1/K$, such that the CE term plays a more important role as the number of shots K grows. For $t \ge t_\pi$, $\lambda_{KL}$ is increased by 1 to further encourage the predicted proportion to stay close to $\pi^{(t_\pi)}$. Finally, the temperature τ is set to 20.\nEvaluation. We employ the widely adopted mean Intersection over Union (mIoU). Specifically, for each class, the classwise-IoU is computed as the sum of intersections over all samples within the class divided by the sum of all unions. Then, the mIoU is computed as the average of the classwise-IoU over all classes. Following previous works [21], 5 runs of 1000 tasks each are computed for each fold, and the average mIoU over runs is reported.\nMain method. First, we investigate the performance of the proposed method in the popular 1-shot and 5-shot settings on both PASCAL-5 i and COCO-20 i, whose results are reported in Tables 1 and 2. Overall, we found that our method compares competitively with state-of-the-art approaches in the 1-shot setting, and significantly outperforms recent methods in the 5-shot scenario. Additional qualitative results on PASCAL-5 i are shown in the supplemental material.\nBeyond 5 shots. In the popular learning-to-learn paradigm, the number of shots leveraged during the meta-training stage has a direct impact on the performance at inference [3]. Particularly, to achieve the best performance, meta-learning-based methods typically require the number of shots used during meta-training to match that employed during meta-testing. To demonstrate that the proposed method is more robust against differences in the number of labeled support samples between the base and test sets, we further investigate the 10-shot scenario. Particularly, we trained the methods in [33,38] by using one labeled sample per class, i.e., a 1-shot task, and tested the models on a 10-shot task.
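For concreteness, the classwise-IoU aggregation described in the Evaluation paragraph above (per class: sum of intersections over all samples divided by the sum of unions, then averaged over classes) can be sketched on toy flattened binary masks; this is a minimal illustration, not the benchmark code:

```python
def class_iou(preds, gts):
    # Classwise-IoU: sum of per-sample intersections divided by sum of per-sample unions.
    inter = sum(p & g for pm, gm in zip(preds, gts) for p, g in zip(pm, gm))
    union = sum(p | g for pm, gm in zip(preds, gts) for p, g in zip(pm, gm))
    return inter / union

def mean_iou(preds_by_class, gts_by_class):
    # mIoU: average of the classwise-IoU over all classes.
    ious = [class_iou(p, g) for p, g in zip(preds_by_class, gts_by_class)]
    return sum(ious) / len(ious)

# Toy example: two classes, each with one flattened binary mask (1 = foreground).
preds = [[[1, 1, 0, 0]], [[1, 1, 1, 0]]]
gts   = [[[1, 0, 0, 0]], [[1, 1, 1, 0]]]
print(mean_iou(preds, gts))  # (0.5 + 1.0) / 2 = 0.75
```

Note that summing intersections and unions over all samples of a class before dividing (as done here) weighs large objects more than averaging per-image IoUs would.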
Interestingly, we show that the gap between our method and the current state-of-the-art becomes larger as the number of support images increases (Table 3), with significant gains of 6% and 4% on PASCAL-5 i and COCO-20 i, respectively. These results suggest that our transductive inference leverages more effectively the information conveyed in the labeled support set of a given task. We now investigate the ideal scenario where an oracle provides the exact foreground/background proportion in the query image, such that $\pi^{(t)} = \pi^*, \forall t$. Reported results in this scenario, referred to as Oracle (Tables 1 and 2), show impressive improvements over both our current method and all previous works, with a consistent gain across datasets and tasks. Particularly, these gains range from 11% to 14% on both PASCAL-5 i and COCO-20 i, in both 1-shot and 5-shot settings. We believe that these findings convey two important messages. First, they prove that there exists a simple linear classifier that can largely outperform state-of-the-art meta-learning models, while being built on top of a feature extractor trained with a standard cross-entropy loss. Second, these results indicate that having a precise size of the query object of interest acts as a strong regularizer. This suggests that more effort could be directed towards properly constraining the optimization process of w and b, and opens the door to promising avenues. We reproduced and compared to the two best-performing methods [33,21] using their respective official GitHub repositories. Table 4 summarizes the results for the 1-shot and 5-shot cross-domain experiments. We observe that in the presence of domain shift, our method outperforms existing methods in both 1-shot and 5-shot scenarios, with the improvement again jumping from 2% in 1-shot to 4% in 5-shot. In Table 7, we show the details of the cross-domain folds used for the domain-shift experiments. Also, in Table 8, the per-fold results of the same experiment are available.
In Table 9, we give the per-fold results of the 10-shot experiments. In Figure 4, we provide some qualitative results on PASCAL-5 i that show how our method helps refine the initial predictions of the classifier.\nFigure 4: Qualitative results on PASCAL-5 i. The Initial column refers to the predictions right after initializing the prototypes, while the Final column refers to the prediction after running our inference. Best viewed in color at high resolution.", "Extra": "In this work, we forego meta-learning, and reconsider a simple cross-entropy supervision during training on the base classes for feature extraction. Additionally, we propose a transductive inference that better leverages the support-set supervision than existing methods. Our contributions can be summarized as follows:\n• We present RePRI (Region Proportion Regularized Inference), a new transductive inference for a given few-shot segmentation task. RePRI optimizes a loss integrating three complementary terms: i) a standard cross-entropy on the labeled pixels of the support images; ii) the entropy of the posteriors on the query pixels of the test image; and iii) a global KL-divergence regularizer based on the proportion of the predicted foreground pixels within the test image. RePRI can be used on top of any trained feature extractor, and uses exactly the same information as standard inductive methods for a given few-shot segmentation task.\n• Although we use a basic cross-entropy training on the base classes, without complex meta-learning schemes, RePRI yields highly competitive performance on the standard few-shot segmentation benchmarks, PASCAL-5 i and COCO-20 i, with gains around 5% and 6% over the state-of-the-art in the 5- and 10-shot scenarios, respectively.\n• We introduce a more realistic setting where, in addition to the usual shift in classes between training and testing data distributions, a shift in the images' feature distribution is also introduced.
Our method achieves the best performance in this scenario.\n• We demonstrate that precise region-proportion information on the query object improves the results substantially, with an average gain of 13% on both datasets. While assuming the availability of such information is not realistic, we show that inexact estimates can still lead to drastic improvements, opening a very promising direction for future research. There exist different ways of leveraging the base set $\mathcal{D}_{base}$. Meta-learning, or learning to learn, is the dominant paradigm in the few-shot literature. It emulates the test-time scenario during training by structuring $\mathcal{D}_{base}$ into a series of training tasks. Then, the model is trained on these tasks to learn how to best leverage the supervision from the support set in order to enhance its query segmentation. Recently, Cao et al. [3] formally proved that the number of shots $K_{train}$ used in training episodes in the case of prototypical networks represents a learning bias, and that the testing performance saturates quickly when $K_{test}$ differs from $K_{train}$. Empirically, we observed the same trend for current few-shot segmentation methods, with minor improvements from 1-shot to 5-shot performance (Table 1). In practice, the format of the test tasks may be unknown beforehand. Therefore, we want to make as few assumptions about it as possible. This motivates us to employ a feature extractor $f_\phi$ trained with standard cross-entropy supervision on the whole $\mathcal{D}_{base}$ set instead, without resorting to episodic training.\nObjective. In what follows, we use a placeholder subscript to denote either a support subscript $k \in \{1, ..., K\}$ or the query subscript Q. At inference, we consider the 1-way segmentation problem: $y : \Omega \to \{0,1\}^2$ is the function representing the dense background/foreground (B/F) mask in image x.
For both support and query images, we extract features $z := f_\phi(x)$, with $z : \Psi \to \mathbb{R}^C$, where C is the channel dimension in the feature space Ψ, with lower pixel resolution $|\Psi| < |\Omega|$.\nUsing the features z, our goal is to learn the parameters θ of a classifier that properly discriminates foreground from background pixels. Precisely, our classifier $p : \Psi \to [0,1]^2$ assigns a (B/F) probability vector to each pixel $j \in \Psi$ in the extracted feature space.\nFor each test task, we find the parameters θ of the classifier by optimizing the following transductive objective:\n$$\min_\theta \ \text{CE} + \lambda_H \mathcal{H} + \lambda_{KL} \mathcal{D}_{KL}, \qquad (1)$$\nwhere $\lambda_H, \lambda_{KL} \in \mathbb{R}$ are non-negative hyper-parameters balancing the effects of the different terms. We now describe each of the terms in Eq. (1) in detail:\n$$\text{CE} = -\frac{1}{K|\Psi|} \sum_{k=1}^{K} \sum_{j \in \Psi} y_k(j)^\top \log p_k(j)$$\nis the cross-entropy between the downsampled labels $y_k$ from the support images and our classifier's soft predictions. Simply minimizing this term will often lead to degenerate solutions, especially in the 1-shot setting, as observed in Figure 1: the classifier θ typically overfits the support set S, translating into small activated regions on the query image.\n$$\mathcal{H} = -\frac{1}{|\Psi|} \sum_{j \in \Psi} p_Q(j)^\top \log p_Q(j)$$\nis the Shannon entropy of the predictions on the query-image pixels. The role of this entropy term is to make the model's predictions more confident on the query image. The use of $\mathcal{H}$ originates from the semi-supervised literature [11,22,1]. Intuitively, it pushes the decision boundary drawn by the linear classifier towards low-density regions of the extracted query feature space.
While this term plays a crucial role in preserving object regions that were initially predicted with only medium confidence, its sole addition to CE does not solve the problem of degenerate solutions, and may even worsen it in some cases.\n$$\mathcal{D}_{KL} = \bar{p}_Q^\top \log\frac{\bar{p}_Q}{\pi}, \quad \text{with} \quad \bar{p}_Q = \frac{1}{|\Psi|} \sum_{j \in \Psi} p_Q(j),$$\nis a Kullback-Leibler (KL) divergence term that encourages the B/F proportion predicted by the model to match a parameter $\pi \in [0,1]^2$. Notice that the division inside the log applies element-wise. The joint estimation of the parameter π in our context is further discussed in a following paragraph. Here, we argue that this term plays a key role in our loss. First, in the case where the parameter π does not match the exact B/F proportion of the query image, this term still helps avoid the degenerate solutions stemming from CE and $\mathcal{H}$ minimization. Second, should an accurate estimate of the B/F proportion in the query image be available, it could easily be embedded through this term, resulting in a substantial performance boost, as discussed in Section 4.\nChoice of the classifier. As we optimize θ for each task at inference, we want our method to add as little computational load as possible. In this regard, we employ a simple linear classifier with learnable parameters $\theta^{(t)} = \{w^{(t)}, b^{(t)}\}$, with t the current step of the optimization procedure, $w^{(t)} \in \mathbb{R}^C$ the foreground prototype and $b^{(t)} \in \mathbb{R}$ the corresponding bias. Thus, the probabilities $p_k^{(t)}$ and $p_Q^{(t)}$ at iteration t, for pixel $j \in \Psi$, can be obtained as follows:\n$$p^{(t)}(j) := \begin{pmatrix} 1 - s^{(t)}(j) \\ s^{(t)}(j) \end{pmatrix}, \qquad (2)$$\nwhere $s^{(t)}(j) = \mathrm{sigmoid}\big(\tau \cos(z(j), w^{(t)}) - b^{(t)}\big)$, $\tau \in \mathbb{R}$ is a temperature hyper-parameter, and cos denotes the cosine similarity. The same classifier is used to estimate the support-set probabilities $p_k$ and the query predicted probabilities $p_Q$.
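The classifier of Eq. (2) and the three loss terms of Eq. (1) can be sketched numerically in a few lines. This is a toy illustration with made-up 2-channel features over a couple of pixels (all values hypothetical), not the authors' implementation:

```python
import math

def cos_sim(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(z, w, b, tau=20.0):
    # Eq. (2): per-pixel (background, foreground) probabilities from a cosine classifier.
    probs = []
    for zj in z:
        s = 1.0 / (1.0 + math.exp(-(tau * cos_sim(zj, w) - b)))
        probs.append((1.0 - s, s))
    return probs

def transductive_loss(p_sup, y_sup, p_qry, pi, lam_h=1.0, lam_kl=1.0):
    # Eq. (1): CE on labeled support pixels + entropy and KL regularizers on the query.
    ce = -sum(y[c] * math.log(p[c]) for y, p in zip(y_sup, p_sup) for c in (0, 1)) / len(p_sup)
    h = -sum(p[c] * math.log(p[c]) for p in p_qry for c in (0, 1)) / len(p_qry)
    p_bar = [sum(p[c] for p in p_qry) / len(p_qry) for c in (0, 1)]  # marginal over query pixels
    kl = sum(p_bar[c] * math.log(p_bar[c] / pi[c]) for c in (0, 1))
    return ce + lam_h * h + lam_kl * kl

# Hypothetical prototype, bias, and two query-pixel features.
w, b = [1.0, 0.0], 0.0
z_q = [[0.9, 0.1], [-0.8, 0.2]]
p_q = classify(z_q, w, b)  # first pixel -> foreground, second -> background
loss = transductive_loss(p_q, [(0, 1), (1, 0)], p_q, pi=[0.5, 0.5])
```

Here the entropy and KL terms are computed on the query predictions only, mirroring the transductive setup; in the paper the weights $\lambda_H$ and $\lambda_{KL}$ follow the $1/K$ schedule described in the experimental section.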
At initialization, we set the prototype $w^{(0)}$ to be the average of the foreground support features, i.e.,\n$$w^{(0)} = \frac{1}{K|\Psi|} \sum_{k=1}^{K} \sum_{j \in \Psi} y_k(j)_1 \, z_k(j),$$\nwith $y_k(j)_1$ the foreground component of the one-hot label of image $x_k$ at pixel j. The initial bias $b^{(0)}$ is set as the mean of the foreground's soft predictions on the query image: $b^{(0)} = \frac{1}{|\Psi|} \sum_{j \in \Psi} p_Q(j)_1$. Then, $w^{(t)}$ and $b^{(t)}$ are optimized with gradient descent. The computational footprint of this per-task optimization is discussed in Section 4.\nJoint estimation of the B/F proportion π. Without additional information, we leverage the model's label-marginal distribution over the query image, $\bar{p}_Q^{(t)}$, in order to learn π jointly with the classifier parameters. Note that minimizing Eq. (1) with respect to π yields $\pi^{(t)} = \bar{p}_Q^{(t)}$. Empirically, we found that, after initialization, updating π only once during optimization, at a later iteration $t_\pi$, was enough:\n$$\pi^{(t)} = \begin{cases} \bar{p}_Q^{(0)} & 0 \le t \le t_\pi \\ \bar{p}_Q^{(t_\pi)} & t > t_\pi. \end{cases} \qquad (3)$$\nIntuitively, the entropy term $\mathcal{H}$ helps gradually refine the initially blurry soft predictions (third column in Fig. 1), which turns $\bar{p}_Q^{(t)}$ into an improving estimate of the true B/F proportion. A quantitative study of this phenomenon is provided in Section 4.3. Therefore, our inference can be seen as a joint optimization over θ and π, with $\mathcal{D}_{KL}$ serving as a self-regularization that prevents the model's marginal distribution $\bar{p}_Q^{(t)}$ from diverging.\nOracle case with a known π. As an upper bound, we also investigate the oracle case, where we have access to the true B/F proportion in $x_Q$:\n$$\pi^* = \frac{1}{|\Psi|} \sum_{j \in \Psi} y_Q(j). \qquad (4)$$\nWe introduce a more realistic, cross-domain setting (COCO-20 i to PASCAL-VOC). We argue that such a setting is a step towards a more realistic evaluation of these methods, as it can assess the impact on performance caused by a domain shift between the training data distribution and the testing one.
We believe that this scenario can be easily found in practice, as even slight alterations in the data collection process might result in a distributional shift. We reproduce the scenario where a large labeled dataset is available (e.g., COCO-20$^i$), but the evaluation is performed on a target dataset with a different feature distribution (e.g., PASCAL-VOC).\nTable 1: Results of 1-way 1-shot and 1-way 5-shot segmentation on PASCAL-5$^i$ using the mean-IoU. Best results in bold.\nAs per the original work [18], significant differences exist between the two original datasets. For instance, images in MS-COCO have on average 7.7 instances of objects coming from 3.5 distinct categories, while PASCAL-VOC only has an average of 3 instances from 2 distinct categories.\nEvaluation We reuse models trained on each fold of COCO-20$^i$ and generate tasks using images from all the classes in PASCAL-VOC that were not used during training. For instance, fold-0 of this setting means the model was trained on fold-0 of COCO-20$^i$ and tested on the whole PASCAL-VOC dataset, after removing the classes seen in training. A complete summary of all the folds is available in the Supplemental material.\nImpact of each term in the main objective While Fig. 1 provides qualitative insights on how each term in Eq. (1) affects the final prediction, this section provides a quantitative evaluation of their impact, evaluated on PASCAL-5$^i$ (Table 5). Quantitative results confirm the qualitative insights observed in Fig. 1, as both the CE and CE + $\mathcal{H}$ losses drastically degrade the performance compared to the proportion-regularized loss, i.e., CE + $\mathcal{H}$ + $\mathcal{D}_{\mathrm{KL}}$. For example, in the 1-shot scenario, simply minimizing the CE results in more than a 20% difference compared to the proposed model. In this case, the prototype $w$ tends to overfit the support sample and only activates regions of the query object that strongly correlate with the support object.
Such behavior hampers the performance when the support and query objects exhibit slight changes in shape or color histograms, for example, which may be very common in practice.\nAdding the entropy term $\mathcal{H}$ to CE partially alleviates this problem, as it tends to reinforce the model's confidence on positive pixels initially classified with mid or low confidence. Nevertheless, despite improving on the naive CE-based model, the gap with the proposed model remains considerably large, with a 10% difference. One may notice that the differences between CE and CE + $\mathcal{H}$ + $\mathcal{D}_{\mathrm{KL}}$ decrease in the 5-shot setting, since overfitting 5 support samples simultaneously becomes more difficult. The results from this ablation experiment reinforce our initial hypothesis that the proposed KL term based on the size parameter $\pi$ acts as a strong regularizer.\nInfluence of the parameter t_π In Fig. 2, we plot the averaged mIoU (over 4 folds) as a function of $t_\pi$ varying over the full range $t_\pi \in [1, 50]$. For 5-shot, the performances are stable and remain largely above SOTA for all $t_\pi$. As for the 1-shot case, the range $t_\pi \in [5, 15]$ gives the best results.\nInfluence of parameter π misestimation Precisely knowing the foreground/background (B/F) proportion of the query object is unrealistic. To quantify the deviation from the exact B/F proportion $\pi^*$, we introduce the relative error on the foreground size:\n$\delta^{(t)} = \frac{\pi_1^{(t)}}{\pi_1^*} - 1$, (5)\nwhere $\pi_1^*$ represents the exact foreground proportion in the query image, extracted from its corresponding ground truth, and $\pi_1^{(t)}$ our estimate at iteration $t$, which is derived from the soft predicted segmentation. As observed in Fig. 1, the initial prototype often results in a blurred probability map, from which only a very coarse estimate of the query proportion can be inferred and used as $\pi^{(0)}$. The distribution of $\delta$ over 5000 tasks is presented in Fig. 3a.
It clearly shows that the initial prediction typically provides an overestimate of the actual query foreground size, while fine-tuning the classifier $\theta$ for 10 iterations with our main loss (Eq. 1) already provides a strictly more accurate estimate, as conveyed by the right box plot in Fig. 3a, with an average $\delta$ around 0.7. Now, a natural question remains: how good does the estimate need to be in order to approach the oracle results? To answer this, we carry out a series of controlled experiments where, instead of computing $\pi^{(t)}$ with Eq. (3), we use a $\delta$-perturbed oracle at initialization, such that $\pi_1^{(t)} = \pi_1^* (1 + \delta)$.\nFig. 3a: Relative error $\delta$ distribution of our current method, at initialization $\delta^{(0)}$ and after 10 gradient iterations $\delta^{(10)}$.\nEach point in Fig. 3b represents the mIoU obtained over 5000 tasks for a given perturbation $\delta$. Fig. 3b reveals that the exact B/F proportion is not required to significantly close the gap with the oracle. Specifically, foreground size estimates ranging from -10% to +30% with respect to the oracle proportion are sufficient to achieve 70%+ mIoU, which represents an improvement of 10% over the current state-of-the-art. This suggests that more refined size estimation methods may significantly increase the performance of the proposed method.\nComputational efficiency We now inspect the computational cost of the proposed model, and compare it to recent existing methods. Unlike prior work, we solve an optimization problem at inference, which naturally slows down the inference process. However, in our case, only a single prototype vector $w \in \mathbb{R}^C$, where we recall $C$ is the feature channel dimension, and a bias $b \in \mathbb{R}$ need to be optimized for each task. Furthermore, in our setting $C = 512$, and therefore the problem can still be solved relatively efficiently, leading to reasonable inference times. In Table 6, we summarize the FPS rate at inference for our method, as well as for two competing approaches that only require a forward pass.
We can observe that, unsurprisingly, our method reports lower FPS rates, without becoming unacceptably slower. The reported values indicate that the differences in inference times are small compared to, for example, the approach in [33]. Particularly, in the 1-shot scenario, our method processes tasks 3 FPS slower than [33], whereas this gap narrows down to 0.7 FPS in the 5-shot setting." }, { "title": "End-to-end object detection with transformers", "year": 2020.0, "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "arxiv_di": "2005.12872", "Introduction": "The goal of object detection is to predict a set of bounding boxes and category labels for each object of interest. Modern detectors address this set prediction task in an indirect way, by defining surrogate regression and classification problems on a large set of proposals [37,5], anchors [23], or window centers [53,46]. Their performances are significantly influenced by postprocessing steps to collapse near-duplicate predictions, by the design of the anchor sets and by the heuristics that assign target boxes to anchors [52]. To simplify these pipelines, we propose a direct set prediction approach to bypass the surrogate tasks. This end-to-end philosophy has led to significant advances in complex structured prediction tasks such as machine translation or speech recognition, but not yet in object detection: previous attempts [43,16,4,39] either add other forms of prior knowledge, or have not proven to be competitive with strong baselines on challenging benchmarks. This paper aims to bridge this gap. We streamline the training pipeline by viewing object detection as a direct set prediction problem. We adopt an encoder-decoder architecture based on transformers [47], a popular architecture for sequence prediction. 
The self-attention mechanisms of transformers, which explicitly model all pairwise interactions between elements in a sequence, make these architectures particularly suitable for specific constraints of set prediction such as removing duplicate predictions.\nOur DEtection TRansformer (DETR, see Figure 1) predicts all objects at once, and is trained end-to-end with a set loss function which performs bipartite matching between predicted and ground-truth objects. DETR simplifies the detection pipeline by dropping multiple hand-designed components that encode prior knowledge, like spatial anchors or non-maximal suppression. Unlike most existing detection methods, DETR doesn't require any customized layers, and thus can be reproduced easily in any framework that contains standard CNN and transformer classes.\nCompared to most previous work on direct set prediction, the main features of DETR are the conjunction of the bipartite matching loss and transformers with (non-autoregressive) parallel decoding [29,12,10,8]. In contrast, previous work focused on autoregressive decoding with RNNs [43,41,30,36,42]. Our matching loss function uniquely assigns a prediction to a ground truth object, and is invariant to a permutation of predicted objects, so we can emit them in parallel.\nWe evaluate DETR on one of the most popular object detection datasets, COCO [24], against a very competitive Faster R-CNN baseline [37]. Faster R-CNN has undergone many design iterations and its performance was greatly improved since the original publication. Our experiments show that our new model achieves comparable performance. More precisely, DETR demonstrates significantly better performance on large objects, a result likely enabled by the non-local computations of the transformer. It obtains, however, lower performance on small objects.
We expect that future work will improve this aspect in the same way the development of FPN [22] did for Faster R-CNN.\nTraining settings for DETR differ from standard object detectors in multiple ways. The new model requires an extra-long training schedule and benefits from auxiliary decoding losses in the transformer. We thoroughly explore what components are crucial for the demonstrated performance.\nThe design ethos of DETR easily extends to more complex tasks. In our experiments, we show that a simple segmentation head trained on top of a pretrained DETR outperforms competitive baselines on Panoptic Segmentation [19], a challenging pixel-level recognition task that has recently gained popularity.", "Related_Work": "Our work builds on prior work in several domains: bipartite matching losses for set prediction, encoder-decoder architectures based on the transformer, parallel decoding, and object detection methods.", "Methodology": "Two ingredients are essential for direct set predictions in detection: (1) a set prediction loss that forces unique matching between predicted and ground truth boxes; (2) an architecture that predicts (in a single pass) a set of objects and models their relations. We describe our architecture in detail in Figure 2.", "Conclusion": "We presented DETR, a new design for object detection systems based on transformers and a bipartite matching loss for direct set prediction. The approach achieves comparable results to an optimized Faster R-CNN baseline on the challenging COCO dataset. DETR is straightforward to implement and has a flexible architecture that is easily extensible to panoptic segmentation, with competitive results. In addition, it achieves significantly better performance on large objects than Faster R-CNN, likely thanks to the processing of global information performed by the self-attention.\nThis new design for detectors also comes with new challenges, in particular regarding training, optimization and performance on small objects.
Current detectors required several years of improvements to cope with similar issues, and we expect future work to successfully address them for DETR.", "Experiment_and_Results": "We show that DETR achieves competitive results compared to Faster R-CNN in quantitative evaluation on COCO. Then, we provide a detailed ablation study of the architecture and loss, with insights and qualitative results. Finally, to show that DETR is a versatile and extensible model, we present results on panoptic segmentation, training only a small extension on a fixed DETR model. We provide code and pretrained models to reproduce our experiments at https://github.com/facebookresearch/detr.\nDataset. We perform experiments on the COCO 2017 detection and panoptic segmentation datasets [24,18], containing 118k training images and 5k validation images. Each image is annotated with bounding boxes and panoptic segmentation. There are 7 instances per image on average, up to 63 instances in a single image in the training set, ranging from small to large on the same images. If not specified, we report AP as bbox AP, the integral metric over multiple thresholds.\nFor comparison with Faster R-CNN we report validation AP at the last training epoch; for ablations we report the median over validation results from the last 10 epochs.\nTechnical details. We train DETR with AdamW [26], setting the initial transformer's learning rate to $10^{-4}$, the backbone's to $10^{-5}$, and weight decay to $10^{-4}$. All transformer weights are initialized with Xavier init [11], and the backbone with an ImageNet-pretrained ResNet model [15] from torchvision with frozen batchnorm layers. We report results with two different backbones: a ResNet-50 and a ResNet-101. The corresponding models are called respectively DETR and DETR-R101. Following [21], we also increase the feature resolution by adding a dilation to the last stage of the backbone and removing a stride from the first convolution of this stage.
The corresponding models are called respectively DETR-DC5 and DETR-DC5-R101 (dilated C5 stage). This modification increases the resolution by a factor of two, thus improving performance for small objects, at the cost of 16x higher computation in the encoder's self-attentions, leading to an overall 2x increase in computational cost. A full comparison of FLOPs of these models and Faster R-CNN is given in Table 1.\nWe use scale augmentation, resizing the input images such that the shortest side is at least 480 and at most 800 pixels while the longest side is at most 1333 [50]. To help learning global relationships through the self-attention of the encoder, we also apply random crop augmentations during training, improving the performance by approximately 1 AP. Specifically, a train image is cropped with probability 0.5 to a random rectangular patch which is then resized again to 800-1333. The transformer is trained with a default dropout of 0.1. At inference time, some slots predict the empty class. To optimize for AP, we override the prediction of these slots with the second highest scoring class, using the corresponding confidence. This improves AP by 2 points compared to filtering out empty slots.\nOther training hyperparameters can be found in section A.4. For our ablation experiments we use a training schedule of 300 epochs with a learning rate drop by a factor of 10 after 200 epochs, where a single epoch is a pass over all training images once. Training the baseline model for 300 epochs on 16 V100 GPUs takes 3 days, with 4 images per GPU (hence a total batch size of 64). For the longer schedule used to compare with Faster R-CNN we train for 500 epochs with a learning rate drop after 400 epochs. This schedule adds 1.5 AP compared to the shorter schedule. Some extra qualitative results for the panoptic prediction of the DETR-R101 model are shown in Fig. 11.\nIncreasing the number of instances By design, DETR cannot predict more objects than it has query slots, i.e.
100 in our experiments. In this section, we analyze the behavior of DETR when approaching this limit. We select a canonical square image of a given class, repeat it on a 10 × 10 grid, and compute the percentage of instances that are missed by the model. To test the model with fewer than 100 instances, we randomly mask some of the cells. This ensures that the absolute size of the objects is the same no matter how many are visible. To account for the randomness in the masking, we repeat the experiment 100 times with different masks. The results are shown in Fig. 12. The behavior is similar across classes, and while the model detects all instances when up to 50 are visible, it then starts saturating and misses more and more instances. Notably, when the image contains all 100 instances, the model only detects 30 on average, which is less than if the image contains only 50 instances that are all detected. The counter-intuitive behavior of the model is likely because the images and the detections are far from the training distribution.\nNote that this test is an out-of-distribution generalization test by design, since there are very few example images with a lot of instances of a single class. It is difficult to disentangle, from the experiment, two types of out-of-domain generalization: the image itself vs the number of objects per class. But since few to no COCO images contain only a lot of objects of the same class, this type of experiment represents our best effort to understand whether query objects overfit the label and position distribution of the dataset. Overall, the experiments suggest that the model does not overfit on these distributions since it yields near-perfect detections up to 50 objects.", "Extra": "There is no canonical deep learning model to directly predict sets.
The basic set prediction task is multilabel classification (see e.g., [40,33] for references in the context of computer vision) for which the baseline approach, one-vs-rest, does not apply to problems such as detection where there is an underlying structure between elements (i.e., near-identical boxes). The first difficulty in these tasks is to avoid near-duplicates. Most current detectors use postprocessing such as non-maximal suppression to address this issue, but direct set prediction is postprocessing-free. It needs global inference schemes that model interactions between all predicted elements to avoid redundancy. For constant-size set prediction, dense fully connected networks [9] are sufficient but costly. A general approach is to use auto-regressive sequence models such as recurrent neural networks [48]. In all cases, the loss function should be invariant under a permutation of the predictions. The usual solution is to design a loss based on the Hungarian algorithm [20], to find a bipartite matching between ground-truth and prediction. This enforces permutation-invariance, and guarantees that each target element has a unique match. We follow the bipartite matching loss approach. In contrast to most prior work, however, we step away from autoregressive models and use transformers with parallel decoding, which we describe below. Transformers were introduced by Vaswani et al. [47] as a new attention-based building block for machine translation. Attention mechanisms [2] are neural network layers that aggregate information from the entire input sequence. Transformers introduced self-attention layers, which, similarly to Non-Local Neural Networks [49], scan through each element of a sequence and update it by aggregating information from the whole sequence. One of the main advantages of attention-based models is their global computations and perfect memory, which makes them more suitable than RNNs on long sequences.
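The aggregation performed by a self-attention layer can be illustrated with a minimal, projection-free sketch (for intuition only; real layers add learned query/key/value projections and multiple heads):

```python
import torch

def self_attention(x):
    """Minimal single-head self-attention without learned projections.

    x: (batch, seq_len, dim). Every element is updated by a weighted
    aggregation over the whole sequence, with weights given by scaled
    dot-product similarity, so each output position sees the full input.
    """
    scores = x @ x.transpose(-2, -1) / x.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=-1)  # rows sum to 1
    return weights @ x
```

This global, all-pairs aggregation is precisely what makes the architecture attractive for set prediction, where redundancy between elements must be modeled explicitly.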
Transformers are now replacing RNNs in many problems in natural language processing, speech processing and computer vision [8,27,45,34,31].\nTransformers were first used in auto-regressive models, following early sequence-to-sequence models [44], generating output tokens one by one. However, the prohibitive inference cost (proportional to output length, and hard to batch) led to the development of parallel sequence generation, in the domains of audio [29], machine translation [12,10], word representation learning [8], and more recently speech recognition [6]. We also combine transformers and parallel decoding for their suitable trade-off between computational cost and the ability to perform the global computations required for set prediction. Most modern object detection methods make predictions relative to some initial guesses. Two-stage detectors [37,5] predict boxes w.r.t. proposals, whereas single-stage methods make predictions w.r.t. anchors [23] or a grid of possible object centers [53,46]. Recent work [52] demonstrates that the final performance of these systems heavily depends on the exact way these initial guesses are set. In our model we are able to remove this hand-crafted process and streamline the detection process by directly predicting the set of detections with absolute box prediction w.r.t. the input image rather than an anchor.\nSet-based loss. Several object detectors [9,25,35] used the bipartite matching loss. However, in these early deep learning models, the relations between different predictions were modeled with convolutional or fully-connected layers only, and a hand-designed NMS post-processing could improve their performance. More recent detectors [37,23,53] use non-unique assignment rules between ground truth and predictions together with an NMS.\nLearnable NMS methods [16,4] and relation networks [17] explicitly model relations between different predictions with attention. Using direct set losses, they do not require any post-processing steps.
However, these methods employ additional hand-crafted context features like proposal box coordinates to model relations between detections efficiently, while we look for solutions that reduce the prior knowledge encoded in the model.\nRecurrent detectors. Closest to our approach are end-to-end set predictions for object detection [43] and instance segmentation [41,30,36,42]. Similarly to us, they use bipartite-matching losses with encoder-decoder architectures based on CNN activations to directly produce a set of bounding boxes. These approaches, however, were only evaluated on small datasets and not against modern baselines. In particular, they are based on autoregressive models (more precisely RNNs), so they do not leverage the recent transformers with parallel decoding. DETR infers a fixed-size set of N predictions, in a single pass through the decoder, where N is set to be significantly larger than the typical number of objects in an image. One of the main difficulties of training is to score predicted objects (class, position, size) with respect to the ground truth. Our loss produces an optimal bipartite matching between predicted and ground truth objects, and then optimizes object-specific (bounding box) losses.\nLet us denote by $y$ the ground truth set of objects, and $\hat{y} = \{\hat{y}_i\}_{i=1}^{N}$ the set of N predictions. Assuming N is larger than the number of objects in the image, we consider $y$ also as a set of size N padded with ∅ (no object). To find a bipartite matching between these two sets we search for a permutation of N elements $\sigma \in \mathfrak{S}_N$ with the lowest cost:\n$\hat{\sigma} = \arg\min_{\sigma \in \mathfrak{S}_N} \sum_{i}^{N} \mathcal{L}_{\mathrm{match}}(y_i, \hat{y}_{\sigma(i)})$, (1)\nwhere $\mathcal{L}_{\mathrm{match}}(y_i, \hat{y}_{\sigma(i)})$ is a pair-wise matching cost between ground truth $y_i$ and a prediction with index $\sigma(i)$. This optimal assignment is computed efficiently with the Hungarian algorithm, following prior work (e.g. [43]). The matching cost takes into account both the class prediction and the similarity of predicted and ground truth boxes.
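In practice, the optimal assignment of Eq. (1) can be computed with an off-the-shelf solver; a minimal sketch using SciPy (given a precomputed cost matrix):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost):
    """Optimal permutation minimizing sum_i cost[i, sigma(i)], as in Eq. (1).

    cost: (N, N) array of pair-wise matching costs L_match(y_i, yhat_j).
    Returns sigma, where sigma[i] is the index of the prediction matched
    to ground-truth element i.
    """
    rows, cols = linear_sum_assignment(cost)
    return cols[np.argsort(rows)]
```

The solver runs in polynomial time, so the matching step adds negligible overhead compared to the network's forward pass.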
Each element $i$ of the ground truth set can be seen as $y_i = (c_i, b_i)$, where $c_i$ is the target class label (which may be ∅) and $b_i \in [0, 1]^4$ is a vector that defines the ground truth box center coordinates and its height and width relative to the image size. For the prediction with index $\sigma(i)$ we define the probability of class $c_i$ as $\hat{p}_{\sigma(i)}(c_i)$ and the predicted box as $\hat{b}_{\sigma(i)}$. With these notations we define $\mathcal{L}_{\mathrm{match}}(y_i, \hat{y}_{\sigma(i)})$ as\n$-\mathbb{1}_{\{c_i \neq \emptyset\}}\, \hat{p}_{\sigma(i)}(c_i) + \mathbb{1}_{\{c_i \neq \emptyset\}}\, \mathcal{L}_{\mathrm{box}}(b_i, \hat{b}_{\sigma(i)})$.\nThis procedure of finding a matching plays the same role as the heuristic assignment rules used to match proposals [37] or anchors [22] to ground truth objects in modern detectors. The main difference is that we need to find a one-to-one matching for direct set prediction without duplicates.\nThe second step is to compute the loss function, the Hungarian loss for all pairs matched in the previous step. We define the loss similarly to the losses of common object detectors, i.e. a linear combination of a negative log-likelihood for class prediction and a box loss defined later:\n$\mathcal{L}_{\mathrm{Hungarian}}(y, \hat{y}) = \sum_{i=1}^{N} \left[ -\log \hat{p}_{\hat{\sigma}(i)}(c_i) + \mathbb{1}_{\{c_i \neq \emptyset\}}\, \mathcal{L}_{\mathrm{box}}(b_i, \hat{b}_{\hat{\sigma}(i)}) \right]$, (2)\nwhere $\hat{\sigma}$ is the optimal assignment computed in the first step (1). In practice, we down-weight the log-probability term when $c_i = \emptyset$ by a factor 10 to account for class imbalance. This is analogous to how the Faster R-CNN training procedure balances positive/negative proposals by subsampling [37]. Notice that the matching cost between an object and ∅ doesn't depend on the prediction, which means that in that case the cost is a constant. In the matching cost we use probabilities $\hat{p}_{\sigma(i)}(c_i)$ instead of log-probabilities. This makes the class prediction term commensurable to $\mathcal{L}_{\mathrm{box}}(\cdot, \cdot)$ (described below), and we observed better empirical performance.\nBounding box loss. The second part of the matching cost and the Hungarian loss is $\mathcal{L}_{\mathrm{box}}(\cdot)$, which scores the bounding boxes.
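Before turning to the box loss, the Hungarian loss of Eq. (2) can be sketched as follows. This is our own sketch: the box loss is left pluggable (a plain ℓ1 stand-in below, whereas DETR combines ℓ1 with generalized IoU), and the empty-class down-weighting factor is exposed as a parameter.

```python
import torch

def l1_box_loss(b_tgt, b_pred):
    """Stand-in box loss: plain l1 distance per matched pair."""
    return (b_tgt - b_pred).abs().sum(-1)

def hungarian_loss(logits, boxes, tgt_classes, tgt_boxes, sigma,
                   no_object_class, eos_coef=0.1, box_loss_fn=l1_box_loss):
    """Eq. (2): class NLL for every matched pair + box loss for non-empty targets.

    logits: (N, K+1) class scores; boxes: (N, 4) predictions;
    tgt_classes: (N,) labels padded with no_object_class;
    sigma: permutation from the matching step (LongTensor).
    eos_coef down-weights the empty-class term (the paper uses a factor 10,
    i.e. weight 0.1, to account for class imbalance).
    """
    logp = torch.log_softmax(logits, dim=-1)
    nll = -logp[sigma, tgt_classes]              # -log p_sigma(i)(c_i)
    weights = torch.ones_like(nll)
    weights[tgt_classes == no_object_class] = eos_coef
    keep = tgt_classes != no_object_class        # indicator 1_{c_i != empty}
    box = box_loss_fn(tgt_boxes[keep], boxes[sigma][keep])
    return (weights * nll).sum() + box.sum()
```

Note that the classification term is applied to all N slots (down-weighted on ∅), while the box term only applies to matched real objects, mirroring the indicator in Eq. (2).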
Unlike many detectors that do box predictions as a ∆ w.r.t. some initial guesses, we make box predictions directly. While such an approach simplifies the implementation, it poses an issue with relative scaling of the loss. The most commonly-used $\ell_1$ loss will have different scales for small and large boxes even if their relative errors are similar. To mitigate this issue we use a linear combination of the $\ell_1$ loss and the generalized IoU loss [38] $\mathcal{L}_{\mathrm{iou}}(\cdot, \cdot)$ that is scale-invariant. Overall, our box loss $\mathcal{L}_{\mathrm{box}}(b_i, \hat{b}_{\sigma(i)})$ is defined as\n$\lambda_{\mathrm{iou}} \mathcal{L}_{\mathrm{iou}}(b_i, \hat{b}_{\sigma(i)}) + \lambda_{L1} \| b_i - \hat{b}_{\sigma(i)} \|_1$, where $\lambda_{\mathrm{iou}}, \lambda_{L1} \in \mathbb{R}$ are hyperparameters. These two losses are normalized by the number of objects inside the batch.\nThe overall DETR architecture is surprisingly simple and depicted in Figure 2. It contains three main components, which we describe below: a CNN backbone to extract a compact feature representation, an encoder-decoder transformer, and a simple feed forward network (FFN) that makes the final detection prediction. Unlike many modern detectors, DETR can be implemented in any deep learning framework that provides a common CNN backbone and a transformer architecture implementation with just a few hundred lines. Inference code for DETR can be implemented in less than 50 lines in PyTorch [32]. We hope that the simplicity of our method will attract new researchers to the detection community.\nBackbone. Starting from the initial image $x_{\mathrm{img}} \in \mathbb{R}^{3 \times H_0 \times W_0}$ (with 3 color channels), a conventional CNN backbone generates a lower-resolution activation map $f \in \mathbb{R}^{C \times H \times W}$. Typical values we use are $C = 2048$ and $H, W = \frac{H_0}{32}, \frac{W_0}{32}$.\nTransformer encoder. First, a 1x1 convolution reduces the channel dimension of the high-level activation map $f$ from $C$ to a smaller dimension $d$, creating a new feature map $z_0 \in \mathbb{R}^{d \times H \times W}$. The encoder expects a sequence as input, hence we collapse the spatial dimensions of $z_0$ into one dimension, resulting in a $d \times HW$ feature map.
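Pulling the pipeline together, it can be sketched end-to-end in PyTorch. This is our own bare-bones illustration, not the reference implementation: a tiny conv stem stands in for the ResNet backbone, positional encodings are learned and added once (rather than fixed sine encodings added at every attention layer), and all hyper-parameters are illustrative.

```python
import torch
from torch import nn

class MinimalDETR(nn.Module):
    """Bare-bones sketch: backbone -> 1x1 projection -> transformer -> heads."""

    def __init__(self, num_classes, d=256, n_queries=100, max_hw=2500):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a stride-32 CNN
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, 256, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(256, 2048, 3, stride=2, padding=1))
        self.proj = nn.Conv2d(2048, d, kernel_size=1)               # C -> d
        self.transformer = nn.Transformer(d, nhead=8,
                                          num_encoder_layers=6,
                                          num_decoder_layers=6)
        self.query_embed = nn.Parameter(torch.randn(n_queries, d))  # object queries
        self.pos_embed = nn.Parameter(torch.randn(max_hw, d))       # learned pos. enc.
        self.class_head = nn.Linear(d, num_classes + 1)             # +1 for "no object"
        self.box_head = nn.Linear(d, 4)                             # cx, cy, w, h

    def forward(self, x):
        f = self.proj(self.backbone(x))                    # (B, d, H, W)
        B, d, H, W = f.shape
        src = f.flatten(2).permute(2, 0, 1)                # (HW, B, d) sequence
        src = src + self.pos_embed[: H * W].unsqueeze(1)
        tgt = self.query_embed.unsqueeze(1).repeat(1, B, 1)
        hs = self.transformer(src, tgt)                    # decode N queries in parallel
        return self.class_head(hs), self.box_head(hs).sigmoid()
```

All N slots are decoded in a single pass; the sigmoid on the box head yields the normalized box parameterization in $[0, 1]^4$ used by the losses above.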
Each encoder layer has a standard architecture and consists of a multi-head self-attention module and a feed forward network (FFN). Since the transformer architecture is permutation-invariant, we supplement it with fixed positional encodings [31,3] that are added to the input of each attention layer. We defer to the supplementary material the detailed definition of the architecture, which follows the one described in [47].\nTransformer decoder. The decoder follows the standard architecture of the transformer, transforming N embeddings of size d using multi-headed self- and encoder-decoder attention mechanisms. The difference with the original transformer is that our model decodes the N objects in parallel at each decoder layer, while Vaswani et al. [47] use an autoregressive model that predicts the output sequence one element at a time. We refer the reader unfamiliar with these concepts to the supplementary material. Since the decoder is also permutation-invariant, the N input embeddings must be different to produce different results. These input embeddings are learnt positional encodings that we refer to as object queries, and similarly to the encoder, we add them to the input of each attention layer.\nThe N object queries are transformed into an output embedding by the decoder. They are then independently decoded into box coordinates and class labels by a feed forward network (described in the next subsection), resulting in N final predictions. Using self- and encoder-decoder attention over these embeddings, the model globally reasons about all objects together using pair-wise relations between them, while being able to use the whole image as context.\nPrediction feed-forward networks (FFNs). The final prediction is computed by a 3-layer perceptron with ReLU activation function and hidden dimension d, and a linear projection layer. The FFN predicts the normalized center coordinates, height and width of the box w.r.t.
the input image, and the linear layer predicts the class label using a softmax function. Since we predict a fixed-size set of N bounding boxes, where N is usually much larger than the actual number of objects of interest in an image, an additional special class label ∅ is used to represent that no object is detected within a slot. This class plays a similar role to the "background" class in the standard object detection approaches.\nAuxiliary decoding losses. We found it helpful to use auxiliary losses [1] in the decoder during training, especially to help the model output the correct number of objects of each class. We add prediction FFNs and the Hungarian loss after each decoder layer. All prediction FFNs share their parameters. We use an additional shared layer-norm to normalize the input to the prediction FFNs from different decoder layers.\nTransformers are typically trained with Adam or Adagrad optimizers with very long training schedules and dropout, and this is true for DETR as well. Faster R-CNN, however, is trained with SGD with minimal data augmentation, and we are not aware of successful applications of Adam or dropout. Despite these differences we attempt to make the Faster R-CNN baseline stronger. To align it with DETR, we add generalized IoU [38] to the box loss, as well as the same random crop augmentation and long training known to improve results [13]. Results are presented in Table 1. In the top section we show Faster R-CNN results from the Detectron2 Model Zoo [50] for models trained with the 3x schedule. In the middle section we show results (with a "+") for the same models but trained with the 9x schedule (109 epochs) and the described enhancements, which in total add 1-2 AP. In the last section we show results for multiple DETR models.\nNumber of encoder layers. We evaluate the importance of global image-level self-attention by changing the number of encoder layers (Table 2). Without encoder layers, overall AP drops by 3.9 points, with a more significant drop of 6.0 AP on large objects.
We hypothesize that, by using global scene reasoning, the encoder is important for disentangling objects. In Figure 3, we visualize the attention maps of the last encoder layer of a trained model, focusing on a few points in the image. The encoder seems to separate instances already, which likely simplifies object extraction and localization for the decoder.\nNumber of decoder layers. We apply auxiliary losses after each decoding layer (see Section 3.2); hence, the prediction FFNs are trained by design to predict objects from the outputs of every decoder layer. We analyze the importance of each decoder layer by evaluating the objects that would be predicted at each stage of the decoding (Fig. 4). Both AP and AP50 improve after every layer, totalling a very significant +8.2/9.5 AP improvement between the first and the last layer. With its set-based loss, DETR does not need NMS by design. To verify this we run a standard NMS procedure with default parameters [50] on the outputs after each decoder layer. NMS improves performance for the predictions from the first decoder layer. This can be explained by the fact that a single decoding layer of the transformer is not able to compute any cross-correlations between the output elements, and thus it is prone to making multiple predictions for the same object. In the second and subsequent layers, the self-attention mechanism over the activations allows the model to inhibit duplicate predictions. We observe that the improvement brought by NMS diminishes as depth increases. At the last layers, we observe a small loss in AP as NMS incorrectly removes true positive predictions.\nSimilarly to visualizing encoder attention, we visualize decoder attentions in Fig. 6, coloring attention maps for each predicted object in different colors. We observe that decoder attention is fairly local, meaning that it mostly attends to object extremities such as heads or legs.
We hypothesise that after the encoder has separated instances via global attention, the decoder only needs to attend to the extremities to extract the class and object boundaries.

Importance of FFN. The FFN inside transformers can be seen as 1 × 1 convolutional layers, making the encoder similar to attention augmented convolutional networks [3]. We attempt to remove it completely, leaving only attention in the transformer layers. By reducing the number of network parameters from 41.3M to 28.7M, leaving only 10.8M in the transformer, performance drops by 2.3 AP; we thus conclude that the FFN is important for achieving good results.

Importance of positional encodings. There are two kinds of positional encodings in our model: spatial positional encodings and output positional encodings (object queries). We experiment with various combinations of fixed and learned encodings; results can be found in Table 3. Output positional encodings are required and cannot be removed, so we experiment with either passing them once at the decoder input or adding them to the queries at every decoder attention layer. In the first experiment we completely remove the spatial positional encodings and pass the output positional encodings at the input and, interestingly, the model still achieves more than 32 AP, losing 7.8 AP to the baseline. Then, we pass fixed sine spatial positional encodings and the output encodings once at the input, as in the original transformer [47], and find that this leads to a 1.4 AP drop compared to passing the positional encodings directly in the attention layers. Learned spatial encodings passed to the attention layers give similar results. Surprisingly, we find that not passing any spatial encodings in the encoder only leads to a minor AP drop of 1.3 AP. When we pass the encodings to the attention layers, they are shared across all layers, and the output encodings (object queries) are always learned.
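The fixed sine spatial positional encodings used in these ablations generalize the original Transformer encoding to 2D: each spatial coordinate independently gets sine/cosine channels at different frequencies, concatenated to a d-channel encoding. A minimal numpy sketch, under our own channel layout (the interleaving in the released code may differ):

```python
import numpy as np

def sine_positional_encoding_2d(H, W, d, temperature=10000.0):
    # Half the channels encode the y coordinate, half the x coordinate;
    # within each half, sines and cosines at geometrically spaced
    # frequencies (same recipe as the 1D Transformer encoding).
    assert d % 4 == 0
    half = d // 2
    freqs = temperature ** (2 * np.arange(half // 2) / half)
    ys = np.arange(H)[:, None] / freqs                        # (H, half/2)
    xs = np.arange(W)[:, None] / freqs                        # (W, half/2)
    y_enc = np.concatenate([np.sin(ys), np.cos(ys)], axis=1)  # (H, half)
    x_enc = np.concatenate([np.sin(xs), np.cos(xs)], axis=1)  # (W, half)
    pe = np.zeros((H, W, d))
    pe[..., :half] = y_enc[:, None, :]   # broadcast over columns
    pe[..., half:] = x_enc[None, :, :]   # broadcast over rows
    return pe
```

By construction the y-channels are constant along a row of pixels and the x-channels constant along a column, so every position receives a unique d-dimensional code.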
Given these ablations, we conclude that the transformer components (the global self-attention in the encoder, the FFN, multiple decoder layers, and positional encodings) all significantly contribute to the final object detection performance.

Loss ablations. To evaluate the importance of the different components of the matching cost and the loss, we train several models turning them on and off. There are three components to the loss: the classification loss, the ℓ1 bounding box distance loss, and the GIoU [38] loss. The classification loss is essential for training and cannot be turned off, so we train a model without the bounding box distance loss and a model without the GIoU loss, and compare them with the baseline trained with all three losses. Results are presented in Table 4. The GIoU loss on its own accounts for most of the model performance, losing only 0.7 AP to the baseline with the combined losses. Using ℓ1 without GIoU shows poor results. We only studied simple ablations of the different losses (using the same weighting every time), but other means of combining them may achieve different results.

Table 3: Results for different positional encodings compared to the baseline (last row), which has fixed sine pos. encodings passed at every attention layer in both the encoder and the decoder. Learned embeddings are shared between all layers. Not using spatial positional encodings leads to a significant drop in AP. Interestingly, passing them in the decoder only leads to a minor AP drop. All these models use learned output positional encodings.

Fig. 7: The points are color-coded so that green corresponds to small boxes, red to large horizontal boxes, and blue to large vertical boxes. We observe that each slot learns to specialize on certain areas and box sizes with several operating modes. We note that almost all slots have a mode of predicting large image-wide boxes, which are common in the COCO dataset.

Decoder output slot analysis. In Fig.
7 we visualize the boxes predicted by the different slots for all images in the COCO 2017 val set. DETR learns a different specialization for each query slot. We observe that each slot has several modes of operation focusing on different areas and box sizes. In particular, all slots have a mode for predicting image-wide boxes (visible as the red dots aligned in the middle of the plot). We hypothesize that this is related to the distribution of objects in COCO.

Generalization to unseen numbers of instances. Some classes in COCO are not well represented with many instances of the same class in the same image. For example, there is no image with more than 13 giraffes in the training set. We create a synthetic image to verify the generalization ability of DETR (see Figure 5). Our model is able to find all 24 giraffes in the image, which is clearly out of distribution. This experiment confirms that there is no strong class-specialization in each object query.

Panoptic segmentation [19] has recently attracted a lot of attention from the computer vision community. Similarly to the extension of Faster R-CNN [37] to Mask R-CNN [14], DETR can be naturally extended by adding a mask head on top of the decoder outputs. In this section we demonstrate that such a head can be used to produce panoptic segmentation [19] by treating stuff and thing classes in a unified way. We perform our experiments on the panoptic annotations of the COCO dataset, which has 53 stuff categories in addition to the 80 things categories.

Fig. 8: Illustration of the panoptic head. A binary mask is generated in parallel for each detected object, then the masks are merged using pixel-wise argmax.

We train DETR to predict boxes around both stuff and things classes on COCO, using the same recipe. Predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
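The Hungarian matching just mentioned can be sketched with a GIoU-based matching cost. The full matching cost in DETR also includes class-probability and ℓ1 box terms, which we omit here; the brute-force search is only for tiny examples, and practical implementations use the O(n³) Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment). All names below are ours:

```python
from itertools import permutations

def giou(a, b):
    # Generalized IoU of two (x1, y1, x2, y2) boxes: IoU minus the
    # fraction of the smallest enclosing box not covered by the union.
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = area(a) + area(b) - inter
    encl = area((min(a[0], b[0]), min(a[1], b[1]),
                 max(a[2], b[2]), max(a[3], b[3])))
    return inter / union - (encl - union) / encl

def match(pred_boxes, gt_boxes):
    # Exhaustive bipartite matching minimising a -GIoU cost; predictions
    # left unmatched would be assigned the "no object" class ∅.
    best_cost, best = float("inf"), None
    for perm in permutations(range(len(pred_boxes)), len(gt_boxes)):
        cost = sum(-giou(pred_boxes[perm[j]], gt_boxes[j])
                   for j in range(len(gt_boxes)))
        if cost < best_cost:
            best_cost, best = cost, perm
    return best  # best[j] = index of the prediction matched to gt j
```

Because the assignment is one-to-one, each ground-truth box is supervised by exactly one prediction slot, which is what makes the set-based loss permutation-invariant.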
We also add a mask head which predicts a binary mask for each of the predicted boxes, see Figure 8. It takes as input the output of the transformer decoder for each object and computes multi-head (with M heads) attention scores of this embedding over the output of the encoder, generating M attention heatmaps per object at a small resolution. To make the final prediction and increase the resolution, an FPN-like architecture is used. We describe the architecture in more detail in the supplement. The final resolution of the masks has stride 4, and each mask is supervised independently using the DICE/F-1 loss [28] and the Focal loss [23].

The mask head can be trained either jointly, or in a two-step process, where we train DETR for boxes only, then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results; we report results using the latter method since it results in a shorter total wall-clock training time. To predict the final panoptic segmentation we simply use an argmax over the mask scores at each pixel, and assign the corresponding categories to the resulting masks. This procedure guarantees that the final masks have no overlaps and, therefore, DETR does not require the heuristic [19] that is often used to align different masks.

Training details. We train the DETR, DETR-DC5 and DETR-R101 models following the recipe for bounding box detection to predict boxes around stuff and things classes in the COCO dataset. The new mask head is trained for 25 epochs (see supplementary for details). During inference we first filter out detections with a confidence below 85%, then compute the per-pixel argmax to determine to which mask each pixel belongs. We then collapse different mask predictions of the same stuff category into one, and filter out the empty ones (less than 4 pixels).

Main results. Qualitative results are shown in Figure 9.
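The pixel-wise argmax merge described in the training details above can be sketched as follows; the function and parameter names are ours, and the confidence filtering step (85% threshold) is assumed to have already happened upstream:

```python
import numpy as np

def merge_panoptic(mask_scores, classes, stuff_classes=(), min_pixels=4):
    # mask_scores: (num_queries, H, W) per-query mask scores;
    # classes[i]: category id predicted for query i.
    # The per-pixel argmax guarantees the final masks never overlap.
    winner = mask_scores.argmax(axis=0)                # (H, W)
    stuff, segments = {}, []
    for i, c in enumerate(classes):
        region = winner == i
        if region.sum() < min_pixels:                  # drop (near-)empty masks
            continue
        if c in stuff_classes:                         # collapse same-stuff
            stuff[c] = stuff.get(c, np.zeros_like(region)) | region
        else:
            segments.append((c, region))
    segments.extend(stuff.items())
    return segments                                    # list of (class, mask)
```

Because every pixel is claimed by exactly one query, no overlap-resolution heuristic is needed afterwards.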
In Table 5 we compare our unified panoptic segmentation approach with several established methods that treat things and stuff differently. We report the Panoptic Quality (PQ) and the break-down on things (PQth) and stuff (PQst). We also report the mask AP (computed on the things classes), before any panoptic post-treatment (in our case, before taking the pixel-wise argmax). We show that DETR outperforms published results on COCO val 2017, as well as our strong PanopticFPN baseline (trained with the same data augmentation as DETR, for fair comparison). The result break-down shows that DETR is especially dominant on stuff classes, and we hypothesize that the global reasoning allowed by the encoder attention is the key element to this result. For the things classes, despite a severe deficit of up to 8 mAP compared to the baselines on the mask AP computation, DETR obtains competitive PQth. We also evaluated our method on the test set of the COCO dataset, and obtained 46 PQ. We hope that our approach will inspire the exploration of fully unified models for panoptic segmentation in future work.

Since our model is based on the Transformer architecture, we recall here, for completeness, the general form of the attention mechanisms we use. The attention mechanism follows [47], except for the details of the positional encodings (see Equation 8), which follow [7].

Multi-head. The general form of multi-head attention with M heads of dimension d' is a function with the following signature (using d' = d/M):

\[
\text{mh-attn}:\; X_q \in \mathbb{R}^{d \times N_q},\; X_{kv} \in \mathbb{R}^{d \times N_{kv}},\; T \in \mathbb{R}^{M \times 3 \times d' \times d},\; L \in \mathbb{R}^{d \times d} \;\to\; \tilde{X}_q \in \mathbb{R}^{d \times N_q}, \tag{3}
\]

where X_q is the query sequence of length N_q, X_kv is the key-value sequence of length N_kv (with the same number of channels d for simplicity of exposition), T is the weight tensor to compute the so-called query, key and value embeddings, and L is a projection matrix.
The output is the same size as the query sequence.

To fix the vocabulary before giving details, multi-head self-attention (mh-s-attn) is the special case X_q = X_kv, i.e.

\[
\text{mh-s-attn}(X, T, L) = \text{mh-attn}(X, X, T, L). \tag{4}
\]

The multi-head attention is simply the concatenation of M single attention heads followed by a projection with L. The common practice [47] is to use residual connections, dropout and layer normalization. In other words, denoting \tilde{X}_q = mh-attn(X_q, X_kv, T, L) and X'_q the concatenation of attention heads, we have

\[
X'_q = [\text{attn}(X_q, X_{kv}, T_1);\; \dots;\; \text{attn}(X_q, X_{kv}, T_M)] \tag{5}
\]
\[
\tilde{X}_q = \text{layernorm}\big(X_q + \text{dropout}(L X'_q)\big), \tag{6}
\]

where [;] denotes concatenation on the channel axis.

Single head. An attention head with weight tensor T' ∈ R^{3×d'×d}, denoted by attn(X_q, X_kv, T'), depends on additional positional encodings P_q ∈ R^{d×N_q} and P_kv ∈ R^{d×N_kv}. It starts by computing the so-called query, key and value embeddings after adding the query and key positional encodings [7]:

\[
[Q; K; V] = [T'_1 (X_q + P_q);\; T'_2 (X_{kv} + P_{kv});\; T'_3 X_{kv}], \tag{7}
\]

where T' is the concatenation of T'_1, T'_2, T'_3. The attention weights α are then computed based on the softmax of the dot products between queries and keys, so that each element of the query sequence attends to all elements of the key-value sequence (i is a query index and j a key-value index):

\[
\alpha_{i,j} = \frac{e^{\frac{1}{\sqrt{d'}} Q_i^{\top} K_j}}{Z_i}, \qquad Z_i = \sum_{j=1}^{N_{kv}} e^{\frac{1}{\sqrt{d'}} Q_i^{\top} K_j}. \tag{8}
\]

In our case, the positional encodings may be learnt or fixed, but are shared across all attention layers for a given query/key-value sequence, so we do not explicitly write them as parameters of the attention. We give more details on their exact value when describing the encoder and the decoder. The final output is the aggregation of values weighted by the attention weights: the i-th row is given by attn_i(X_q, X_kv, T') = \sum_{j=1}^{N_{kv}} \alpha_{i,j} V_j.
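Equations (7)-(8) can be sketched in numpy as a single attention head. Sizes and names below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, Nq, Nkv = 8, 2, 3, 5           # illustrative sizes
dh = d // M                          # head dimension d'

def attn(Xq, Xkv, Pq, Pkv, T1, T2, T3):
    # Eq. (7): positional encodings are added to queries and keys
    # before the linear projections; values come from Xkv alone.
    Q, K, V = T1 @ (Xq + Pq), T2 @ (Xkv + Pkv), T3 @ Xkv
    logits = Q.T @ K / np.sqrt(dh)               # (Nq, Nkv) dot products
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # Eq. (8): softmax over j
    return V @ w.T                               # (d', Nq) value aggregation

T1, T2, T3 = (rng.normal(size=(dh, d)) for _ in range(3))
Xq, Pq = rng.normal(size=(d, Nq)), rng.normal(size=(d, Nq))
Xkv, Pkv = rng.normal(size=(d, Nkv)), rng.normal(size=(d, Nkv))
out = attn(Xq, Xkv, Pq, Pkv, T1, T2, T3)
assert out.shape == (dh, Nq)
```

Multi-head attention then concatenates M such outputs along the channel axis and applies the projection L, followed by the residual/dropout/layernorm of Eq. (6).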
Feed-forward network (FFN) layers. The original transformer alternates multi-head attention and so-called FFN layers [47], which are effectively multilayer 1x1 convolutions, with Md' input and output channels in our case. The FFN we consider is composed of two layers of 1x1 convolutions with ReLU activations. There is also a residual connection/dropout/layernorm after the two layers, similarly to equation 6.

For completeness, we present in detail the losses used in our approach. All losses are normalized by the number of objects inside the batch. Extra care must be taken for distributed training: since each GPU receives a sub-batch, it is not sufficient to normalize by the number of objects in the local batch, since in general the sub-batches are not balanced across GPUs. Instead, it is important to normalize by the total number of objects in all sub-batches.

Box loss. Similarly to [41,36], we use a soft version of Intersection over Union in our loss, together with an ℓ1 loss on b̂:

\[
\mathcal{L}_{\text{box}}(b_{\sigma(i)}, \hat{b}_i) = \lambda_{\text{iou}} \mathcal{L}_{\text{iou}}(b_{\sigma(i)}, \hat{b}_i) + \lambda_{\text{L1}} \| b_{\sigma(i)} - \hat{b}_i \|_1, \tag{9}
\]

where λ_iou, λ_L1 ∈ R are hyperparameters and L_iou(·) is the generalized IoU [38]:

\[
\mathcal{L}_{\text{iou}}(b_{\sigma(i)}, \hat{b}_i) = 1 - \left( \frac{|b_{\sigma(i)} \cap \hat{b}_i|}{|b_{\sigma(i)} \cup \hat{b}_i|} - \frac{|B(b_{\sigma(i)}, \hat{b}_i) \setminus (b_{\sigma(i)} \cup \hat{b}_i)|}{|B(b_{\sigma(i)}, \hat{b}_i)|} \right). \tag{10}
\]

|.| means "area", and the union and intersection of box coordinates are used as shorthands for the boxes themselves. The areas of unions or intersections are computed by min/max of the linear functions of b_{σ(i)} and b̂_i, which makes the loss sufficiently well-behaved for stochastic gradients. B(b_{σ(i)}, b̂_i) means the largest box containing both b_{σ(i)} and b̂_i (the areas involving B are also computed based on min/max of linear functions of the box coordinates).

DICE/F-1 loss [28]. The DICE coefficient is closely related to the Intersection over Union.
If we denote by m̂ the raw mask logits prediction of the model, and by m the binary target mask, the loss is defined as:

\[
\mathcal{L}_{\text{DICE}}(m, \hat{m}) = 1 - \frac{2 m \sigma(\hat{m}) + 1}{\sigma(\hat{m}) + m + 1}, \tag{11}
\]

where σ is the sigmoid function. This loss is normalized by the number of objects.

The detailed description of the transformer used in DETR, with positional encodings passed at every attention layer, is given in Fig. 10. Image features from the CNN backbone are passed through the transformer encoder, together with the spatial positional encoding that is added to queries and keys at every multi-head self-attention layer. Then, the decoder receives queries (initially set to zero), the output positional encoding (object queries), and the encoder memory, and produces the final set of predicted class labels and bounding boxes through multiple multi-head self-attention and decoder-encoder attention layers. The first self-attention layer in the first decoder layer can be skipped.

FLOPS computation. Given that the FLOPS for Faster R-CNN depend on the number of proposals in the image, we report the average number of FLOPS for the first 100 images in the COCO 2017 validation set. We compute the FLOPS with the tool flop_count_operators from Detectron2 [50]. We use it without modifications for Detectron2 models, and extend it to take batch matrix multiply (bmm) into account for DETR models.

Fig. 10: Architecture of DETR's transformer: an encoder of N layers (multi-head self-attention, Add & Norm, FFN over the image features) and a decoder of M layers (multi-head self-attention, multi-head encoder-decoder attention, Add & Norm, FFN).

We train DETR using AdamW [26] with improved weight decay handling, set to 10^{-4}. We also apply gradient clipping, with a maximal gradient norm of 0.1. The backbone and the transformer are treated slightly differently; we now discuss the details for both.

Backbone. The ImageNet-pretrained ResNet-50 backbone is imported from Torchvision, discarding the last classification layer.
Backbone batch normalization weights and statistics are frozen during training, following widely adopted practice in object detection. We fine-tune the backbone with a learning rate of 10^{-5}. We observe that having the backbone learning rate roughly an order of magnitude smaller than that of the rest of the network is important to stabilize training, especially in the first few epochs.

Transformer. We train the transformer with a learning rate of 10^{-4}. Additive dropout of 0.1 is applied after every multi-head attention and FFN, before layer normalization. The weights are randomly initialized with Xavier initialization.

Losses. We use a linear combination of ℓ1 and GIoU losses for bounding box regression, with weights λ_L1 = 5 and λ_iou = 2 respectively. All models were trained with N = 100 decoder query slots.

Baseline. Our enhanced Faster-RCNN+ baselines use the GIoU [38] loss along with the standard ℓ1 loss for bounding box regression. We performed a grid search to find the best weights for the losses, and the final models use only the GIoU loss, with weights 20 and 1 for the box and proposal regression tasks respectively. For the baselines we adopt the same data augmentation as used in DETR and train with the 9× schedule (approximately 109 epochs). All other settings are identical to the same models in the Detectron2 model zoo [50].

Spatial positional encoding. Encoder activations are associated with the corresponding spatial positions of the image features. In our model we use a fixed absolute encoding to represent these spatial positions. We adopt a generalization of the original Transformer [47] encoding to the 2D case [31]. Specifically, for both spatial coordinates of each embedding we independently use d/2 sine and cosine functions with different frequencies. We then concatenate them to get the final d-channel positional encoding.

To demonstrate the simplicity of the approach, we include inference code with the PyTorch and Torchvision libraries in Listing 1.
The code runs with Python 3.6+, PyTorch 1.4 and Torchvision 0.5. Note that it does not support batching, hence it is suitable only for inference or for training with DistributedDataParallel with one image per GPU. Also note that, for clarity, this code uses learnt positional encodings in the encoder instead of fixed ones, and positional encodings are added to the input only, instead of at each transformer layer. Making these changes requires going beyond the PyTorch implementation of transformers, which hampers readability. The entire code to reproduce the experiments will be made available before the conference.

Listing 1: DETR PyTorch inference code.

We thank Sainbayar Sukhbaatar, Piotr Bojanowski, Natalia Neverova, David Lopez-Paz, Guillaume Lample, Danielle Rothermel, Kaiming He, Ross Girshick, Xinlei Chen and the whole Facebook AI Research Paris team for discussions and advice without which this work would not be possible.

Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan Yuille. 2017. arXiv:1606.00915.

Introduction. Deep Convolutional Neural Networks (DCNNs) [1] have pushed the performance of computer vision systems to soaring heights on a broad array of high-level problems, including image classification [2], [3], [4], [5], [6] and object detection [7], [8], [9], [10], [11], [12], where DCNNs trained in an end-to-end manner have delivered strikingly better results than systems relying on hand-crafted features.
Essential to this success is the built-in invariance of DCNNs to local image transformations, which allows them to learn increasingly abstract data representations [13]. This invariance is clearly desirable for classification tasks, but can hamper dense prediction tasks such as semantic segmentation, where abstraction of spatial information is undesired.

In particular, we consider three challenges in the application of DCNNs to semantic image segmentation: (1) reduced feature resolution, (2) existence of objects at multiple scales, and (3) reduced localization accuracy due to DCNN invariance. Next, we discuss these challenges and our approach to overcome them in our proposed DeepLab system.

The first challenge is caused by the repeated combination of max-pooling and downsampling ('striding') performed at consecutive layers of DCNNs originally designed for image classification [2], [4], [5]. This results in feature maps with significantly reduced spatial resolution when the DCNN is employed in a fully convolutional fashion [14]. In order to overcome this hurdle and efficiently produce denser feature maps, we remove the downsampling operator from the last few max pooling layers of DCNNs and instead upsample the filters in subsequent convolutional layers, resulting in feature maps computed at a higher sampling rate. Filter upsampling amounts to inserting holes ('trous' in French) between nonzero filter taps. This technique has a long history in signal processing, originally developed for the efficient computation of the undecimated wavelet transform in a scheme also known as the "algorithme à trous" [15]. We use the term atrous convolution as a shorthand for convolution with upsampled filters.
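The equivalence just described, that sampling the input with a stride under each filter tap is the same as convolving with a filter that has zeros ("holes") inserted between its taps, can be checked in a short 1D numpy sketch (our own illustrative code):

```python
import numpy as np

def conv1d_valid(x, w):
    # Plain 'valid' correlation; np.convolve flips the kernel, so we
    # slide the filter explicitly instead.
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def atrous_conv1d(x, w, rate):
    # Atrous convolution: read the input with stride `rate` under each
    # filter tap, equivalent to convolving with the filter upsampled by
    # inserting `rate - 1` zeros between taps.
    k = len(w)
    span = (k - 1) * rate + 1
    return np.array([np.dot(x[i:i + span:rate], w)
                     for i in range(len(x) - span + 1)])

x = np.arange(10.0)
w = np.array([1.0, -2.0, 1.0])         # second-difference filter
rate = 2
w_up = np.zeros((len(w) - 1) * rate + 1)
w_up[::rate] = w                        # filter with holes inserted
assert np.allclose(atrous_conv1d(x, w, rate), conv1d_valid(x, w_up))
```

The atrous form touches only the k nonzero taps, so the field of view grows with the rate at no extra parameter or compute cost, which is the point made in the text.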
Various flavors of this idea have been used before in the context of DCNNs by [3], [6], [16].

In practice, we recover full-resolution feature maps by a combination of atrous convolution, which computes feature maps more densely, followed by simple bilinear interpolation of the feature responses to the original image size. This scheme offers a simple yet powerful alternative to using deconvolutional layers [13], [14] in dense prediction tasks. Compared to regular convolution with larger filters, atrous convolution allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.

The second challenge is caused by the existence of objects at multiple scales. A standard way to deal with this is to present the DCNN with rescaled versions of the same image and then aggregate the feature or score maps [6], [17], [18]. We show that this approach indeed increases the performance of our system, but comes at the cost of computing feature responses at all DCNN layers for multiple scaled versions of the input image. Instead, motivated by spatial pyramid pooling [19], [20], we propose a computationally efficient scheme of resampling a given feature layer at multiple rates prior to convolution. This amounts to probing the original image with multiple filters that have complementary effective fields of view, thus capturing objects as well as useful image context at multiple scales. Rather than actually resampling features, we efficiently implement this mapping using multiple parallel atrous convolutional layers with different sampling rates; we call the proposed technique "atrous spatial pyramid pooling" (ASPP).

The third challenge relates to the fact that an object-centric classifier requires invariance to spatial transformations, inherently limiting the spatial accuracy of a DCNN.
One way to mitigate this problem is to use skip-layers to extract "hyper-column" features from multiple network layers when computing the final segmentation result [14], [21]. Our work explores an alternative approach which we show to be highly effective. In particular, we boost our model's ability to capture fine details by employing a fully-connected Conditional Random Field (CRF) [22]. CRFs have been broadly used in semantic segmentation to combine class scores computed by multi-way classifiers with the low-level information captured by the local interactions of pixels and edges [23], [24] or superpixels [25]. Even though works of increased sophistication have been proposed to model hierarchical dependencies [26], [27], [28] and/or high-order dependencies of segments [29], [30], [31], [32], [33], we use the fully connected pairwise CRF proposed by [22] for its efficient computation and its ability to capture fine edge details while also catering for long-range dependencies. That model was shown in [22] to improve the performance of a boosting-based pixel-level classifier. In this work, we demonstrate that it leads to state-of-the-art results when coupled with a DCNN-based pixel-level classifier.

A high-level illustration of the proposed DeepLab model is shown in Fig. 1. A deep convolutional neural network (VGG-16 [4] or ResNet-101 [11] in this work) trained on the task of image classification is re-purposed for the task of semantic segmentation by (1) transforming all the fully connected layers to convolutional layers (i.e., a fully convolutional network [14]) and (2) increasing feature resolution through atrous convolutional layers, allowing us to compute feature responses every 8 pixels instead of every 32 pixels in the original network.
We then employ bilinear interpolation to upsample the score map by a factor of 8 to reach the original image resolution, yielding the input to a fully-connected CRF [22] that refines the segmentation results.

From a practical standpoint, the three main advantages of our DeepLab system are: (1) Speed: by virtue of atrous convolution, our dense DCNN operates at 8 FPS on an NVidia Titan X GPU, while Mean Field Inference for the fully-connected CRF requires 0.5 secs on a CPU. (2) Accuracy: we obtain state-of-art results on several challenging datasets, including the PASCAL VOC 2012 semantic segmentation benchmark [34], PASCAL-Context [35], PASCAL-Person-Part [36], and Cityscapes [37]. (3) Simplicity: our system is composed of a cascade of two very well-established modules, DCNNs and CRFs.

The updated DeepLab system we present in this paper features several improvements compared to its first version reported in our original conference publication [38]. Our new version can better segment objects at multiple scales, via either multi-scale input processing [17], [39], [40] or the proposed ASPP. We have built a residual-net variant of DeepLab by adapting the state-of-art ResNet [11] image classification DCNN, achieving better semantic segmentation performance compared to our original model based on VGG-16 [4]. Finally, we present a more comprehensive experimental evaluation of multiple model variants and report state-of-art results not only on the PASCAL VOC 2012 benchmark but also on other challenging tasks. We have implemented the proposed methods by extending the Caffe framework [41]. We share our code and models at a companion web site http://liangchiehchen.com/projects/DeepLab.html.

Related Work. Most of the successful semantic segmentation systems developed in the previous decade relied on hand-crafted features combined with flat classifiers, such as Boosting [24], [42], Random Forests [43], or Support Vector Machines [44].
Substantial improvements have been achieved by incorporating richer information from context [45] and structured prediction techniques [22], [26], [27], [46], but the performance of these systems has always been compromised by the limited expressive power of the features. Over the past few years the breakthroughs of Deep Learning in image classification were quickly transferred to the semantic segmentation task. Since this task involves both segmentation and classification, a central question is how to combine the two tasks.

The first family of DCNN-based systems for semantic segmentation typically employs a cascade of bottom-up image segmentation followed by DCNN-based region classification. For instance, the bounding box proposals and masked regions delivered by [47], [48] are used in [7] and [49] as inputs to a DCNN to incorporate shape information into the classification process. Similarly, the authors of [50] rely on a superpixel representation. Even though these approaches can benefit from the sharp boundaries delivered by a good segmentation, they also cannot recover from any of its errors.

The second family of works relies on using convolutionally computed DCNN features for dense image labeling, and couples them with segmentations that are obtained independently. Among the first have been [39], who apply DCNNs at multiple image resolutions and then employ a segmentation tree to smooth the prediction results. More recently, [21] propose to use skip layers and concatenate the computed intermediate feature maps within the DCNNs for pixel classification. Further, [51] propose to pool the intermediate feature maps by region proposals. These works still employ segmentation algorithms that are decoupled from the DCNN classifier's results, thus risking commitment to premature decisions.

The third family of works uses DCNNs to directly provide dense category-level pixel labels, which makes it possible to even discard segmentation altogether.
The segmentation-free approaches of [14], [52] directly apply DCNNs to the whole image in a fully convolutional fashion, transforming the last fully connected layers of the DCNN into convolutional layers. In order to deal with the spatial localization issues outlined in the introduction, [14] upsample and concatenate the scores from intermediate feature maps, while [52] refine the prediction result from coarse to fine by propagating the coarse results to another DCNN. Our work builds on these works, and as described in the introduction extends them by exerting control on the feature resolution, introducing multi-scale pooling techniques and integrating the densely connected CRF of [22] on top of the DCNN. We show that this leads to significantly better segmentation results, especially along object boundaries. The combination of DCNN and CRF is of course not new, but previous works only tried locally connected CRF models. Specifically, [53] use CRFs as a proposal mechanism for a DCNN-based reranking system, while [39] treat superpixels as nodes for a local pairwise CRF and use graph-cuts for discrete inference. As such, their models were limited by errors in superpixel computations or ignored long-range dependencies. Our approach instead treats every pixel as a CRF node receiving unary potentials from the DCNN. Crucially, the Gaussian CRF potentials in the fully connected CRF model of [22] that we adopt can capture long-range dependencies, and at the same time the model is amenable to fast mean field inference. We note that mean field inference had been extensively studied for traditional image segmentation tasks [54], [55], [56], but these older models were typically limited to short-range connections. In independent work, [57] use a very similar densely connected CRF model to refine the results of a DCNN for the problem of material classification.
However, the DCNN module of [57] was only trained with sparse point supervision instead of dense supervision at every pixel.

Since the first version of this work was made publicly available [38], the area of semantic segmentation has progressed drastically. Multiple groups have made important advances, significantly raising the bar on the PASCAL VOC 2012 semantic segmentation benchmark, as reflected by the high level of activity in the benchmark's leaderboard [17], [40], [58], [59], [60], [61], [62], [63]. Interestingly, most top-performing methods have adopted one or both of the key ingredients of our DeepLab system: atrous convolution for efficient dense feature extraction, and refinement of the raw DCNN scores by means of a fully connected CRF. We outline below some of the most important and interesting advances.

End-to-end training for structured prediction has more recently been explored in several related works. While we employ the CRF as a post-processing method, [40], [59], [62], [64], [65] have successfully pursued joint learning of the DCNN and CRF. In particular, [59], [65] unroll the CRF mean-field inference steps to convert the whole system into an end-to-end trainable feed-forward network, while [62] approximates one iteration of the dense CRF mean field inference [22] by convolutional layers with learnable filters. Another fruitful direction pursued by [40], [66] is to learn the pairwise terms of a CRF via a DCNN, significantly improving performance at the cost of heavier computation.
In a different direction, [63] replace the bilateral filtering module used in mean field inference with a faster domain transform module [67], improving the speed and lowering the memory requirements of the overall system, while [18], [68] combine semantic segmentation with edge detection.

Weaker supervision has been pursued in a number of papers, relaxing the assumption that pixel-level semantic annotations are available for the whole training set [58], [69], [70], [71], achieving significantly better results than weakly-supervised pre-DCNN systems such as [72]. In another line of research, [49], [73] pursue instance segmentation, jointly tackling object detection and semantic segmentation.

What we call here atrous convolution was originally developed for the efficient computation of the undecimated wavelet transform in the "algorithme à trous" scheme of [15]. We refer the interested reader to [74] for early references from the wavelet literature. Atrous convolution is also intimately related to the "noble identities" in multi-rate signal processing, which builds on the same interplay of input signal and filter sampling rates [75]. Atrous convolution is a term we first used in [6]. The same operation was later called dilated convolution by [76], a term they coined motivated by the fact that the operation corresponds to regular convolution with upsampled (or dilated, in the terminology of [15]) filters. Various authors have used the same operation before for denser feature extraction in DCNNs [3], [6], [16]. Beyond mere resolution enhancement, atrous convolution allows us to enlarge the field of view of filters to incorporate larger context, which we have shown in [38] to be beneficial. This approach has been pursued further by [76], who employ a series of atrous convolutional layers with increasing rates to aggregate multiscale context.
The atrous spatial pyramid pooling scheme proposed here to capture multiscale objects and context also employs multiple atrous convolutional layers with different sampling rates, which we however lay out in parallel instead of in series. Interestingly, the atrous convolution technique has also been adopted for a broader set of tasks, such as object detection [12], [77], instance-level segmentation [78], visual question answering [79], and optical flow [80].

We also show that, as expected, integrating into DeepLab more advanced image classification DCNNs such as the residual net of [11] leads to better results. This has also been observed independently by [81].", "Methodology": "Method: mIoU (%)
DeepLab-CRF-LargeFOV-COCO [58]: 72.7
MERL DEEP GCRF [88]: 73.2
CRF-RNN [59]: 74.7
POSTECH DeconvNet CRF VOC [61]: 74.8
BoxSup [60]: 75.2
Context + CRF-RNN [76]: 75.3
QO mres 4 [66]: 75.5
DeepLab-CRF-Attention [17]: 75.7
CentraleSuperBoundaries++ [18]: 76.0
DeepLab-CRF-Attention-DT [63]: 76.3
H-ReNet + DenseCRF [89]: 76.8
LRR 4x COCO [90]: 76.8
DPN [62]: 77.5
Adelaide Context [40]: 77.8
Oxford TVG HO CRF [91]: 77.9
Context CRF + Guidance CRF [92]: 78.1
Adelaide VeryDeep FCN VOC [93]: 79.1
DeepLab-CRF (ResNet-101): 79.7

Employing ResNet-101 [11] delivers better segmentation results along object boundaries than employing VGG-16 [4], as visualized in Fig. 9. We think the identity mapping [94] of ResNet-101 has a similar effect as hyper-column features [21], which exploit the features from the intermediate layers to better localize boundaries. We further quantify this effect in Fig. 10 within the "trimap" [22], [31] (a narrow band along object boundaries). As shown in the figure, employing ResNet-101 before CRF has almost the same accuracy along object boundaries as employing VGG-16 in conjunction with a CRF.
Post-processing the ResNet-101 result with a CRF further improves the segmentation result.", "Conclusion": "Our proposed "DeepLab" system re-purposes networks trained on image classification to the task of semantic segmentation by applying the 'atrous convolution' with upsampled filters for dense feature extraction. We further extend it to atrous spatial pyramid pooling, which encodes objects as well as image context at multiple scales. To produce semantically accurate predictions and detailed segmentation maps along object boundaries, we also combine ideas from deep convolutional neural networks and fully-connected conditional random fields. Our experimental results show that the proposed method significantly advances the state-of-the-art on several challenging datasets, including the PASCAL VOC 2012 semantic image segmentation benchmark, PASCAL-Context, PASCAL-Person-Part, and Cityscapes.", "Experiment_and_Results": "We fine-tune the model weights of the Imagenet-pretrained VGG-16 or ResNet-101 networks to adapt them to the semantic segmentation task in a straightforward fashion, following the procedure of [14]. We replace the 1000-way Imagenet classifier in the last layer with a classifier having as many targets as the number of semantic classes of our task (including the background, if applicable). Our loss function is the sum of cross-entropy terms for each spatial position in the CNN output map (subsampled by 8 compared to the original image). All positions and labels are equally weighted in the overall loss function (except for unlabeled pixels, which are ignored). Our targets are the ground truth labels (subsampled by 8). We optimize the objective function with respect to the weights at all network layers by the standard SGD procedure of [2].
We decouple the DCNN and CRF training stages, assuming the DCNN unary terms are fixed when setting the CRF parameters.

We evaluate the proposed models on four challenging datasets: PASCAL VOC 2012, PASCAL-Context, PASCAL-Person-Part, and Cityscapes. We first report the main results of our conference version [38] on PASCAL VOC 2012, and then move on to the latest results on all datasets. We employ the VGG-16 network pre-trained on Imagenet, adapted for semantic segmentation as described in Section 3.1. We use a mini-batch of 20 images and an initial learning rate of 0.001 (0.01 for the final classifier layer), multiplying the learning rate by 0.1 every 2000 iterations. We use momentum of 0.9 and weight decay of 0.0005.

After the DCNN has been fine-tuned on trainaug, we cross-validate the CRF parameters along the lines of [22]. We use default values of w_2 = 3 and σ_γ = 3, and we search for the best values of w_1, σ_α, and σ_β by cross-validation on 100 images from val. We employ a coarse-to-fine search scheme. The initial search ranges of the parameters are w_1 ∈ [3 : 6], σ_α ∈ [30 : 10 : 100] and σ_β ∈ [3 : 6] (MATLAB notation), and we then refine the search step sizes around the first round's best values. We employ 10 mean field iterations.

Field of View and CRF: In Tab. 1, we report experiments with DeepLab model variants that use different field-of-view sizes, obtained by adjusting the kernel size and atrous sampling rate r in the 'fc6' layer, as described in Sec. 3.1. We start with a direct adaptation of the VGG-16 net, using the original 7 × 7 kernel size and r = 4 (since we use no stride for the last two max-pooling layers). This model yields a performance of 67.64% after CRF, but is relatively slow (1.44 images per second during training). We have improved the model speed to 2.9 images per second by reducing the kernel size to 4 × 4. We have experimented with two such network variants with smaller (r = 4) and larger (r = 8) FOV sizes; the latter performs better.
Finally, we employ kernel size 3×3 and an even larger atrous sampling rate (r = 12), also making the network thinner by retaining a random subset of 1,024 out of the 4,096 filters in layers 'fc6' and 'fc7'. The resulting model, DeepLab-CRF-LargeFOV, matches the performance of the direct VGG-16 adaptation (7 × 7 kernel size, r = 4). At the same time, DeepLab-LargeFOV is 3.36 times faster and has significantly fewer parameters (20.5M instead of 134.3M).

The CRF substantially boosts the performance of all model variants, offering a 3-5% absolute increase in mean IOU. Employing the "poly" learning policy is more effective than "step" when training DeepLab-LargeFOV. We have uploaded our best model to the evaluation server, obtaining a performance of 70.4%. Note that our model is only trained on the train set.

Qualitative results: We visualize the results in Fig. 13.
It can be applied post-hoc, once a network has been trained, but can also be seamlessly integrated with training.

Considering one-dimensional signals first, the output y[i] of atrous convolution² of a 1-D input signal x[i] with a filter w[k] of length K is defined as:

$$y[i] = \sum_{k=1}^{K} x[i + r \cdot k]\, w[k] \qquad (1)$$

The rate parameter r corresponds to the stride with which we sample the input signal. Standard convolution is a special case for rate r = 1. See Fig. 2 for an illustration.

2. We follow the standard practice in the DCNN literature and use non-mirrored filters in this definition.

We illustrate the algorithm's operation in 2-D through a simple example in Fig. 3: given an image, we assume that we first have a downsampling operation that reduces the resolution by a factor of 2, and then perform a convolution with a kernel, here the vertical Gaussian derivative. If one implants the resulting feature map in the original image coordinates, we realize that we have obtained responses at only 1/4 of the image positions. Instead, we can compute responses at all image positions if we convolve the full-resolution image with a filter 'with holes', in which we upsample the original filter by a factor of 2 and introduce zeros between filter values. Although the effective filter size increases, we only need to take into account the non-zero filter values; hence both the number of filter parameters and the number of operations per position stay constant. The resulting scheme allows us to easily and explicitly control the spatial resolution of neural network feature responses.

In the context of DCNNs one can use atrous convolution in a chain of layers, effectively allowing us to compute the final DCNN network responses at an arbitrarily high resolution.
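Eq. (1) can be checked directly in a few lines of NumPy. The helper below is a minimal sketch of the definition (our own illustration, not code from the paper); we keep only the output positions for which the dilated filter stays inside the signal:

```python
import numpy as np

def atrous_conv1d(x, w, r):
    """Eq. (1) taken literally: y[i] = sum_{k=1..K} x[i + r*k] * w[k],
    with a non-mirrored filter, keeping only output positions i for
    which the dilated filter stays inside the signal."""
    K = len(w)
    n_out = len(x) - r * K
    return np.array([sum(x[i + r * k] * w[k - 1] for k in range(1, K + 1))
                     for i in range(n_out)])

x = np.arange(10.0)
w = np.array([1.0, 2.0])
y_standard = atrous_conv1d(x, w, 1)  # rate 1: ordinary (non-mirrored) convolution
y_dilated = atrous_conv1d(x, w, 2)   # rate 2: same filter, input sampled with stride 2
```

Note that the filter taps and the parameter count are identical in both calls; only the sampling stride r of the input changes.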
For example, in order to double the spatial density of computed feature responses in the VGG-16 or ResNet-101 networks, we find the last pooling or convolutional layer that decreases resolution ('pool5' or 'conv5_1', respectively), set its stride to 1 to avoid signal decimation, and replace all subsequent convolutional layers with atrous convolutional layers having rate r = 2. Pushing this approach all the way through the network could allow us to compute feature responses at the original image resolution, but this ends up being too costly. We have adopted instead a hybrid approach that strikes a good efficiency/accuracy trade-off, using atrous convolution to increase by a factor of 4 the density of computed feature maps, followed by fast bilinear interpolation by an additional factor of 8 to recover feature maps at the original image resolution. Bilinear interpolation is sufficient in this setting because the class score maps (corresponding to log-probabilities) are quite smooth, as illustrated in Fig. 5. Unlike the deconvolutional approach adopted by [14], the proposed approach converts image classification networks into dense feature extractors without requiring the learning of any extra parameters, leading to faster DCNN training in practice.

Atrous convolution also allows us to arbitrarily enlarge the field-of-view of filters at any DCNN layer. State-of-the-art DCNNs typically employ spatially small convolution kernels (typically 3×3) in order to keep both computation and the number of parameters contained. Atrous convolution with rate r introduces r − 1 zeros between consecutive filter values, effectively enlarging the kernel size of a k×k filter to k_e = k + (k − 1)(r − 1) without increasing the number of parameters or the amount of computation. It thus offers an efficient mechanism to control the field-of-view and to find the best trade-off between accurate localization (small field-of-view) and context assimilation (large field-of-view).
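The effective-kernel-size formula is easy to sanity-check with a tiny helper (the function name is ours):

```python
def effective_kernel_size(k, r):
    """k_e = k + (k - 1)(r - 1): spatial extent of a k-tap filter after
    inserting r - 1 zeros between consecutive taps. The number of
    non-zero parameters stays k (or k*k in 2-D)."""
    return k + (k - 1) * (r - 1)
```

Note that a 3×3 kernel at rate 12 has effective extent 3 + 2·11 = 25, the same 25×25 extent as the original 7×7 'fc6' adaptation at rate 4 (7 + 6·3 = 25), which is consistent with DeepLab-CRF-LargeFOV matching that variant's performance with far fewer parameters.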
We have successfully experimented with this technique: our DeepLab-LargeFOV model variant [38] employs atrous convolution with rate r = 12 in the VGG-16 'fc6' layer, with significant performance gains, as detailed in Section 4.

Turning to implementation aspects, there are two efficient ways to perform atrous convolution. The first is to implicitly upsample the filters by inserting holes (zeros), or equivalently to sparsely sample the input feature maps [15]. We implemented this in our earlier work [6], [38], followed by [76], within the Caffe framework [41] by adding to the im2col function (which extracts vectorized patches from multi-channel feature maps) the option to sparsely sample the underlying feature maps. The second method, originally proposed by [82] and used in [3], [16], is to subsample the input feature map by a factor equal to the atrous convolution rate r, deinterlacing it to produce r² reduced-resolution maps, one for each of the r×r possible shifts. This is followed by applying standard convolution to these intermediate feature maps and reinterlacing them to the original image resolution. By reducing atrous convolution to regular convolution, it allows us to use off-the-shelf, highly optimized convolution routines. We have implemented the second approach in the TensorFlow framework [83].

DCNNs have shown a remarkable ability to implicitly represent scale, simply by being trained on datasets that contain objects of varying size. Still, explicitly accounting for object scale can improve the DCNN's ability to successfully handle both large and small objects [6].

We have experimented with two approaches to handling scale variability in semantic segmentation. The first approach amounts to standard multiscale processing [17], [18]. We extract DCNN score maps from multiple (three in our experiments) rescaled versions of the original image using parallel DCNN branches that share the same parameters.
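The second (subsample-shift) implementation can be illustrated in 1-D: splitting the input into r shifted, reduced-resolution sequences, convolving each with the undilated filter, and reinterlacing the results reproduces the atrous output exactly. This is a sketch under the non-mirrored-filter convention of Eq. (1), with helper names of our own choosing:

```python
import numpy as np

def atrous_via_subsampling(x, w, r):
    """Atrous convolution reduced to standard convolution (1-D analogue
    of the r x r shifted-map scheme described in the text): subsample the
    input into r shifted, reduced-resolution sequences, convolve each
    with the undilated filter, then reinterlace the results."""
    K = len(w)
    n_out = len(x) - r * K          # valid output positions, as in Eq. (1)
    y = np.empty(n_out)
    for s in range(r):              # one reduced-resolution sequence per shift
        xs = x[s::r]
        n_q = (n_out - s + r - 1) // r
        # standard non-mirrored convolution on the subsampled sequence
        z = np.array([xs[q + 1:q + 1 + K] @ w for q in range(n_q)])
        y[s::r] = z                 # reinterlace to the original resolution
    return y
```

Each inner convolution is a plain dense convolution, which is why this reduction lets highly optimized off-the-shelf routines be reused.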
To produce the final result, we bilinearly interpolate the feature maps from the parallel DCNN branches to the original image resolution and fuse them by taking at each position the maximum response across the different scales. We do this both during training and testing. Multiscale processing significantly improves performance, but at the cost of computing feature responses at all DCNN layers for multiple scales of input.

The second approach is inspired by the success of the R-CNN spatial pyramid pooling method of [20], which showed that regions of an arbitrary scale can be accurately and efficiently classified by resampling convolutional features extracted at a single scale. We have implemented a variant of their scheme which uses multiple parallel atrous convolutional layers with different sampling rates. The features extracted for each sampling rate are further processed in separate branches and fused to generate the final result. The proposed "atrous spatial pyramid pooling" (DeepLab-ASPP) approach generalizes our DeepLab-LargeFOV variant and is illustrated in Fig. 4.

A trade-off between localization accuracy and classification performance seems to be inherent in DCNNs: deeper models with multiple max-pooling layers have proven most successful in classification tasks; however, the increased invariance and the large receptive fields of top-level nodes can only yield smooth responses. As illustrated in Fig. 5, score maps can predict the presence and rough position of objects but cannot really delineate their borders. Previous work has pursued two directions to address this localization challenge. The first approach is to harness information from multiple layers in the convolutional network in order to better estimate the object boundaries [14], [21], [52].
The second is to employ a super-pixel representation, essentially delegating the localization task to a low-level segmentation method [50].

We pursue an alternative direction based on coupling the recognition capacity of DCNNs and the fine-grained localization accuracy of fully connected CRFs, and show that it is remarkably successful in addressing the localization challenge, producing accurate semantic segmentation results and recovering object boundaries at a level of detail that is well beyond the reach of existing methods. This direction has been extended by several follow-up papers [17], [40], [58], [59], [60], [61], [62], [63], [65] since the first version of our work was published [38].

Traditionally, conditional random fields (CRFs) have been employed to smooth noisy segmentation maps [23], [31]. Typically these models couple neighboring nodes, favoring same-label assignments to spatially proximal pixels. Qualitatively, the primary function of these short-range CRFs is to clean up the spurious predictions of weak classifiers built on top of local hand-engineered features.

Compared to these weaker classifiers, modern DCNN architectures such as the one we use in this work produce score maps and semantic label predictions which are qualitatively different. As illustrated in Fig. 5, the score maps are typically quite smooth and produce homogeneous classification results. In this regime, using short-range CRFs can be detrimental, as our goal should be to recover detailed local structure rather than further smooth it. Using contrast-sensitive potentials [23] in conjunction with local-range CRFs can potentially improve localization, but still misses thin structures and typically requires solving an expensive discrete optimization problem.

To overcome these limitations of short-range CRFs, we integrate into our system the fully connected CRF model of [22].
The model employs the energy function

$$E(\mathbf{x}) = \sum_i \theta_i(x_i) + \sum_{ij} \theta_{ij}(x_i, x_j) \qquad (2)$$

where x is the label assignment for pixels. We use as unary potential θ_i(x_i) = −log P(x_i), where P(x_i) is the label assignment probability at pixel i as computed by a DCNN.

The pairwise potential has a form that allows for efficient inference while using a fully-connected graph, i.e. when connecting all pairs of image pixels i, j. In particular, as in [22], we use the following expression:

$$\theta_{ij}(x_i, x_j) = \mu(x_i, x_j)\left[w_1 \exp\left(-\frac{\lVert p_i - p_j\rVert^2}{2\sigma_\alpha^2} - \frac{\lVert I_i - I_j\rVert^2}{2\sigma_\beta^2}\right) + w_2 \exp\left(-\frac{\lVert p_i - p_j\rVert^2}{2\sigma_\gamma^2}\right)\right] \qquad (3)$$

where μ(x_i, x_j) = 1 if x_i ≠ x_j, and zero otherwise; as in the Potts model, this means that only nodes with distinct labels are penalized. The remaining expression uses two Gaussian kernels in different feature spaces: the first, 'bilateral' kernel depends on both pixel positions (denoted as p) and RGB color (denoted as I), and the second kernel only depends on pixel positions. The hyper-parameters σ_α, σ_β and σ_γ control the scale of the Gaussian kernels. The first kernel forces pixels with similar color and position to have similar labels, while the second kernel only considers spatial proximity when enforcing smoothness.

Crucially, this model is amenable to efficient approximate probabilistic inference [22]. The message passing updates under a fully decomposable mean field approximation b(x) = Π_i b_i(x_i) can be expressed as Gaussian convolutions in bilateral space. High-dimensional filtering algorithms [84] significantly speed up this computation, resulting in an algorithm that is very fast in practice, requiring less than 0.5 sec on average for Pascal VOC images using the publicly available implementation of [22].
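For concreteness, the pairwise term of Eq. (3) for a single pixel pair can be written out directly. This is an illustrative sketch: the defaults w_2 = 3 and σ_γ = 3 follow the values quoted in the experiments, while w_1, σ_α, σ_β are placeholder values inside the cross-validation search ranges, not the tuned ones:

```python
import numpy as np

def pairwise_potential(p_i, p_j, I_i, I_j, xi, xj,
                       w1=4.0, w2=3.0, s_alpha=50.0, s_beta=4.0, s_gamma=3.0):
    """Pairwise term of Eq. (3): a Potts-modulated sum of a bilateral
    (position + color) kernel and a spatial smoothness kernel.
    w1, s_alpha, s_beta are illustrative mid-range values, not the
    cross-validated ones from the paper."""
    if xi == xj:            # mu(x_i, x_j) = 0 for identical labels (Potts model)
        return 0.0
    dp2 = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    dI2 = np.sum((np.asarray(I_i, float) - np.asarray(I_j, float)) ** 2)
    bilateral = w1 * np.exp(-dp2 / (2 * s_alpha ** 2) - dI2 / (2 * s_beta ** 2))
    spatial = w2 * np.exp(-dp2 / (2 * s_gamma ** 2))
    return bilateral + spatial
```

Both kernels decay with spatial distance; only the bilateral one additionally decays with color difference, which is what lets the full CRF snap labels to color edges.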
Dataset: The PASCAL VOC 2012 segmentation benchmark [34] involves 20 foreground object classes and one background class.

After the conference version of this work [38], we have pursued three main improvements of our model, which we discuss below: (1) a different learning policy during training, (2) atrous spatial pyramid pooling, and (3) employment of deeper networks and multi-scale processing.

Learning rate policy: We have explored different learning rate policies when training DeepLab-LargeFOV. Similar to [86], we found that employing a "poly" learning rate policy (the learning rate is multiplied by (1 − iter/max_iter)^power) is more effective than a "step" policy (reduce the learning rate at a fixed step size). As shown in Tab. 2, employing "poly" (with power = 0.9) with the same batch size and the same number of training iterations yields 1.17% better performance than employing the "step" policy. Fixing the batch size and increasing the number of training iterations to 10K improves the performance to 64.90% (a 1.48% gain); however, the total training time increases due to more training iterations. We then reduced the batch size to 10 and found that comparable performance is still maintained (64.90% vs. 64.71%). In the end, we employ batch size = 10 and 20K iterations in order to maintain a training time similar to the previous "step" setting. Surprisingly, this gives us a performance of 65.88% (a 3.63% improvement over "step") on val, and 67.7% on test, compared to 65.1% of the original "step" setting for DeepLab-LargeFOV before CRF. We employ the "poly" learning rate policy for all experiments reported in the rest of the paper.

Atrous Spatial Pyramid Pooling: We have experimented with the proposed Atrous Spatial Pyramid Pooling (ASPP) scheme, described in Sec. 3.1. As shown in Fig. 7, ASPP for VGG-16 employs several parallel fc6-fc7-fc8 branches. They all use 3×3 kernels but different atrous rates r in 'fc6' in order to capture objects of different sizes. In Tab.
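The "poly" policy is a one-line schedule (the function name is ours):

```python
def poly_lr(base_lr, it, max_iter, power=0.9):
    """'Poly' learning rate policy: base_lr * (1 - it/max_iter) ** power.
    With power = 0.9 the rate decays smoothly from base_lr to zero."""
    return base_lr * (1.0 - it / max_iter) ** power
```

With base_lr = 0.001 and max_iter = 20K as in the text, the rate starts at 0.001 and decays to zero at the final iteration, in contrast to the piecewise-constant drops of the "step" policy.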
3, we report results with several settings: (1) our baseline LargeFOV model, having a single branch with r = 12; (2) ASPP-S, with four branches and smaller atrous rates (r = {2, 4, 8, 12}); and (3) ASPP-L, with four branches and larger rates (r = {6, 12, 18, 24}). For each variant we report results before and after CRF. As shown in the table, ASPP-S yields a 1.22% improvement over the baseline LargeFOV before CRF. However, after CRF both LargeFOV and ASPP-S perform similarly. On the other hand, ASPP-L yields consistent improvements over the baseline LargeFOV both before and after CRF. We evaluate the proposed ASPP-L + CRF model on test, attaining 72.6%. We visualize the effect of the different schemes in Fig. 8.

Deeper Networks and Multiscale Processing: We have experimented with building DeepLab around the recently proposed residual net ResNet-101 [11]. Similar to what we did for the VGG-16 net, we re-purpose ResNet-101 by atrous convolution, as described in Sec. 3.1. On top of that, we adopt several other features, following recent work of [17], [18], [39], [40], [58], [59], [62]: (1) Multi-scale inputs: We separately feed images at scale = {0.5, 0.75, 1} to the DCNN and fuse their score maps by taking the maximum response across scales at each position.

Qualitative results: We provide qualitative visual comparisons of DeepLab's results (our best model variant) before and after CRF in Fig. 6. The visualization results obtained by DeepLab before CRF already yield excellent segmentation results, while employing the CRF further improves the performance by removing false positives and refining object boundaries.

Test set results: We have submitted the result of our final best model to the official server, obtaining a test set performance of 79.7%, as shown in Tab. 5. The model substantially outperforms previous DeepLab variants (e.g., DeepLab-LargeFOV with the VGG-16 net) and is currently the top performing method on the PASCAL VOC 2012 segmentation leaderboard.
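The parallel-branch idea behind the LargeFOV/ASPP variants can be made concrete with a toy NumPy sketch. This is our own minimal stand-in (single-channel feature map, 3×3 dilated kernels, branches fused by summation); the real model runs full fc6-fc7-fc8 sub-networks per rate:

```python
import numpy as np

def dilated_conv2d(x, w, r):
    """'Same'-padded 2-D correlation with a 3x3 kernel dilated by rate r
    (zeros implicitly inserted between taps); a toy stand-in for an
    atrous convolutional layer."""
    H, W = x.shape
    pad = r                       # a dilated 3x3 kernel reaches r pixels out
    xp = np.pad(x, pad)
    y = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            y += w[di + 1, dj + 1] * xp[pad + di * r: pad + di * r + H,
                                        pad + dj * r: pad + dj * r + W]
    return y

def aspp(x, branch_weights, rates=(6, 12, 18, 24)):
    """Toy atrous spatial pyramid pooling: parallel branches with the
    ASPP-L rates, fused by summation."""
    return sum(dilated_conv2d(x, w, r) for w, r in zip(branch_weights, rates))
```

Each branch shares the input but sees a different effective field-of-view, which is the mechanism ASPP uses to capture objects at multiple scales.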
Dataset: The PASCAL-Context dataset [35] provides detailed semantic labels for the whole scene, including both object (e.g., person) and stuff (e.g., sky) categories. Following [35], the proposed models are evaluated on the 59 most frequent classes along with one background category. The training set and validation set contain 4998 and 5105 images, respectively.

Evaluation: We report the evaluation results in Tab. 6. Our VGG-16 based LargeFOV variant yields 37.6% before and 39.6% after CRF. Repurposing the residual net ResNet-101 [11] for semantic segmentation further improves over the VGG-16 variant.

Qualitative results: We visualize the segmentation results of our best model with and without CRF as post-processing in Fig. 11. DeepLab before CRF can already predict most of the objects/stuff with high accuracy. Employing the CRF, our model is able to further remove isolated false positives and improve the prediction along object/stuff boundaries.

Dataset: We further perform experiments on semantic part segmentation [98], [99], using the extra PASCAL VOC 2010 annotations by [36]. We focus on the person part of the dataset, which contains more training data and large variation in object scale and human pose. Specifically, the dataset contains detailed part annotations for every person, e.g. eyes and nose. We merge the annotations into Head, Torso, Upper/Lower Arms and Upper/Lower Legs, resulting in six person part classes and one background class. We only use those images containing persons for training (1716 images) and validation (1817 images).

Evaluation: The human part segmentation results on PASCAL-Person-Part are reported in Tab. 7.

Qualitative results: We visualize the results in Fig. 12.

Dataset: Cityscapes [37] is a recently released large-scale dataset, which contains high-quality pixel-level annotations of 5000 images collected in street scenes from 50 different cities.
Following the evaluation protocol [37], 19 semantic labels (belonging to 7 super-categories: ground, construction, object, nature, sky, human, and vehicle) are used for evaluation (the void label is not considered). The training, validation, and test sets contain 2975, 500, and 1525 images, respectively.

Val set results: After the initial release, we further explored the validation set, as reported in Tab. 9. The images of Cityscapes have resolution 2048×1024, making it a challenging problem to train deeper networks with limited GPU memory. During benchmarking on the pre-release of the dataset, we downsampled the images by a factor of 2. However, we have found that it is beneficial to process the images at their original resolution. With the same training protocol, using images at the original resolution brings significant improvements of 1.9% and 1.8% before and after CRF, respectively. In order to perform inference on this dataset with high-resolution images, we split each image into overlapped regions, similar to [37]. We have also replaced the VGG-16 net with ResNet-101. We do not exploit multi-scale inputs due to the limited GPU memory at hand. Instead, we only explore (1) deeper networks (i.e., ResNet-101), (2) data augmentation, (3) LargeFOV or ASPP, and (4) CRF as post-processing on this dataset. We first find that employing ResNet-101 alone is better than using the VGG-16 net. Employing LargeFOV brings a 2.6% improvement and using ASPP further improves results by 1.2%. Adopting data augmentation and CRF as post-processing brings another 0.6% and 0.4%, respectively.

We further qualitatively analyze some failure modes of our best model variant on the PASCAL VOC 2012 val set. As shown in Fig. 14, our proposed model fails to capture the delicate boundaries of objects, such as bicycle and chair. The details could not even be recovered by the CRF post-processing, since the unary term is not confident enough.
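The overlapped-region strategy for high-resolution inference can be sketched as follows. The text does not specify the tile size or how overlapping predictions are fused, so the 512-pixel tiles and score averaging below are our assumptions, as is the helper name:

```python
import numpy as np

def tiled_inference(image, predict, tile=512, overlap=128):
    """Run `predict` (H x W crop -> H x W x C score map) on overlapping
    crops and average the scores where tiles overlap; a sketch of the
    split-into-overlapped-regions strategy for 2048x1024 images.
    Tile size and averaging are illustrative choices."""
    H, W = image.shape[:2]
    stride = tile - overlap
    C = predict(image[:min(tile, H), :min(tile, W)]).shape[-1]
    scores = np.zeros((H, W, C))
    counts = np.zeros((H, W, 1))
    for top in range(0, max(H - overlap, 1), stride):
        for left in range(0, max(W - overlap, 1), stride):
            b, r = min(top + tile, H), min(left + tile, W)
            scores[top:b, left:r] += predict(image[top:b, left:r])
            counts[top:b, left:r] += 1   # every pixel is covered at least once
    return scores / counts
```

Because only one tile resides on the GPU at a time, peak memory is bounded by the tile size rather than the full 2048×1024 image.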
We hypothesize that the encoder-decoder structure of [100], [102] may alleviate the problem by exploiting the high-resolution feature maps in the decoder path. How to efficiently incorporate such a method is left as future work.

This work was partly supported by the ARO 62250-CS, FP7-RECONFIG, FP7-MOBOT, and H2020-ISUPPORT EU projects. We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research." }, { "title": "Masked-attention mask transformer for universal image segmentation", "year": 2022.0, "authors": "Bowen Cheng; Ishan Misra; Alexander Schwing; Alexander Kirillov; Rohit Girdhar", "arxiv_di": 2112.01527, "Introduction": "Image segmentation studies the problem of grouping pixels. Different semantics for grouping pixels, e.g., category or instance membership, have led to different types of segmentation tasks, such as panoptic, instance or semantic segmentation. While these tasks differ only in semantics, current methods develop specialized architectures for each task. Per-pixel classification architectures based on Fully Convolutional Networks (FCNs) [37] are used for semantic segmentation, while mask classification architectures [5,24] that predict a set of binary masks, each associated with a single category, dominate instance-level segmentation. Although such specialized architectures [6,10,24,37] have advanced each individual task, they lack the flexibility to generalize to the other tasks. For example, FCN-based architectures struggle at instance segmentation, leading to the evolution of different architectures for instance segmentation compared to semantic segmentation. Thus, duplicate research and (hardware) optimization effort is spent on each specialized architecture for every task.

* Work done during an internship at Facebook AI Research.

To address this fragmentation, recent work [14,62] has attempted to design universal architectures that are capable of addressing all segmentation tasks with the same architecture (i.e., universal image segmentation). These architectures are typically based on an end-to-end set prediction objective (e.g., DETR [5]), and successfully tackle multiple tasks without modifying the architecture, loss, or training procedure. Note that universal architectures are still trained separately for different tasks and datasets, albeit having the same architecture. In addition to being flexible, universal architectures have recently shown state-of-the-art results on semantic and panoptic segmentation [14]. However, recent work still focuses on advancing specialized architectures [20,39,45], which raises the question: why haven't universal architectures replaced specialized ones?

Although existing universal architectures are flexible enough to tackle any segmentation task, as shown in Figure 1, in practice their performance lags behind the best specialized architectures. For instance, the best reported performance of universal architectures [14,62] is currently lower (by more than 9 AP) than the state-of-the-art specialized architecture for instance segmentation [6]. Beyond the inferior performance, universal architectures are also harder to train. They typically require more advanced hardware and a much longer training schedule. For example, training MaskFormer [14] takes 300 epochs to reach 40.1 AP, and it can only fit a single image on a GPU with 32G memory. In contrast, the specialized Swin-HTC++ [6] obtains better performance in only 72 epochs.
Both the performance and training efficiency issues hamper the deployment of universal architectures.

In this work, we propose a universal image segmentation architecture named Masked-attention Mask Transformer (Mask2Former) that outperforms specialized architectures across different segmentation tasks, while still being easy to train on every task. We build upon a simple meta architecture [14] consisting of a backbone feature extractor [25,36], a pixel decoder [33] and a Transformer decoder [51]. We propose key improvements that enable better results and efficient training. First, we use masked attention in the Transformer decoder, which restricts the attention to localized features centered around predicted segments, which can be either objects or regions depending on the specific semantics for grouping. Compared to the cross-attention used in a standard Transformer decoder, which attends to all locations in an image, our masked attention leads to faster convergence and improved performance. Second, we use multi-scale high-resolution features, which help the model to segment small objects/regions. Third, we propose optimization improvements such as switching the order of self- and cross-attention, making query features learnable, and removing dropout, all of which improve performance without additional compute. Finally, we save 3× training memory without affecting the performance by calculating the mask loss on a few randomly sampled points. These improvements not only boost the model performance, but also make training significantly easier, making universal architectures more accessible to users with limited compute.

We evaluate Mask2Former on three image segmentation tasks (panoptic, instance and semantic segmentation) using four popular datasets (COCO [35], Cityscapes [16], ADE20K [65] and Mapillary Vistas [42]). For the first time, on all these benchmarks, our single architecture performs on par with or better than specialized architectures.
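The masked-attention idea can be sketched in a few lines. This is a NumPy stand-in for the decoder's cross-attention, not the authors' implementation: logits outside each query's predicted binary mask are set to −∞ before the softmax, and the fallback to full attention for empty masks is our own guard:

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Masked cross-attention in the spirit of Mask2Former: each query
    attends only to the pixel locations inside its predicted binary mask.
    Shapes: Q (n_q, d); K, V (n_pix, d); mask (n_q, n_pix), boolean.
    Empty-mask queries fall back to unrestricted attention (our guard)."""
    full = Q @ K.T / np.sqrt(Q.shape[-1])     # standard scaled dot-product logits
    logits = np.where(mask, full, -np.inf)    # -inf outside the predicted segment
    empty = ~mask.any(axis=1)
    logits[empty] = full[empty]               # guard against all -inf rows
    m = logits.max(axis=1, keepdims=True)     # stable softmax over pixels
    attn = np.exp(logits - m)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V
```

Restricting the softmax support this way is what localizes each query to its own segment, which is the mechanism the paper credits for faster convergence than full cross-attention.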
Mask2Former sets the new state-of-the-art of 57.8 PQ on COCO panoptic segmentation [28], 50.1 AP on COCO instance segmentation [35] and 57.7 mIoU on ADE20K semantic segmentation [65] using the exact same architecture.", "Related_Work": "Specialized semantic segmentation architectures typically treat the task as a per-pixel classification problem.\nFCN-based architectures [37] independently predict a category label for every pixel. Follow-up methods find context to play an important role for precise per-pixel classification and focus on designing customized context modules [7,8,63] or self-attention variants [21,26,45,55,61,64]. Specialized instance segmentation architectures are typically based upon \"mask classification.\" They predict a set of binary masks each associated with a single class label. The pioneering work, Mask R-CNN [24], generates masks from detected bounding boxes. Follow-up methods either focus on detecting more precise bounding boxes [4,6], or finding new ways to generate a dynamic number of masks, e.g., using dynamic kernels [3,49,56] or clustering algorithms [11,29]. Although the performance has been advanced in each task, these specialized innovations lack the flexibility to generalize from one to the other, leading to duplicated research effort. For instance, although multiple approaches have been proposed for building feature pyramid representations [33], as we show in our experiments, BiFPN [47] performs better for instance segmentation while FaPN [39] performs better for semantic segmentation. Panoptic segmentation has been proposed to unify both semantic and instance segmentation tasks [28]. Architectures for panoptic segmentation either combine the best of specialized semantic and instance segmentation architectures into a single framework [11,27,31,60] or design novel objectives that equally treat semantic regions and instance objects [5,52]. 
Despite these new architectures, researchers continue to develop specialized architectures for different image segmentation tasks [20,45]. We find that panoptic architectures usually only report performance on the panoptic segmentation task [52], which does not guarantee good performance on other tasks (Figure 1). For example, panoptic segmentation does not measure an architecture's ability to rank predictions, as instance segmentation does. Thus, we refrain from referring to architectures that are only evaluated on panoptic segmentation as universal architectures. Instead, we evaluate our Mask2Former on all studied tasks to guarantee generalizability. Universal architectures have emerged with DETR [5], which shows that mask classification architectures with an end-to-end set prediction objective are general enough for any image segmentation task. MaskFormer [14] shows that mask classification based on DETR not only performs well on panoptic segmentation but also achieves state-of-the-art results on semantic segmentation. K-Net [62] further extends set prediction to instance segmentation. Unfortunately, these architectures have failed to replace specialized models, as their performance on particular tasks or datasets is still worse than that of the best specialized architecture (e.g., MaskFormer [14] cannot segment instances well). To our knowledge, Mask2Former is the first architecture that outperforms state-of-the-art specialized architectures on all considered tasks and datasets.

Generalization beyond COCO. To show that our Mask2Former can generalize beyond the COCO dataset, we further perform experiments on other popular image segmentation datasets. In Table 6, we show results on Cityscapes [16]. Please see Appendix B for detailed training settings on each dataset, as well as more results on ADE20K [65] and Mapillary Vistas [42].

(Table 7 caption: Limitations of Mask2Former. Although a single Mask2Former can address any segmentation task, we still need to train it on different tasks.)
Across three datasets, we find that Mask2Former trained with panoptic annotations performs slightly worse than the exact same model trained specifically for the instance and semantic segmentation tasks with the corresponding data. We observe that our Mask2Former is competitive with state-of-the-art methods on these datasets as well. This suggests Mask2Former can serve as a universal image segmentation model whose results generalize across datasets.

Conclusion. We present Mask2Former for universal image segmentation. Built upon a simple meta framework [14] with a new Transformer decoder using the proposed masked attention, Mask2Former obtains top results on all three major image segmentation tasks (panoptic, instance and semantic) on four popular datasets, outperforming even the best specialized models designed for each benchmark while remaining easy to train. Mask2Former saves 3× the research effort compared to designing specialized models for each task, and it is accessible to users with limited computational resources. We hope to attract interest in universal model design.

Ethical considerations: While our technical innovations do not appear to have any inherent biases, models trained with our approach on real-world datasets should undergo ethical review to ensure the predictions do not propagate problematic stereotypes, and the approach is not used for applications including but not limited to illegal surveillance.

Experiments. We demonstrate that Mask2Former is an effective architecture for universal image segmentation through comparisons with specialized state-of-the-art architectures on standard benchmarks. We evaluate our proposed design decisions through ablations on all three tasks. Finally, we show that Mask2Former generalizes beyond the standard benchmarks, obtaining state-of-the-art results on four datasets. Datasets.
We study Mask2Former using four widely used image segmentation datasets that support semantic, instance and panoptic segmentation: COCO [35] (80 "things" and 53 "stuff" categories), ADE20K [65] (100 "things" and 50 "stuff" categories), Cityscapes [16] (8 "things" and 11 "stuff" categories) and Mapillary Vistas [42] (37 "things" and 28 "stuff" categories). The panoptic and semantic segmentation tasks are evaluated on the union of "things" and "stuff" categories, while instance segmentation is evaluated only on the "things" categories.

Evaluation metrics. For panoptic segmentation, we use the standard PQ (panoptic quality) metric [28]. We further report AP^Th_pan, the AP evaluated on the "thing" categories using instance segmentation annotations, and mIoU_pan, the mIoU for semantic segmentation obtained by merging instance masks from the same category, of the same model trained only with panoptic segmentation annotations. For instance segmentation, we use the standard AP (average precision) metric [35]. For semantic segmentation, we use mIoU (mean Intersection-over-Union) [19].

Panoptic segmentation. We compare Mask2Former with state-of-the-art models for panoptic segmentation on the COCO panoptic dataset [28]. Beyond the PQ metric, our Mask2Former also achieves higher performance on two other metrics compared to DETR [5] and MaskFormer: AP^Th_pan, the AP evaluated on the 80 "thing" categories using instance segmentation annotations, and mIoU_pan, the mIoU evaluated on the 133 categories for semantic segmentation converted from the panoptic segmentation annotations. This shows Mask2Former's universality: trained only with panoptic segmentation annotations, it can be used for instance and semantic segmentation.

Instance segmentation. We compare Mask2Former with state-of-the-art models on the COCO [35] dataset in Table 2.
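For reference, the mIoU metric mentioned above averages per-class intersection-over-union across classes. A minimal sketch with toy 1-D "pixel" label lists (illustrative only, not the official evaluation code):

```python
# Minimal mIoU sketch: per-class IoU averaged over classes present
# in either prediction or ground truth. Illustrative only.

def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union > 0:                       # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: two classes over six "pixels".
pred = [0, 0, 1, 1, 1, 0]
gt   = [0, 0, 1, 1, 0, 0]
print(round(mean_iou(pred, gt, num_classes=2), 3))  # 0.708
```

Class 0 scores IoU 3/4 and class 1 scores 2/3, so the mean is 17/24 ≈ 0.708.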
With a ResNet [25] backbone, Mask2Former outperforms a strong Mask R-CNN [24] baseline using large-scale jittering (LSJ) augmentation [18,23] while requiring 8× fewer training iterations. With a Swin-L backbone, Mask2Former outperforms the state-of-the-art HTC++ [6]. Although we only observe a +0.6 AP improvement over HTC++, the Boundary AP [12] improves by 2.1, suggesting that our predictions have better boundary quality thanks to the high-resolution mask predictions. Note that, for a fair comparison, we only consider single-scale inference and models trained with only COCO train2017 data. With a ResNet-50 backbone, Mask2Former improves over MaskFormer on small objects by 7.0 AP_S, while overall the highest gains come from large objects (+10.6 AP_L). The performance on AP_S still lags behind other state-of-the-art models; hence there remains room for improvement on small objects, e.g., by using dilated backbones as in DETR [5], which we leave for future work.

(Table 3 caption: Semantic segmentation on ADE20K val with 150 categories. Mask2Former consistently outperforms MaskFormer [14] by a large margin with different backbones; all Mask2Former models use MSDeformAttn [66] as the pixel decoder, except Swin-L-FaPN, which uses FaPN [39]. Our best model outperforms the best specialized model, BEiT [2]. We report both single-scale (s.s.) and multi-scale (m.s.) inference results. Backbones pre-trained on ImageNet-22K are marked with †.)

Semantic segmentation. We compare Mask2Former with state-of-the-art models for semantic segmentation on the ADE20K [65] dataset in Table 3. Mask2Former outperforms MaskFormer [14] across different backbones, suggesting that the proposed improvements boost semantic segmentation results even where [14] was already state-of-the-art. With Swin-L as the backbone and FaPN [39] as the pixel decoder, Mask2Former sets a new state of the art of 57.7 mIoU. We also report test set results in Appendix A.3.
In Table VIII, we report the results of Mask2Former obtained with various backbones on ADE20K for the three segmentation tasks and compare them with other state-of-the-art methods. Mask2Former with a Swin-L backbone sets a new state-of-the-art performance on ADE20K for panoptic segmentation. As few papers report results on ADE20K, we hope this experiment can set up a useful benchmark for future research. Here, we provide more results for Mask2Former with different backbones on COCO panoptic [28] for panoptic segmentation, COCO [35] for instance segmentation and ADE20K [65] for semantic segmentation. More specifically, for each benchmark, we evaluate Mask2Former with ResNet [25] with 50 and 101 layers, as well as the Swin [36] Tiny, Small, Base and Large variants as backbones. We use ImageNet [44] pre-trained checkpoints to initialize the backbones.

Limitations. Our ultimate goal is to train a single model for all image segmentation tasks. In Table 7, we find that Mask2Former trained on panoptic segmentation only performs slightly worse than the exact same model trained with the corresponding annotations for the instance and semantic segmentation tasks across three datasets. This suggests that even though Mask2Former can generalize to different tasks, it still needs to be trained for those specific tasks. In the future, we hope to develop a model that can be trained only once for multiple tasks and even for multiple datasets. Furthermore, as seen in Tables 2 and 4d, even though it improves over the baselines, Mask2Former struggles with segmenting small objects and is unable to fully leverage multi-scale features. We believe better utilization of the feature pyramid and designing losses for small objects are critical.

We now present Mask2Former. We first review a meta architecture for mask classification that Mask2Former is built upon. Then, we introduce our new Transformer decoder with masked attention, which is the key to better convergence and results.
Lastly, we propose training improvements that make Mask2Former efficient and accessible. Mask classification architectures group pixels into N segments by predicting N binary masks, along with N corresponding category labels. Mask classification is sufficiently general to address any segmentation task by assigning different semantics, e.g., categories or instances, to different segments. However, the challenge is to find good representations for each segment. For example, Mask R-CNN [24] uses bounding boxes as the representation, which limits its application to semantic segmentation. Inspired by DETR [5], each segment in an image can be represented as a C-dimensional feature vector (an "object query") and processed by a Transformer decoder trained with a set prediction objective. A simple meta architecture consists of three components: a backbone that extracts low-resolution features from an image; a pixel decoder that gradually upsamples the low-resolution features from the output of the backbone to generate high-resolution per-pixel embeddings; and finally a Transformer decoder that operates on image features to process object queries. The final binary mask predictions are decoded from the per-pixel embeddings with the object queries. One successful instantiation of such a meta architecture is MaskFormer [14], and we refer readers to [14] for more details. Mask2Former adopts the aforementioned meta architecture, with our proposed Transformer decoder (Figure 2 right) replacing the standard one. The key component of our Transformer decoder is a masked attention operator, which extracts localized features by constraining cross-attention to the foreground region of the predicted mask for each query, instead of attending to the full feature map. To handle small objects, we propose an efficient multi-scale strategy to utilize high-resolution features.
It feeds successive feature maps from the pixel decoder's feature pyramid into successive Transformer decoder layers in a round-robin fashion. Finally, we incorporate optimization improvements that boost model performance without introducing additional computation. We now discuss these improvements in detail.

(Figure 2 caption: Mask2Former adopts the meta architecture of [14] with a backbone, a pixel decoder and a Transformer decoder. We propose a new Transformer decoder with masked attention instead of the standard cross-attention (Section 3.2.1). To deal with small objects, we propose an efficient way of utilizing high-resolution features from the pixel decoder by feeding one scale of the multi-scale features to one Transformer decoder layer at a time (Section 3.2.2). In addition, we switch the order of self- and cross-attention (i.e., our masked attention), make query features learnable, and remove dropout to make computation more effective (Section 3.2.3). Note that positional embeddings and predictions from intermediate Transformer decoder layers are omitted in this figure for readability.)

Context features have been shown to be important for image segmentation [7,8,63]. However, recent studies [22,46] suggest that the slow convergence of Transformer-based models is due to the global context in the cross-attention layer, as it takes many training epochs for cross-attention to learn to attend to localized object regions [46]. We hypothesize that local features are enough to update query features, and that context information can be gathered through self-attention. For this, we propose masked attention, a variant of cross-attention that only attends within the foreground region of the predicted mask for each query.

Standard cross-attention (with a residual path) computes

X_l = softmax(Q_l K_l^T) V_l + X_{l-1}.   (1)

Here, l is the layer index, X_l ∈ R^{N×C} refers to the N C-dimensional query features at the l-th layer, and Q_l = f_Q(X_{l-1}) ∈ R^{N×C}. X_0 denotes the input query features to the Transformer decoder.
K_l, V_l ∈ R^{H_l W_l × C} are the image features under the transformations f_K(·) and f_V(·) respectively, where H_l and W_l are the spatial resolution of the image features that we will introduce next in Section 3.2.2. f_Q, f_K and f_V are linear transformations.

Our masked attention modulates the attention matrix via

X_l = softmax(M_{l-1} + Q_l K_l^T) V_l + X_{l-1}.   (2)

The attention mask M_{l-1} at feature location (x, y) is

M_{l-1}(x, y) = 0 if the binarized mask prediction at (x, y) equals 1, and −∞ otherwise.   (3)

Here, the binarized mask prediction in {0, 1}^{N × H_l W_l} is the output (thresholded at 0.5) of the mask prediction of the previous, (l−1)-th, Transformer decoder layer, resized to the same resolution as K_l. M_0 is obtained from the binary mask prediction of X_0, i.e., before feeding the query features into the Transformer decoder.

High-resolution features improve model performance, especially for small objects [5], but are computationally demanding. Thus, we propose an efficient multi-scale strategy to introduce high-resolution features while controlling the increase in computation. Instead of always using the high-resolution feature map, we utilize a feature pyramid that consists of both low- and high-resolution features and feed one resolution of the multi-scale features to one Transformer decoder layer at a time. Specifically, we use the feature pyramid produced by the pixel decoder with resolutions 1/32, 1/16 and 1/8 of the original image. For each resolution, we add both a sinusoidal positional embedding e_pos ∈ R^{H_l W_l × C}, following [5], and a learnable scale-level embedding e_lvl ∈ R^{1×C}, following [66]. We use these, from lowest resolution to highest resolution, for the corresponding Transformer decoder layers, as shown in Figure 2 left. We repeat this 3-layer Transformer decoder L times. Our final Transformer decoder hence has 3L layers.
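The masked attention of Eqs. (1)–(3) above amounts to adding −∞ to the attention logits at background locations before the softmax, so background weights become exactly zero. A NumPy sketch with toy shapes (single head, random matrices standing in for the learned projections f_Q, f_K, f_V):

```python
import numpy as np

rng = np.random.default_rng(1)
N, HW, C = 4, 36, 8                     # queries, flattened locations, channels

X_prev = rng.standard_normal((N, C))    # X_{l-1}: query features
K = rng.standard_normal((HW, C))        # stand-in for f_K(image features)
V = rng.standard_normal((HW, C))        # stand-in for f_V(image features)
Q = X_prev                              # stand-in for f_Q(X_{l-1})

# Binarized mask prediction from the previous layer, resized to HW.
fg = rng.random((N, HW)) > 0.5          # True = predicted foreground
fg[:, 0] = True                         # ensure each query keeps >= 1 location

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Eq. (3): additive mask, 0 on foreground, -inf on background.
M = np.where(fg, 0.0, -np.inf)

# Eq. (2): masked attention with residual path.
attn = softmax(M + Q @ K.T)             # background weights are exactly 0
X_l = attn @ V + X_prev

# All attention mass stays inside each query's predicted foreground.
print(np.allclose(attn[~fg], 0.0))      # True
```

Setting a logit to −∞ before the softmax is the standard way to realize a hard attention mask, since exp(−∞) = 0 removes those locations from the normalization.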
More specifically, the first three layers receive feature maps of resolution H_1 = H/32, H_2 = H/16, H_3 = H/8 and W_1 = W/32, W_2 = W/16, W_3 = W/8, where H and W are the original image resolution. This pattern is repeated in a round-robin fashion for all following layers.

A standard Transformer decoder layer [51] consists of three modules that process query features in the following order: a self-attention module, a cross-attention module and a feed-forward network (FFN). Moreover, the query features (X_0) are zero-initialized before being fed into the Transformer decoder and are associated with learnable positional embeddings. Furthermore, dropout is applied to both the residual connections and the attention maps. To optimize the Transformer decoder design, we make the following three improvements. First, we switch the order of self- and cross-attention (our new "masked attention") to make computation more effective: query features to the first self-attention layer are image-independent and carry no signal from the image, so applying self-attention to them is unlikely to enrich information. Second, we make the query features (X_0) learnable as well (we still keep the learnable query positional embeddings), and these learnable query features are directly supervised before being used in the Transformer decoder to predict masks (M_0). We find these learnable query features function like a region proposal network [43] and have the ability to generate mask proposals. Finally, we find that dropout is not necessary and usually decreases performance, so we completely remove dropout in our decoder.

One limitation of training universal architectures is their large memory consumption due to high-resolution mask prediction, which makes them less accessible than the more memory-friendly specialized architectures [6,24]. For example, MaskFormer [14] can only fit a single image on a GPU with 32G of memory.
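The round-robin scale schedule described above reduces to a simple modular indexing rule: decoder layer i (0-based) consumes pyramid scale i mod 3, from 1/32 up to 1/8. A sketch with a hypothetical 512×512 input:

```python
# Round-robin assignment of pyramid scales to Transformer decoder layers.
# Scales are ordered lowest to highest resolution: 1/32, 1/16, 1/8.
# Image size is a made-up example.
H, W = 512, 512
scales = [32, 16, 8]
L = 3                          # number of round-robin repetitions
num_layers = 3 * L             # final decoder depth: 3L layers

schedule = []
for i in range(num_layers):
    s = scales[i % 3]          # layer i consumes scale i mod 3
    schedule.append((i, f"1/{s}", (H // s, W // s)))

for layer, scale, (h, w) in schedule[:4]:
    print(layer, scale, (h, w))
# layer 0 -> 1/32 (16, 16), layer 1 -> 1/16 (32, 32),
# layer 2 -> 1/8 (64, 64), layer 3 -> 1/32 (16, 16), ...
```

This keeps the per-layer attention cost bounded: only one of every three layers touches the expensive 1/8-resolution feature map.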
Motivated by PointRend [30] and Implicit PointRend [13], which show that a segmentation model can be trained with its mask loss calculated on K randomly sampled points instead of the whole mask, we calculate the mask loss with sampled points in both the matching and the final loss calculation. More specifically, in the matching loss that constructs the cost matrix for bipartite matching, we uniformly sample the same set of K points for all prediction and ground truth masks. In the final loss between predictions and their matched ground truths, we sample different sets of K points for different pairs of prediction and ground truth using importance sampling [30]. We set K = 12544, i.e., 112 × 112 points. This new training strategy effectively reduces training memory by 3×, from 18GB to 6GB per image, making Mask2Former more accessible to users with limited computational resources.

We adopt the settings from [14] with the following differences. Pixel decoder. Mask2Former is compatible with any existing pixel decoder module. In MaskFormer [14], FPN [33] is chosen as the default for its simplicity. Since our goal is to demonstrate strong performance across different segmentation tasks, we use the more advanced multi-scale deformable attention Transformer (MSDeformAttn) [66] as our default pixel decoder. Specifically, we use 6 MSDeformAttn layers applied to feature maps with resolutions 1/8, 1/16 and 1/32, and use a simple upsampling layer with a lateral connection on the final 1/8 feature map to generate the feature map of resolution 1/4 as the per-pixel embedding. In our ablation study, we show that this pixel decoder provides the best results across different segmentation tasks. Transformer decoder. We use our Transformer decoder proposed in Section 3.2 with L = 3 (i.e., 9 layers total) and 100 queries by default. An auxiliary loss is added to every intermediate Transformer decoder layer and to the learnable query features before the Transformer decoder. Loss weights.
We use the binary cross-entropy loss (instead of the focal loss [34] used in [14]) and the dice loss [41] for our mask loss: L_mask = λ_ce L_ce + λ_dice L_dice. We set λ_ce = 5.0 and λ_dice = 5.0. The final loss is a combination of the mask loss and the classification loss, L_mask + λ_cls L_cls, and we set λ_cls = 2.0 for predictions matched with a ground truth and 0.1 for the "no object" predictions, i.e., predictions that have not been matched with any ground truth.

Post-processing. We use the exact same post-processing as [14] to acquire the expected output format for panoptic and semantic segmentation from pairs of binary masks and class predictions. Instance segmentation requires an additional confidence score for each prediction; we multiply the class confidence and the mask confidence (i.e., the averaged foreground per-pixel binary mask probability) for the final confidence.

Panoptic and instance segmentation. We use Detectron2 [57] and follow the updated Mask R-CNN [24] baseline settings for the COCO dataset. More specifically, we use the AdamW [38] optimizer and the step learning rate schedule. We use an initial learning rate of 0.0001 and a weight decay of 0.05 for all backbones. A learning rate multiplier of 0.1 is applied to the backbone, and we decay the learning rate at the 0.9 and 0.95 fractions of the total number of training steps by a factor of 10. If not stated otherwise, we train our models for 50 epochs with a batch size of 16. For data augmentation, we use the large-scale jittering (LSJ) augmentation [18,23] with a random scale sampled from the range 0.1 to 2.0, followed by a fixed-size crop to 1024×1024. We use the standard Mask R-CNN inference setting, where we resize an image with the shorter side to 800 and the longer side up to 1333. We also report FLOPs and fps. FLOPs are averaged over 100 validation images (COCO images have varying sizes).
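The pieces described above, K = 12544 sampled points, the BCE + dice mask loss with λ_ce = λ_dice = 5.0, and λ_cls = 2.0 / 0.1, can be combined in a minimal sketch. Only uniform point sampling is shown (the importance sampling of [30] is omitted), and all input values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
HW, K = 256 * 256, 12544          # toy full-mask size; K = 112 x 112 points

pred = rng.random(HW)             # predicted mask probabilities (made up)
gt = (rng.random(HW) > 0.5).astype(float)

# Uniform point sampling, as used for the bipartite-matching cost; the
# final loss would additionally use importance sampling [30].
idx = rng.choice(HW, size=K, replace=False)
p, t = pred[idx], gt[idx]

# Point-sampled BCE and soft dice losses.
eps = 1e-7
p = np.clip(p, eps, 1 - eps)
l_ce = float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))
l_dice = 1.0 - 2.0 * float((p * t).sum()) / float(p.sum() + t.sum())

# L_mask = 5.0 * L_ce + 5.0 * L_dice; the total adds lambda_cls * L_cls
# with lambda_cls = 2.0 for matched predictions and 0.1 for "no object".
l_cls = 0.5                        # made-up classification loss value
l_mask = 5.0 * l_ce + 5.0 * l_dice
total_matched = l_mask + 2.0 * l_cls
total_no_object = l_mask + 0.1 * l_cls

# Instance-segmentation confidence: class confidence times the averaged
# foreground per-pixel binary mask probability.
cls_conf = 0.9
confidence = cls_conf * float(pred[pred > 0.5].mean())
print(l_mask > 0 and total_matched > total_no_object)   # True
```

Evaluating the loss on K points rather than all HW locations is what yields the 3× memory saving reported in the text: only the sampled logits need to be kept for the backward pass.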
Frames-per-second (fps) is measured on a V100 GPU with a batch size of 1 by taking the average runtime on the entire validation set including post-processing time. Semantic segmentation. We follow the same settings as [14] to train our models, except: 1) a learning rate multiplier of 0.1 is applied to both CNN and Transformer backbones instead of only applying it to CNN backbones in [14], 2) both ResNet and Swin backbones use an initial learning rate of 0.0001 and a weight decay of 0.05, instead of using different learning rates in [14]. We now analyze Mask2Former through a series of ablation studies using a ResNet-50 backbone [25]. To test the generality of the proposed components for universal image segmentation, all ablations are performed on three tasks. Transformer decoder. We validate the importance of each component by removing them one at a time. As shown in Table 4a, masked attention leads to the biggest improvement across all tasks. The improvement is larger for instance and panoptic segmentation than for semantic segmentation. Moreover, using high-resolution features from the efficient multi-scale strategy is also important. Table 4b shows additional optimization improvements further improve the performance without extra computation. Masked attention. Concurrent work has proposed other variants of cross-attention [22,40] that aim to improve the convergence and performance of DETR [5] for object detection. Most recently, K-Net [62] replaced cross-attention with a mask pooling operation that averages features within mask regions. We validate the importance of our masked attention in Table 4c. While existing cross-attention variants may improve on a specific task, our masked attention performs the best on all three tasks. Feature resolution. Table 4d shows that Mask2Former benefits from using high-resolution features (e.g., a single scale of 1/8) in the Transformer decoder. However, this introduces additional computation. 
Our efficient multi-scale (efficient m.s.) strategy effectively reduces the FLOPs without affecting the performance. Note that naively concatenating multi-scale features as input to every Transformer decoder layer (naïve m.s.) does not yield additional gains. Pixel decoder. As shown in Table 4e, Mask2Former is compatible with any existing pixel decoder. However, we observe that different pixel decoders specialize in different tasks: while BiFPN [47] performs better on instance-level segmentation, FaPN [39] works better for semantic segmentation. Among all studied pixel decoders, MSDeformAttn [66] consistently performs the best across all tasks and is thus selected as our default. This set of ablations also suggests that designing a module like a pixel decoder for a specific task does not guarantee generalization across segmentation tasks. Mask2Former, as a universal model, could serve as a testbed for generalizable module design.

Calculating loss with points vs. masks. In Table 5, we study the performance and memory implications of calculating the loss based on either whole masks or sampled points. Calculating the final training loss with sampled points reduces training memory by 3× without affecting the performance. Additionally, calculating the matching loss with sampled points improves performance across all three tasks.

Learnable queries as region proposals. Region proposals [1,50], either in the form of boxes or masks, are regions that are likely to be "objects." With the learnable queries being supervised by the mask loss, predictions from the learnable queries can serve as mask proposals. In Figure 3 top, we visualize the mask predictions of selected learnable queries before feeding them into the Transformer decoder (the proposal generation process is shown in Figure 3 bottom right).
In Figure 3 bottom left, we further perform a quantitative analysis of the quality of these proposals by calculating the class-agnostic average recall with 100 predictions (AR@100) on COCO val2017. We find that these learnable queries already achieve good AR@100 compared to the final predictions of Mask2Former after the Transformer decoder layers, i.e., layer 9, and AR@100 consistently improves with more decoder layers.

(Figure 3 caption: Learnable queries as "region proposals." Top: mask predictions of four selected learnable queries before feeding them into the Transformer decoder (using an R50 backbone). Bottom left: class-agnostic average recall with 100 proposals (AR@100) on COCO val2017, showing that these learnable queries provide good proposals compared to the final predictions of Mask2Former after the Transformer decoder layers (layer 9). Bottom right: illustration of the proposal generation process.)

Cityscapes is an urban egocentric street-view dataset with high-resolution images (1024 × 2048 pixels). It contains 2975 images for training, 500 images for validation and 1525 images for testing, with a total of 19 classes.

Training settings. For all three segmentation tasks, we use a crop size of 512 × 1024 and a batch size of 16, and train all models for 90k iterations. During inference, we operate on the whole image (1024 × 2048). Other implementation details largely follow Section 4.1 (panoptic and instance segmentation follow the semantic segmentation training settings), except that we use 200 queries for panoptic and instance segmentation models with a Swin-L backbone.
All other backbones and semantic segmentation models use 100 queries.

Results. In Table VII, we report Mask2Former results obtained with various backbones on Cityscapes for the three segmentation tasks and compare them with other state-of-the-art methods that do not use extra data. For panoptic segmentation, Mask2Former with a Swin-L backbone outperforms the state-of-the-art Panoptic-DeepLab [11] with SWideRNet [9] using single-scale inference. For semantic segmentation, Mask2Former with a Swin-B backbone outperforms the state-of-the-art SegFormer [59]. Besides PQ for panoptic segmentation, we also report AP^Th_pan (the AP evaluated on the "thing" categories using instance segmentation annotations) and mIoU_pan (the mIoU for semantic segmentation converted from the panoptic segmentation annotations) of the same model trained for panoptic segmentation (note: we train all our models with panoptic segmentation annotations only). Backbones pre-trained on ImageNet-22K are marked with †.

Training settings. For panoptic and instance segmentation, we use the exact same training parameters as for semantic segmentation, except that we always use a crop size of 640 × 640 for all backbones. Other implementation details largely follow Section 4.1, except that we use 200 queries for panoptic and instance segmentation models with a Swin-L backbone. All other backbones and semantic segmentation models use 100 queries.

Mapillary Vistas is a large-scale urban street-view dataset with 18k, 2k and 5k images for training, validation and testing, respectively. It contains images with a variety of resolutions, ranging from 1024 × 768 to 4000 × 6000. We only report panoptic and semantic segmentation results for this dataset. Training settings.
For both panoptic and semantic segmentation, we follow the same data augmentation as [14]: standard random scale jittering between 0.5 and 2.0, random horizontal flipping, random cropping with a crop size of 1024 × 1024, and random color jittering. We train our model for 300k iterations with a batch size of 16 using the "poly" learning rate schedule [7]. During inference, we resize the longer side to 2048 pixels. Our panoptic segmentation model with a Swin-L backbone uses 200 queries. All other backbones and semantic segmentation models use 100 queries.

Results. In Table IX, we report Mask2Former results obtained with various backbones on Mapillary Vistas for the panoptic and semantic segmentation tasks and compare them with other state-of-the-art methods. Our Mask2Former is very competitive with state-of-the-art specialized models, even though it is not designed for Mapillary Vistas.

We perform additional ablation studies of Mask2Former using the same settings as in the main paper: a single ResNet-50 backbone [25]. We train Mask2Former for 12, 25, 50 and 100 epochs with either standard scale augmentation (Standard Aug.) [57] or the more recent large-scale jittering augmentation (LSJ Aug.) [18,23]. This shows that Mask2Former with our proposed Transformer decoder converges faster than models using the standard Transformer decoder: e.g., DETR [5] and MaskFormer [14] require 500 epochs and 300 epochs, respectively.

(For the ADE20K test set, we train our model on the union of the ADE20K train and val sets with an ImageNet-22K pre-trained checkpoint, following [14], and use multi-scale inference.)

We quantitatively and qualitatively analyzed the COCO panoptic model with the R50 backbone. First, we visualize the last three attention maps of our model using cross-attention (Figure Ia top) and masked attention (Figure Ia bottom) for a single query that predicts the "cat." With cross-attention, the attention map spreads over the entire image, and the region with the highest response is outside the object of interest. We believe this is because the softmax used in cross-attention never attains zero, so small attention weights on large background regions start to dominate. In contrast, masked attention limits the attention weights to focus on the object. We validate this hypothesis in Table Ib: we compute the cumulative attention weights on the foreground (defined by the ground truth matched to each prediction) and background for all queries on the entire COCO val set. On average, only 20% of the attention weights in cross-attention focus on the foreground, while masked attention increases this ratio to almost 60%. Second, we plot the panoptic segmentation performance using the output from each Transformer decoder layer (Figure II). We find that masked attention with a single Transformer decoder layer already outperforms cross-attention with 9 layers. We hope the effectiveness of masked attention, together with this analysis, leads to better attention designs.

Object queries play an important role in Mask2Former. We ablate different design choices for object queries, including the number of queries and making queries learnable. Number of queries. We study the effect of different numbers of queries for the three image segmentation tasks in Table Xa. For instance and semantic segmentation, using 100 queries achieves the best performance, while using 200 queries can further improve panoptic segmentation results. As panoptic segmentation is a combination of instance and semantic segmentation, it has more segments per image than the other two tasks.
This ablation suggests that picking the number of queries for Mask2Former may depend on the number of segments per image for a particular task or dataset.

Learnable queries. An object query consists of two parts: object query features and object query positional embeddings. Object query features are only used as the initial input to the Transformer decoder and are updated through decoder layers, whereas query positional embeddings are added to the query features in every Transformer decoder layer when computing the attention weights. In DETR [5], query features are zero-initialized and query positional embeddings are learnable. Furthermore, there is no direct supervision on these query features before feeding them into the Transformer (since they are zero vectors). In our Mask2Former, we still make query positional embeddings learnable. In addition, we make query features learnable as well and directly apply losses on these learnable query features before feeding them into the Transformer decoder.

In Table Xb, we compare our learnable query features with the zero-initialized query features of DETR. We find it is important to directly supervise object queries even before feeding them into the Transformer decoder: learnable queries without supervision perform similarly to zero-initialized queries in DETR.

Mask2Former builds upon the same meta architecture as MaskFormer [14] with two major differences: 1) we use more advanced training parameters, summarized in Table XIa; and 2) we propose a new Transformer decoder with masked attention instead of the standard Transformer decoder, as well as some optimization improvements, summarized in Table XIb. To better understand Mask2Former's improvements over MaskFormer, we perform ablation studies on the training parameter improvements and the Transformer decoder improvements in isolation.

In Table XIc, we study our new training parameters.
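The masked-attention behavior analyzed in the ablations above can be sketched in a few lines. This is a minimal, single-query illustration and not the actual implementation: logits at locations outside the foreground region predicted by the previous layer are set to -∞ before the softmax, so their weights become exactly zero; as an assumption here (to keep the softmax well-defined), we fall back to ordinary cross-attention when no location is foreground.

```python
import math

def softmax(logits):
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def masked_attention(logits, foreground):
    """Attention weights for one query over flattened spatial locations.

    logits: raw query-key attention logits, one per location.
    foreground: booleans from the previous layer's binary mask prediction.
    Background logits are set to -inf, so the softmax gives them weight 0.
    """
    if not any(foreground):
        # assumed fallback: plain cross-attention if the predicted mask is empty
        return softmax(logits)
    neg_inf = float("-inf")
    return softmax([x if fg else neg_inf for x, fg in zip(logits, foreground)])
```

With logits [1.0, 2.0, 0.5, 3.0] and foreground [True, True, False, False], the highest raw logit sits on a background location (mirroring the "highest response outside the object" observation above), yet masked attention assigns all of its weight to the two foreground locations.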
We train Mask2Former with different numbers of epochs using either standard scale augmentation (Standard Aug.) [57] or the more recent large-scale jittering augmentation (LSJ Aug.) [18,23]. Mask2Former converges in 25 epochs using standard augmentation and almost converges in 50 epochs using large-scale jittering augmentation. Using LSJ also improves performance with longer training (i.e., with more than 25 epochs).

We first provide more results for Mask2Former with different backbones as well as test-set performance on standard benchmarks (Appendix A): we use COCO panoptic [28] for panoptic, COCO [35] for instance, and ADE20K [65] for semantic segmentation. Then, we provide more detailed results on additional datasets (Appendix B). Finally, we provide additional ablation studies (Appendix C) and visualizations of Mask2Former predictions for all three segmentation tasks (Appendix D).

In Table I, we report Mask2Former with various backbones on COCO panoptic val2017. Mask2Former outperforms all existing panoptic segmentation models with various backbones. Our best model sets a new state-of-the-art of 57.8 PQ.

In Table III, we report Mask2Former instance segmentation results obtained with various backbones on COCO val2017. Mask2Former outperforms the best single-scale model, HTC++ [6,36]. Note that it is non-trivial to do multi-scale inference for instance-level segmentation tasks without introducing complex post-processing like non-maximum suppression. Thus, we only compare Mask2Former with other single-scale inference models. We believe multi-scale inference can further improve Mask2Former's performance, and it remains interesting future work.

In Table IV, we further report the best Mask2Former model on the test-dev set. Mask2Former achieves the absolute new state-of-the-art performance on both the validation and test sets.
On the one hand, Mask2Former is extremely good at segmenting large objects: we even outperform the challenge winner (which uses extra training data, model ensembles, etc.) on AP_L by a large margin without any bells and whistles. On the other hand, the poorer performance on small objects leaves room for further improvement in the future.

In Table V, we compare with specialized semantic segmentation models, including Segmenter [45] (ViT-L), SETR [64] (ViT-L) and SegFormer [59] (MiT-B5).

We also train the MaskFormer model with either its original training parameters from [14] or our new training parameters. We observe significant improvements from using our new training parameters for MaskFormer as well. This shows the new training parameters are also generally applicable to other models.

In Table XId, we study our new Transformer decoder. We train a MaskFormer model and a Mask2Former model with the exact same backbone (a ResNet-50), pixel decoder (an FPN), and training parameters. That is, the only difference is in the Transformer decoder, summarized in Table XIb. We observe improvements on all three tasks, suggesting that the new Transformer decoder itself is indeed better than the standard Transformer decoder.

While computational efficiency was not our primary goal, we find that Mask2Former actually has a better compute-performance trade-off than MaskFormer (Figure III). Even the lightest instantiation of Mask2Former outperforms the heaviest MaskFormer instantiation. We visualize sample predictions of the Mask2Former model with a Swin-L [36] backbone.

Per-pixel classification is not all you need for semantic segmentation
Bowen Cheng, Alex Schwing, Alexander Kirillov (2021, arXiv:2107.06278)

Introduction. The goal of semantic segmentation is to partition an image into regions with different semantic categories. Starting from the Fully Convolutional Networks (FCNs) work of Long et al.
[30], most deep learning-based semantic segmentation approaches formulate semantic segmentation as per-pixel classification (Figure 1, left), applying a classification loss to each output pixel [9,52]. Per-pixel predictions in this formulation naturally partition an image into regions of different classes.

Mask classification is an alternative paradigm that disentangles the image partitioning and classification aspects of segmentation. Instead of classifying each pixel, mask classification-based methods predict a set of binary masks, each associated with a single class prediction (Figure 1, right). The more flexible mask classification dominates the field of instance-level segmentation. Both Mask R-CNN [21] and DETR [4] yield a single class prediction per segment for instance and panoptic segmentation. In contrast, per-pixel classification assumes a static number of outputs and cannot return a variable number of predicted regions/segments, which is required for instance-level tasks.

Our key observation: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks. In fact, before FCN [30], the best-performing semantic segmentation methods like O2P [5] and SDS [20] used a mask classification formulation. Given this perspective, a natural question emerges: can a single mask classification model simplify the landscape of effective approaches to semantic- and instance-level segmentation tasks? And can such a mask classification model outperform existing per-pixel classification methods for semantic segmentation?

To address both questions we propose a simple MaskFormer approach that seamlessly converts any existing per-pixel classification model into a mask classification model.
Using the set prediction mechanism proposed in DETR [4], MaskFormer employs a Transformer decoder [41] to compute a set of pairs, each consisting of a class prediction and a mask embedding vector. (Matching between the set of predictions and the ground truth segments can be done either via bipartite matching, similarly to DETR [4], or by a fixed matching via direct indexing if the number of predictions and classes match, i.e., if N = K.) The mask embedding vector is used to get the binary mask prediction via a dot product with the per-pixel embedding obtained from an underlying fully-convolutional network. The new model solves both semantic- and instance-level segmentation tasks in a unified manner: no changes to the model, losses, or training procedure are required. Specifically, for semantic and panoptic segmentation tasks alike, MaskFormer is supervised with the same per-pixel binary mask loss and a single classification loss per mask. Finally, we design a simple inference strategy to blend MaskFormer outputs into a task-dependent prediction format.

We evaluate MaskFormer on five semantic segmentation datasets with various numbers of categories: Cityscapes [15] (19 classes), Mapillary Vistas [34] (65 classes), ADE20K [55] (150 classes), COCO-Stuff-10K [3] (171 classes), and ADE20K-Full [55] (847 classes). While MaskFormer performs on par with per-pixel classification models on Cityscapes, which has a few diverse classes, the new model demonstrates superior performance on datasets with a larger vocabulary. We hypothesize that a single class prediction per mask models fine-grained recognition better than per-pixel class predictions.
MaskFormer achieves the new state-of-the-art on ADE20K (55.6 mIoU) with a Swin-Transformer [29] backbone, outperforming a per-pixel classification model [29] with the same backbone by 2.1 mIoU, while being more efficient (10% reduction in parameters and 40% reduction in FLOPs).

Finally, we study MaskFormer's ability to solve instance-level tasks using two panoptic segmentation datasets: COCO [28,24] and ADE20K [55]. MaskFormer outperforms a more complex DETR model [4] with the same backbone and the same post-processing. Moreover, MaskFormer achieves the new state-of-the-art on COCO (52.7 PQ), outperforming the prior state-of-the-art [42] by 1.6 PQ. Our experiments highlight MaskFormer's ability to unify instance- and semantic-level segmentation.

Related Work

Both per-pixel classification and mask classification have been extensively studied for semantic segmentation. In early work, Konishi and Yuille [25] apply per-pixel Bayesian classifiers based on local image statistics. Then, inspired by early works on non-semantic groupings [13,36], mask classification-based methods became popular, demonstrating the best performance in the PASCAL VOC challenges [18]. Methods like O2P [5] and CFM [16] achieved state-of-the-art results by classifying mask proposals [6,40,2]. In 2015, FCN [30] extended the idea of per-pixel classification to deep nets, significantly outperforming all prior methods on mIoU (a per-pixel evaluation metric which particularly suits the per-pixel classification formulation of segmentation).

Per-pixel classification became the dominant way for deep-net-based semantic segmentation since the seminal work of Fully Convolutional Networks (FCNs) [30].
Modern semantic segmentation models focus on aggregating long-range context in the final feature map: ASPP [7,8] uses atrous convolutions with different atrous rates; PPM [52] uses pooling operators with different kernel sizes; DANet [19], OCNet [51], and CCNet [23] use different variants of non-local blocks [43]. Recently, SETR [53] and Segmenter [37] replaced traditional convolutional backbones with Vision Transformers (ViT) [17] that capture long-range context starting from the very first layer. However, these concurrent Transformer-based [41] semantic segmentation approaches still use a per-pixel classification formulation. Note that our MaskFormer module can convert any per-pixel classification model to the mask classification setting, allowing seamless adoption of advances in per-pixel classification.

Mask classification is commonly used for instance-level segmentation tasks [20,24]. These tasks require a dynamic number of predictions, making application of per-pixel classification challenging, as it assumes a static number of outputs. The omnipresent Mask R-CNN [21] uses a global classifier to classify mask proposals for instance segmentation. DETR [4] further incorporates a Transformer [41] design to handle thing and stuff segmentation simultaneously for panoptic segmentation [24]. However, these mask classification methods require predictions of bounding boxes, which may limit their usage in semantic segmentation. The recently proposed Max-DeepLab [42] removes the dependence on box predictions for panoptic segmentation with conditional convolutions [39,44]. However, in addition to the main mask classification losses, it requires multiple auxiliary losses (i.e., an instance discrimination loss, a mask-ID cross-entropy loss, and the standard per-pixel classification loss).

3 From Per-Pixel to Mask Classification

In this section, we first describe how semantic segmentation can be formulated as either a per-pixel classification or a mask classification problem.
Then, we introduce our instantiation of the mask classification model with the help of a Transformer decoder [41]. Finally, we describe simple inference strategies to transform mask classification outputs into task-dependent prediction formats.

Methodology

For per-pixel classification, a segmentation model aims to predict the probability distribution over all possible $K$ categories for every pixel of an $H \times W$ image: $y = \{p_i \mid p_i \in \Delta^K\}_{i=1}^{H \cdot W}$. Here $\Delta^K$ is the $K$-dimensional probability simplex. Training a per-pixel classification model is straightforward: given ground truth category labels $y^{\mathrm{gt}} = \{y_i^{\mathrm{gt}} \mid y_i^{\mathrm{gt}} \in \{1, \dots, K\}\}_{i=1}^{H \cdot W}$ for every pixel, a per-pixel cross-entropy (negative log-likelihood) loss is usually applied, i.e., $\mathcal{L}_{\text{pixel-cls}}(y, y^{\mathrm{gt}}) = \sum_{i=1}^{H \cdot W} -\log p_i(y_i^{\mathrm{gt}})$.

Mask classification splits the segmentation task into 1) partitioning/grouping the image into $N$ regions ($N$ does not need to equal $K$), represented with binary masks $\{m_i \mid m_i \in [0,1]^{H \times W}\}_{i=1}^{N}$; and 2) associating each region as a whole with some distribution over the $K$ categories. To jointly group and classify a segment, i.e., to perform mask classification, we define the desired output $z$ as a set of $N$ probability-mask pairs, i.e., $z = \{(p_i, m_i)\}_{i=1}^{N}$. In contrast to per-pixel class probability prediction, for mask classification the probability distribution $p_i \in \Delta^{K+1}$ contains an auxiliary "no object" label (∅) in addition to the $K$ category labels. The ∅ label is predicted for masks that do not correspond to any of the $K$ categories. Note that mask classification allows multiple mask predictions with the same associated class, making it applicable to both semantic- and instance-level segmentation tasks.

To train a mask classification model, a matching $\sigma$ between the set of predictions $z$ and the set of $N^{\mathrm{gt}}$ ground truth segments $z^{\mathrm{gt}} = \{(c_i^{\mathrm{gt}}, m_i^{\mathrm{gt}}) \mid c_i^{\mathrm{gt}} \in \{1, \dots, K\}, m_i^{\mathrm{gt}} \in \{0,1\}^{H \times W}\}_{i=1}^{N^{\mathrm{gt}}}$ is required.
Here $c_i^{\mathrm{gt}}$ is the ground truth class of the $i$-th ground truth segment. Since the size of the prediction set $|z| = N$ and of the ground truth set $|z^{\mathrm{gt}}| = N^{\mathrm{gt}}$ generally differ, we assume $N \ge N^{\mathrm{gt}}$ and pad the set of ground truth labels with "no object" tokens ∅ to allow one-to-one matching.

For semantic segmentation, a trivial fixed matching is possible if the number of predictions $N$ matches the number of category labels $K$. In this case, the $i$-th prediction is matched to the ground truth region with class label $i$, and to ∅ if a region with class label $i$ is not present in the ground truth. In our experiments, we found that a bipartite matching-based assignment demonstrates better results than the fixed matching. Unlike DETR [4], which uses bounding boxes to compute the assignment costs between prediction $z_i$ and ground truth $z_j^{\mathrm{gt}}$ for the matching problem, we directly use class and mask predictions, i.e., $-p_i(c_j^{\mathrm{gt}}) + \mathcal{L}_{\text{mask}}(m_i, m_j^{\mathrm{gt}})$, where $\mathcal{L}_{\text{mask}}$ is a binary mask loss.

To train model parameters, given a matching, the main mask classification loss $\mathcal{L}_{\text{mask-cls}}$ is composed of a cross-entropy classification loss and a binary mask loss $\mathcal{L}_{\text{mask}}$ for each predicted segment:

$$\mathcal{L}_{\text{mask-cls}}(z, z^{\mathrm{gt}}) = \sum_{j=1}^{N} \left[ -\log p_{\sigma(j)}(c_j^{\mathrm{gt}}) + \mathbb{1}_{c_j^{\mathrm{gt}} \neq \varnothing} \, \mathcal{L}_{\text{mask}}(m_{\sigma(j)}, m_j^{\mathrm{gt}}) \right]. \quad (1)$$

(From the Figure 2 caption: the model predicts $N$ possibly overlapping binary mask predictions via a dot product between pixel embeddings $E_{\text{pixel}}$ and mask embeddings $E_{\text{mask}}$, followed by a sigmoid activation; for the semantic segmentation task, the final prediction is obtained by combining the $N$ binary masks with their class predictions using a simple matrix multiplication, see Section 3.4.)

Note that most existing mask classification models use auxiliary losses (e.g., a bounding box loss [21,4] or an instance discrimination loss [42]) in addition to $\mathcal{L}_{\text{mask-cls}}$.
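To make the matching cost and the loss in Eq. (1) concrete, here is a small self-contained sketch. It is not the paper's implementation: it uses a brute-force minimum-cost matching instead of the Hungarian algorithm, and a soft dice loss with an assumed +1.0 smoothing constant stands in for the binary mask loss $\mathcal{L}_{\text{mask}}$.

```python
import itertools
import math

def dice_loss(pred, gt):
    # soft dice loss between predicted mask probabilities and a binary
    # ground-truth mask (both flattened); the +1.0 smoothing is an assumption
    inter = sum(p * g for p, g in zip(pred, gt))
    return 1.0 - (2.0 * inter + 1.0) / (sum(pred) + sum(gt) + 1.0)

def bipartite_match(preds, gts):
    # preds: list of (class_probs, mask); gts: list of (gt_class, gt_mask)
    # assignment cost per pair: -p_i(c_j^gt) + L_mask(m_i, m_j^gt)
    # brute force over permutations (fine for tiny N; Hungarian in practice)
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(preds)), len(gts)):
        cost = sum(-preds[i][0][gts[j][0]] + dice_loss(preds[i][1], gts[j][1])
                   for j, i in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[j] = index of the prediction matched to ground truth j

def mask_cls_loss(preds, gts, sigma, no_object):
    # Eq. (1): cross-entropy on the matched class plus the mask loss for real
    # segments; unmatched predictions are supervised towards "no object"
    loss = 0.0
    for j, (c_gt, m_gt) in enumerate(gts):
        p, m = preds[sigma[j]]
        loss += -math.log(p[c_gt]) + dice_loss(m, m_gt)
    for i in range(len(preds)):
        if i not in sigma:
            loss += -math.log(preds[i][0][no_object])
    return loss
```

For example, with two predictions and two ground-truth segments whose classes and masks clearly cross over, the matching picks the permutation that pairs each prediction with the segment it actually fits, and only the classification terms contribute when the matched masks are exact.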
In the next section we present a simple mask classification model that allows end-to-end training with $\mathcal{L}_{\text{mask-cls}}$ alone.

Datasets

We study MaskFormer using five semantic segmentation datasets and two panoptic segmentation datasets. Here, we provide more detailed information about these datasets.

A.1 Semantic segmentation datasets

ADE20K [55] contains 20k images for training and 2k images for validation. The data comes from the ADE20K-Full dataset, where 150 semantic categories are selected for evaluation in the SceneParse150 challenge [54]. The images are resized such that the shortest side is no greater than 512 pixels. During inference, we resize the shorter side of the image to the corresponding crop size.

COCO-Stuff-10K [3] has 171 semantic-level categories. There are 9k images for training and 1k images for testing. Images in the COCO-Stuff-10K dataset are a subset of the COCO dataset [28]. During inference, we resize the shorter side of the image to the corresponding crop size.

ADE20K-Full [55] contains 25k images for training and 2k images for validation. The ADE20K-Full dataset is annotated in an open-vocabulary setting with more than 3000 semantic categories. We filter these categories by selecting those that are present in both the training and validation sets, resulting in a total of 847 categories. We follow the same process as for ADE20K-SceneParse150 and resize images such that the shortest side is no greater than 512 pixels. During inference, we resize the shorter side of the image to the corresponding crop size.

Cityscapes [15]. During training, we resize the short side of images to 2048 before applying scale augmentation. We use a crop size of 1280 × 1280, a batch size of 16, and train all models for 300k iterations.
During inference, we resize the longer side of the image to 2048 and only use three scales (0.5, 1.0 and 1.5) for multi-scale testing due to GPU memory constraints.

A.2 Panoptic segmentation datasets

COCO panoptic [24] is one of the most commonly used datasets for panoptic segmentation. It has 133 categories (80 "thing" categories with instance-level annotation and 53 "stuff" categories) in 118k images for training and 5k images for validation. All images are from the COCO dataset [28].

ADE20K panoptic [55] combines the ADE20K semantic segmentation annotation from the SceneParse150 challenge [54] and the ADE20K instance annotation from the COCO+Places challenge [1]. Among the 150 categories, there are 100 "thing" categories with instance-level annotation. We find that filtering masks with a lower threshold than for COCO (we use 0.7 for ADE20K, whereas COCO uses 0.8) gives slightly better performance.

ADE20K-Full. We further demonstrate the benefits in large-vocabulary semantic segmentation in Table IIb. Since we are the first to report performance on this dataset, we only compare MaskFormer with our per-pixel classification baselines. MaskFormer not only achieves better performance, but is also more memory-efficient on the ADE20K-Full dataset with 847 categories, thanks to decoupling the number of masks from the number of classes. These results show that our MaskFormer has the potential to deal with real-world segmentation problems with thousands of categories.

Cityscapes. In Table IIIa, we report MaskFormer performance on Cityscapes, the standard testbed for modern semantic segmentation methods. The dataset has only 19 categories, and therefore the recognition aspect of the dataset is less challenging than in the other considered datasets.

Conclusion

Our main goal is to show that mask classification is a general segmentation paradigm that could be a competitive alternative to per-pixel classification for semantic segmentation.
To better understand its potential for segmentation tasks, we focus on exploring mask classification independently of other factors like architecture, loss design, or augmentation strategy. We pick the DETR [4] architecture as our baseline for its simplicity and deliberately make as few architectural changes as possible. Therefore, MaskFormer can be viewed as a "box-free" version of DETR. In this section, we discuss in detail the differences between MaskFormer and DETR and show how these changes are required to ensure that mask classification performs well. First, to achieve a pure mask classification setting we remove the box prediction head and perform matching between predictions and ground truth segments with masks instead of boxes. Secondly, we replace the compute-heavy per-query mask head used in DETR with a more efficient per-image FPN-based head to make end-to-end training without box supervision feasible.

Matching with masks is superior to matching with boxes. We compare MaskFormer models trained using matching with boxes or masks in Table 5. To do box-based matching, we add to MaskFormer an additional box prediction head as in DETR [4]. Observe that MaskFormer, which directly matches with mask predictions, has a clear advantage. We hypothesize that matching with boxes is more ambiguous than matching with masks, especially for stuff categories, where completely different masks can have similar boxes, as stuff regions often spread over a large area in an image.

The MaskFormer mask head reduces computation. Results in Table 5 also show that MaskFormer performs on par with DETR when the same matching strategy is used. This suggests that the difference in mask head designs between the models does not significantly influence the prediction quality. The new head, however, has significantly lower computational and memory costs compared with the original mask head used in DETR.
In MaskFormer, we first upsample image features to get high-resolution per-pixel embeddings and directly generate binary mask predictions at high resolution. Note that the per-pixel embeddings from the upsampling module (i.e., the pixel decoder) are shared among all queries. In contrast, DETR first generates low-resolution attention maps and applies an independent upsampling module to each query. Thus, the mask head in DETR is N times more computationally expensive than the mask head in MaskFormer (where N is the number of queries).

The paradigm discrepancy between semantic- and instance-level segmentation results in entirely different models for each task, hindering the development of image segmentation as a whole. We show that a simple mask classification model can outperform state-of-the-art per-pixel classification models, especially in the presence of a large number of categories. Our model also remains competitive for panoptic segmentation, without any need to change the model architecture, losses, or training procedure. We hope this unification spurs a joint effort across semantic- and instance-level segmentation tasks.

Experiments and Results

We demonstrate that MaskFormer seamlessly unifies semantic- and instance-level segmentation tasks by showing state-of-the-art results on both semantic segmentation and panoptic segmentation datasets. Then, we ablate the MaskFormer design, confirming that the observed improvements in semantic segmentation indeed stem from the shift from per-pixel classification to mask classification.

Datasets. We study MaskFormer using four widely used semantic segmentation datasets: ADE20K [55] (150 classes) from the SceneParse150 challenge [54], COCO-Stuff-10K [3] (171 classes), Cityscapes [15] (19 classes), and Mapillary Vistas [34] (65 classes). In addition, we use the ADE20K-Full [55] dataset annotated in an open-vocabulary setting (we keep the 847 classes that are present in both the train and validation sets).
For panoptic segmentation evaluation we use COCO [28,3,24] (80 "thing" and 53 "stuff" categories) and ADE20K-Panoptic [55,24] (100 "thing" and 50 "stuff" categories). Please see the appendix for detailed descriptions of all used datasets.

Evaluation metrics. For semantic segmentation, the standard metric is mIoU (mean Intersection-over-Union) [18], a per-pixel metric that directly corresponds to the per-pixel classification formulation. To better illustrate the difference between segmentation approaches, in our ablations we supplement mIoU with PQ St (PQ stuff) [24], a per-region metric that treats all classes as "stuff" and evaluates each segment equally, irrespective of its size. We report the median of 3 runs for all datasets, except for Cityscapes, where we report the median of 5 runs. For panoptic segmentation, we use the standard PQ (panoptic quality) metric [24] and report single-run results due to prohibitive training costs.

Baseline models. On the right we sketch the per-pixel classification baselines we use. The PerPixelBaseline uses the pixel-level module of MaskFormer and directly outputs per-pixel class scores. For a fair comparison, we design PerPixelBaseline+, which adds the transformer module and the mask embedding MLP to the PerPixelBaseline. Thus, PerPixelBaseline+ and MaskFormer differ only in the formulation: per-pixel vs. mask classification. Note that these baselines are for ablation; we also compare MaskFormer with state-of-the-art per-pixel classification models.

Semantic segmentation. In Table 1, we compare MaskFormer with state-of-the-art per-pixel classification models for semantic segmentation on the ADE20K val set. With the same standard CNN backbones (e.g., ResNet [22]), MaskFormer outperforms DeepLabV3+ [9] by 1.7 mIoU.
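As a reference for the metric used throughout, here is a toy per-image mIoU computation. Standard benchmarks accumulate intersections and unions over the whole dataset before averaging; this sketch only illustrates the definition.

```python
def mean_iou(pred, gt, num_classes):
    # pred, gt: flat lists of per-pixel class labels
    # IoU_c = |pred==c AND gt==c| / |pred==c OR gt==c|, averaged over the
    # classes that appear in either the prediction or the ground truth
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```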
MaskFormer is also compatible with recent Vision Transformer [17] backbones (e.g., the Swin Transformer [29]), achieving a new state-of-the-art of 55.6 mIoU, which is 2.1 mIoU better than the prior state-of-the-art [29]. Observe that MaskFormer outperforms the best per-pixel classification-based models while having fewer parameters and faster inference time. This result suggests that the mask classification formulation has significant potential for semantic segmentation. See the appendix for results on the test set.

Beyond ADE20K, we further compare MaskFormer with our baselines on COCO-Stuff-10K, ADE20K-Full, as well as Cityscapes in Table 2, and we refer to the appendix for comparisons with state-of-the-art methods on these datasets. The improvement of MaskFormer over PerPixelBaseline+ is larger when the number of classes is larger: for Cityscapes, which has only 19 categories, MaskFormer performs similarly to PerPixelBaseline+, while for ADE20K-Full, which has 847 classes, MaskFormer outperforms PerPixelBaseline+ by 3.5 mIoU.

Although MaskFormer shows no improvement in mIoU for Cityscapes, the PQ St metric increases by 2.9 PQ St. We find MaskFormer performs better in terms of recognition quality (RQ St) while lagging in per-pixel segmentation quality (SQ St) (we refer to the appendix for detailed numbers). This observation suggests that on datasets where class recognition is relatively easy, the main challenge for mask classification-based approaches is pixel-level accuracy (i.e., mask quality).

Panoptic segmentation. In Table 3, we compare the exact same MaskFormer model with DETR [4] on the COCO panoptic val set. To match the standard DETR design, we add 6 additional Transformer encoder layers after the CNN backbone. Unlike DETR, our model does not predict bounding boxes but instead predicts masks directly. MaskFormer achieves better results while being simpler than DETR.
To disentangle the improvements from the model itself and from our post-processing inference strategy, we run our model with the DETR post-processing (MaskFormer (DETR)) and observe that this setup outperforms DETR by 2.2 PQ. Overall, we observe a larger improvement in PQ St than in PQ Th. This suggests that detecting "stuff" with bounding boxes is suboptimal, and therefore box-based segmentation models (e.g., Mask R-CNN [21]) do not suit semantic segmentation. MaskFormer also outperforms the recently proposed Max-DeepLab [42] without the need for a special network design or sophisticated auxiliary losses (i.e., the instance discrimination loss, mask-ID cross-entropy loss, and per-pixel classification loss in [42]). MaskFormer, for the first time, unifies semantic- and instance-level segmentation with the exact same model, loss, and training pipeline.

We further evaluate our model on the panoptic segmentation version of the ADE20K dataset. Our model also achieves state-of-the-art performance. We refer to the appendix for detailed results.

MaskFormer

We now introduce MaskFormer, the new mask classification model, which computes $N$ probability-mask pairs $z = \{(p_i, m_i)\}_{i=1}^{N}$. The model contains three modules (see Fig. 2): 1) a pixel-level module that extracts per-pixel embeddings used to generate binary mask predictions; 2) a transformer module, where a stack of Transformer decoder layers [41] computes $N$ per-segment embeddings; and 3) a segmentation module, which generates the predictions $\{(p_i, m_i)\}_{i=1}^{N}$ from these embeddings. During inference, discussed in Sec. 3.4, the $p_i$ and $m_i$ are assembled into the final prediction.

The pixel-level module takes an image of size $H \times W$ as input. A backbone generates a (typically) low-resolution image feature map $\mathcal{F} \in \mathbb{R}^{C_\mathcal{F} \times \frac{H}{S} \times \frac{W}{S}}$, where $C_\mathcal{F}$ is the number of channels and $S$ is the stride of the feature map ($C_\mathcal{F}$ depends on the specific backbone; we use $S = 32$ in this work).
Then, a pixel decoder gradually upsamples the features to generate per-pixel embeddings $E_{\text{pixel}} \in \mathbb{R}^{C_E \times H \times W}$, where $C_E$ is the embedding dimension. Note that any per-pixel classification-based segmentation model fits the pixel-level module design, including recent Transformer-based models [37,53,29]. MaskFormer seamlessly converts such a model to mask classification.

The transformer module uses the standard Transformer decoder [41] to compute, from the image features $\mathcal{F}$ and $N$ learnable positional embeddings (i.e., queries), its output: $N$ per-segment embeddings $Q \in \mathbb{R}^{C_Q \times N}$ of dimension $C_Q$ that encode global information about each segment MaskFormer predicts. Similarly to [4], the decoder yields all predictions in parallel.

The segmentation module applies a linear classifier, followed by a softmax activation, on top of the per-segment embeddings $Q$ to yield class probability predictions $\{p_i \in \Delta^{K+1}\}_{i=1}^{N}$ for each segment. Note that the classifier predicts an additional "no object" category (∅) in case an embedding does not correspond to any region. For mask prediction, a Multi-Layer Perceptron (MLP) with 2 hidden layers converts the per-segment embeddings $Q$ to $N$ mask embeddings $E_{\text{mask}} \in \mathbb{R}^{C_E \times N}$ of dimension $C_E$. Finally, we obtain each binary mask prediction $m_i \in [0,1]^{H \times W}$ via a dot product between the $i$-th mask embedding and the per-pixel embeddings $E_{\text{pixel}}$ computed by the pixel-level module, followed by a sigmoid activation, i.e., $m_i[h,w] = \operatorname{sigmoid}(E_{\text{mask}}[:,i]^\top \cdot E_{\text{pixel}}[:,h,w])$.

Note that we empirically find it beneficial to not enforce mask predictions to be mutually exclusive by using a softmax activation. During training, the $\mathcal{L}_{\text{mask-cls}}$ loss combines a cross-entropy classification loss and a binary mask loss $\mathcal{L}_{\text{mask}}$ for each predicted segment.
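The dot-product mask prediction above can be sketched as follows. This is a minimal illustration with plain Python lists: `e_mask` holds one embedding per query and `e_pixel` one embedding per pixel, both of some assumed embedding dimension C_E.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_masks(e_mask, e_pixel):
    # e_mask: N query mask embeddings, each a length-C_E list
    # e_pixel: H x W grid of per-pixel embeddings, each a length-C_E list
    # returns N masks with m[i][h][w] = sigmoid(<e_mask[i], e_pixel[h][w]>);
    # the per-mask sigmoid (rather than a softmax across queries) is what
    # allows the predicted masks to overlap
    return [
        [[sigmoid(sum(a * b for a, b in zip(emb, pix))) for pix in row]
         for row in e_pixel]
        for emb in e_mask
    ]
```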
For simplicity we use the same $\mathcal{L}_{\text{mask}}$ as DETR [4], i.e., a linear combination of a focal loss [27] and a dice loss [33], multiplied by the hyper-parameters $\lambda_{\text{focal}}$ and $\lambda_{\text{dice}}$ respectively.

First, we present a simple general inference procedure that converts the mask classification outputs $\{(p_i, m_i)\}_{i=1}^{N}$ to either panoptic or semantic segmentation output formats. Then, we describe a semantic inference procedure specifically designed for semantic segmentation. We note that the specific choice of inference strategy largely depends on the evaluation metric rather than the task.

General inference partitions an image into segments by assigning each pixel $[h,w]$ to one of the $N$ predicted probability-mask pairs via $\arg\max_{i : c_i \neq \varnothing} p_i(c_i) \cdot m_i[h,w]$. Here $c_i$ is the most likely class label $c_i = \arg\max_{c \in \{1, \dots, K, \varnothing\}} p_i(c)$ for each probability-mask pair $i$. Intuitively, this procedure assigns a pixel at location $[h,w]$ to probability-mask pair $i$ only if both the most likely class probability $p_i(c_i)$ and the mask prediction probability $m_i[h,w]$ are high. Pixels assigned to the same probability-mask pair $i$ form a segment, where each pixel is labelled with $c_i$. For semantic segmentation, segments sharing the same category label are merged; for instance-level segmentation tasks, the index $i$ of the probability-mask pair helps to distinguish different instances of the same class. Finally, to reduce the false positive rate in panoptic segmentation we follow previous inference strategies [4,24]. Specifically, we filter out low-confidence predictions prior to inference and remove predicted segments that have large parts of their binary masks ($m_i > 0.5$) occluded by other predictions.

Semantic inference is designed specifically for semantic segmentation and is done via a simple matrix multiplication.
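A minimal sketch of this semantic inference, in plain Python instead of the batched matrix multiplication used in practice: per pixel, class scores are obtained by marginalizing over the probability-mask pairs, and the highest-scoring real class wins (the "no object" entry, assumed here to be the last index of each distribution, is excluded).

```python
def semantic_inference(probs, masks, num_classes):
    # probs: N per-query distributions over K classes + "no object" (last index)
    # masks: N mask predictions, each an H x W grid of probabilities
    # returns per-pixel labels argmax_c sum_i p_i(c) * m_i[h][w]
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            scores = [
                sum(p[c] * m[y][x] for p, m in zip(probs, masks))
                for c in range(num_classes)
            ]
            out[y][x] = max(range(num_classes), key=scores.__getitem__)
    return out
```

For example, with two queries over a 1 × 2 image, a query confident in class 0 whose mask covers the left pixel and a query confident in class 1 whose mask covers the right pixel yield the labeling [[0, 1]].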
We empirically find that marginalization over probability-mask pairs, i.e.,

argmax_{c ∈ {1,...,K}} Σ_{i=1}^N p_i(c) · m_i[h, w],

yields better results than the hard assignment of each pixel to a probability-mask pair i used in the general inference strategy. The argmax does not include the "no object" category (∅), as standard semantic segmentation requires each output pixel to take a label. Note, this strategy returns a per-pixel class probability Σ_{i=1}^N p_i(c) · m_i[h, w]. However, we observe that directly maximizing per-pixel class likelihood leads to poor performance. We hypothesize that gradients are evenly distributed to every query, which complicates training.

Backbone. MaskFormer is compatible with any backbone architecture. In our work we use the standard convolution-based ResNet [22] backbones (R50 and R101 with 50 and 101 layers respectively) and the recently proposed Transformer-based Swin-Transformer [29] backbones. In addition, we use the R101c model [7], which replaces the first 7 × 7 convolution layer of R101 with 3 consecutive 3 × 3 convolutions and which is popular in the semantic segmentation community [52,8,9,23,50,11].

Pixel decoder. The pixel decoder in Figure 2 can be implemented using any semantic segmentation decoder (e.g., [9][10][11]). Many per-pixel classification methods use modules like ASPP [7] or PSP [52] to collect and distribute context across locations. The Transformer module attends to all image features, collecting global information to generate class predictions. This setup reduces the need for heavy context aggregation in the per-pixel module.
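The semantic inference marginalization above reduces to a single matrix multiplication of class probabilities with mask probabilities; a minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def semantic_inference(p: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Per-pixel class score sum_i p_i(c) * m_i[h, w], then argmax over real classes.

    p: (N, K + 1) class probabilities; the last ("no object") column is dropped.
    m: (N, H, W) mask probabilities.
    Returns: (H, W) predicted class label per pixel.
    """
    scores = np.einsum("nk,nhw->khw", p[:, :-1], m)  # marginalize over the N pairs
    return scores.argmax(axis=0)
```

In contrast to the hard assignment of general inference, every query contributes here, which is why this strategy tends to have better mask quality for semantic segmentation.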
Therefore, for MaskFormer, we design a light-weight pixel decoder based on the popular FPN [26] architecture.

Following FPN, we 2× upsample the low-resolution feature map in the decoder and sum it with the projected feature map of corresponding resolution from the backbone; projection is done to match the channel dimensions of the feature maps with a 1 × 1 convolution layer followed by GroupNorm (GN) [45]. Next, we fuse the summed features with an additional 3 × 3 convolution layer followed by GN and ReLU activation. We repeat this process starting from the stride-32 feature map until we obtain a final feature map of stride 4. Finally, we apply a single 1 × 1 convolution layer to get the per-pixel embeddings. All feature maps in the pixel decoder have 256 channels.

Transformer decoder. We use the same Transformer decoder design as DETR [4]. The N query embeddings are initialized as zero vectors, and we associate each query with a learnable positional encoding. We use 6 Transformer decoder layers with 100 queries by default, and, following DETR, we apply the same loss after each decoder layer. In our experiments we observe that MaskFormer is competitive for semantic segmentation with a single decoder layer too, whereas for instance-level segmentation multiple layers are necessary to remove duplicates from the final predictions.

Segmentation module. The multi-layer perceptron (MLP) in Figure 2 has 2 hidden layers of 256 channels to predict the mask embeddings E_mask, analogously to the box head in DETR. Both the per-pixel embeddings E_pixel and the mask embeddings E_mask have 256 channels.

Loss weights. We use focal loss [27] and dice loss [33] for our mask loss: L_mask(m, m^gt) = λ_focal L_focal(m, m^gt) + λ_dice L_dice(m, m^gt), and set the hyper-parameters to λ_focal = 20.0 and λ_dice = 1.0. Following DETR [4], the weight for the "no object" class (∅) in the classification loss is set to 0.1.

Semantic segmentation.
We use Detectron2 [46] and follow the commonly used training settings for each dataset. More specifically, we use AdamW [31] and the poly [7] learning rate schedule with an initial learning rate of 10⁻⁴ and a weight decay of 10⁻⁴ for ResNet [22] backbones, and an initial learning rate of 6 × 10⁻⁵ and a weight decay of 10⁻² for Swin-Transformer [29] backbones. Backbones are pre-trained on ImageNet-1K [35] if not stated otherwise. A learning rate multiplier of 0.1 is applied to CNN backbones and 1.0 is applied to Transformer backbones. The standard random scale jittering between 0.5 and 2.0, random horizontal flipping, random cropping as well as random color jittering are used as data augmentation [14]. For the ADE20K dataset, if not stated otherwise, we use a crop size of 512 × 512, a batch size of 16 and train all models for 160k iterations. For the ADE20K-Full dataset, we use the same setting as ADE20K except that we train all models for 200k iterations. For the COCO-Stuff-10k dataset, we use a crop size of 640 × 640, a batch size of 32 and train all models for 60k iterations. All models are trained with 8 V100 GPUs. We report performance of both single-scale (s.s.) inference and multi-scale (m.s.) inference with horizontal flip and scales of 0.5, 0.75, 1.0, 1.25, 1.5, 1.75. See the appendix for Cityscapes and Mapillary Vistas settings.

Panoptic segmentation. We follow exactly the same architecture, loss, and training procedure as we use for semantic segmentation. The only difference is supervision: i.e., category region masks in semantic segmentation vs. object instance masks in panoptic segmentation. We strictly follow the DETR [4] setting to train our model on the COCO panoptic segmentation dataset [24] for a fair comparison. On the ADE20K panoptic segmentation dataset, we follow the semantic segmentation setting but train for longer (720k iterations) and use a larger crop size (640 × 640).
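The poly learning-rate schedule used in the semantic segmentation settings above decays the base rate polynomially over training; a minimal sketch (the power of 0.9 is the common default from [7] and is our assumption here, not stated in the text):

```python
def poly_lr(base_lr: float, step: int, max_steps: int, power: float = 0.9) -> float:
    """Polynomial ("poly") decay: lr = base_lr * (1 - step / max_steps) ** power."""
    return base_lr * (1.0 - step / max_steps) ** power

# e.g. for ResNet backbones: base_lr = 1e-4 over 160k iterations
assert poly_lr(1e-4, 0, 160_000) == 1e-4
```

The schedule starts at the full base rate and decays smoothly to zero at the final iteration.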
COCO models are trained using 64 V100 GPUs and ADE20K experiments are trained with 8 V100 GPUs.

Table 1: Semantic segmentation on ADE20K val with 150 categories. Mask classification-based MaskFormer outperforms the best per-pixel classification approaches while using fewer parameters and less computation. We report both single-scale (s.s.) and multi-scale (m.s.) inference results with ±std. FLOPs are computed for the given crop size. Frames-per-second (fps) is measured on a V100 GPU with a batch size of 1.

We use the general inference (Section 3.4) with the following parameters: we filter out masks with class confidence below 0.8 and set masks whose contribution to the final panoptic segmentation is less than 80% of their mask area to VOID. We report performance of single-scale inference. We perform a series of ablation studies of MaskFormer using a single ResNet-50 backbone [22].

Per-pixel vs. mask classification. In Table 4, we verify that the gains demonstrated by MaskFormer come from shifting the paradigm to mask classification. We start by comparing PerPixelBaseline+ and MaskFormer. The models are very similar and differ in only 3 ways: 1) per-pixel vs. mask classification used by the models, 2) MaskFormer uses bipartite matching, and 3) the new model uses a combination of focal and dice losses as its mask loss, whereas PerPixelBaseline+ uses a per-pixel cross-entropy loss. First, we rule out the influence of loss differences by training PerPixelBaseline+ with exactly the same losses and observing no improvement. Next, in Table 4a, we compare PerPixelBaseline+ with MaskFormer trained using a fixed matching (MaskFormer-fixed), i.e., N = K and assignment done based on category label indices, identically to the per-pixel classification setup. We observe that MaskFormer-fixed is 1.8 mIoU better than the baseline, suggesting that shifting from per-pixel classification to mask classification is indeed the main reason for the gains of MaskFormer.
In Table 4b, we further compare MaskFormer-fixed with MaskFormer trained with bipartite matching (MaskFormer-bipartite) and find that bipartite matching is not only more flexible (allowing the model to predict fewer masks than the total number of categories) but also produces better results. The figure to the right shows the number of unique categories predicted by each query (sorted in descending order) of our MaskFormer model on the validation sets of the corresponding datasets. Interestingly, the number of unique categories per query does not follow a uniform distribution: some queries capture more classes than others. We try to analyze how MaskFormer queries group categories, but we do not observe any obvious pattern: there are queries capturing categories with similar semantics or shapes (e.g., "house" and "building"), but there are also queries capturing completely different categories (e.g., "water" and "sofa").

Number of Transformer decoder layers. Interestingly, MaskFormer with even a single Transformer decoder layer already performs well for semantic segmentation and achieves better performance than our 6-layer-decoder PerPixelBaseline+. For panoptic segmentation, however, multiple decoder layers are required to achieve competitive performance. Please see the appendix for a detailed discussion.

We perform additional ablation studies of MaskFormer for semantic segmentation using the same setting as in the main paper: a single ResNet-50 backbone [22], and we report both the mIoU and the PQ^St. The default setting of our MaskFormer is 100 queries and 6 Transformer decoder layers.

Inference strategies. In Table VII, we ablate inference strategies for mask classification-based models performing semantic segmentation (discussed in Section 3.4). We compare our default semantic inference strategy with the general inference strategy, which first filters out low-confidence masks (a threshold of 0.3 is used) and then assigns the class labels to the remaining masks.
We observe that 1) general inference is only slightly better than PerPixelBaseline+ in terms of the mIoU metric, and 2) on multiple datasets the general inference strategy performs worse in terms of mIoU than the default semantic inference. However, general inference has a higher PQ^St, due to better recognition quality (RQ^St). We hypothesize that the filtering step removes false positives, which increases RQ^St. In contrast, semantic inference aggregates mask predictions from multiple queries and thus has better mask quality (SQ^St). This observation suggests that semantic and instance-level segmentation can be unified with a single inference strategy (i.e., our general inference) and that the choice of inference strategy largely depends on the evaluation metric instead of the task.

Table VII: Inference strategies for semantic segmentation. general: general inference (Section 3.4), which first filters low-confidence masks (using a threshold of 0.3) and assigns labels to the remaining ones. semantic: the default semantic inference (Section 3.4) for semantic segmentation.

Number of Transformer decoder layers. In Table VIII, we ablate the effect of the number of Transformer decoder layers on ADE20K [55] for both semantic and panoptic segmentation. Surprisingly, we find that a MaskFormer with even a single Transformer decoder layer already performs reasonably well for semantic segmentation and achieves better performance than our 6-layer-decoder per-pixel classification baseline PerPixelBaseline+. For panoptic segmentation, however, the number of decoder layers is more important. We hypothesize that stacking more decoder layers helps de-duplicate predictions, which is required by the panoptic segmentation task.

To verify this hypothesis, we train MaskFormer models without self-attention in all 6 Transformer decoder layers.
On semantic segmentation, we observe that MaskFormer without self-attention performs similarly well in terms of the mIoU metric; however, the per-mask metric PQ^St is slightly worse. On panoptic segmentation, MaskFormer models without self-attention perform worse across all metrics.

"Semantic" queries vs. "panoptic" queries. In Figure I we visualize predictions for the "car" category from MaskFormer trained with semantic-level and instance-level ground truth data. In the case of semantic-level data, the matching cost and loss used for mask prediction force a single query to predict one mask that combines all cars together. In contrast, with instance-level ground truth, MaskFormer uses different queries to make mask predictions for each car.

Acknowledgments. We thank Ross Girshick for insightful comments and suggestions. Work of UIUC authors Bowen Cheng and Alexander G. Schwing was supported in part by NSF under Grant #1718221, 2008387, 2045586, 2106825, MRI #1725729, NIFA award 2020-67021-32799 and Cisco Systems Inc. (Gift Award CG 1377144; thanks for access to Arcetri).

We first provide more information regarding the datasets used in our experimental evaluation of MaskFormer (Appendix A). Then, we provide detailed results of our model on more semantic (Appendix B) and panoptic (Appendix C) segmentation datasets. Finally, we provide additional ablation studies (Appendix D) and visualization (Appendix E).

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020, arXiv:2010.11929)
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly

Introduction. Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become the model of choice in natural language processing (NLP).
The dominant approach is to pre-train on a large text corpus and then fine-tune on a smaller task-specific dataset (Devlin et al., 2019). Thanks to Transformers' computational efficiency and scalability, it has become possible to train models of unprecedented size, with over 100B parameters (Brown et al., 2020; Lepikhin et al., 2020). With the models and datasets growing, there is still no sign of saturating performance.

In computer vision, however, convolutional architectures remain dominant (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016). Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention (Wang et al., 2018; Carion et al., 2020), some replacing the convolutions entirely (Ramachandran et al., 2019; Wang et al., 2020a). The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Therefore, in large-scale image recognition, classic ResNet-like architectures are still state of the art (Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al., 2020).

Inspired by the Transformer scaling successes in NLP, we experiment with applying a standard Transformer directly to images, with the fewest possible modifications. To do so, we split an image into patches and provide the sequence of linear embeddings of these patches as input to a Transformer. Image patches are treated the same way as tokens (words) in an NLP application. We train the model on image classification in supervised fashion.

When trained on mid-sized datasets such as ImageNet without strong regularization, these models yield modest accuracies of a few percentage points below ResNets of comparable size.
This seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases inherent to CNNs, such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data.

However, the picture changes if the models are trained on larger datasets (14M-300M images). We find that large-scale training trumps inductive bias. Our Vision Transformer (ViT) attains excellent results when pre-trained at sufficient scale and transferred to tasks with fewer datapoints. When pre-trained on the public ImageNet-21k dataset or the in-house JFT-300M dataset, ViT approaches or beats state of the art on multiple image recognition benchmarks. In particular, the best model reaches an accuracy of 88.55% on ImageNet, 90.72% on ImageNet-ReaL, 94.55% on CIFAR-100, and 77.63% on the VTAB suite of 19 tasks.

Related Work. Transformers were proposed by Vaswani et al. (2017) for machine translation, and have since become the state-of-the-art method in many NLP tasks. Large Transformer-based models are often pre-trained on large corpora and then fine-tuned for the task at hand: BERT (Devlin et al., 2019) uses a denoising self-supervised pre-training task, while the GPT line of work uses language modeling as its pre-training task (Radford et al., 2018; 2019; Brown et al., 2020).

Naive application of self-attention to images would require that each pixel attends to every other pixel. With quadratic cost in the number of pixels, this does not scale to realistic input sizes. Thus, to apply Transformers in the context of image processing, several approximations have been tried in the past. Parmar et al. (2018) applied self-attention only in local neighborhoods for each query pixel instead of globally. Such local multi-head dot-product self-attention blocks can completely replace convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020).
In a different line of work, Sparse Transformers (Child et al., 2019) employ scalable approximations to global self-attention in order to be applicable to images. An alternative way to scale attention is to apply it in blocks of varying sizes (Weissenborn et al., 2019), in the extreme case only along individual axes (Ho et al., 2019; Wang et al., 2020a). Many of these specialized attention architectures demonstrate promising results on computer vision tasks, but require complex engineering to be implemented efficiently on hardware accelerators.

Most related to ours is the model of Cordonnier et al. (2020), which extracts patches of size 2 × 2 from the input image and applies full self-attention on top. This model is very similar to ViT, but our work goes further to demonstrate that large-scale pre-training makes vanilla Transformers competitive with (or even better than) state-of-the-art CNNs. Moreover, Cordonnier et al. (2020) use a small patch size of 2 × 2 pixels, which makes the model applicable only to small-resolution images, while we handle medium-resolution images as well.

There has also been a lot of interest in combining convolutional neural networks (CNNs) with forms of self-attention, e.g. by augmenting feature maps for image classification (Bello et al., 2019) or by further processing the output of a CNN using self-attention, e.g. for object detection (Hu et al., 2018; Carion et al., 2020), video processing (Wang et al., 2018; Sun et al., 2019), image classification (Wu et al., 2020), unsupervised object discovery (Locatello et al., 2020), or unified text-vision tasks (Chen et al., 2020c; Lu et al., 2019; Li et al., 2019).

Another recent related model is image GPT (iGPT) (Chen et al., 2020a), which applies Transformers to image pixels after reducing image resolution and color space.
The model is trained in an unsupervised fashion as a generative model, and the resulting representation can then be fine-tuned or probed linearly for classification performance, achieving a maximal accuracy of 72% on ImageNet.

Our work adds to the increasing collection of papers that explore image recognition at larger scales than the standard ImageNet dataset. The use of additional data sources allows achieving state-of-the-art results on standard benchmarks (Mahajan et al., 2018; Touvron et al., 2019; Xie et al., 2020). Moreover, Sun et al. (2017) study how CNN performance scales with dataset size, and Kolesnikov et al. (2020) and Djolonga et al. (2020) perform an empirical exploration of CNN transfer learning from large-scale datasets such as ImageNet-21k and JFT-300M. We focus on these two latter datasets as well, but train Transformers instead of the ResNet-based models used in prior works.

Methodology. In model design we follow the original Transformer (Vaswani et al., 2017) as closely as possible. An advantage of this intentionally simple setup is that scalable NLP Transformer architectures, and their efficient implementations, can be used almost out of the box.

Dataset. The Vision Transformer performs well when pre-trained on the large JFT-300M dataset. With fewer inductive biases for vision than ResNets, how crucial is the dataset size? We perform two series of experiments.

First, we pre-train ViT models on datasets of increasing size: ImageNet, ImageNet-21k, and JFT-300M. To boost the performance on the smaller datasets, we optimize three basic regularization parameters: weight decay, dropout, and label smoothing. Figure 3 shows the results after fine-tuning to ImageNet (results on other datasets are shown in …). Second, we train our models on random subsets of 9M, 30M, and 90M as well as the full JFT-300M dataset. We do not perform additional regularization on the smaller subsets and use the same hyper-parameters for all settings.
This way, we assess the intrinsic model properties, and not the effect of regularization. We do, however, use early stopping, and report the best validation accuracy achieved during training. To save compute, we report few-shot linear accuracy instead of full fine-tuning accuracy. Figure 4 contains the results. Vision Transformers overfit more than ResNets with comparable computational cost on smaller datasets. For example, ViT-B/32 is slightly faster than ResNet50; it performs much worse on the 9M subset, but better on 90M+ subsets. The same is true for ResNet152x2 and ViT-L/16. This result reinforces the intuition that the convolutional inductive bias is useful for smaller datasets, but for larger ones, learning the relevant patterns directly from data is sufficient, even beneficial.

Overall, the few-shot results on ImageNet (Figure 4), as well as the low-data results on VTAB (Table 2), seem promising for very low-data transfer. Further analysis of few-shot properties of ViT is an exciting direction of future work.

Published as a conference paper at ICLR 2021.

Conclusion. We have explored the direct application of Transformers to image recognition. Unlike prior works using self-attention in computer vision, we do not introduce image-specific inductive biases into the architecture apart from the initial patch extraction step. Instead, we interpret an image as a sequence of patches and process it by a standard Transformer encoder as used in NLP. This simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large datasets. Thus, Vision Transformer matches or exceeds the state of the art on many image classification datasets, whilst being relatively cheap to pre-train.

While these initial results are encouraging, many challenges remain. One is to apply ViT to other computer vision tasks, such as detection and segmentation. Our results, coupled with those in Carion et al. (2020), indicate the promise of this approach.
Another challenge is to continue exploring self-supervised pre-training methods. Our initial experiments show improvement from self-supervised pre-training, but there is still a large gap between self-supervised and large-scale supervised pre-training. Finally, further scaling of ViT would likely lead to improved performance.

For ResNets we also run the setup of Kolesnikov et al. (2020) and select the best results across this run and our sweep. Finally, if not mentioned otherwise, all fine-tuning experiments run at 384 resolution (running fine-tuning at a different resolution than training is common practice (Kolesnikov et al., 2020)).

When transferring ViT models to another dataset, we remove the whole head (two linear layers) and replace it by a single, zero-initialized linear layer outputting the number of classes required by the target dataset. We found this to be a little more robust than simply re-initializing the very last layer.

For VTAB we follow the protocol in Kolesnikov et al. (2020), and use the same hyperparameter setting for all tasks. We use a learning rate of 0.01 and train for 2500 steps (Tab. 4). We chose this setting by running a small sweep over two learning rates and two schedules, and selecting the setting with the highest VTAB score on the 200-example validation sets. We follow the pre-processing used in Kolesnikov et al. (2020), except that we do not use task-specific input resolutions. Instead we find that Vision Transformer benefits most from a high resolution (384 × 384) for all tasks.

Experiments and Results. We evaluate the representation learning capabilities of ResNet, Vision Transformer (ViT), and the hybrid. To understand the data requirements of each model, we pre-train on datasets of varying size and evaluate many benchmark tasks. When considering the computational cost of pre-training the model, ViT performs very favourably, attaining state of the art on most recognition benchmarks at a lower pre-training cost.
Lastly, we perform a small experiment using self-supervision, and show that self-supervised ViT holds promise for the future. We report detailed results corresponding to the figures presented in the paper. Table 8 summarizes the results from this ablation study on a ViT-B/16 model. As we can see, while there is a large gap between the performance of the model with no positional embedding and models with positional embedding, there is little to no difference between different ways of encoding positional information. We speculate that since our Transformer encoder operates on patch-level inputs, as opposed to pixel-level, the differences in how to encode spatial information are less important. More precisely, in patch-level inputs, the spatial dimensions are much smaller than the original pixel-level inputs, e.g., 14 × 14 as opposed to 224 × 224, and learning to represent the spatial relations at this resolution is equally easy for these different positional encoding strategies. Even so, the specific pattern of position embedding similarity learned by the network depends on the training hyperparameters (Figure 10). We also evaluate our flagship ViT-H/14 model on the ObjectNet benchmark following the evaluation setup in Kolesnikov et al. (2020), resulting in 82.1% top-5 accuracy and 61.7% top-1 accuracy.

An overview of the model is depicted in Figure 1. The standard Transformer receives as input a 1D sequence of token embeddings. To handle 2D images, we reshape the image x ∈ R^(H×W×C) into a sequence of flattened 2D patches x_p ∈ R^(N×(P²·C)), where (H, W) is the resolution of the original image, C is the number of channels, (P, P) is the resolution of each image patch, and N = HW/P² is the resulting number of patches, which also serves as the effective input sequence length for the Transformer.
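The patch-flattening step just described (x ∈ R^(H×W×C) into N = HW/P² flattened patches of dimension P²·C) can be sketched in NumPy (illustrative, not the released code):

```python
import numpy as np

def patchify(x: np.ndarray, p: int) -> np.ndarray:
    """Reshape an (H, W, C) image into (N, P*P*C) flattened patches, N = HW / P^2."""
    h, w, c = x.shape
    assert h % p == 0 and w % p == 0, "image size must be divisible by patch size"
    x = x.reshape(h // p, p, w // p, p, c)  # split H and W into (grid, patch) axes
    x = x.transpose(0, 2, 1, 3, 4)          # (h/p, w/p, p, p, c): group by patch
    return x.reshape((h // p) * (w // p), p * p * c)

# 224x224x3 image with 16x16 patches: 224*224/16^2 = 196 patches of dim 16*16*3 = 768
patches = patchify(np.zeros((224, 224, 3)), 16)
```

The resulting rows are exactly the x_p^i tokens that the trainable projection E maps to the model dimension D.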
The Transformer uses constant latent vector size D through all of its layers, so we flatten the patches and map to D dimensions with a trainable linear projection (Eq. 1). We refer to the output of this projection as the patch embeddings.

Similar to BERT's [class] token, we prepend a learnable embedding to the sequence of embedded patches (z_0^0 = x_class), whose state at the output of the Transformer encoder (z_L^0) serves as the image representation y (Eq. 4). Both during pre-training and fine-tuning, a classification head is attached to z_L^0. The classification head is implemented by an MLP with one hidden layer at pre-training time and by a single linear layer at fine-tuning time.

Position embeddings are added to the patch embeddings to retain positional information. We use standard learnable 1D position embeddings, since we have not observed significant performance gains from using more advanced 2D-aware position embeddings (Appendix D.4). The resulting sequence of embedding vectors serves as input to the encoder.

The Transformer encoder (Vaswani et al., 2017) consists of alternating layers of multi-headed self-attention (MSA, see Appendix A) and MLP blocks (Eq. 2, 3). Layernorm (LN) is applied before every block, and residual connections after every block (Wang et al., 2019; Baevski & Auli, 2019). The MLP contains two layers with a GELU non-linearity.

z_0 = [x_class; x_p^1 E; x_p^2 E; …; x_p^N E] + E_pos,  E ∈ R^((P²·C)×D), E_pos ∈ R^((N+1)×D)   (1)
z'_ℓ = MSA(LN(z_{ℓ-1})) + z_{ℓ-1},  ℓ = 1…L   (2)
z_ℓ = MLP(LN(z'_ℓ)) + z'_ℓ,  ℓ = 1…L   (3)
y = LN(z_L^0)   (4)

Inductive bias. We note that Vision Transformer has much less image-specific inductive bias than CNNs. In CNNs, locality, two-dimensional neighborhood structure, and translation equivariance are baked into each layer throughout the whole model. In ViT, only the MLP layers are local and translationally equivariant, while the self-attention layers are global.
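The pre-norm residual structure of the encoder block (Eqs. 2 and 3) can be sketched with a single attention head, which is a simplification of the multi-headed MSA in the paper; weight names and dimensions are illustrative:

```python
import numpy as np

def layer_norm(z: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """LN over the feature dimension (learnable scale/shift omitted for brevity)."""
    return (z - z.mean(-1, keepdims=True)) / np.sqrt(z.var(-1, keepdims=True) + eps)

def softmax(a: np.ndarray) -> np.ndarray:
    a = a - a.max(-1, keepdims=True)  # numerical stability
    e = np.exp(a)
    return e / e.sum(-1, keepdims=True)

def encoder_layer(z, wq, wk, wv, wo, w1, w2):
    """One pre-LN ViT block: z' = MSA(LN(z)) + z ; z_out = MLP(LN(z')) + z'."""
    h = layer_norm(z)
    q, k, v = h @ wq, h @ wk, h @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v  # single-head self-attention
    z = z + attn @ wo                                   # residual of Eq. (2)
    h = layer_norm(z)
    gelu = lambda x: 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))
    return z + gelu(h @ w1) @ w2                        # residual of Eq. (3)
```

Stacking L such layers and reading out the [class] position through a final LN reproduces Eq. (4).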
The two-dimensional neighborhood structure is used very sparingly: at the beginning of the model by cutting the image into patches, and at fine-tuning time for adjusting the position embeddings for images of different resolution (as described below). Other than that, the position embeddings at initialization time carry no information about the 2D positions of the patches, and all spatial relations between the patches have to be learned from scratch.

Hybrid Architecture. As an alternative to raw image patches, the input sequence can be formed from feature maps of a CNN (LeCun et al., 1989). In this hybrid model, the patch embedding projection E (Eq. 1) is applied to patches extracted from a CNN feature map. As a special case, the patches can have spatial size 1 × 1, which means that the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting to the Transformer dimension. The classification input embedding and position embeddings are added as described above.

Fine-tuning and Higher Resolution. Typically, we pre-train ViT on large datasets, and fine-tune to (smaller) downstream tasks. For this, we remove the pre-trained prediction head and attach a zero-initialized D × K feedforward layer, where K is the number of downstream classes. It is often beneficial to fine-tune at higher resolution than pre-training (Touvron et al., 2019; Kolesnikov et al., 2020). When feeding images of higher resolution, we keep the patch size the same, which results in a larger effective sequence length. The Vision Transformer can handle arbitrary sequence lengths (up to memory constraints); however, the pre-trained position embeddings may no longer be meaningful. We therefore perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image. Note that this resolution adjustment and patch extraction are the only points at which an inductive bias about the 2D structure of the images is manually injected into the Vision Transformer.
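The 2D interpolation of pre-trained position embeddings can be sketched as a separable bilinear resampling of the (g × g) embedding grid; this is a simplified stand-in for the interpolation in the released implementation, not the authors' code:

```python
import numpy as np

def resize_pos_embed(pos: np.ndarray, new_g: int) -> np.ndarray:
    """Bilinearly resample a (g, g, D) grid of position embeddings to (new_g, new_g, D).

    Bilinear interpolation is separable, so we do two 1D passes:
    first along rows, then along columns (corners are aligned).
    """
    g = pos.shape[0]
    xs = np.linspace(0.0, g - 1.0, new_g)  # target coordinates in the source grid
    i0 = np.floor(xs).astype(int)
    i1 = np.minimum(i0 + 1, g - 1)
    t = xs - i0                            # fractional offsets in [0, 1)
    rows = pos[i0] * (1 - t)[:, None, None] + pos[i1] * t[:, None, None]
    return rows[:, i0] * (1 - t)[None, :, None] + rows[:, i1] * t[None, :, None]
```

The [class] token's embedding has no grid position, so in practice it is kept as-is and only the patch-grid embeddings are resampled.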
Datasets. To explore model scalability, we use the ILSVRC-2012 ImageNet dataset with 1k classes and 1.3M images (we refer to it as ImageNet in what follows), its superset ImageNet-21k with 21k classes and 14M images (Deng et al., 2009), and JFT (Sun et al., 2017) with 18k classes and 303M high-resolution images. We de-duplicate the pre-training datasets w.r.t. the test sets of the downstream tasks following Kolesnikov et al. (2020). We transfer the models trained on these datasets to several benchmark tasks: ImageNet on the original validation labels and the cleaned-up ReaL labels (Beyer et al., 2020), CIFAR-10/100 (Krizhevsky, 2009), Oxford-IIIT Pets (Parkhi et al., 2012), and Oxford Flowers-102 (Nilsback & Zisserman, 2008). For these datasets, pre-processing follows Kolesnikov et al. (2020). We also evaluate on the 19-task VTAB classification suite (Zhai et al., 2019b). VTAB evaluates low-data transfer to diverse tasks, using 1,000 training examples per task. The tasks are divided into three groups: Natural (tasks like the above: Pets, CIFAR, etc.), Specialized (medical and satellite imagery), and Structured (tasks that require geometric understanding, like localization).

Model Variants. We base ViT configurations on those used for BERT (Devlin et al., 2019), as summarized in Table 1. The "Base" and "Large" models are directly adopted from BERT, and we add the larger "Huge" model. In what follows we use brief notation to indicate the model size and the input patch size: for instance, ViT-L/16 means the "Large" variant with 16 × 16 input patch size. Note that the Transformer's sequence length is inversely proportional to the square of the patch size; thus models with smaller patch size are computationally more expensive.

For the baseline CNNs, we use ResNet (He et al., 2016), but replace the Batch Normalization layers (Ioffe & Szegedy, 2015) with Group Normalization (Wu & He, 2018) and use standardized convolutions (Qiao et al., 2019).
These modifications improve transfer (Kolesnikov et al., 2020), and we denote the modified model "ResNet (BiT)". For the hybrids, we feed the intermediate feature maps into ViT with a patch size of one "pixel". To experiment with different sequence lengths, we either (i) take the output of stage 4 of a regular ResNet50, or (ii) remove stage 4, place the same number of layers in stage 3 (keeping the total number of layers), and take the output of this extended stage 3. Option (ii) results in a 4x longer sequence length, and a more expensive ViT model.

Training & Fine-tuning. We train all models, including ResNets, using Adam (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, a batch size of 4096, and a high weight decay of 0.1, which we found to be useful for transfer of all models (Appendix D.1 shows that, in contrast to common practices, Adam works slightly better than SGD for ResNets in our setting). We use a linear learning rate warmup and decay; see Appendix B.1 for details. For fine-tuning we use SGD with momentum, batch size 512, for all models; see Appendix B.1.1. For ImageNet results in Table 2, we fine-tuned at higher resolution: 512 for ViT-L/16 and 518 for ViT-H/14, and also used Polyak & Juditsky (1992) averaging with a factor of 0.9999 (Ramachandran et al., 2019; Wang et al., 2020b).

Metrics. We report results on downstream datasets either through few-shot or fine-tuning accuracy. Fine-tuning accuracies capture the performance of each model after fine-tuning it on the respective dataset. Few-shot accuracies are obtained by solving a regularized least-squares regression problem that maps the (frozen) representation of a subset of training images to {-1, 1}^K target vectors. This formulation allows us to recover the exact solution in closed form. Though we mainly focus on fine-tuning performance, we sometimes use linear few-shot accuracies for fast on-the-fly evaluation where fine-tuning would be too costly.
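The closed-form few-shot evaluation described above can be sketched in NumPy; the feature dimensions, class count, and ridge strength `lam` below are illustrative, not the paper's actual settings:

```python
import numpy as np

def few_shot_linear_eval(feats, labels, num_classes, lam=1e-3):
    """Closed-form regularized least squares from frozen features to
    {-1, 1}^K one-vs-rest targets.

    feats: (N, D) frozen representations; labels: (N,) int class ids.
    Returns the (D, K) weight matrix solving
        min_W ||X W - Y||^2 + lam ||W||^2
    exactly via the normal equations.
    """
    n, d = feats.shape
    targets = -np.ones((n, num_classes))
    targets[np.arange(n), labels] = 1.0
    return np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ targets)

# Predict by taking the argmax over the K regression outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 16))
y = rng.integers(0, 5, size=100)
w = few_shot_linear_eval(x, y, num_classes=5)
pred = (x @ w).argmax(axis=1)
```

Because the solution is exact and requires only one linear solve, this evaluation is far cheaper than fine-tuning, which is the point made in the text.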
We first compare our largest models, ViT-H/14 and ViT-L/16, to state-of-the-art CNNs from the literature. The first comparison point is Big Transfer (BiT) (Kolesnikov et al., 2020), which performs supervised transfer learning with large ResNets. The second is Noisy Student (Xie et al., 2020), which is a large EfficientNet trained using semi-supervised learning on ImageNet and JFT-300M with the labels removed. Currently, Noisy Student is the state of the art on ImageNet and BiT-L on the other datasets reported here. All models were trained on TPUv3 hardware, and we report the number of TPUv3-core-days taken to pre-train each of them, that is, the number of TPU v3 cores (2 per chip) used for training multiplied by the training time in days. Even so, our largest model still took substantially less compute to pre-train than the prior state of the art. However, we note that pre-training efficiency may be affected not only by the architecture choice, but also by other parameters, such as training schedule, optimizer, weight decay, etc. We provide a controlled study of performance vs. compute for different architectures in Section 4.4. Finally, the ViT-L/16 model pre-trained on the public ImageNet-21k dataset performs well on most datasets too, while taking fewer resources to pre-train: it could be trained using a standard cloud TPUv3 with 8 cores in approximately 30 days.

Figure 2 decomposes the VTAB tasks into their respective groups, and compares to previous SOTA methods on this benchmark: BiT, VIVI (a ResNet co-trained on ImageNet and Youtube; Tschannen et al., 2020), and S4L (supervised plus semi-supervised learning on ImageNet; Zhai et al., 2019a). ViT-H/14 outperforms BiT-R152x4, and other methods, on the Natural and Structured tasks. On the Specialized tasks, the performance of the top two models is similar.

We perform a controlled scaling study of different models by evaluating transfer performance from JFT-300M.
In this setting, data size does not bottleneck the models' performances, and we assess performance versus pre-training cost of each model. The model set includes: 7 ResNets, R50x1, R50x2, R101x1, R152x1, R152x2, pre-trained for 7 epochs, plus R152x2 and R200x3 pre-trained for 14 epochs; 6 Vision Transformers, ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs, plus L/16 and H/14 pre-trained for 14 epochs; and 5 hybrids, R50+ViT-B/32, B/16, L/32, L/16 pre-trained for 7 epochs, plus R50+ViT-L/16 pre-trained for 14 epochs (for hybrids, the number at the end of the model name stands not for the patch size, but for the total downsampling ratio in the ResNet backbone).

Figure 5 contains the transfer performance versus total pre-training compute (see Appendix D.5 for details on computational costs). Detailed results per model are provided in Table 6 in the Appendix. A few patterns can be observed. First, Vision Transformers dominate ResNets on the performance/compute trade-off. ViT uses approximately 2-4× less compute to attain the same performance (average over 5 datasets). Second, hybrids slightly outperform ViT at small computational budgets, but the difference vanishes for larger models. This result is somewhat surprising, since one might expect convolutional local feature processing to assist ViT at any size. Third, Vision Transformers appear not to saturate within the range tried, motivating future scaling efforts.

To begin to understand how the Vision Transformer processes image data, we analyze its internal representations. The first layer of the Vision Transformer linearly projects the flattened patches into a lower-dimensional space (Eq. 1). Figure 7 (left) shows the top principal components of the learned embedding filters. The components resemble plausible basis functions for a low-dimensional representation of the fine structure within each patch. After the projection, a learned position embedding is added to the patch representations.
Figure 7 (center) shows that the model learns to encode distance within the image in the similarity of position embeddings, i.e., closer patches tend to have more similar position embeddings. Further, a row-column structure appears: patches in the same row/column have similar embeddings. Finally, a sinusoidal structure is sometimes apparent for larger grids (Appendix D). That the position embeddings learn to represent 2D image topology explains why hand-crafted 2D-aware embedding variants do not yield improvements (Appendix D.4).

Self-attention allows ViT to integrate information across the entire image even in the lowest layers. We investigate to what degree the network makes use of this capability. Specifically, we compute the average distance in image space across which information is integrated, based on the attention weights (Figure 7, right). This "attention distance" is analogous to receptive field size in CNNs. We find that some heads attend to most of the image already in the lowest layers, showing that the ability to integrate information globally is indeed used by the model. Other attention heads have consistently small attention distances in the low layers. This highly localized attention is less pronounced in hybrid models that apply a ResNet before the Transformer (Figure 7, right), suggesting that it may serve a similar function as early convolutional layers in CNNs. Further, the attention distance increases with network depth. Globally, we find that the model attends to image regions that are semantically relevant for classification (Figure 6).

Transformers show impressive performance on NLP tasks. However, much of their success stems not only from their excellent scalability but also from large-scale self-supervised pre-training (Devlin et al., 2019; Radford et al., 2018). We also perform a preliminary exploration on masked patch prediction for self-supervision, mimicking the masked language modeling task used in BERT.
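The "attention distance" described above, i.e. the attention-weighted mean image-space distance between a query patch and the patches it attends to, can be sketched as follows; the grid size and the two toy attention maps are illustrative:

```python
import numpy as np

def mean_attention_distance(attn, grid):
    """attn: (N, N) attention weights of one head over N = grid*grid patches
    (rows sum to 1). Returns the attention-weighted mean pairwise distance
    between patch centers, in patch units, averaged over all query positions."""
    ys, xs = np.divmod(np.arange(grid * grid), grid)
    coords = np.stack([ys, xs], axis=1).astype(float)
    # (N, N) matrix of Euclidean distances between patch centers
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return float((attn * dist).sum(axis=1).mean())

g = 14
local = np.eye(g * g)                           # each patch attends only to itself
uniform = np.full((g * g, g * g), 1.0 / (g * g))  # global, uniform attention
d_local = mean_attention_distance(local, g)     # fully localized head
d_uniform = mean_attention_distance(uniform, g) # head attending across the image
```

A perfectly localized head has distance zero, while a globally attending head has a large distance, mirroring the head-to-head variability reported for the lower layers.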
With self-supervised pre-training, our smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% over training from scratch, but still 4% behind supervised pre-training. Appendix B.1.2 contains further details. We leave exploration of contrastive pre-training (Chen et al., 2020b; He et al., 2020; Bachman et al., 2019; Hénaff et al., 2020) to future work.

We employ the masked patch prediction objective for preliminary self-supervision experiments. To do so, we corrupt 50% of patch embeddings by either replacing their embeddings with a learnable [mask] embedding (80%), a random other patch embedding (10%), or just keeping them as is (10%). This setup is very similar to the one used for language by Devlin et al. (2019). Finally, we predict the 3-bit mean color (i.e., 512 colors in total) of every corrupted patch using their respective patch representations.

We trained our self-supervised model for 1M steps (ca. 14 epochs) with batch size 4096 on JFT. We use Adam with a base learning rate of 2·10^-4, warmup of 10k steps, and cosine learning rate decay.

As prediction targets for pre-training we tried the following settings: 1) predicting only the mean, 3-bit color (i.e., 1 prediction of 512 colors); 2) predicting a 4 × 4 downsized version of the 16 × 16 patch with 3-bit colors in parallel (i.e., 16 predictions of 512 colors); 3) regression on the full patch using L2 (i.e., 256 regressions on the 3 RGB channels). Surprisingly, we found that all worked quite well, though L2 was slightly worse. We report final results only for option 1) because it has shown the best few-shot performance. We also experimented with a 15% corruption rate as used by Devlin et al.
(2019), but results were also slightly worse on our few-shot metrics.

Lastly, we would like to remark that our instantiation of masked patch prediction doesn't require such an enormous amount of pre-training nor a large dataset such as JFT in order to lead to similar performance gains on ImageNet classification. That is, we observed diminishing returns on downstream performance after 100k pre-training steps, and see similar gains when pre-training on ImageNet.

We are also interested in the real-world speed of the architectures on our hardware, which is not always well predicted by theoretical FLOPs due to details like lane widths and cache sizes.

Acknowledgements. The work was performed in Berlin, Zürich, and Amsterdam. We thank many colleagues at Google for their help, in particular Andreas Steiner for crucial help with the infrastructure and the open-source release of the code; Joan Puigcerver and Maxim Neumann for help with the large-scale training infrastructure; Dmitry Lepikhin, Aravindh Mahendran, Daniel Keysers, Mario Lučić, Noam Shazeer, Ashish Vaswani, and Colin Raffel for useful discussions.

Standard qkv self-attention (SA, Vaswani et al. (2017)) is a popular building block for neural architectures. For each element in an input sequence z ∈ R^{N×D}, we compute a weighted sum over all values v in the sequence. The attention weights A_ij are based on the pairwise similarity between two elements of the sequence and their respective query q_i and key k_j representations. Multihead self-attention (MSA) is an extension of SA in which we run k self-attention operations, called "heads", in parallel, and project their concatenated outputs. To keep compute and number of parameters constant when changing k, D_h (Eq. 5) is typically set to D/k.

Table 3 summarizes our training setups for our different models. We found strong regularization to be key when training models from scratch on ImageNet.
Dropout, when used, is applied after every dense layer except for the qkv-projections, and directly after adding positional embeddings to patch embeddings. Hybrid models are trained with the exact setup as their ViT counterparts. Finally, all training is done on resolution 224.

We fine-tune all ViT models using SGD with a momentum of 0.9, running a small grid search over learning rates. We also compare the performance of two ResNets, 50x1 and 152x2, pre-trained on JFT with SGD and Adam. For SGD, we use the hyperparameters recommended by Kolesnikov et al. (2020). Results are presented in Table 7. Adam pre-training outperforms SGD pre-training on most datasets and on average. This justifies the choice of Adam as the optimizer used to pre-train ResNets on JFT. Note that the absolute numbers are lower than those reported by Kolesnikov et al. (2020), since we pre-train only for 7 epochs, not 30.

We ran ablations on scaling different dimensions of the Transformer architecture to find out which are best suited for scaling to very large models. Figure 8 shows 5-shot performance on ImageNet for different configurations. All configurations are based on a ViT model with 8 layers, D = 1024, D_MLP = 2048 and a patch size of 32, the intersection of all lines. We can see that scaling the depth results in the biggest improvements, which are clearly visible up until 64 layers. However, diminishing returns are already visible after 16 layers. Interestingly, scaling the width of the network seems to result in the smallest changes. Decreasing the patch size, and thus increasing the effective sequence length, shows surprisingly robust improvements without introducing parameters. These findings suggest that compute might be a better predictor of performance than the number of parameters, and that scaling should emphasize depth over width, if anything. Overall, we find that scaling all dimensions proportionally results in robust improvements.
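The standard qkv self-attention and its multihead extension described earlier can be sketched in NumPy; the dimensions are illustrative, and for brevity a single (D, D) projection is split into heads rather than using separate per-head weight matrices:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_self_attention(z, wq, wk, wv, wo, k):
    """z: (N, D) input sequence; wq/wk/wv: (D, D) q, k, v projections split
    into k heads of size D_h = D // k; wo: (D, D) output projection."""
    n, d = z.shape
    dh = d // k
    # Project, then reshape each of q, k, v to (k, N, D_h)
    q, kk, v = (np.transpose((z @ w).reshape(n, k, dh), (1, 0, 2))
                for w in (wq, wk, wv))
    # Attention weights A_ij from scaled pairwise query-key similarity
    a = softmax(q @ np.transpose(kk, (0, 2, 1)) / np.sqrt(dh))
    out = a @ v                                        # (k, N, D_h)
    out = np.transpose(out, (1, 0, 2)).reshape(n, d)   # concatenate heads
    return out @ wo

rng = np.random.default_rng(0)
z = rng.normal(size=(197, 64))  # e.g. 196 patches + [class] token
ws = [rng.normal(size=(64, 64)) * 0.1 for _ in range(4)]
y = multihead_self_attention(z, *ws, k=8)
```

Setting D_h = D/k keeps the total compute and parameter count constant as the number of heads k varies, as noted in the text.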
In order to stay as close as possible to the original Transformer model, we made use of an additional [class] token, which is taken as the image representation. The output of this token is then transformed into a class prediction via a small multi-layer perceptron (MLP) with tanh as non-linearity in the single hidden layer. This design is inherited from the Transformer model for text, and we use it throughout the main paper. An initial attempt at using only image-patch embeddings, globally average-pooling (GAP) them, followed by a linear classifier (just like ResNet's final feature map) performed very poorly. However, we found that this is neither due to the extra token, nor to the GAP operation. Instead, the difference in performance is fully explained by the requirement for a different learning rate, see Figure 9.

We ran ablations on different ways of encoding spatial information using positional embedding. We tried the following cases:

• Providing no positional information: considering the inputs as a bag of patches.

• 1-dimensional positional embedding: considering the inputs as a sequence of patches in raster order (the default across all other experiments in this paper).

• 2-dimensional positional embedding: considering the inputs as a grid of patches in two dimensions. In this case, two sets of embeddings are learned, one for each axis (an X-embedding and a Y-embedding), each with size D/2. Then, based on the coordinates of the patch in the input, we concatenate the X and Y embeddings to get the final positional embedding for that patch.

• Relative positional embeddings: considering the relative distance between patches to encode the spatial information, instead of their absolute position. To do so, we use 1-dimensional relative attention, in which we define the relative distance for all possible pairs of patches.
Thus, for every given pair (one as query, and the other as key/value in the attention mechanism), we have an offset p_q - p_k, where each offset is associated with an embedding. Then, we simply run extra attention, where we use the original query (the content of the query), but use relative positional embeddings as keys. We then use the logits from the relative attention as a bias term and add it to the logits of the main attention (content-based attention) before applying the softmax.

In addition to different ways of encoding spatial information, we also tried different ways of incorporating this information in our model. For the 1-dimensional and 2-dimensional positional embeddings, we tried three different cases: (1) adding positional embeddings to the inputs right after the stem of the model and before feeding the inputs to the Transformer encoder (the default); (2) learning and adding positional embeddings to the inputs at the beginning of each layer; and (3) adding a learned positional embedding, shared between layers, to the inputs at the beginning of each layer.

Separately, we perform timing of inference speed for the main models of interest, on a TPUv3 accelerator; the difference between inference and backprop speed is a constant model-independent factor. Figure 12 (left) shows how many images one core can handle per second, across various input sizes. Each point refers to the peak performance measured across a wide range of batch sizes. As can be seen, the theoretical bi-quadratic scaling of ViT with image size only barely starts happening for the largest models at the largest resolutions. Another quantity of interest is the largest batch size each model can fit onto a core, larger being better for scaling to large datasets; Figure 12 (right) shows this quantity.

Axial Attention (Huang et al., 2020; Ho et al., 2019) is a simple, yet effective technique to run self-attention on large inputs that are organized as multidimensional tensors. The general idea of axial attention is to perform multiple attention operations, each along a single axis of the input tensor, instead of applying 1-dimensional attention to the flattened version of the input. In axial attention, each attention mixes information along a particular axis, while keeping information along the other axes independent.
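The general idea of attention applied along one axis at a time can be sketched in NumPy; shapes are illustrative, and learned projections are omitted (q = k = v = x) for brevity:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def axis_attention(x, axis):
    """Self-attention along a single axis of a (H, W, D) tensor: positions
    along the other spatial axis never mix."""
    x = np.moveaxis(x, axis, 1)  # (other, L, D): one row/column per slice
    a = softmax(x @ np.swapaxes(x, 1, 2) / np.sqrt(x.shape[-1]))
    return np.moveaxis(a @ x, 1, axis)

def axial_attention(x):
    # Row attention then column attention gives full 2D mixing at
    # O(HW(H+W)) cost, versus O((HW)^2) for flattened global attention.
    return axis_attention(axis_attention(x, 0), 1)

x = np.random.default_rng(0).normal(size=(14, 14, 32))
y = axial_attention(x)
```

Each of the two passes is a convex combination along its axis, so composing them mixes information across the whole grid while keeping each attention matrix small.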
Along this line, Wang et al. (2020b) proposed the AxialResNet model, in which all the convolutions with kernel size 3 × 3 in a ResNet50 are replaced by axial self-attention, i.e., a row and column attention, augmented by relative positional encoding. We have implemented AxialResNet as a baseline model.

Moreover, we have modified ViT to process inputs in 2-dimensional shape, instead of a 1-dimensional sequence of patches, and incorporate Axial Transformer blocks, in which instead of a self-attention followed by an MLP, we have a row-self-attention plus an MLP followed by a column-self-attention plus an MLP.

Figure 13 presents the performance of AxialResNet, Axial-ViT-B/32 and Axial-ViT-B/16 on ImageNet 5-shot linear, when pre-trained on JFT, versus the pre-training compute, both in terms of number of FLOPs and inference time (examples per second). As we can see, both Axial-ViT-B/32 and Axial-ViT-B/16 do better than their ViT-B counterparts in terms of performance, but this comes at the cost of more compute. This is because in Axial-ViT models, each Transformer block with global self-attention is replaced by two Axial Transformer blocks, one with row and one with column self-attention, and although the sequence length that self-attention operates on is smaller in the axial case, there is an extra MLP per Axial-ViT block. For the AxialResNet, although it looks reasonable in terms of the accuracy/compute trade-off (Figure 13, left), the naive implementation is extremely slow on TPUs (Figure 13, right).

To understand how ViT uses self-attention to integrate information across the image, we analyzed the average distance spanned by attention weights at different layers (Figure 11). This "attention distance" is analogous to receptive field size in CNNs. Average attention distance is highly variable across heads in lower layers, with some heads attending to much of the image, while others attend to small regions at or near the query location.
As depth increases, attention distance increases for all heads. In the second half of the network, most heads attend widely across tokens. To compute maps of the attention from the output token to the input space (Figures 6 and 14), we used Attention Rollout (Abnar & Zuidema, 2020). Briefly, we averaged attention weights of ViT-L/16 across all heads and then recursively multiplied the weight matrices of all layers. This accounts for the mixing of attention across tokens through all layers.

Table 9 shows the scores attained on each of the VTAB-1k tasks.

Published as a conference paper at ICLR 2021

Simultaneous Detection and Segmentation
Bharath Hariharan, Pablo Arbeláez, Ross Girshick, Jitendra Malik (2014, arXiv:1407.1808)

Introduction. Object recognition comes in many flavors, two of the most popular being object detection and semantic segmentation. Starting with face detection, the task in object detection is to mark out bounding boxes around each object of a particular category in an image. In this task, a predicted bounding box is considered a true positive if it overlaps by more than 50% with a ground truth box, and different algorithms are compared based on their precision and recall. Object detection systems strive to find every instance of the category and estimate the spatial extent of each. However, the detected objects are very coarsely localized using just bounding boxes.

In contrast, semantic segmentation requires one to assign a category label to all pixels in an image. The MSRC dataset [30] was one of the first publicly available benchmarks geared towards this task. Later, the standard metric used to evaluate algorithms in this task converged on pixel IU (intersection over union): for each category, this metric computes the intersection over union of the predicted pixels and ground truth pixels over the entire dataset.
This task deals with "stuff" categories (such as grass, sky, road) and "thing" categories (such as cow, person, car) interchangeably. For things, this means that there is no notion of object instances. A typical semantic segmentation algorithm might accurately mark out the dog pixels in the image, but would provide no indication of how many dogs there are, or of the precise spatial extent of any one particular dog.

These two tasks have continued to this day and are part of the PASCAL VOC challenge [11]. Although often treated as separate problems, we believe the distinction between them is artificial. For the "thing" categories, we can think of a unified task: detect all instances of a category in an image and, for each instance, correctly mark the pixels that belong to it. Compared to the bounding boxes output by an object detection system or the pixel-level category labels output by a semantic segmentation system, this task demands a richer, and potentially more useful, output. Our aim in this paper is to improve performance on this task, which we call Simultaneous Detection and Segmentation (SDS).

The SDS algorithm we propose has the following steps (Figure 1):

1. Proposal generation: We start with category-independent bottom-up object proposals. Because we are interested in producing segmentations and not just bounding boxes, we need region proposals. We use MCG [1] to generate 2000 region candidates per image. We consider each region candidate as a putative object hypothesis.

2. Feature extraction: We use a convolutional neural network to extract features on each region. We extract features from both the bounding box of the region as well as from the region foreground. This follows work by Girshick et al. [16] (R-CNN), who achieved competitive semantic segmentation results and dramatically improved the state of the art in object detection by using CNNs to classify region proposals. We consider several ways of training the CNNs.
We find that, compared to using the same CNN for both inputs (image windows and region masks), using separate networks, where each network is finetuned for its respective role, dramatically improves performance. We improve performance further by training both networks jointly, resulting in a feature extractor that is trained end-to-end for the SDS task.

Related Work. For semantic segmentation, several researchers have tried to use activations from off-the-shelf object detectors to guide the segmentation process. Yang et al. [32] use object detections from the deformable parts model [13] to segment the image, pasting figure-ground masks and reasoning about their relative depth ordering. Arbeláez et al. [2] use poselet detections [4] as features to score region candidates, in addition to appearance-based cues. Ladicky et al. [22] use object detections as higher-order potentials in a CRF-based segmentation system: all pixels in the foreground of a detected object are encouraged to share the category label of the detection. In addition, their system is allowed to switch off these potentials by assigning a true/false label to each detection. This system was extended by Boix et al. [3], who added a global, image-level node in the CRF to reason about the categories present in the image, and by Kim et al. [20], who added relationships between objects. In more recent work, Tighe et al. [31] use exemplar object detectors to segment out the scene as well as individual instances.

There has also been work on localizing detections better using segmentation. Parkhi et al. use color models from predefined rectangles on cat and dog faces to do GrabCut and improve the predicted bounding box [26]. Dai and Hoiem generalize this to all categories and use instance and category appearance models to improve detection [7]. These approaches do well when the objects are coherent in color or texture.
This is not true of many categories such as people, where each object can be made of multiple regions of different appearance. An alternative to doing segmentation post facto is to use segmentation to generate object proposals which are then classified. The proposals may be used as just bounding boxes [27] or as region proposals [6,1]. These proposals incorporate both the consistency of appearance in an object as well as the possibility of having multiple disparate regions for each object. State-of-the-art detection systems [16] and segmentation systems [5] are now based on these methods.

In many of these approaches, segmentation is used only to localize the detections better. Other authors have explored using segmentation as a stronger cue. Fidler et al. [14] use the output of a state-of-the-art semantic segmentation approach [5] to score detections better. Mottaghi [25] uses detectors based on non-rectangular patches to both detect and segment objects.

The approaches above were typically built on features such as SIFT [24] or HOG [8]. Recently the computer vision community has shifted towards using convolutional neural networks (CNNs). CNNs have their roots in the Neocognitron proposed by Fukushima [15]. Trained with the back-propagation algorithm, LeCun [23] showed that they could be used for handwritten zip code recognition. They have since been used in a variety of tasks, including detection [29,28] and semantic segmentation [12]. Krizhevsky et al. [21] showed a large increase in performance by using CNNs for classification in the ILSVRC challenge [9]. Donahue et al. [10] showed that Krizhevsky's architecture could be used as a generic feature extractor that did well across a wide variety of tasks. Girshick et al. [16] build on this and finetune Krizhevsky's architecture for detection to nearly double the state-of-the-art performance. They use a simple pipeline, using CNNs to classify bounding box proposals from [27].
Our algorithm builds on this system, and on high-quality region proposals from [1].

3 Our approach

Experiments and Results. We use the segmentation annotations from SBD [17] to train and evaluate. We train all systems on PASCAL VOC 2012 train. For all training and finetuning of the network we use the recently released Caffe framework [19]. Table 1 and Table 2 show results on the AP^r and AP^r_vol metrics respectively on PASCAL VOC 2012 val (ground truth segmentations are not available for test). We compute AP^r_vol by averaging the AP^r obtained for 9 thresholds.

O2P uses features and regions from Carreira et al. [5], which is the state of the art in semantic segmentation. We train region classifiers on these features and do NMS to get detections. This baseline gets a mean AP^r of 25.2% and a mean AP^r_vol of 23.4%. We add two baselines: R-CNN is the system of Girshick et al. taken as is, and R-CNN-MCG is R-CNN on boxes from MCG instead of Selective Search. Note that neither of these baselines uses features from the region foreground.

Table 4 shows the mean AP^b and AP^b_vol. We get improvements over R-CNN on both AP^b and AP^b_vol, with improvements on the latter metric being somewhat larger. The right half of Figure 5 shows the variation in AP^b as we vary the overlap threshold for counting something as correct. We plot the improvement in AP^b over vanilla R-CNN. We do worse than R-CNN for low thresholds, but are much better for higher thresholds. This is also true to some extent for R-CNN-MCG, so this is partly a property of MCG, and partly a consequence of our algorithm's improved localization. Interestingly, C does worse than B. We posit that this is because now the entire network has been finetuned for SDS.

Finally we evaluated C on PASCAL VOC 2012 test. Our mean AP^b of 50.7 is an improvement over the R-CNN mean AP^b of 49.6 (both without bounding box regression), and much better than other systems, such as SegDPM [14] (40.7).
For the semantic segmentation task, we convert the output of our final system (C+ref) into a pixel-level category labeling using the simple pasting scheme proposed by Carreira et al. [5]. We cross-validate the hyperparameters of this pasting step on the VOC11 segmentation val set. The results are in Table 5. We compare to O2P [5] and R-CNN, which are the current state of the art on this task. We advance the state of the art by about 5 points, or 10% relative.

To conclude, our pipeline achieves good results on the SDS task while improving the state of the art in object detection and semantic segmentation. Figure 7 shows examples of the output of our system.

3. Region classification: We train an SVM on top of the CNN features to assign a score for each category to each candidate.

4. Region refinement: We do non-maximum suppression (NMS) on the scored candidates. Then we use the features from the CNN to produce category-specific coarse mask predictions to refine the surviving candidates. Combining this mask with the original region candidates provides a further boost.

Since this task is not a standard one, we need to decide on evaluation metrics. The metric we suggest in this paper is an extension to the bounding box detection metric. It has been proposed earlier [31,32]. Given an image, we expect the algorithm to produce a set of object hypotheses, where each hypothesis comes with a predicted segmentation and a score. A hypothesis is correct if its segmentation overlaps with the segmentation of a ground truth instance by more than 50%. As in the classical bounding box task, we penalize duplicates. With this labeling, we compute a precision-recall (PR) curve, and the average precision (AP), which is the area under the curve. We call the AP computed in this way AP^r, to distinguish it from the traditional bounding box AP, which we call AP^b (the superscripts r and b correspond to region and bounding box respectively).
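The AP^r correctness criterion, a predicted segmentation overlapping a ground-truth instance by more than 50%, can be sketched with boolean masks; the toy 10×10 masks below are illustrative:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    """A hypothesis counts as correct for AP^r if its mask overlaps a
    ground-truth instance by more than `thresh` (50% in the benchmark)."""
    return mask_iou(pred, gt) > thresh

gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True      # 36-pixel instance
pred = np.zeros((10, 10), bool); pred[3:8, 3:8] = True  # 25-pixel prediction
# pred lies entirely inside gt, so IoU = 25/36, a true positive at 50%.
```

Sweeping `thresh` over several values and averaging the resulting APs yields the volume-style AP^r_vol metric discussed below.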
AP^r measures the accuracy of segmentation, and also requires the algorithm to get each instance separately and completely. Our pipeline achieves an AP^r of 49.5% while at the same time improving AP^b from 51.0% (R-CNN) to 53.0%.

One can argue that the 50% threshold is itself artificial. For instance, if we want to count the number of people in a crowd, we do not need to know their accurate segmentations. On the contrary, in a graphics application that seeks to matte an object into a scene, we might want extremely accurate segmentations. Thus the threshold at which we regard a detection as a true positive depends on the application. In general, we want algorithms that do well under a variety of thresholds. As the threshold varies, the PR curve traces out a PR surface. We can use the volume under this PR surface as a metric. We call this metric AP^r_vol and AP^b_vol respectively. AP^r_vol has the attractive property that an AP^r_vol of 1 implies we can perfectly detect and precisely segment all objects. Our pipeline gets an AP^r_vol of 41.4%. We improve AP^b_vol from 41.9% (R-CNN) to 44.2%. We also find that our pipeline furthers the state of the art in the classic PASCAL VOC semantic segmentation task, from 47.9% to 52.6%. Last but not least, following work in object detection [18], we also provide a set of diagnostic tools for analyzing common error modes in the SDS task. Our algorithm, the benchmark and all diagnostic tools are publicly available at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/sds.

A large number of methods to generate proposals have been proposed in the literature. The methods differ on the type of outputs they produce (boxes vs. segments) and the metrics they do well on. Since we are interested in the AP^r metric, we care about segments, and not just boxes. Keeping our task in mind, we use candidates from MCG [1] for this paper.
This approach significantly outperforms all competing approaches on the object-level Jaccard index metric, which measures the average best overlap achieved by a candidate for a ground truth object. In our experiments we find that simply switching to MCG from Selective Search [27] improves AP^b slightly (by 0.7 points), justifying this choice.
We use the proposals from MCG as is. MCG starts by computing a segmentation hierarchy at multiple image resolutions, which are then fused into a single multiscale hierarchy at the finest scale. Candidates are then produced by combinatorially grouping regions from all the single-scale hierarchies and from the multiscale hierarchy. The candidates are ranked based on simple features such as size, location, shape and contour strength.
We start from the R-CNN object detector proposed by Girshick et al. [16] and adapt it to the SDS task. Girshick et al. train a CNN on ImageNet classification and then finetune the network on the PASCAL detection set. For finetuning they took bounding boxes from Selective Search, padded them, cropped them, warped them to a square and fed them to the network. Bounding boxes that overlap with the ground truth by more than 50% were taken as positives and other boxes as negatives. The class label for each positive box was taken to be the class of the ground truth box that overlaps the most with the box. The network thus learned to predict if a bounding box overlaps highly with a ground truth bounding box. Since we are working with MCG instead of Selective Search, we train a similar object detection network, finetuned using bounding boxes of MCG regions instead of Selective Search boxes.
At test time, to extract features from a bounding box, Girshick et al. pad and crop the box, warp it to a square and pass it through the network, and extract features from one of the later layers, which are then fed into an SVM.
In this paper we use the penultimate fully connected layer.
For the SDS task, we can now use this network, finetuned for detection, to extract feature vectors from MCG bounding boxes. However, these feature vectors do not contain any information about the actual region foreground, and so will be ill-equipped to decide if the region overlaps highly with a ground truth segmentation or not. To get around this, we start with the idea used by Girshick et al. for their experiment on semantic segmentation: we extract a second set of features from the region by feeding the network the cropped, warped box, but with the background of the region masked out (with the mean image). Concatenating these two feature vectors together gives us the feature vector we use. (In their experiments Girshick et al. found both sets of features to be useful.) This method of extracting features from the region is the simplest way of extending the object detection system to the SDS task and forms our baseline. We call this feature extractor A.
The network we use above has been finetuned to classify bounding boxes, so its use in extracting features from the region foreground is suboptimal. Several neurons in the network may be focusing on context in the background, which will be unavailable when the network is fed the region foreground. This suggests that we should use a different network to extract the second set of features: one that is finetuned on the kinds of inputs that it is going to see. We therefore finetune another network (starting again from the net trained on ImageNet) which is fed as input cropped, padded bounding boxes of MCG regions with the background masked out. Because this network sees the actual foreground, we can train it to predict region overlap instead, which is what we care about. We therefore change the labeling of the MCG regions to be based on segmentation overlap of the region with a ground truth region (instead of overlap with the bounding box).
We call this feature extractor B.
The previous strategy is still suboptimal, because the two networks have been trained in isolation, while at test time the two feature sets are going to be combined and fed to the classifier. This suggests that one should train the networks jointly. We formalize this intuition as follows. We create a neural network with the architecture shown in Figure 2. This architecture is a single network with two pathways. The first pathway operates on the cropped bounding box of the region (the "box" pathway) while the second pathway operates on the cropped bounding box with the background masked (the "region" pathway). The two pathways are disjoint except at the very final classifier layer, which concatenates the features from both pathways. Both pathways individually have the same architecture as that of Krizhevsky et al. Note that both A and B can be seen as instantiations of this architecture, but with different sets of weights. A uses the same network parameters for both pathways. For B, the box pathway gets its weights from a network finetuned separately using bounding box overlap, while the region pathway gets its parameters from a network finetuned separately using region overlap.
Instead of using the same network in both pathways or training the two pathways in isolation, we now propose to train the network as a whole directly. We use segmentation overlap as above. We initialize the box pathway with the network finetuned on boxes and the region pathway with the network finetuned on regions, and then finetune the entire network. At test time, we discard the final classification layer and use the output of the penultimate layer, which concatenates the features from the two pathways. We call this feature extractor C.
We use the features from the previous step to train a linear SVM. We first train an initial SVM using ground truth regions as positives and regions overlapping ground truth by less than 20% as negatives.
Then we re-estimate the positive set: for each ground truth we pick the highest-scoring MCG candidate that overlaps it by more than 50%. Ground truth regions for which no such candidate exists (very few in number) are discarded. We then retrain the classifier using this new positive set. This training procedure corresponds to a multiple instance learning problem where each ground truth defines a positive bag of regions that overlap with it by more than 50%, and each negative region is its own bag. We found this training to work better than using just the ground truth as positives.
At test time we use the region classifiers to score each region. Because there may be multiple overlapping regions, we do a strict non-maximum suppression using a region overlap threshold of 0. This is because while the bounding boxes of two objects can in fact overlap, their pixel support in the image typically shouldn't. After NMS, we work with only the top 20,000 detections for each category (over the whole dataset) and discard the rest for computational reasons. We confirmed that this reduction in detections has no effect on the AP^r metric.
We take each of the remaining regions and refine its support. This is necessary because our region candidates have been created by a purely bottom-up, class-agnostic process. Since the candidate generation has not made use of category-specific shape information, it is prone to both undershooting (i.e. missing some part of the object) and overshooting (i.e. including extraneous stuff).
We first learn to predict a coarse, top-down figure-ground mask for each region. To do this, we take the bounding box of each predicted region, pad it as for feature extraction, and then discretize the resulting box into a 10 × 10 grid. For each grid cell we train a logistic regression classifier to predict the probability that the grid cell belongs to the foreground.
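The strict non-maximum suppression described above (a region overlap threshold of 0, so any shared pixel support suppresses the lower-scoring region) could be sketched as follows; `strict_nms` is a hypothetical helper, not the paper's code:

```python
import numpy as np

def strict_nms(masks, scores):
    """Strict NMS on region masks: a region is suppressed if its pixel
    support overlaps any higher-scoring surviving region at all
    (overlap threshold of 0). Returns indices of surviving regions."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        # survive only if disjoint from every region already kept
        if all(not np.logical_and(masks[i], masks[j]).any() for j in keep):
            keep.append(i)
    return keep
```

Note the contrast with box NMS: two kept regions may still have overlapping bounding boxes, but never overlapping pixel support.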
The features we use are the features extracted from the CNN, together with the figure-ground mask of the region discretized to the same 10 × 10 grid. The classifiers are trained on regions from the training set that overlap by more than 70% with a ground truth region.
Fig. 3. Some examples of region refinement. We show in order the image, the original region, the coarse 10 × 10 mask, the coarse mask projected to superpixels, the output of the final classifier on superpixels and the final region after thresholding. Refinement uses top-down category-specific information to fill in the body of the train and the cat and remove the road from the car.
This coarse figure-ground mask makes a top-down prediction about the shape of the object but does not necessarily respect the bottom-up contours. In addition, because of its coarse nature it cannot do a good job of modeling thin structures like aircraft wings or structures that move around. This information needs to come from the bottom-up region candidate. Hence we train a second stage to combine this coarse mask with the region candidate. We project the coarse mask to superpixels by assigning to each superpixel the average value of the coarse mask inside it. Then we classify each superpixel, using as features this projected value and a 0/1 encoding of whether the superpixel belongs to the original region candidate. Figure 3 illustrates this refinement.
A is our most naive feature extractor. It uses MCG candidates and features from the bounding box and region foreground, using a single CNN finetuned using box overlaps. It achieves a mean AP^r of 42.9% and a mean AP^r_vol of 37.0%, a large jump over O2P. This mirrors gains in object detection observed by Girshick et al. [16], although since O2P is not designed for this task the comparison is somewhat unfair. B is the result of finetuning a separate network exclusively on region foregrounds with labels defined by region overlap.
This gives a large jump in the AP^r metric (of about 4 percentage points) and a smaller but significant jump of about 2.5 percentage points on the AP^r_vol metric. C is the result of training a single large network with two pathways. There is a clear gain over using two isolated networks: on both metrics we gain about 0.7 percentage points. C+ref is the result of refining the masks of the regions obtained from C.
We again gain 2 points in the AP^r metric and 1.2 percentage points in the AP^r_vol metric. This large jump indicates that while the MCG candidates we start from are of very high quality, there is still a lot to be gained from refining the regions in a category-specific manner.
A paired-sample t-test indicates that each of the above improvements is statistically significant at the 0.05 significance level.
The left part of Figure 5 plots the improvement in mean AP^r over A as we vary the threshold at which a detection is considered correct. Each of our improvements increases AP^r across all thresholds, indicating that we haven't overfit to a particular regime.
Clearly we get significant gains over both our naive baseline and O2P. However, prior approaches that reason about segmentation together with detection might do better on the AP^r metric. To see if this is the case, we compare to the SegDPM work of Fidler et al. [14]. SegDPM combined DPMs [13] with O2P [5] and achieved a 9-point boost over DPMs in classical object detection. For this method, only the bounding boxes are available publicly, and for some boxes the algorithm may choose not to have associated segments. We therefore compute an upper bound on its performance by taking each detection, considering all MCG regions whose bounding box overlaps with the detection by more than 70%, and selecting the region which best overlaps a ground truth.
Since SegDPM detections are only available on PASCAL VOC 2010 val, we restrict our evaluation to this set.
Our upper bound on SegDPM has a mean AP^r of 31.3, whereas C+ref achieves a mean AP^r of 50.3.
Inspired by [18], we created tools for figuring out error modes and avenues for improvement for the SDS task. As in [18], we evaluate the impact of error modes by measuring the improvement in AP^r if the error mode were corrected. For localization, we assign labels to detections under two thresholds: the usual strict threshold of 0.5 and a more lenient threshold of 0.1 (note that this is a threshold on region overlap). Detections that count as true positives under the lenient threshold but as false positives under the strict threshold are considered mislocalizations. Duplicate detections are also considered mislocalizations. We then consider the performance if either a) all mislocalized instances were removed, or b) all mislocalized instances were correctly localized and duplicates removed. Figure 4 shows how the PR curve for the AP^r benchmark changes if mislocalizations are corrected or removed for two categories. For the person category, removing mislocalizations brings precision up to essentially 100%, indicating that mislocalization is the predominant source of false positives. Correcting the mislocalizations provides a huge jump in recall. For the cat category the improvement provided by better localization is much smaller, indicating that there are still some false positives arising from misclassifications.
We can do this analysis for all categories. The average improvement in AP^r from fixing mislocalization is a measure of the impact of mislocalization on performance. We can also measure impact in this way for other error modes: for instance, false positives on objects of other similar categories, or on background [18]. (For defining similar and non-similar categories, we divide object categories into "animals", "transport" and "indoor" groups.) The left subfigure in Figure 6 shows the result of such an analysis on our best system (C+ref).
The dark blue bar shows the AP^r improvement if we remove mislocalized detections and the light blue bar shows the improvement if we correct them. The other two bars show the improvement from removing confusion with similar categories and background. Mislocalization has a huge impact: it sets us back by about 16 percentage points. Compared to that, confusion with similar categories or background is virtually non-existent.
We can measure the impact of mislocalization on the other algorithms in Table 1 as well, as shown in Table 3. It also shows the upper bound on AP^r achievable when all mislocalization is fixed. Improvements in the feature extractor improve the upper bound (indicating fewer misclassifications) but also reduce the gap due to mislocalization (indicating better localization). Refinement doesn't change the upper bound and only improves localization, as expected.
To get a better handle on what one needs to do to improve localization, we considered two statistics. For each detection and a ground truth, instead of just taking the overlap (i.e. intersection over union), we can compute the pixel precision (the fraction of the region that lies inside the ground truth) and the pixel recall (the fraction of the ground truth that lies inside the region). Having both a pixel precision > 67% and a pixel recall > 67% is guaranteed to give an overlap greater than 50%: writing p and r for precision and recall, the overlap equals 1/(1/p + 1/r − 1), which exceeds 1/2 whenever p and r both exceed 2/3. We assign detection labels using pixel precision or pixel recall with a threshold of 67% and compute the respective APs. Comparing these two numbers then gives us a window into the kind of localization errors: a low pixel precision AP indicates that the error mode is overshooting the region and predicting extraneous background pixels, while a low pixel recall AP indicates that the error mode is undershooting the region and missing some ground truth pixels.
The second half of Figure 6 shows the difference between the pixel precision AP (AP^pp) and the pixel recall AP (AP^pr).
Bars to the left indicate higher pixel recall AP, while bars to the right indicate higher pixel precision AP. For some categories such as person and bird we tend to miss ground truth pixels, whereas for others such as bicycle we tend to leak into the background.
Fig. 5. Left: Improvement in mean AP^r over A due to our three variants, for a variety of overlap thresholds. We get improvements for all overlap thresholds. Right: A similar plot for AP^b. Improvements are relative to R-CNN with Selective Search proposals [16]. As the threshold becomes stricter, the better localization of our approach is apparent.
We also compare performance on the individual tasks. To compare on AP^b, we retrain our final region classifiers for the bounding box detection task. This is because the ranking of regions based on bounding box overlap is different from that based on segmentation overlap. As in [16], we use ground truth boxes as positives, and MCG boxes overlapping by less than 50% as negatives. At test time we do not do any region refinement.

Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (2016; arXiv:1512.03385).

Deep convolutional neural networks [22,21] have led to a series of breakthroughs for image classification [21,50,40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multilayer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence [41,44] reveals that network depth is of crucial importance, and the leading results [41,44,13,16] on the challenging ImageNet dataset [36] all exploit "very deep" [41] models, with a depth of sixteen [41] to thirty [16]. Many other nontrivial visual recognition tasks [8,12,7,32,27] (see http://image-net.org/challenges/LSVRC/2015/ and http://mscoco.org/dataset/#detections-challenge2015) have also
greatly benefited from very deep models.\nDriven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1,9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23,9,37,13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].\nWhen deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11,42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.\nThe degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).\nIn this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. 
Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping F(x) := H(x) − x. The original mapping is recast into F(x) + x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. In the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
The formulation F(x) + x can be realized by feedforward neural networks with "shortcut connections" (Fig. 2). Shortcut connections [2,34,49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameters nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) our extremely deep residual nets are easy to optimize, but the counterpart "plain" nets (that simply stack layers) exhibit higher training error when the depth increases; 2) our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
Similar phenomena are shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not specific to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
On the ImageNet classification dataset [36], we obtain excellent results with extremely deep residual nets.
Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won first place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and led us to further win first place on ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in the ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect it to be applicable in other vision and non-vision problems.
Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and the Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both are powerful shallow representations for image retrieval and classification [4,48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
In low-level vision and computer graphics, for solving partial differential equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45,46], which relies on variables that represent residual vectors between two scales. It has been shown [3,45,46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
Shortcut Connections. Practices and theories that lead to shortcut connections [2,34,49] have been studied for a long time.
An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34,49]. In [44,24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers [39,38,31,47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an "inception" layer is composed of a shortcut branch and a few deeper branches.
Concurrent with our work, "highway networks" [42,43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts, which are parameter-free. When a gated shortcut is "closed" (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x) + x.
Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
This reformulation is motivated by the counterintuitive phenomena of the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulty approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as

y = F(x, {W_i}) + x.    (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {W_i}) represents the residual mapping to be learned. For the example in Fig. 2, which has two layers, F = W_2 σ(W_1 x), in which σ denotes ReLU [29] and the biases are omitted to simplify notation. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y); see Fig. 2).
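As a minimal numerical sketch of Eqn. (1) with the two-layer F above (a sketch under the stated notation with fully-connected layers, not the paper's convolutional implementation; the weight shapes are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Eqn. (1) with a two-layer F: y = relu(W2 @ relu(W1 @ x) + x).
    Biases are omitted, as in the text. The shortcut is a parameter-free
    identity, so x and F(x) must have the same dimension."""
    F = W2 @ relu(W1 @ x)        # residual mapping F(x, {W_i})
    return relu(F + x)           # element-wise addition, then the second ReLU

# Driving the residual weights to zero recovers an identity mapping
# (on nonnegative inputs), which is the preconditioning argument above.
```

This also makes the comparison with a plain block concrete: the plain counterpart would return relu(W2 @ relu(W1 @ x)) and must learn the identity through its weights rather than getting it for free.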
The shortcut connections in Eqn. (1) introduce neither extra parameters nor computational complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
The dimensions of x and F must be equal in Eqn. (1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection W_s by the shortcut connections to match the dimensions:

y = F(x, {W_i}) + W_s x.    (2)

We can also use a square matrix W_s in Eqn. (1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus W_s is only used when matching dimensions.
The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn. (1) is similar to a linear layer, y = W_1 x + x, for which we have not observed advantages.
We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {W_i}) can represent multiple convolutional layers, and the element-wise addition is performed on two feature maps, channel by channel.
We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left).
The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 (Fig. 3, middle). It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left).
Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn. (1)) can be directly used when the input and output are of the same dimensions (solid-line shortcuts in Fig. 3). When the dimensions increase (dotted-line shortcuts in Fig. 3), we consider two options: (A) the shortcut still performs identity mapping, with extra zero entries padded for the increased dimensions, which introduces no extra parameters; (B) the projection shortcut in Eqn. (2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
Our implementation for ImageNet follows the practice in [21,41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256.
The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].
In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41,13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
We evaluate our method on the ImageNet 2012 classification dataset [36], which consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem: the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one. (In Table 2 and Fig. 4, the ResNets have no extra parameters compared to their plain counterparts; Fig. 4 shows the training procedures.)
We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN.
So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reduction of the training error³. The reason for such optimization difficulties will be studied in the future.\nResidual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.\nWe have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning: the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.\nTable 5. Error rates (%) of ensembles; the top-5 error is on the test set of ImageNet and reported by the test server: … 6.8; PReLU-net [13] 4.94; BN-inception [16] 4.82; ResNet (ILSVRC '15) 3.57.\nSecond, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.\nLast, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left).
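Option A's parameter-free shortcut can be sketched in a few lines (illustrative numpy; `option_a_shortcut` is a hypothetical name, and the strided subsampling is one simple way to match the halved feature map):

```python
import numpy as np

def option_a_shortcut(x, out_channels, stride=2):
    """Parameter-free shortcut (option A): when dimensions increase, the
    identity is spatially subsampled with the given stride and extra zero
    channels are padded on. x: (N, C, H, W). Illustrative sketch only."""
    x = x[:, :, ::stride, ::stride]            # match the halved feature map
    n, c, h, w = x.shape
    zeros = np.zeros((n, out_channels - c, h, w), dtype=x.dtype)
    return np.concatenate([x, zeros], axis=1)  # pad channels with zeros
```

Because it has no weights, this shortcut keeps the residual net's parameter count identical to its plain counterpart.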
When the net is \"not overly deep\" (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.\nIdentity vs. Projection Shortcuts. We have shown that parameter-free identity shortcuts help with training. Next we investigate projection shortcuts (Eqn. (2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.\nTable 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.\nDeeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design⁴. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.\nThe parameter-free identity shortcuts are particularly important for the bottleneck architectures.
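The "similar time complexity" claim can be verified with quick weight-count arithmetic (illustrative Python; `conv_weights` is a hypothetical helper, and bias/BN parameters are ignored):

```python
def conv_weights(c_in, c_out, k):
    # weight count of a k x k convolution layer
    return c_in * c_out * k * k

# basic block on 64-d features: two 3x3 convs
basic = conv_weights(64, 64, 3) + conv_weights(64, 64, 3)

# bottleneck on 256-d features: 1x1 reduce -> 3x3 -> 1x1 restore
bottleneck = (conv_weights(256, 64, 1)
              + conv_weights(64, 64, 3)
              + conv_weights(64, 256, 1))

# a 1x1 projection across the 256-d ends would alone cost nearly as much
# as the whole block, which is why identity shortcuts matter here
projection = conv_weights(256, 256, 1)
```

Here `basic` is 73,728 and `bottleneck` is 69,632 weights, so the two designs are indeed comparable, while a single 256→256 projection would add 65,536 more.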
If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.\nThe 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Tables 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Tables 3 and 4).\nComparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.\nWe conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.\nThe plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2.
The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.\nTable 6 (fragment). Error rates (%) on CIFAR-10: Maxout [10] 9.38; NIN [25] 8.81; DSN [24] 8.22; FitNet [35] ….\nWe use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.\nWe compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.\nFig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.\nWe further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging⁵.
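The 6n+2 depth rule and the CIFAR-10 step schedule above can be written down directly (illustrative Python; the function names are hypothetical):

```python
def cifar_depth(n):
    # first 3x3 conv + 6n stacked 3x3 conv layers + final fc layer
    return 6 * n + 2

def learning_rate(iteration, base=0.1):
    # start at 0.1, divide by 10 at 32k and 48k; training stops at 64k
    if iteration >= 48000:
        return base / 100
    if iteration >= 32000:
        return base / 10
    return base
```

This is how n = {3, 5, 7, 9} maps to the 20/32/44/56-layer networks, n = 18 to the 110-layer one, and n = 200 to the 1202-layer one.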
So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).\nAnalysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec. 3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.\nExploring Over 1000 Layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10³-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6), but it is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [10] or dropout [14] is applied to obtain the best results ([10,25,24,35]) on this dataset.
In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.\nOur method has good generalization performance on other recognition tasks. Tables 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [41] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.\nBased on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.\nIn this section we introduce our detection method based on the baseline Faster R-CNN [32] system. The models are initialized by the ImageNet classification models, and then fine-tuned on the object detection data. We have experimented with ResNet-50/101 at the time of the ILSVRC & COCO 2015 detection competitions.\nUnlike VGG-16 used in [32], our ResNet has no hidden fc layers. We adopt the idea of \"Networks on Conv feature maps\" (NoC) [33] to address this issue. We compute the full-image shared conv feature maps using those layers whose strides on the image are no greater than 16 pixels (i.e., conv1, conv2_x, conv3_x, and conv4_x, totally 91 conv layers in ResNet-101; Table 1).
We consider these layers as analogous to the 13 conv layers in VGG-16, and by doing so, both ResNet and VGG-16 have conv feature maps of the same total stride (16 pixels). These layers are shared by a region proposal network (RPN, generating 300 proposals) [32] and a Fast R-CNN detection network [7]. RoI pooling [7] is performed before conv5_1. On this RoI-pooled feature, all layers of conv5_x and up are adopted for each region, playing the roles of VGG-16's fc layers. The final classification layer is replaced by two sibling layers (classification and box regression [7]).\nFor the usage of BN layers, after pre-training, we compute the BN statistics (means and variances) for each layer on the ImageNet training set. Then the BN layers are fixed during fine-tuning for object detection. As such, the BN layers become linear activations with constant offsets and scales, and BN statistics are not updated by fine-tuning. We fix the BN layers mainly for reducing memory consumption in Faster R-CNN training.\nFollowing [7,32], for the PASCAL VOC 2007 test set, we use the 5k trainval images in VOC 2007 and 16k trainval images in VOC 2012 for training (\"07+12\"). For the PASCAL VOC 2012 test set, we use the 10k trainval+test images in VOC 2007 and 16k trainval images in VOC 2012 for training (\"07++12\"). The hyper-parameters for training Faster R-CNN are the same as in [32]. Table 7 shows the results. ResNet-101 improves the mAP by >3% over VGG-16. This gain is solely because of the improved features learned by ResNet.\nThe MS COCO dataset [26] involves 80 object categories. We evaluate the PASCAL VOC metric (mAP @ IoU = 0.5) and the standard COCO metric (mAP @ IoU = .5:.05:.95). We use the 80k images on the train set for training and the 40k images on the val set for evaluation. Our detection system for COCO is similar to that for PASCAL VOC.
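Freezing BN in this way turns each BN layer into a fixed per-channel affine transform, which can be sketched as (illustrative numpy; `frozen_bn` is a hypothetical name):

```python
import numpy as np

def frozen_bn(x, mean, var, gamma, beta, eps=1e-5):
    """BN with statistics frozen after pre-training: since mean/var no
    longer change, the layer collapses to scale * x + offset with
    per-channel constants. x: (N, C, H, W); mean/var/gamma/beta: (C,)."""
    shape = (1, -1) + (1,) * (x.ndim - 2)       # broadcast over channel axis
    scale = (gamma / np.sqrt(var + eps)).reshape(shape)
    offset = (beta - mean * gamma / np.sqrt(var + eps)).reshape(shape)
    return scale * x + offset
```

No statistics are updated during fine-tuning, which is exactly what saves the batch-statistics buffers and their gradients in memory.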
We train the COCO models with an 8-GPU implementation, and thus the RPN step has a mini-batch size of 8 images (i.e., 1 per GPU) and the Fast R-CNN step has a mini-batch size of 16 images. The RPN step and Fast R-CNN step are both trained for 240k iterations with a learning rate of 0.001 and then for 80k iterations with 0.0001.\nTable 8 shows the results on the MS COCO validation set. ResNet-101 has a 6% increase of mAP@[.5, .95] over VGG-16, which is a 28% relative improvement, solely contributed by the features learned by the better network. Remarkably, the mAP@[.5, .95]'s absolute increase (6.0%) is nearly as big as mAP@.5's (6.9%). This suggests that a deeper network can improve both recognition and localization. For completeness, we report the improvements made for the competitions. These improvements are based on deep features and thus should benefit from residual learning.\nBox refinement. Our box refinement partially follows the iterative localization in [6]. In Faster R-CNN, the final output is a regressed box that is different from its proposal box. So for inference, we pool a new feature from the regressed box and obtain a new classification score and a new regressed box. We combine these 300 new predictions with the original 300 predictions. Non-maximum suppression (NMS) is applied on the union set of predicted boxes using an IoU threshold of 0.3 [8], followed by box voting [6]. Box refinement improves mAP by about 2 points (Table 9).\nGlobal context. We combine global context in the Fast R-CNN step. Given the full-image conv feature map, we pool a feature by global Spatial Pyramid Pooling [12] (with a \"single-level\" pyramid) which can be implemented as \"RoI\" pooling using the entire image's bounding box as the RoI. This pooled feature is fed into the post-RoI layers to obtain a global context feature. This global feature is concatenated with the original per-region feature, followed by the sibling classification and box regression layers.
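Merging the original and refined predictions relies on standard greedy NMS with an IoU threshold of 0.3; a minimal numpy sketch (the function name and the [x1, y1, x2, y2] box layout are assumptions):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression over a set of predicted boxes.
    boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,).
    Returns indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the current best box with the remaining ones
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep
```

In the refinement pipeline this would run on the union of the 300 original and 300 re-pooled predictions, before box voting.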
This new structure is trained end-to-end. Global context improves mAP@.5 by about 1 point (Table 9).\nMulti-scale testing. In the above, all results are obtained by single-scale training/testing as in [32], where the image's shorter side is s = 600 pixels. Multi-scale training/testing has been developed in [12,7] by selecting a scale from a feature pyramid, and in [33] by using maxout layers. In our current implementation, we have performed multi-scale testing following [33]; we have not performed multi-scale training because of limited time. In addition, we have performed multi-scale testing only for the Fast R-CNN step (but not yet for the RPN step). With a trained model, we compute conv feature maps on an image pyramid, where the image's shorter sides are s ∈ {200, 400, 600, 800, 1000}. (Table caption: the baseline is the Faster R-CNN system; the system \"baseline+++\" includes box refinement, context, and multi-scale testing as in Table 9.)\nWe select two adjacent scales from the pyramid following [33]. RoI pooling and subsequent layers are performed on the feature maps of these two scales [33], which are merged by maxout as in [33]. Multi-scale testing improves the mAP by over 2 points (Table 9).\nUsing validation data. Next we use the 80k+40k trainval set for training and the 20k test-dev set for evaluation. The test-dev set has no publicly available ground truth and the result is reported by the evaluation server. Under this setting, the results are an mAP@.5 of 55.7% and an mAP@[.5, .95] of 34.9% (Table 9). This is our single-model result.\nEnsemble. In Faster R-CNN, the system is designed to learn region proposals and also object classifiers, so an ensemble can be used to boost both tasks. We use an ensemble for proposing regions, and the union set of proposals are processed by an ensemble of per-region classifiers. Table 9 shows our result based on an ensemble of 3 networks. The mAP is 59.0% and 37.4% on the test-dev set.
This result won the 1st place in the detection task in COCO 2015.\nWe revisit the PASCAL VOC dataset based on the above model. With the single model on the COCO dataset (55.7% mAP@.5 in Table 9), we fine-tune this model on the PASCAL VOC sets. The improvements of box refinement, context, and multi-scale testing are also adopted. By doing so we achieve 85.6% mAP on PASCAL VOC 2007 (Table 10) and 83.8% on PASCAL VOC 2012 (Table 11)⁶. The result on PASCAL VOC 2012 is 10 points higher than the previous state-of-the-art result [6].\nTable 12. Our results (mAP, %) on the ImageNet detection dataset. Our detection system is Faster R-CNN [32] with the improvements in Table 9, using ResNet-101.\nThe ImageNet Detection (DET) task involves 200 object categories. The accuracy is evaluated by mAP@.5. Our object detection algorithm for ImageNet DET is the same as that for MS COCO in Table 9. The networks are pretrained on the 1000-class ImageNet classification set, and are fine-tuned on the DET data. We split the validation set into two parts (val1/val2) following [8]. We fine-tune the detection models using the DET training set and the val1 set. The val2 set is used for validation. We do not use other ILSVRC 2015 data. Our single model with ResNet-101 has 58.8% mAP and our ensemble of 3 models has 62.1% mAP on the DET test set (Table 12). This result won the 1st place in the ImageNet detection task in ILSVRC 2015.\nThe ImageNet Localization (LOC) task [36] requires classifying and localizing the objects. Following [40,41], we assume that the image-level classifiers are first adopted for predicting the class labels of an image, and the localization algorithm only accounts for predicting bounding boxes based on the predicted classes. We adopt the \"per-class regression\" (PCR) strategy [40,41], learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization. We train networks on the provided 1000-class ImageNet training set.\nOur localization algorithm is based on the RPN framework of [32] with a few modifications.
Unlike the way in [32] that is category-agnostic, our RPN for localization is designed in a per-class form. This RPN ends with two sibling 1×1 convolutional layers for binary classification (cls) and box regression (reg), as in [32]. The cls and reg layers are both in a per-class form, in contrast to [32]. Specifically, the cls layer has a 1000-d output, and each dimension is binary logistic regression for predicting being or not being an object class; the reg layer has a 1000×4-d output consisting of box regressors for 1000 classes. As in [32], our bounding box regression is with reference to multiple translation-invariant \"anchor\" boxes at each position.\nAs in our ImageNet classification training (Sec. 3.4), we randomly sample 224×224 crops for data augmentation. We use a mini-batch size of 256 images for fine-tuning. To avoid negative samples being dominant, 8 anchors are randomly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 [32]. For testing, the network is applied on the image fully-convolutionally.\nTable 13 compares the localization results. Following [41], we first perform \"oracle\" testing using the ground truth class as the classification prediction. VGG's paper [41] reports a center-crop error of 33.1% (Table 13) using ground truth classes. Under the same setting, our RPN method using ResNet-101 significantly reduces the center-crop error to 13.3%. This comparison demonstrates the excellent performance of our framework. With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7% using ground truth classes. Using ResNet-101 for predicting classes (4.6% top-5 classification error, Table 4), the top-5 localization error is 14.4%.\nThe above results are only based on the proposal network (RPN) in Faster R-CNN [32]. One may use the detection network (Fast R-CNN [7]) in Faster R-CNN to improve the results.
But we notice that on this dataset, one image usually contains a single dominant object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN [7] generates samples of small variations, which may not be desired for stochastic training. Motivated by this, in our current experiment we use the original R-CNN [8] that is RoI-centric, in place of Fast R-CNN.\nOur R-CNN implementation is as follows. We apply the per-class RPN trained as above on the training images to predict bounding boxes for the ground truth class. These predicted boxes play the role of class-dependent proposals. For each training image, the highest scored 200 proposals are extracted as training samples to train an R-CNN classifier. The image region is cropped from a proposal, warped to 224×224 pixels, and fed into the classification network as in R-CNN [8]. The outputs of this network consist of two sibling fc layers for cls and reg, also in a per-class form. This R-CNN network is fine-tuned on the training set using a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the highest scored 200 proposals for each predicted class, and the R-CNN network is used to update these proposals' scores and box positions.\nThis method reduces the top-5 localization error to 10.6% (Table 13). This is our single-model result on the validation set. Using an ensemble of networks for both classification and localization, we achieve a top-5 localization error of 9.0% on the test set. This number significantly outperforms the ILSVRC 14 results (Table 14), showing a 64% relative reduction of error. This result won the 1st place in the ImageNet localization task in ILSVRC 2015." 
}, { "title": "Ccnet: Criss-cross attention for semantic segmentation", "year": 2019, "authors": "Zilong Huang; Xinggang Wang; Lichao Huang; Chang Huang; Yunchao Wei; Wenyu Liu", "arxiv_di": "1811.11721", "Introduction": "Semantic segmentation, which is a fundamental problem in the computer vision community, aims at assigning semantic class labels to each pixel in a given image. It has been extensively and actively studied in many recent works and is also critical for various significant applications such as autonomous driving [1], augmented reality [2], image editing [3], civil engineering [4], remote sensing imagery [5] and agricultural pattern analysis [6], [7]. Specifically, current state-of-the-art semantic segmentation approaches based on the fully convolutional network (FCN) [8] have made remarkable progress. However, due to the fixed geometric structures, the conventional FCN is inherently limited to local receptive fields that only provide short-range contextual information. The limitation of insufficient contextual information imposes a great adverse effect on its segmentation accuracy.\nTo make up for the above deficiency of FCN, some works have been proposed to introduce useful contextual information to benefit the semantic segmentation task. Specifically, Chen et al. [10] proposed an atrous spatial pyramid pooling module with multi-scale dilation convolutions for contextual information aggregation. Zhao et al. [11] further introduced PSPNet with a pyramid pooling module to capture contextual information.\n(Fig. 1 caption: (a) For each position (e.g., blue), the non-local module [9] generates a dense attention map which has N weights (in green). (b) For each position (e.g., blue), the criss-cross attention module generates a sparse attention map which only has about 2√N weights. After the recurrent operation, each position (e.g., red) in the final output feature maps can collect information from all pixels. For clear display, residual connections are ignored.)
However, the dilated convolution based methods [10], [12], [13] collect information from a few surrounding pixels and cannot actually generate dense contextual information. Meanwhile, the pooling based methods [11], [14] aggregate contextual information in a non-adaptive manner, and the homogeneous context extraction procedure is adopted for all image pixels, which does not satisfy the requirement that different pixels need different contextual dependencies.\nTo incorporate dense, pixel-wise contextual information, some fully-connected graph neural network (GNN) [15] methods were proposed to augment traditional convolutional features with an estimated full-image context representation. PSANet [16] learns to aggregate contextual information for each position via a predicted attention map. Non-local Networks [9] utilize a self-attention mechanism [17], [18], which enables a single feature from any position to perceive features of all the other positions, thus harvesting full-image contextual information (see Fig. 1(a)). These non-local operations can be viewed as a densely-connected GNN module based on the attention mechanism [18]. This feature augmentation method allows a flexible way to represent non-local relations between features and has led to significant improvements in several vision recognition tasks. However, these GNN-based non-local neural networks need to generate huge attention maps to measure the relationships for each pixel pair, leading to a very high complexity of O(N²) in both time and space, where N is the number of input features. Since dense prediction tasks such as semantic segmentation inherently require high-resolution feature maps, the non-local based methods often come with high computational complexity and occupy a huge amount of GPU memory.
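The memory gap can be made concrete by counting stored attention weights (illustrative Python; `attention_entries` is a hypothetical helper):

```python
def attention_entries(h, w, mode):
    """Number of attention weights stored for an H x W feature map
    (N = H*W positions): the dense non-local map vs. the sparse
    criss-cross map (one weight per position in the same row or column)."""
    n = h * w
    if mode == "non_local":
        return n * n                   # O(N^2): one weight per pixel pair
    if mode == "criss_cross":
        return n * (h + w - 1)         # about 2*sqrt(N) weights per position
    raise ValueError(mode)
```

For a square map the criss-cross count grows like N·2√N instead of N², which is the source of the savings discussed next.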
Thus, is there an alternative solution to achieve such a target in a more efficient way?\nTo address the above mentioned issue, our motivation is to replace the common single densely-connected graph with several consecutive sparsely-connected graphs, which usually require much lower computational resources. Without loss of generality, we use two consecutive criss-cross attention modules, in which each one only has sparse connections (about √N) for each position in the feature map. For each pixel/position, the criss-cross attention module aggregates contextual information in its horizontal and vertical directions. By serially stacking two criss-cross attention modules, each position can collect contextual information from all pixels in the given image. The above decomposition strategy greatly reduces the complexities in both time and space from O(N²) to O(N√N).\nWe compare the differences between the non-local module [9] and our criss-cross attention module in Fig. 1. Concretely, both the non-local module and the criss-cross attention module feed the input feature map to generate an attention map for each position and transform the input feature map into an adapted feature map. Then, a weighted sum is adopted to collect contextual information from other positions in the adapted feature map based on the attention maps. Different from the dense connections adopted by the non-local module, each position (e.g., blue) in the feature map is sparsely connected with the other positions in the same row and the same column in our criss-cross attention module, so that the predicted attention map only has about 2√N weights.", "Methodology": "In this section, we give the details of the proposed Criss-Cross Network (CCNet) for semantic segmentation. We first present the general framework of our CCNet. Then, the 2D criss-cross attention module, which captures contextual information in horizontal and vertical directions, will be introduced.
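As a concrete sketch of one criss-cross attention pass (illustrative numpy, not the official CCNet code; the loop form trades speed for clarity, and the shared softmax over the H + W − 1 positions follows the description above):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def criss_cross_attention(q, k, v):
    """One criss-cross attention step. q, k, v: (C, H, W). Each output
    position aggregates the value features on its own row and column
    (H + W - 1 positions), weighted by softmax-normalized affinities.
    The residual connection is omitted, as in Fig. 1 of the paper."""
    c, h, w = q.shape
    out = np.zeros_like(v)
    for i in range(h):
        for j in range(w):
            # gather key/value features on the criss-cross path of (i, j)
            ks = np.concatenate([k[:, i, :], k[:, :, j]], axis=1)  # (C, H+W)
            vs = np.concatenate([v[:, i, :], v[:, :, j]], axis=1)
            # drop the duplicated centre position (i, j)
            dup = w + i
            ks = np.delete(ks, dup, axis=1)
            vs = np.delete(vs, dup, axis=1)
            att = softmax(q[:, i, j] @ ks)   # (H + W - 1,) affinities
            out[:, i, j] = vs @ att
    return out
```

Stacking two such calls lets every output position receive information from every input position, which is the decomposition that drops the complexity to O(N√N).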
To capture dense and global contextual information, we propose to adopt a recurrent operation for the criss-cross attention module. To further improve RCCA, we introduce a discriminative loss function to drive RCCA to learn category-consistent features. Finally, we propose the 3D criss-cross attention module for leveraging temporal and spatial contextual information simultaneously.\nWe compare the performance of several different context aggregation approaches on the Cityscapes validation set with ResNet-50 and ResNet-101 as backbone networks. Specifically, the baselines of context aggregation mainly include: 1) Peng et al. [59] utilized global convolution filters for contextual information aggregation, denoted as \"+GCN\"; 2) Zhao et al. [11] proposed pyramid pooling, which is a simple and effective way to capture global contextual information, denoted as \"+PSP\"; 3) Chen et al. [12] used different dilation convolutions to harvest pixel-wise contextual information at different ranges, denoted as \"+ASPP\"; 4) Wang et al. [9] introduced the non-local network for context aggregation, denoted as \"+NL\".\nIn Tab. 4, both \"+NL\" and \"+RCCA\" achieve better performance compared with the other context aggregation approaches, which demonstrates the importance of capturing full-image contextual information. More interestingly, our method achieves better performance than \"+NL\". This may be attributed to the sequentially recurrent operation of criss-cross attention. Concretely, \"+NL\" generates an attention map directly from the feature, which has a limited receptive field and short-range dependencies.
In contrast, our \"+RCCA\" takes two steps to form dense contextual information, so that the latter step can learn a better attention map, benefiting from the feature map produced by the first step, in which some long-range dependencies have already been embedded.\nTo prove the effectiveness of attention with the criss-cross shape, we compare it with other shapes in Tab. 4. \"+HV\" means stacking horizontal attention and vertical attention. \"+HV&VH\" means summing up the features of the two stacking orders.\nWe further explore the amount of computation and the memory footprint of RCCA. As shown in Tab. 5, compared with the \"+NL\" method, the proposed \"+RCCA\" requires 11× less GPU memory usage and reduces the FLOPs of the non-local block by about 85% in computing full-image dependencies, which shows that CCNet is an efficient way to capture full-image contextual information with a minimal amount of computation and memory footprint. To further prove the effectiveness of the recurrent operation, we also run the non-local module in the recurrent way, denoted as \"+NL(R=2)\". As can be seen, the recurrent operation brings more than 1 point of gain, because it allows the latter step to learn a better attention map, benefiting from the feature map produced by the first step, in which some long-range dependencies have already been embedded. However, compared with \"+RCCA\", \"+NL(R=2)\" requires huge GPU memory usage, which limits the use of self-attention.
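That two passes suffice to connect every pixel pair can be illustrated with a toy reachability check (pure numpy; the summation stand-in is an assumption, not the learned attention):

```python
import numpy as np

def criss_cross_spread(m):
    """One aggregation step in which every position collects (here: sums)
    the values on its own row and column -- a toy stand-in for one
    criss-cross attention pass, used only to illustrate information flow."""
    return m.sum(axis=1, keepdims=True) + m.sum(axis=0, keepdims=True) - m

src = np.zeros((5, 5))
src[2, 3] = 1.0                            # information starts at one pixel
after_one = criss_cross_spread(src)        # reaches only row 2 and column 3
after_two = criss_cross_spread(after_one)  # reaches every position
```

After one pass only the criss-cross path of the source pixel is touched (9 positions on a 5×5 map); after the second pass every position has received the signal, matching the claim that R = 2 yields full-image context.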
Tab. 6 (fragment), results on ADE20K (method, backbone, mIOU %): RefineNet [31] ResNet-152 40.70; SAC [33] ResNet-101 44.30; PSPNet [11] ResNet-101 43.29; PSANet [16] ResNet-101 43.77; DSSPN [75] ResNet-101 43.68; UperNet [76] ResNet-101 42.66; EncNet [14] ResNet-101 44.…", "Dataset": "We adopt Mean IoU (mIOU, mean of class-wise intersection over union) for Cityscapes, ADE20K, LIP and CamVid, and the standard COCO metric Average Precision (AP) for COCO.", "Conclusion": "In this paper, we have presented a Criss-Cross Network (CCNet) for deep learning based dense prediction tasks. (Footnote 1: https://github.com/facebookresearch/maskrcnn-benchmark)", "Experiment_and_Results": "To evaluate the effectiveness of CCNet, we carry out comprehensive experiments on the Cityscapes dataset [19], the ADE20K dataset [20], the COCO dataset [70], the LIP dataset [21] and the CamVid dataset [71]. Experimental results demonstrate that CCNet achieves state-of-the-art performance on Cityscapes, ADE20K and LIP. Meanwhile, CCNet brings consistent performance gains on COCO for instance segmentation. In the following subsections, we first introduce the datasets and implementation details, then we perform a series of ablation experiments on the Cityscapes dataset. Finally, we report our results on the ADE20K, LIP, COCO and CamVid datasets.\nIn this subsection, we conduct experiments on the ADE20K dataset, which is a very challenging scene parsing dataset. As shown in Tab. 6, CCNet with CCL achieves the state-of-the-art performance of 45.76%, outperforming the previous state-of-the-art methods by more than 1.1% and also outperforming the conference version of CCNet by 0.5%. Some successful segmentation results are given in Fig. 8. Among the approaches, most of the methods [11], [14], [16], [33], [75], [76] adopt ResNet-101 as the backbone, while RefineNet [31] adopts a more powerful network, i.e., ResNet-152, as the backbone.
EncNet [14] achieves the previous best performance among these methods by utilizing global pooling with image-level supervision to collect image-level context information. In contrast, our CCNet integrates contextual information in an alternative way, by capturing full-image dependencies, and achieves better performance. In this subsection, we conduct experiments on the LIP dataset, which is a very challenging human parsing dataset. The framework of CE2P [41] is utilized, with an ImageNet-pretrained ResNet-101 as the backbone and RCCA (R=2), rather than PSP [11], as the context embedding module. The category consistent loss is used to boost the performance. The hyper-parameter setting strictly follows that of CE2P [41]. Among the approaches, Deeplab (VGG-16) [25], Attention [51] and SAN [42] adopt VGG-16 as the backbone, while Deeplab (ResNet-101) [10], JPPNet [21], CE2P [41] and CCNet adopt ResNet-101. As shown in Tab. 7, CCNet achieves state-of-the-art performance of 55.47%, outperforming the previous state-of-the-art methods by more than 2.3%. This significant improvement demonstrates the effectiveness of the proposed method on the human parsing task. The qualitative results show accurate segmentation even for complicated poses. The third row shows a failure case where the \"skirt\" is misclassified as \"pants\", which is difficult to recognize even for humans. To further demonstrate the generality of CCNet, we conduct the instance segmentation task on COCO [70] using the competitive Mask R-CNN model [72] as the baseline. Following [9], we modify the Mask R-CNN backbone by adding the RCCA module right before the last convolutional residual block of res4. We evaluate standard baselines with ResNet-50/101. All models are fine-tuned from ImageNet pre-training. We use the official implementation 1 with end-to-end joint training, whose performance is almost the same as the baseline reported in [9]. For a fair comparison, we do not use the category consistent loss in our method. 
We report the results in terms of box AP and mask AP on COCO in Tab. 8. The results demonstrate that our method substantially outperforms the baseline in all metrics. Some segmentation results comparing the baseline with \"+RCCA\" are given in Fig. 10. Meanwhile, the network with \"+RCCA\" also achieves better performance than the network with one non-local block (\"+NL\"). To further demonstrate the effectiveness of 3D-RCCA, we carry out experiments on CamVid [71], one of the first datasets focusing on video semantic segmentation for driving scenarios. We follow the standard protocol proposed in [77] to split the dataset into 367 training, 101 validation and 233 test images. For a fair comparison, we only report single-scale evaluation scores. As can be seen in Tab. 9, we achieve an mIoU of 79.1%, outperforming all other methods by a large margin.\nTo demonstrate the effectiveness of our proposed techniques, we train under the same settings with different lengths of input frames. We apply the CNNs to each frame to extract features, then concatenate and reshape them to match the required shape of the 3D Criss-Cross Attention module. We use R = 3 for collecting dense spatial and temporal contextual information. To make a training sample, we try two lengths (T) of input frames. For T = 1, we randomly sample 1 frame from a training video, denoted as \"CCNet3D (T = 1)\". For T = 5, we sample 5 temporally ordered frames from a training video, denoted as \"CCNet3D (T = 5)\". As can be seen in Tab. 9, \"CCNet3D (T = 5)\" outperforms \"CCNet3D (T = 1)\" by 1.2%.", "Extra": "√N weights rather than N as in the non-local module. To capture full-image dependencies, we innovatively and simply apply a recurrent operation to the criss-cross attention module. 
In particular, the local features are first passed through one criss-cross attention module to collect contextual information in the horizontal and vertical directions. Then, by feeding the feature map produced by the first criss-cross attention module into a second one, the additional contextual information obtained from the criss-cross path finally enables full-image dependencies for all positions. As demonstrated in Fig. 1 (b), each position (e.g., the red one) in the second feature map can collect information from all other positions to augment its position-wise representation. We share the parameters of the two criss-cross modules to keep our model slim. Since the input and output are both convolutional feature maps, the criss-cross attention module can be easily plugged into any fully convolutional neural network, yielding a network named CCNet, for learning full-image contextual information in an end-to-end manner. Thanks to the good usability of the criss-cross attention module, CCNet is straightforward to extend to 3D networks for capturing long-range temporal context information.\nIn addition, to drive the proposed recurrent criss-cross attention to learn more discriminative features, we introduce a category consistent loss to augment CCNet. In particular, the category consistent loss enforces the network to map each pixel in the image to an n-dimensional vector in the feature space, such that feature vectors of pixels belonging to the same category lie close together while feature vectors of pixels belonging to different categories lie far apart.\nWe have carried out extensive experiments on multiple large-scale datasets. Our proposed CCNet achieves top performance on four of the most competitive semantic segmentation datasets, i.e., Cityscapes [19], ADE20K [20], LIP [21] and CamVid [22]. In addition, the proposed criss-cross attention even improves the state-of-the-art instance segmentation method, i.e., Mask R-CNN with ResNet-101 [23]. 
These results well demonstrate that our criss-cross attention module is generally beneficial to dense prediction tasks. In summary, our main contributions are three-fold: We propose a novel criss-cross attention module, which can be leveraged to capture contextual information from full-image dependencies in a more efficient and effective way. We propose a category consistent loss, which enforces the criss-cross attention module to produce more discriminative features. We propose CCNet by taking advantage of the recurrent criss-cross attention module, achieving leading performance on segmentation benchmarks, including Cityscapes, ADE20K, LIP, CamVid and COCO.\nCompared with our original conference version [24], the following improvements are made: 1) we further enhance the segmentation ability of CCNet by adding a simple yet effective category consistent loss; 2) we propose a more generic CCNet by extending the criss-cross attention module from 2D to 3D; 3) we include more extensive experiments on the LIP, CamVid and COCO datasets to verify the effectiveness and generalization ability of our CCNet.\nThe rest of this paper is organized as follows. We first review related work in Section 2 and describe the architecture of our network in Section 3. In Section 4, ablation studies are given and experimental results are analyzed. Section 5 presents our conclusion and future work. Recent years have seen a renewal of interest in semantic segmentation. FCN [8] was the first approach to adopt a fully convolutional network for semantic segmentation. Later, FCN-based methods made remarkable progress in image semantic segmentation. Chen et al. [25] and Yu et al. [26] removed the last two downsampling layers to obtain dense predictions and utilized dilated convolutions to enlarge the receptive field. 
UNet [27], DeepLabv3+ [28], MSCI [29], SPGNet [30], RefineNet [31] and DFN [32] adopted encoder-decoder structures that fuse information from low-level and high-level layers to make dense predictions. Scale-adaptive convolutions (SAC) [33] and deformable convolutional networks (DCN) [34] improved the standard convolutional operator to handle the deformation and various scales of objects. CRF-RNN [26] and DPN [35] used graph models, i.e., CRF and MRF, for semantic segmentation. AAF [36] used adversarial learning to capture and match the semantic relations between neighboring pixels in the label space. BiSeNet [37] was designed for real-time semantic segmentation. DenseDecoder [38] built feature-level long-range skip connections on a cascaded architecture. VideoGCRF [39] used a densely-connected spatio-temporal graph for video semantic segmentation. RTA [40] proposed region-based temporal aggregation for leveraging the temporal information in videos. In addition, some works focus on the human parsing task. JPPNet [21] embedded pose estimation into the human parsing task. CE2P [41] proposed a simple yet effective framework for computing context embeddings while preserving edges. SANet [42] used parallel branches with scale attention to handle large scale variance in human parsing. Semantic segmentation is also actively studied in the context of domain adaptation and distillation [43], [44], [45] and the weakly supervised setting [46], [47], [48], etc. It is a common practice to aggregate contextual information to augment the feature representation in semantic segmentation networks. Deeplabv2 [10] proposed atrous spatial pyramid pooling (ASPP), which uses convolutions with different dilation rates to capture contextual information. DenseASPP [49] brought dense connections into ASPP to generate features at various scales. DPC [50] utilized architecture search techniques to build multi-scale architectures for semantic segmentation. Chen et al. 
[51] made use of several attention masks to fuse feature maps or prediction maps from different branches. PSPNet [11] utilized pyramid spatial pooling to aggregate contextual information. Recently, Zhao et al. [16] proposed the point-wise spatial attention network, which uses a predicted attention map to guide contextual information collection. Auto-Deeplab [52] utilized neural architecture search to search for effective context modeling. He et al. [53] proposed an adaptive pyramid context module for semantic segmentation. Liu et al. [54] utilized recurrent neural networks (RNNs) to capture long-range dependencies.\nSome works use graph models to model contextual information. Conditional random fields (CRF) [25], [40], [55] and Markov random fields (MRF) [35] were also utilized to capture long-range dependencies for semantic segmentation. Vaswani et al. [18] applied a self-attention model to machine translation. Wang et al. [9] proposed the non-local module, which generates a huge attention map by calculating the correlation matrix between every pair of spatial points on the feature maps; the attention map then guides dense contextual information aggregation. OCNet [56] and DANet [57] utilized the non-local module [9] to harvest contextual information. PSA [16] learned an attention map to aggregate contextual information for each individual point adaptively and specifically. Chen et al. [58] proposed graph-based global reasoning networks, which implement relation reasoning via graph convolution on a small graph.\nCCNet vs. Non-Local vs. GCN. Here, we specifically discuss the differences among GCN [59], Non-local Network [9] and CCNet. In terms of contextual information aggregation, only the center point can perceive the contextual information from all pixels through the global convolution filters in GCN [59]. In contrast, Non-local Network [9] and CCNet guarantee that a pixel at any position perceives contextual information from all pixels. 
Though GCN [59] decomposes the square-shaped convolutional operation into horizontal and vertical linear convolutional operations, which is related to CCNet, CCNet takes the criss-cross way to harvest contextual information, which is more effective than the horizontal-vertical separate way. Moreover, CCNet is proposed to mimic Non-local Network [9] for obtaining dense contextual information through a more effective and efficient recurrent criss-cross attention module, in which dissimilar features get low attention weights and features with high attention weights are similar ones. GCN [59] is a conventional convolutional neural network, while CCNet is a graph neural network in which each pixel in the convolutional feature map is considered a node and the relations/context among nodes can be utilized to generate better node features. Our work is related to deep graph neural networks (GNNs). Prior to graph neural networks, graphical models, such as the conditional random field (CRF) [25], [40], [55] and the Markov random field (MRF) [35], were widely used to model long-range dependencies for image understanding. GNNs were first studied in [15], [60], [61]. Inspired by the success of CNNs, a large number of methods adapted graph structures into CNNs. These methods can be divided into two main streams, the spectral-based approaches [62], [63], [64], [65] and the spatial-based approaches [9], [66], [67], [68]. The proposed CCNet belongs to the latter. The network architecture is given in Fig. 2. An input image is passed through a deep convolutional neural network (DCNN), designed in a fully convolutional fashion [10], to produce a feature map X with the spatial size of H × W. 
In order to retain more details and efficiently produce dense feature maps, we remove the last two downsampling operations and employ dilated convolutions in the subsequent convolutional layers, enlarging the width/height of the output feature map X to 1/8 of the input image. Given X, we first apply a convolutional layer to obtain a feature map H with reduced dimension. Then, H is fed into the criss-cross attention module to generate a new feature map H′, which aggregates contextual information for each pixel along its criss-cross path. The feature map H′ only contains the contextual information in the horizontal and vertical directions, which is not powerful enough for accurate semantic segmentation. To obtain richer and denser context information, we feed the feature map H′ into the criss-cross attention module again and output the feature map H′′. Thus, each position in H′′ actually gathers information from all pixels. The two criss-cross attention modules share the same parameters to avoid adding too many extra parameters. We name this recurrent structure the recurrent criss-cross attention (RCCA) module.\nThen, we concatenate the dense contextual feature H′′ with the local representation feature X. This is followed by one or several convolutional layers with batch normalization and activation for feature fusion. Finally, the fused features are fed into the segmentation layer to predict the final segmentation result. To model full-image dependencies over local feature representations with light-weight computation and memory, we introduce the criss-cross attention module. The criss-cross attention module collects contextual information in the horizontal and vertical directions to enhance pixel-wise representative capability. As shown in Fig. 3, given a local feature map H ∈ R^{C×W×H}, the module first applies two convolutional layers with 1 × 1 filters on H to generate two feature maps Q and K, respectively, where {Q, K} ∈ R^{C′×W×H}. 
C′ is the number of channels, which is less than C for dimension reduction.\nAfter obtaining Q and K, we further generate an attention map A ∈ R^{(H+W−1)×(W×H)} via an Affinity operation. At each position u in the spatial dimension of Q, we can obtain a vector Q_u ∈ R^{C′}. Meanwhile, we can obtain the set Ω_u ∈ R^{(H+W−1)×C′} by extracting the feature vectors from K that are in the same row or column as position u. Ω_{i,u} ∈ R^{C′} is the i-th element of Ω_u. The Affinity operation is then defined as:\nd_{i,u} = Q_u Ω_{i,u}^⊤, (1)\nwhere d_{i,u} ∈ D is the degree of correlation between features Q_u and Ω_{i,u}, i = [1, ..., H+W−1], and D ∈ R^{(H+W−1)×(W×H)}. Then, we apply a softmax layer on D over the channel dimension to calculate the attention map A.\nAnother convolutional layer with 1 × 1 filters is applied on H to generate V ∈ R^{C×W×H} for feature adaptation. At each position u in the spatial dimension of V, we can obtain a vector V_u ∈ R^C and a set Φ_u ∈ R^{(H+W−1)×C}. The set Φ_u is a collection of the feature vectors in V that are in the same row or column as position u. The contextual information is collected by an Aggregation operation defined as:\nH′_u = Σ_{i=0}^{H+W−1} A_{i,u} Φ_{i,u} + H_u, (2)\nwhere H′_u is the feature vector in H′ ∈ R^{C×W×H} at position u and A_{i,u} is a scalar value at channel i and position u in A. (Fig. 4 shows an example of information propagation when the loop number is 2.) The contextual information is added to the local feature H to augment the pixel-wise representation. Therefore, it has a wide contextual view and selectively aggregates contexts according to the spatial attention map. These feature representations achieve mutual gains and are more robust for semantic segmentation. Although the criss-cross attention module can capture contextual information in the horizontal and vertical directions, the connections between one pixel and the surrounding ones that are not in its criss-cross path are still absent. 
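The Affinity (Eq. 1), softmax, and Aggregation (Eq. 2) steps can be sketched in a few lines of NumPy; here the 1 × 1 convolutions become per-pixel matrix multiplications. This is our own minimal, unoptimized illustration, not the paper's implementation, and all names are ours:

```python
import numpy as np

def criss_cross_attention(h_feat, wq, wk, wv):
    """One criss-cross attention pass over h_feat of shape (C, H, W).

    wq, wk: (C', C) projections for Q and K; wv: (C, C) projection for V.
    For each position u, attention runs over the H+W-1 positions sharing
    u's row or column (Eqs. 1-2), with a residual add of the input.
    """
    c, hh, ww = h_feat.shape
    q = np.einsum('dc,chw->dhw', wq, h_feat)   # (C', H, W)
    k = np.einsum('dc,chw->dhw', wk, h_feat)   # (C', H, W)
    v = np.einsum('dc,chw->dhw', wv, h_feat)   # (C,  H, W)
    out = np.empty_like(h_feat)
    for y in range(hh):
        for x in range(ww):
            # Omega_u / Phi_u: the column through u plus the rest of its row
            cross = [(i, x) for i in range(hh)] + [(y, j) for j in range(ww) if j != x]
            omega = np.stack([k[:, i, j] for i, j in cross])   # (H+W-1, C')
            d = omega @ q[:, y, x]                             # Affinity, Eq. (1)
            a = np.exp(d - d.max())
            a /= a.sum()                                       # softmax over the cross
            phi = np.stack([v[:, i, j] for i, j in cross])     # (H+W-1, C)
            out[:, y, x] = a @ phi + h_feat[:, y, x]           # Aggregation, Eq. (2)
    return out
```

With a zero V projection the residual term alone survives, so the output equals the input, which is a quick sanity check on Eq. (2).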
To tackle this problem, we introduce the RCCA operation based on criss-cross attention. The RCCA module can be unrolled into R loops. In the first loop, the criss-cross attention takes the feature map H extracted from a CNN model as input and outputs the feature map H′, where H and H′ have the same shape. In the second loop, the criss-cross attention takes the feature map H′ as input and outputs the feature map H′′. As shown in Fig. 2, the RCCA module is equipped with two loops (R = 2), which is able to harvest full-image contextual information from all pixels to generate new features with dense and rich contextual information. We denote A and A′ as the attention maps in loop 1 and loop 2, respectively. Since we are interested only in the contextual information spread in the spatial dimension rather than the channel dimension, the convolutional layer with 1 × 1 filters can be viewed as an identity connection. In the case of R = 2, the connections between any two spatial positions in the feature map built up by the RCCA module can be clearly and quantitatively described by introducing a function f defined as follows:\n∃ i ∈ [1, H+W−1], s.t. A_{i,u} = f(A, u^{cc}_x, u^{cc}_y, u_x, u_y),\nwhere u = (u_x, u_y) is any spatial position in the H × W grid and u^{cc} = (u^{cc}_x, u^{cc}_y) is a position in the criss-cross structure centered at u. The function f is a one-to-one mapping from the position pair (u^{cc}, u) to a particular element A_{i,u} of the attention map A ∈ R^{(H+W−1)×(H×W)}, where u^{cc} maps to a particular row i in A and u maps to a particular column in A.\nWith the help of the function f, we can easily describe the information propagation between any position u and any position θ in H. 
It is obvious that information can flow from θ to u when θ is in the criss-cross path of u.\nWe then focus on the other situation, in which θ = (θ_x, θ_y) is NOT in the criss-cross path of u = (u_x, u_y). To make this easier to understand, we visualize the information propagation in Fig. 4. The position (θ_x, θ_y) (blue) first passes its information to (u_x, θ_y) and (θ_x, u_y) (light green) in loop 1. This propagation can be quantified by the function f. Note that these two points, (u_x, θ_y) and (θ_x, u_y), are in the criss-cross path of (u_x, u_y). Then, the positions (u_x, θ_y) and (θ_x, u_y) pass their information to (u_x, u_y) (dark green) in loop 2. Thus, the information at (θ_x, θ_y) can eventually flow into (u_x, u_y) even if (θ_x, θ_y) is NOT in the criss-cross path of (u_x, u_y).\nIn general, the RCCA module makes up for the deficiency of criss-cross attention, namely that it cannot obtain dense contextual information from all pixels. Compared with criss-cross attention, the RCCA module (R = 2) does not bring extra parameters and achieves better performance at the cost of a minor increase in computation. For semantic segmentation tasks, the pixels belonging to the same category should have similar features, while the pixels from different categories should have features that are far apart. We name such a characteristic category consistency. The deep features produced by RCCA have full-image context; however, the aggregated features may suffer from over-smoothing, which is a common issue in graph neural networks. To address this potential issue, besides the cross-entropy loss ℓ_seg that penalizes the mismatch between the final predicted segmentation maps and the ground truth, we further introduce the category consistent loss to drive the RCCA module to learn category consistent features directly.\nIn [69], a discriminative loss function with three competing terms is proposed for instance segmentation. 
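The two-hop propagation argument can be verified numerically with a toy reachability computation over the criss-cross sparsity pattern; this is our own illustrative sketch, not the paper's code:

```python
import numpy as np

def cc_reachable(hh, ww, steps):
    """Boolean reachability after `steps` criss-cross hops on an hh x ww grid.

    One hop connects a pixel to every pixel in its row or column; two hops
    with the same sparse pattern connect every pair of pixels, mirroring
    the loop-1 / loop-2 argument illustrated in Fig. 4.
    """
    n = hh * ww
    adj = np.zeros((n, n), dtype=int)
    for y in range(hh):
        for x in range(ww):
            u = y * ww + x
            adj[u, y * ww:(y + 1) * ww] = 1   # same row
            adj[u, x::ww] = 1                 # same column
    reach = adj.copy()
    for _ in range(steps - 1):
        reach = (reach @ adj > 0).astype(int)  # compose one more hop
    return reach.astype(bool)
```

For any grid with at least two rows and columns, `cc_reachable(h, w, 1)` leaves the off-cross pairs disconnected, while `cc_reachable(h, w, 2)` is all-True, matching the R = 2 choice in RCCA.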
In particular, the three terms, denoted ℓ_var, ℓ_dis and ℓ_reg, are adopted to 1) penalize large distances between features with the same label for each instance, 2) penalize small distances between the mean features of different labels, and 3) draw the mean features of all categories towards the origin, respectively.\nMotivated by [69], we first adapt the discriminative loss for semantic segmentation rather than instance segmentation, then replace the first term with a more robust one: instead of using a quadratic function as the distance function to penalize the mismatch all along, we design a piece-wise distance function to make the optimization more robust.\nLet C be the set of classes present in the mini-batch images. N_c is the number of valid elements belonging to category c ∈ C. h_i ∈ H′′ is the feature vector at spatial position i. μ_c is the mean feature of category c ∈ C (the cluster center). φ is a piece-wise distance function. δ_v and δ_d are margins. In particular, Eq. 6 is a piece-wise distance function: φ_var is zero, quadratic, or linear when the distance from the center μ_c is within δ_v, in the range (δ_v, δ_d], or exceeds δ_d, respectively.\nℓ_var = (1/|C|) Σ_{c∈C} (1/N_c) Σ_{i=1}^{N_c} φ_var(h_i, μ_c), (3)\nℓ_dis = (1/(|C|(|C|−1))) Σ_{c_a∈C} Σ_{c_b∈C, c_a≠c_b} φ_dis(μ_{c_a}, μ_{c_b}), (4)\nℓ_reg = (1/|C|) Σ_{c∈C} ‖μ_c‖, (5)\nφ_var = ‖μ_c − h_i‖ − δ_d + (δ_d − δ_v)^2, if ‖μ_c − h_i‖ > δ_d; (‖μ_c − h_i‖ − δ_v)^2, if δ_v < ‖μ_c − h_i‖ ≤ δ_d; 0, if ‖μ_c − h_i‖ ≤ δ_v, (6)\nφ_dis = (2δ_d − ‖μ_{c_a} − μ_{c_b}‖)^2, if ‖μ_{c_a} − μ_{c_b}‖ ≤ 2δ_d; 0, otherwise, (7)\nTo reduce the computational load, we first apply a convolutional layer with 1 × 1 filters on the output of the RCCA module for dimension reduction and then apply these three losses on the feature map with fewer channels. The final loss is a weighted sum of all the losses:\nℓ = ℓ_seg + α ℓ_var + β ℓ_dis + γ ℓ_reg, (8)\nwhere α, β and γ are the weight parameters. 
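A minimal NumPy sketch of the piece-wise term φ_var (Eq. 6) and the class-wise average ℓ_var (Eq. 3) may help; this is our own illustrative code, with default margins δ_v = 0.5, δ_d = 1.5 matching the values given in the experiments:

```python
import numpy as np

def phi_var(h, mu, delta_v=0.5, delta_d=1.5):
    """Piece-wise distance penalty of Eq. 6 for one feature vector h."""
    d = np.linalg.norm(mu - h)
    if d <= delta_v:
        return 0.0                                  # inside the margin: no penalty
    if d <= delta_d:
        return (d - delta_v) ** 2                   # quadratic middle piece
    # linear tail: matches the quadratic at d = delta_d, keeping the
    # gradient bounded for far-away outliers (the robustness argument)
    return d - delta_d + (delta_d - delta_v) ** 2

def l_var(features, labels, delta_v=0.5, delta_d=1.5):
    """Eq. 3: average phi_var over pixels of each class, then over classes."""
    classes = np.unique(labels)
    total = 0.0
    for c in classes:
        hc = features[labels == c]                  # (N_c, dim) pixels of class c
        mu = hc.mean(axis=0)                        # cluster centre mu_c
        total += np.mean([phi_var(h, mu, delta_v, delta_d) for h in hc])
    return total / len(classes)
```

Note that the two pieces agree at d = δ_d (both give (δ_d − δ_v)² = 1.0 with the default margins), so the penalty is continuous where the quadratic hands over to the linear tail.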
In our experiments, we set δ_v = 0.5, δ_d = 1.5, α = β = 1, γ = 0.001 and use 16 channels for dimension reduction. To adapt our method from 2D applications to 3D dense prediction tasks, we introduce 3D Criss-Cross Attention.\nIn general, the architecture of 3D Criss-Cross Attention is an extension of the 2D version that additionally collects contextual information from the temporal dimension. As shown in Fig. 5, given a local feature map H ∈ R^{C×T×W×H}, where T is the axial dimension (i.e., the temporal dimension in video data), the module first applies two convolutional layers with 1 × 1 × 1 filters on H to generate two feature maps Q and K, respectively, where {Q, K} ∈ R^{C′×T×W×H}. After obtaining the feature maps Q and K, we further generate an attention map A ∈ R^{(T+H+W−2)×T×W×H} via the Affinity operation. At each position u of Q, we can obtain a vector Q_u ∈ R^{C′}. Here u contains three coordinate values (t, x, y). We can also obtain the set Ω_u ∈ R^{(T+H+W−2)×C′} by extracting the feature vectors from K that share at least two coordinate values with u. Ω_{i,u} ∈ R^{C′} is the i-th element of Ω_u. The Affinity operation is then defined as:\nd_{i,u} = Q_u Ω_{i,u}^⊤, (9)\nwhere d_{i,u} ∈ D is the degree of correlation between Q_u and Ω_{i,u}, i = [1, ..., T+H+W−2], and D ∈ R^{(T+H+W−2)×T×W×H}. Then, we apply a softmax layer on D over the first dimension to calculate the attention map A.\nAnother convolutional layer with 1 × 1 × 1 filters is applied on H to generate V ∈ R^{C×T×W×H} for feature adaptation. At each position u of V, we can obtain a vector V_u ∈ R^C and a set Φ_u ∈ R^{(T+H+W−2)×C}. The set Φ_u is a collection of the feature vectors in V that are in the criss-cross structure centered at u. The contextual information is collected by the Aggregation operation:\nH′_u = Σ_{i=0}^{T+H+W−2} A_{i,u} Φ_{i,u} + H_u, (10)\nwhere H′_u is the feature vector in the output feature map H′ ∈ R^{C×T×W×H} at position u. 
A_{i,u} is a scalar value at channel i and position u in A. Network Structure. For semantic segmentation, we choose the ImageNet-pretrained ResNet-101 [23] as our backbone network, remove its last two down-sampling operations, and employ dilated convolutions in the subsequent convolutional layers following previous work [25], resulting in an output stride of 8. For human parsing, we choose CE2P [41] as our baseline and replace its context embedding module with RCCA. For instance segmentation, we choose Mask R-CNN [72] as our baseline. For video semantic segmentation, we also choose a Cityscapes-pretrained ResNet-101 [23] as our backbone network, with 3D RCCA.\nTraining settings. SGD with mini-batches is used for training. For semantic segmentation, the initial learning rate is 1e-2 for Cityscapes and ADE20K. Following prior works [10], [14], we employ a poly learning rate policy where the initial learning rate is multiplied by (1 − iter/max_iter)^power with power = 0.9. We use a momentum of 0.9 and a weight decay of 0.0001. For Cityscapes, the training images are augmented by random scaling (from 0.75 to 2.0), then randomly cropping high-resolution patches (769 × 769) from the resulting images. Since the images from ADE20K have various sizes, we adopt an augmentation strategy of resizing the short side of the input image to a length randomly chosen from the set {300, 375, 450, 525, 600}. For human parsing, the models are trained and tested with an input size of 473 × 473. For instance segmentation, we use the same training settings as Mask R-CNN [72]. For video semantic segmentation, we sample 5 temporally ordered frames from a training video as training data, and the input size is 504 × 504. Results of other state-of-the-art semantic segmentation solutions on Cityscapes are summarized in Tab. 1. 
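The poly learning-rate policy described in the training settings amounts to a one-line helper (the name `poly_lr` is ours; the formula follows the text):

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Poly schedule: base_lr * (1 - iter/max_iter)^power.

    Starts at base_lr, decays smoothly, and reaches 0 at max_iter;
    power = 0.9 matches the setting used for Cityscapes and ADE20K.
    """
    return base_lr * (1 - cur_iter / max_iter) ** power
```

For example, with `base_lr = 1e-2` the rate is unchanged at iteration 0 and decays to 0 at the final iteration.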
For the val set, we provide these results for reference and emphasize that they should not be directly compared with our method, since these methods are trained on different (even larger) training sets or with different backbone networks. Among these approaches, Deeplabv3 [12] adopts a multi-scale testing strategy. Deeplabv3+ [28] and DPC [50] both use a stronger backbone (i.e., Xception-65 & 71 vs. ResNet-101). In addition, DPC [50] makes use of an additional dataset, i.e., COCO, for pre-training beyond the Cityscapes training set. The results show that the proposed CCNet with single-scale testing still achieves comparable performance without bells and whistles.\nAdditionally, we also train the best learned CCNet with ResNet-101 as the backbone using both the training and validation sets and evaluate on the test set by submitting our test results to the official evaluation server. Most methods [10], [11], [16], [31], [32], [33], [36], [37], [59], [73] adopt the same backbone as ours, while the others [49], [74] utilize stronger backbones. From Tab. 1, it can be observed that our CCNet substantially outperforms all previous state-of-the-art methods on the test set. Among the approaches, PSANet [16] is the most related to our method, as it generates a sub-attention map for each pixel. One of the differences is that the sub-attention map has 2 × H × W weights in PSANet but only H + W − 1 weights in CCNet. Even with lower computation cost and memory usage, our method still achieves better performance. To verify the rationality of CCNet, we conduct extensive ablation experiments on the Cityscapes validation set with different settings.\nThe effect of the RCCA module. Tab. 2 shows the performance on the Cityscapes validation set when adopting different numbers of loops in RCCA. All experiments are conducted using ResNet-101 as the backbone. 
Besides, the input size of the training images is 769 × 769 and the size of the input feature map H of RCCA is 97 × 97. Our baseline network is the ResNet-based FCN with dilated convolutions incorporated at stages 4 and 5, i.e., the dilation rates are set to 2 and 4 for these two stages, respectively. The increments of FLOPs and memory usage are estimated for R = 1, 2, 3. We observe that adding a criss-cross attention module to the baseline, denoted R = 1, improves the performance by 2.9%, which effectively demonstrates the significance of criss-cross attention. Furthermore, increasing the number of loops from 1 to 2 improves the performance by a further 1.8%, demonstrating the effectiveness of dense contextual information. Finally, increasing the number of loops from 2 to 3 slightly improves the performance by 0.4%. Meanwhile, as the number of loops increases, the FLOPs and GPU memory usage keep increasing. These results prove that the proposed criss-cross attention can significantly improve the performance by capturing contextual information in the horizontal and vertical directions. In addition, the proposed RCCA is effective in capturing dense and global contextual information, which ultimately benefits semantic segmentation performance. To balance performance and resource usage, we choose R = 2 as the default setting in all the following experiments.\nTo further validate the effectiveness of the criss-cross module, we provide qualitative comparisons in Fig. 6. We use white circles to indicate challenging regions that are easily misclassified. It can be seen that these challenging regions are progressively corrected as the number of loops increases, which well proves the effectiveness of dense contextual information aggregation for semantic segmentation.\nThe effect of the category consistent loss. Tab. 
4 also shows the performance on the Cityscapes validation set when adopting the proposed category consistent loss, denoted \"CCL\" in the table. As we can see, adopting the category consistent loss stably brings a 0.7% mIoU gain with both ResNet-101 and ResNet-50, which proves the effectiveness of the proposed category consistent loss for semantic segmentation. To prove that the proposed piece-wise function is more robust than the original one, we run the training process 10 times with ResNet-50 for each kind of loss function. A training run is deemed to fail when the loss value becomes NaN; thus we can calculate the success rate (number of successful runs / total number of runs). The experimental results in Tab. 3 demonstrate that the piece-wise function yields a higher training success rate than the original one. Besides, the piece-wise function achieves slightly better performance than a single quadratic function. This is because we relax the penalty in Eq. 6 to reduce the numerical values and gradients, especially when the distance from the center exceeds δ_d. This relaxation makes the optimization much more stable. To get a deeper understanding of our RCCA, we visualize the learned attention masks in Fig. 7. For each input image, we select one point (green cross) and show its corresponding attention maps for R = 1 and R = 2 in columns 2 and 3, respectively. It can be observed that only contextual information from the criss-cross path of the target point is captured when R = 1. By adopting one more criss-cross module, i.e., R = 2, RCCA can finally aggregate denser and richer contextual information compared with R = 1. Besides, we observe that the attention module can capture semantic similarity and full-image dependencies. Comparison with state-of-the-art methods on ADE20K (val). This work was in part supported by NSFC (No. 61733007 and No. 
61876212), ARC DECRA DE190101315, ARC DP200100938, HUST-Horizon Computer Vision Research Center, and IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) -a research collaboration as part of the IBM AI Horizons Network." }, { "title": "Adaptive prototype learning and allocation for few-shot segmentation", "year": 2021.0, "authors": "Gen Li; Varun Jampani; Laura Sevilla-Lara; Deqing Sun; Jonghyun Kim; Joongkyu Kim", "arxiv_di": "2104.01893", "Introduction": "Humans have a remarkable ability to learn how to recognize novel objects after seeing only a handful of exemplars. On the other hand, deep learning based computer vision systems have made tremendous progress, but have largely depended on large-scale training sets. Also, deep networks mostly work with predefined classes and are incapable of generalizing to new ones. The field of few-shot learning studies the development of such learning ability in artificial learning systems, where only a few examples of the new category are available.\nIn this work, we tackle the few-shot segmentation problem, where the target is learning to segment objects in a given query image while only a few support images with ground-truth segmentation masks are available. This is a challenging problem as the test data are novel categories which do not exist in the training set, and there are usually large variations in appearance and shape between the support and query images.\nCurrent few-shot segmentation networks usually extract features from both query and support images, and then propose different approaches for feature matching and object mask transfer from support to query image. This feature matching and mask transfer are usually performed in one of two ways: prototypical feature learning or affinity learning. Prototypical learning techniques condense the masked object features in a support image into a single or few prototypical feature vectors. 
Then these techniques find the pixel locations of similar features in the query image to segment the desired object. A key advantage of prototype learning is that the prototypical features are more robust to noise than pixel features. However, prototypical features inevitably drop spatial information, which is important when there is a large variation in the object appearance between the support and query images. In addition, most prototypical learning networks [41,43,33,30,4] merely generate a single prototype by masked average pooling as shown in Figure 1(a), thus losing information as well as discriminability.\nAffinity learning techniques [40,37,32] on the other hand, directly try to match object pixels in a support image to query image pixels thereby transferring the object mask. These techniques predict cross-image pixel affinities (also called connection strengths) using learned features, which perform feature matching while preserving spatial information better than prototypical learning approaches. However, affinity learning techniques are prone to over-fitting on training data as they try to solve an under-constrained pixel-matching problem with dense affinity matrices.\nIn this work, we propose a novel prototypical learning technique that addresses some of the main shortcomings of existing ones. In particular, we want to adaptively change the number of prototypes and their spatial extent based on the image content, making the prototypes content-adaptive and spatially-aware. This adaptive, multi-prototype strategy is important to deal with large variations in object scales and shapes across different images. Intuitively, when an object occupies a large portion of the image, it carries more information and thus requires more prototypes to represent all the necessary information. On the contrary, if the object is fairly small and the proportion of the background is large, then a single or few prototypes are sufficient. 
In addition, we want the support region (spatial extent) for each of the prototypes to be adaptive to the object information that is present in the support image. Concretely, we aim to divide the support feature into several representative areas according to feature similarity. We also want to adaptively choose more important prototypes while finding similar features in a query image. As different object parts are visible in different image regions and in different query images, we want to dynamically allocate different prototypes across the query image for feature matching. For example, some parts of the object can be occluded in a query image, and we want to dynamically choose the prototypes that correspond to the visible parts in the query image.\nWe achieve this adaptive, multi-prototype learning and allocation with our Adaptive Superpixel-guided Network (ASGNet), which leverages superpixels for adapting both the number and support regions of the prototypes. The schematic illustration is presented in Figure 1(b). In particular, we propose two modules, Superpixel-guided Clustering (SGC) and Guided Prototype Allocation (GPA), which form the core of ASGNet. The SGC module performs fast feature-based superpixel extraction on the support image, and the resulting superpixel centroids are considered as prototypical features. Since superpixel shapes and numbers are adaptive to the image content, the resulting prototypes also become adaptive. The GPA module uses an attention-like mechanism to allocate the most relevant support prototype features to each pixel in a query image. In summary, the SGC module provides adaptive prototype learning both in terms of the number of prototypes and their spatial extents, and the GPA module provides adaptive allocation of the learned prototypes when processing query features. These two modules make ASGNet highly flexible and adaptive to varying object shapes and sizes, allowing it to generalize better to unseen object categories. 
We make the following contributions:\n• We propose the Adaptive Superpixel-guided Network (ASGNet), a flexible prototypical learning approach for few-shot segmentation that is adaptive to different object scales, shapes and occlusions.\n• We introduce two novel modules, namely Superpixel-guided Clustering (SGC) and Guided Prototype Allocation (GPA), for adaptive prototype extraction and allocation, respectively. They can serve as effective plug-and-play components for feature matching.\n• ASGNet achieves top-performing results with fewer parameters and less computation. Specifically, the proposed method obtains mIoUs of 64.36%/42.48% in the 5-shot setting on Pascal-5 i /COCO-20 i , exceeding the state-of-the-art by 2.40%/5.08%.", "Related_Work": "Semantic Segmentation. Most existing semantic segmentation methods are based on fully convolutional networks (FCNs) [19], which replace fully connected layers with fully convolutional ones for pixel-level prediction. Recent breakthroughs in semantic segmentation have mainly come from multi-scale feature aggregation [44,2,36,10] or attention mechanisms [7,39,15,29,3,45,42]. These methods often use dilated convolution kernels [38] and set up an encoder-decoder structure to obtain a large receptive field while maintaining the feature resolution. Although these methods achieve tremendous success, they require long training times and a large amount of pixel-level labeled ground truth to fully supervise the network. Also, in the inference stage, trained models cannot recognize new classes that do not exist in the training set. Few-shot Learning. Few-shot learning focuses on the generalization ability of models, so that they can learn to predict new classes given only a few annotated examples. Existing methods mainly concentrate on metric-learning [31,28,26] and meta-learning [34,6,23]. 
The core idea of metric learning is distance measurement, and it is generally formulated as an optimization of the distance/similarity between images or regions. In meta-learning approaches, the main idea is to define specific optimization or loss functions to achieve fast learning capability. Among these methods, the concept of prototypical networks has been extensively adopted in few-shot segmentation, as it largely reduces the computational budget while maintaining high performance. Most such methods focused on image classification, while few-shot segmentation has recently received growing attention.\nFew-shot Segmentation. Few-shot segmentation is an extension of few-shot classification, and it tackles the more challenging task of predicting a label at each pixel instead of a single label for the entire image. This research problem was introduced by Shaban et al. [24], who proposed a classical two-branch network. Later on, PL [4] introduced the idea of using prototypes. In that work, each prediction is generated by measuring the similarity between prototypes and pixels in the query image. SG-One [43] proposed masked average pooling to obtain the object-related prototype, which has been widely used in subsequent works. PANet [33] introduced a novel prototype alignment regularization to fully exploit the support knowledge, making the final prediction with only the measurement of cosine distance. Based on masked average pooling, CANet [41] expanded the prototype to the same size as the query feature and concatenated them together. This work also used an iterative optimization module to refine the segmentation result. PGNet, BriNet and DAN [40,37,32] introduced dense pixel-to-pixel connections between support and query features to maintain spatial information. More recently, PMMs [35] leveraged the expectation-maximization (EM) algorithm to generate multiple prototypes. 
However, all prototypes have the same relevance in this model, which can potentially make it sensitive to poorly matched prototypes. Instead, in our work we utilize the similarity between each prototype and the query feature to select the most relevant prototype at each pixel location.\nSuperpixel Segmentation. A superpixel is defined as a set of pixels with similar characteristics (color, texture, category). Superpixels are effective in many computer vision tasks, and have recently been used as the basic units for few-shot segmentation [18,21]. Superpixels carry more information than pixels, and can provide more compact and convenient image representations for downstream vision tasks. For more details about traditional and existing methods on superpixels, please refer to [1,27].\nOur research is inspired by maskSLIC [12] and the superpixel sampling network (SSN) [13]. MaskSLIC adapts SLIC [1] to a defined region of interest (RoI), and its main contribution is in the placement of seed points within the RoI. SSN [13] proposed the first end-to-end trainable superpixel algorithm by making the SLIC algorithm differentiable. Inspired by the insights of these two techniques, we propose masked superpixel clustering in feature space, which gathers similar features together and generates superpixel centroids as prototypes. Instead of representing the information of the entire object, superpixel centroids stand for parts of the object with similar characteristics.", "Methodology": "The key difference between few-shot segmentation and general semantic segmentation is that the categories in the training and testing sets do not intersect. This means that, at inference time, the testing set contains classes entirely unseen during training. Specifically, given a training set S train = {(I S/Q , M S/Q )} and testing set S test = {(I S/Q , M S/Q )}, the categories of the two sets do not intersect (S train ∩ S test = ∅). 
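The disjoint-class episodic setup can be sketched as follows. This is a minimal illustration, not the paper's actual data loader; the `dataset` mapping and class names are invented for the example.

```python
import random

# Minimal sketch of the disjoint-split episodic protocol: each episode
# samples one class, K support (image, mask) pairs, and one query pair.
# `dataset` maps class name -> list of (image, mask) pairs (illustrative).

def sample_episode(dataset, classes, k=1, rng=random):
    c = rng.choice(classes)                # episode class
    pairs = rng.sample(dataset[c], k + 1)  # K support pairs + 1 query, no repeats
    return c, pairs[:k], pairs[k]

train_classes = ["aeroplane", "bicycle", "bird"]    # S_train (toy)
test_classes = ["sofa", "train", "tvmonitor"]       # S_test (toy)
assert set(train_classes).isdisjoint(test_classes)  # S_train ∩ S_test = ∅
```

In a real loader the pairs would be image tensors and binary masks; the episodic structure is the part that matters here.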
Here I ∈ R H×W ×3 indicates the RGB image and M ∈ R H×W denotes the segmentation mask. Subscripts S and Q represent support and query, respectively. Following the first one-shot segmentation work [24], we align training and testing with the episodic paradigm [31]. In each episode, the input to the model is composed of a query image I Q and K samples (I i S , M i S ), i ∈ {1, ..., K} from the support set. All support and query images have the same class c. We estimate the query mask M̂ Q to approximate the ground truth mask M Q . In this section, we first introduce the two proposed modules for prototype generation and matching, namely superpixel-guided clustering (SGC) and guided prototype allocation (GPA). Then, we discuss the adaptive ability of these two modules. After that, we introduce the overall network architecture, named Adaptive Superpixel-guided Network (ASGNet), which integrates the SGC and GPA modules in one model. The overall structure is shown in Figure 2. Finally, we elaborate the k-shot setting in ASGNet.", "Dataset": "We choose Pascal-5 i and COCO-20 i , two widely used datasets for few-shot semantic segmentation, to analyze our model performance. Pascal-5 i [24] includes images from PASCAL VOC 2012 [5] and extra annotations from SBD [9]. The total of 20 categories is evenly partitioned into four splits, and model training is conducted in a cross-validation manner. Specifically, three splits are selected for training and the remaining one is used for testing. During inference, 1000 support-query pairs are randomly sampled for evaluation [24]. Different from Pascal-5 i , MSCOCO [17] is a large-scale dataset with 82,081 images in the training set. Following FWB [20], the overall 80 classes from MSCOCO [17] are evenly divided into four splits with the same cross-validation strategy. 
For more stable results, we randomly sample 20,000 pairs during evaluation [30].\nWe use mean intersection-over-union (mIoU) as the primary evaluation metric for the ablation study, as it is commonly recognized in image segmentation. In addition, for more consistent comparisons, results of foreground-background IoU (FB-IoU) are also reported.", "Conclusion": "In this paper we propose ASGNet for few-shot image segmentation. Targeting the limitations of existing single-prototype-based models, we introduce two new modules, named Superpixel-guided Clustering (SGC) and Guided Prototype Allocation (GPA), for adaptive prototype learning and allocation. Concretely, SGC aggregates similar feature vectors with feature-based superpixel clustering, and GPA allocates the most relevant prototype to each query feature element by measuring the similarity with cosine distance. Extensive experiments and ablation studies have demonstrated the superiority of ASGNet, and we achieve state-of-the-art performance on both Pascal-5 i and COCO-20 i without any additional post-processing steps.", "Experiment_and_Results": "In Figure 9, we present more qualitative results in comparison to the single prototype baseline. These qualitative results demonstrate that our model is capable of handling large variations in appearance, scale and shape between support and query images. Compared with the baseline, we perform particularly better in occluded cases, e.g., columns 3-6 of Figure 9.", "Extra": "The core idea of SGC is inspired by the superpixel sampling network (SSN) [13] and MaskSLIC [12]. SSN was the first end-to-end trainable deep network for superpixel segmentation. The key contribution of SSN is converting the nearest-neighbor operation in SLIC [1] into a differentiable one. The traditional SLIC superpixel algorithm uses k-means clustering iteratively with two steps: pixel-superpixel association and superpixel centroid update. 
Based on color similarity and proximity, pixels are assigned to different superpixel centroids. Specifically, the input image I ∈ R n×5 is usually in a five-dimensional space (labxy) with n pixels, where lab represents the pixel vector in CIELAB color space and xy indicates the pixel location. After iterative clustering, the algorithm outputs an association map where each of the n pixels is assigned to one of the m superpixels.\nThis simple method inspires us with an insightful idea, which is to aggregate the feature map into multiple superpixel centroids in a clustering manner, where the superpixel centroids serve as prototypes. Therefore, instead of computing the superpixel centroids in image space, we estimate them in feature space by clustering similar feature vectors. The whole SGC process is delineated in Algorithm 1:\nAlgorithm 1 Superpixel-guided Clustering (SGC)\nInput: support feature F_s, support mask M_s, and initial superpixel seeds S^0\n1. Concatenate the absolute coordinates to F_s\n2. Extract the masked features F_s via the support mask M_s\n3. For each iteration t:\n   a. Compute the association between each pixel p and superpixel i: $Q^t_{pi} = e^{-\|F_p - S^{t-1}_i\|^2}$\n   b. Update the superpixel centroids: $S^t_i = \frac{1}{Z^t_i} \sum_{p=1}^{N_m} Q^t_{pi} F_p$, with $Z^t_i = \sum_p Q^t_{pi}$\n4. Remove the coordinate information\nOutput: final superpixel centroids S of shape (N_sp, C)\nGiven support feature F_s ∈ R c×h×w, support mask M_s ∈ R h×w and initial superpixel seeds S^0 ∈ R c×Nsp (N_sp is the number of superpixels), we aim to obtain the final superpixel centroids, which act as multiple compact prototypes. First, we concatenate the coordinates of each pixel with the support feature map to introduce positional information. Then, we define the distance function D following SLIC [1], which consists of a feature distance and a spatial distance:\n$D = \sqrt{(d_f)^2 + (d_s/r)^2}$, (1)\nwhere d_f and d_s are the Euclidean distances for features and coordinate values, and r is a weighting factor. 
We filter out the background information with the support mask and only keep the masked features, compressing the feature map from F_s ∈ R c×h×w to F_s ∈ R c×Nm, where N_m is the number of pixels inside the support mask.\nThen we compute superpixel-based prototypes in an iterative fashion. For each iteration t, we first compute the association map Q^t between each pixel p and all superpixels:\n$Q^t_{pi} = e^{-D(F_p, S^{t-1}_i)} = e^{-\|F_p - S^{t-1}_i\|^2}$. (2)\nThen, new superpixel centroids are updated as the weighted sum of the masked features:\n$S^t_i = \frac{1}{Z^t_i} \sum_{p=1}^{N_m} Q^t_{pi} F_p$, (3)\nwhere $Z^t_i = \sum_p Q^t_{pi}$ is a normalization constant. The above process is visualized in Figure 3.\nHere, we elaborate the selection of the initial seeds. Generally, in a superpixel algorithm, an H×W image is evenly partitioned into regular grid cells of size h×w, and each grid cell is considered an initial seed (i.e., superpixel). However, this initialization is not suitable for our purposes, where a foreground mask is given for the support image and we only need to initialize the seeds inside this foreground region. To uniformly initialize seeds in the masked region, we follow MaskSLIC [12] in iteratively placing each initial seed; the pipeline is depicted in Figure 5. This seed initialization results in faster convergence of superpixel-guided clustering with only a few iterations. After extracting prototypes, previous methods mostly follow the design of CANet [41], expanding a single prototype to the same size as the query feature and concatenating them together. However, this operation gives equal guidance to every location in the query feature. To make prototype matching more adaptive to the query image content, we propose the Guided Prototype Allocation (GPA), illustrated in Figure 4. 
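The SGC iteration (Eqs. 2-3) can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: features are assumed already masked to the N_m foreground pixels, and the coordinate channels of Eq. 1 (pre-scaled by 1/r) are assumed appended beforehand.

```python
import numpy as np

# Sketch of the SGC iteration. Shapes: feats (N_m, C), seeds (N_sp, C).
def sgc(feats, seeds, iters=5):
    for _ in range(iters):
        # Soft pixel-superpixel association Q_pi = exp(-||F_p - S_i||^2)  (Eq. 2)
        d2 = ((feats[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
        q = np.exp(-d2)                    # (N_m, N_sp)
        # Centroid update S_i = (1/Z_i) * sum_p Q_pi F_p  (Eq. 3)
        z = q.sum(axis=0)                  # Z_i, shape (N_sp,)
        seeds = (q.T @ feats) / z[:, None]
    return seeds
```

With reasonably separated seeds this converges within a handful of iterations, consistent with the paper's choice of 5 iterations at inference.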
We first compute the cosine distance to measure the similarity between each prototype and each query feature element:\n$C^{x,y}_i = \frac{S_i \cdot F^{x,y}_q}{\|S_i\| \, \|F^{x,y}_q\|}, \quad i \in \{1, 2, ..., N_{sp}\}$, (4)\nwhere S_i ∈ R c×1 is the i-th superpixel centroid (prototype), and F^{x,y}_q ∈ R c×1 is the feature vector at location (x, y) of the query feature. We use this similarity information as input to a two-branch structure. The first branch computes which prototype is the most similar at each pixel location:\n$G^{x,y} = \arg\max_{i \in \{1, ..., N_{sp}\}} C^{x,y}_i$, (5)\nwhere the argmax operator yields G^{x,y}, a single index value representing a particular prototype. Putting all index values together, we get a guide map G ∈ R h×w. Then, by placing the corresponding prototype at each position of the guide map, we obtain the guide feature F_G ∈ R c×h×w to achieve pixel-wise guidance. In the other branch, the similarity information C is summed over all superpixels to obtain the probability map P. Finally, we concatenate the probability map and the guide feature with the original query feature F_Q to provide the guiding information, and thus obtain the refined query feature F̂_Q:\n$\hat{F}_Q = f(F_Q \oplus F_G \oplus P)$, (6)\nwhere ⊕ indicates concatenation along the channel dimension, and f(·) is a 1 × 1 convolution. As mentioned before, we argue that one of the key attributes of the proposed network is its adaptive ability for few-shot semantic segmentation. In Figure 6, we provide some examples illustrating the adaptive ability of SGC and GPA. In SGC, to make it adaptive to the object scale, we define a criterion to regulate the number of superpixel centroids:\n$N_{sp} = \min(N_m / S_{sp}, N_{max})$, (7)\nwhere N_m is the number of pixels in the support mask, and S_sp is the average area assigned to each initial superpixel seed, which we set to 100 empirically. 
When the foreground is fairly small, i.e., N_sp = 0 or 1, the method degrades to general masked average pooling, as shown in Figure 6(a).\nIn addition, to reduce the computational burden, we set a hyper-parameter N max to constrain the maximum number of prototypes. In GPA, we can observe its adaptability to object shape. In other words, it is resilient to occlusion. When severe occlusion exists in the query image, e.g., Figure 6(b), GPA can choose the best-matched prototype for each query feature location. Based on the above SGC and GPA modules, we propose the Adaptive Superpixel-guided Network (ASGNet) for few-shot semantic segmentation, as illustrated in Figure 2. First, the support and query images are fed into a shared CNN (pretrained on ImageNet [14]) to extract features. Then, by passing the support features through SGC with the support mask, we obtain the superpixel centroids, which are considered as prototypes. After that, for more accurate pixel-wise guidance, we adopt the GPA module to match the prototypes with the query feature. Finally, we use the feature enrichment module [30] and set up an FPN-like [16] top-down structure to introduce multi-scale information. As demonstrated in [30], transferring features from fine to coarse promotes feature interaction, so we follow their design for fast multi-scale aggregation. All the different scales are then concatenated, and each scale yields a segmentation result for computing the loss. In previous work, the k-shot setting is usually tackled via feature averaging or attention-based fusion. However, it turns out that the improvements from such strategies are minor, while requiring heavy computation. In contrast, based on the proposed SGC and GPA, our ASGNet can easily adopt an efficient k-shot strategy without collapsing support features. Specifically, for each support image-mask pair, we apply SGC to obtain the superpixel centroids. 
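The GPA branches (Eqs. 4-6) and the adaptive centroid count (Eq. 7) can be sketched as below. This is an illustrative simplification: the final 1×1 convolution f(·) is omitted, so the function returns the pre-convolution concatenation, and a floor is assumed in Eq. 7; feature values are assumed non-zero so the cosine normalization is well defined.

```python
import numpy as np

# Sketch of Guided Prototype Allocation (Eqs. 4-6), names illustrative.
def gpa(query, prototypes):
    # query: (C, h, w) query feature; prototypes: (N_sp, C) centroids from SGC
    c, h, w = query.shape
    q = query.reshape(c, -1)
    q_norm = q / np.linalg.norm(q, axis=0, keepdims=True)
    p_norm = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p_norm @ q_norm                     # cosine similarity C  (Eq. 4)
    guide = sim.argmax(axis=0)                # guide map G          (Eq. 5)
    guide_feat = prototypes[guide].T.reshape(c, h, w)   # guide feature F_G
    prob = sim.sum(axis=0).reshape(h, w)      # probability map P
    # Channel-wise concatenation F_Q ⊕ F_G ⊕ P (Eq. 6, before the 1x1 conv)
    refined = np.concatenate([query, guide_feat, prob[None]], axis=0)
    return refined, guide.reshape(h, w)

# Adaptive number of superpixel centroids (Eq. 7; floor assumed).
def num_superpixels(n_mask_pixels, s_sp=100, n_max=5):
    return min(n_mask_pixels // s_sp, n_max)
```

With two orthogonal prototypes and a query whose left/right halves match each prototype, the guide map simply selects prototype 0 on the left and prototype 1 on the right.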
By collecting all superpixel centroids together, we get the overall superpixel centroids S from k shots:\n$S = (S_1, S_2, ..., S_k)$, (8)\nwhere the total number of superpixel centroids is $N_{sp} = \sum_{i=1}^{k} N^i_{sp}$. By doing so, GPA receives a larger range of selections from multiple shots, and thus yields more accurate guidance to segment the object in the query image. We adopt ResNet [11] as the backbone network, following [41], and we concatenate block2 and block3 to generate feature maps. We train the model with an SGD optimizer on Pascal-5 i for 200 epochs and on COCO-20 i for 50 epochs. We set the initial learning rate to 0.0025 with batch size 4 on Pascal-5 i , and 0.005 with batch size 8 on COCO-20 i . The number of iterations in the SGC module is set to 10 during training and 5 during inference. We use data augmentation during training: input images are transformed with random scaling, horizontal flipping and rotation within [-10, 10] degrees, and then all images are cropped to 473 × 473 (Pascal) or 641 × 641 (COCO) as training samples. We implement our model in PyTorch and run our experiments on a workstation with an Nvidia Tesla V100 GPU. To increase the variance of the cosine measurement, we remove the ReLU layer before both support and query features, so the similarity metric is bounded in [-1, 1] rather than [0, 1]. To verify the effectiveness of the proposed modules, we conduct extensive ablation studies with a ResNet-50 backbone on Pascal-5 i . We use floating point operations (FLOPs) to represent the amount of computation, taking both additions and multiplications into account.\nNumber of Superpixel Centroids. To explore the effect of the number of superpixel centroids, we conduct experiments with different N max in 1-shot segmentation. As stated in Section 4.3, N max is a hyper-parameter that regulates the maximum number of prototypes. In particular, when N max = 1, the prototype generation process degrades to masked average pooling. 
As shown in Table 1, when N max equals 5, ASGNet achieves the best results in splits 0 and 2, as well as the best mean performance, which demonstrates the validity of superpixel-guided clustering. The results improve gradually as N max increases from 1 to 5, and then decrease slightly after that, indicating that excessive prototypes bring a negative effect and are prone to overfitting. Finally, we choose N max as 5 for both 1-shot and 5-shot segmentation. In addition, we analyze the impact of the proposed adaptive scheme on the number of superpixel centroids N sp and show the results in Table 2. Compared with using a fixed number (5) of superpixels, the results show that the adaptive setting (Eqn. 7) can reduce redundant computation while obtaining better performance. SGC and GPA. To demonstrate the effectiveness of the proposed SGC and GPA modules, we conduct diagnostic experiments on prototype generation and matching. We first implement a baseline model with the single prototype learning proposed in PFENet [30]. Then, we introduce SGC to generate multiple prototypes, and fuse them in a dense manner with reference to PMMs [35]. Finally, we replace the prototype expansion with the proposed allocation scheme. For ablations on the SGC module, as shown in Table 3, in the 1-shot setting we observe that replacing masked average pooling (MAP) with SGC can worsen the representations and lead to degraded performance, but the model benefits in the 5-shot scenario. We deem the main reason to be that excessive prototypes become highly similar on a single support sample, and cosine distance cannot tell them apart. Finally, when the GPA module is adopted for prototype matching, the performance improves by 2.70% compared to the prototype expansion, and the computational overhead is also much lower. K-shot Fusion Setting. As mentioned in Section 4.5, ASGNet is able to tackle the k-shot learning problem without collapsing the support features. 
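The k-shot collection in Eq. 8 amounts to concatenating per-shot centroids instead of averaging support features, so GPA can select among all of them. A minimal sketch (shapes illustrative; each shot i contributes an (N_sp_i, C) centroid matrix):

```python
import numpy as np

# Sketch of the k-shot strategy (Eq. 8): stack centroids from every shot,
# giving N_sp = sum_i N_sp_i prototypes for GPA to choose from.
def collect_prototypes(per_shot_centroids):
    return np.concatenate(per_shot_centroids, axis=0)
```

By contrast, feature averaging would collapse the shots into a single support representation before any prototype is extracted.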
To explore the effect of different fusion methods, we compare our k-shot setting with two other commonly used solutions: 1) feature average fusion [22] and 2) attention weighted summation [41], conducted on Pascal-5 0 . (Table 3 caption: Ablation study on prototype generation (masked average pooling (MAP) vs. SGC) and matching (Expand vs. GPA); FLOPs∆ denotes the computational cost of the prototype matching process, and K is the adaptive number of prototypes, K ≤ 5.) As reported in Table 4, our simple strategy achieves the best performance and the largest increment over the 1-shot baseline (4.82%) without additional computation. On the contrary, attention-based fusion requires a large amount of computation, yet has limited performance improvement. This demonstrates that the GPA module is highly effective when given a large number of selections. Pascal-5 i . Comparisons to state-of-the-art methods are shown in Tables 5 and 6, where two different metrics are adopted. In Table 5, with ResNet-101 as the backbone, ASGNet outperforms recent methods by a considerable margin of 2.40% in 5-shot segmentation, while being on par with state-of-the-art methods under the 1-shot setting. In Table 6, we further make comparisons on FB-IoU and the number of trainable parameters. Once again, the proposed ASGNet achieves a significant improvement over the state of the art in the 5-shot setting (75.2% vs. 73.9%), and it also has the largest performance increment of 5.0% over its 1-shot result. In addition, we have far fewer trainable parameters than other methods. In Figure 7, we show some representative segmentation examples. We observe that the proposed ASGNet can generate accurate segmentation results even when there are large variations in appearance and pose between support and query images. COCO-20 i . In Table 7, we present the performance comparison of mean IoU and FB-IoU on COCO-20 i . 
As can be seen, ASGNet achieves state-of-the-art results in both the 1-shot and 5-shot settings in terms of mean IoU, and it substantially outperforms previous methods. In particular, ASGNet achieves margins of 3.98% and 6.96% higher mean IoU over RPMMs [35] in 1-shot and 5-shot segmentation, respectively. Also, our ASGNet obtains competitive 1-shot results and top-performing 5-shot results with respect to FB-IoU. These results demonstrate that the proposed method is capable of handling more complex cases, as MSCOCO is a much more challenging dataset with diverse samples and categories.\nComparison of FB-IoU and the number of trainable parameters (∆ is the 5-shot gain over 1-shot):\nMethods | FB-IoU 1-shot | FB-IoU 5-shot | ∆ | #Params\nOSLSM [24] | 61.3 | 61.5 | 0.2 | 272.6M\nco-FCN [22] | 60.1 | 60.2 | 0.1 | 34.2M\nAMP [25] | 62.2 | 63.8 | 1.6 | 14.9M\nSG-One [43] | 63.1 | 65.9 | 2.8 | 19.0M\nCANet† [41] | 66.2 | 69.6 | 3.4 | 10.5M\nPGNet† [40] | 69.9 | 70.5 | 0.6 | -\nPANet [33] | 66.5 | 70.7 | 4.2 | 14.7M\nDAN† [32] | 71.9 | 72.3 | 0.4 | -\nSimPropNet [8] | 73.0 | 72.9 | -0.1 | -\nPFENet [30] | 73. | | |\nThank you for reading the supplementary material, in which we introduce more experimental details in Section A, and provide more qualitative visualization examples on Pascal-5 i and COCO-20 i in Section B. In Table 8, we present the detailed per-split results in terms of mean IoU. As can be seen in the table, we achieve the best performance in every split, which demonstrates the superiority of our method. In the ablation study, we use floating point operations (FLOPs) to evaluate the amount of computation and model complexity. Here, we describe the calculation in detail. For a general convolution layer, the operations for one pixel of the output feature map are calculated as\nFLOPs_pixel = C_in · K^2 + (C_in · K^2 - 1),\nwhere the first term counts multiplications and the second counts additions; C_in is the number of input channels and K is the kernel size. Then, extending to the whole feature map, we get the total number of FLOPs as\nFLOPs = H · W · C_out · [C_in · K^2 + (C_in · K^2 - 1)],\nwhere H, W is the size of the output feature map, and C out is the number of output feature channels. 
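The per-layer FLOP count described above (multiplications plus additions per output pixel) can be written as a small helper; this is an illustrative sketch of the stated counting rule, not code from the paper.

```python
# Each output pixel of a KxK convolution with C_in input channels needs
# C_in*K*K multiplications and C_in*K*K - 1 additions; scale by the
# H x W x C_out output volume to get the layer's total FLOPs.
def conv_flops(h, w, c_in, c_out, k):
    per_pixel = c_in * k * k + (c_in * k * k - 1)
    return h * w * c_out * per_pixel
```

Plugging in the paper's example (256 filters of size 1×1 on a 513×60×60 feature map) gives about 0.94 GFLOPs, matching the quoted 0.9G figure.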
For example, the FLOPs are 0.9G when using 256 1×1 convolution filters to process the merged feature F Q ∈ R 513×60×60 . To explore the effect of the number of iterations, we conduct 1-shot experiments with different iteration numbers on Pascal-5 0 . As shown in Figure 8, both FB-IoU and mIoU increase monotonically with more iterations, and it takes around 5 iterations to converge. To better understand the proposed method, we visualize each similarity map, which is obtained by computing the cosine distance between each prototype and the query feature. As presented in Figure 10, prototypes represent parts of the object with similar characteristics, which makes the network more adaptive and discriminative." }, { "title": "Microsoft coco: Common objects in context", "year": 2014.0, "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick", "arxiv_di": "1405.0312", "Introduction": "One of the primary goals of computer vision is the understanding of visual scenes. Scene understanding involves numerous tasks, including recognizing what objects are present, localizing the objects in 2D and 3D, determining the objects' and scene's attributes, characterizing relationships between objects and providing a semantic description of the scene. The current object classification and detection datasets [1], [2], [3], [4] help us explore the first challenges related to scene understanding. For instance, the ImageNet dataset [1], which contains an unprecedented number of images, has recently enabled breakthroughs in both object classification and detection research [5], [6], [7]. The community has also created datasets containing object attributes [8], scene attributes [9], keypoints [10], and 3D scene information [11]. 
This leads us to the obvious question: what datasets will best continue our advance towards our ultimate goal of scene understanding?
We introduce a new large-scale dataset that addresses three core research problems in scene understanding: detecting non-iconic views (or non-canonical perspectives [12]) of objects, contextual reasoning between objects and the precise 2D localization of objects. For many categories of objects, there exists an iconic view. For example, when performing a web-based image search for the object category "bike," the top-ranked retrieved examples appear in profile, unobstructed near the center of a neatly composed photo. We posit that current recognition systems perform fairly well on iconic views, but struggle to recognize objects otherwise (in the background, partially occluded, amid clutter [13]), reflecting the composition of actual everyday scenes. We verify this experimentally; when evaluated on everyday scenes, models trained on our data perform better than those trained with prior datasets. A challenge is finding natural images that contain multiple objects. The identity of many objects can only be resolved using context, due to small size or ambiguous appearance in the image. To push research in contextual reasoning, images depicting scenes [3] rather than objects in isolation are necessary. Finally, we argue that detailed spatial understanding of object layout will be a core component of scene analysis. An object's spatial location can be defined coarsely using a bounding box [2] or with a precise pixel-level segmentation [14], [15], [16]. As we demonstrate, to measure either kind of localization performance it is essential for the dataset to have every instance of every object category labeled and fully segmented.

Fig. 1: We introduce a large, richly-annotated dataset comprised of images depicting complex everyday scenes of common objects in their natural context.
Our dataset is unique in its annotation of instance-level segmentation masks, Fig. 1.
To create a large-scale dataset that accomplishes these three goals we employed a novel pipeline for gathering data with extensive use of Amazon Mechanical Turk. First and most importantly, we harvested a large set of images containing contextual relationships and non-iconic object views. We accomplished this using a surprisingly simple yet effective technique that queries for pairs of objects in conjunction with images retrieved via scene-based queries [17], [3]. Next, each image was labeled as containing particular object categories using a hierarchical labeling approach [18]. For each category found, the individual instances were labeled, verified, and finally segmented. Given the inherent ambiguity of labeling, each of these stages has numerous tradeoffs that we explored in detail.
The Microsoft Common Objects in COntext (MS COCO) dataset contains 91 common object categories with 82 of them having more than 5,000 labeled instances, Fig. 6. In total the dataset has 2,500,000 labeled instances in 328,000 images. In contrast to the popular ImageNet dataset [1], COCO has fewer categories but more instances per category. This can aid in learning detailed object models capable of precise 2D localization. The dataset is also significantly larger in number of instances per category than the PASCAL VOC [2] and SUN [3] datasets. Additionally, a critical distinction between our dataset and others is the number of labeled instances per image which may aid in learning contextual information, Fig. 5. MS COCO contains considerably more object instances per image (7.7) as compared to ImageNet (3.0) and PASCAL (2.3).
In contrast, the SUN dataset, which contains significant contextual information, has over 17 objects and "stuff" per image but considerably fewer object instances overall.
An abridged version of this work appeared in [19].

Related Work

Throughout the history of computer vision research datasets have played a critical role. They not only provide a means to train and evaluate algorithms, they drive research in new and more challenging directions. The creation of ground truth stereo and optical flow datasets [20], [21] helped stimulate a flood of interest in these areas. The early evolution of object recognition datasets [22], [23], [24] facilitated the direct comparison of hundreds of image recognition algorithms while simultaneously pushing the field towards more complex problems. Recently, the ImageNet dataset [1] containing millions of images has enabled breakthroughs in both object classification and detection research using a new class of deep learning algorithms [5], [6], [7]. Datasets related to object recognition can be roughly split into three groups: those that primarily address object classification, object detection and semantic scene labeling. We address each in turn.

Dataset

Datasets have spurred the advancement of numerous fields in computer vision. Some notable datasets include the Middlebury datasets for stereo vision [20], multi-view stereo [36] and optical flow [21]. The Berkeley Segmentation Data Set (BSDS500) [37] has been used extensively to evaluate both segmentation and edge detection algorithms. Datasets have also been created to recognize both scene [9] and object attributes [8], [38]. Indeed, numerous areas of vision have benefited from challenging datasets that helped catalyze progress. Next, we analyze the properties of the Microsoft Common Objects in COntext (MS COCO) dataset in comparison to several other popular datasets. These include ImageNet [1], PASCAL VOC 2012 [2], and SUN [3].
Each of these datasets varies significantly in size, list of labeled categories and types of images. ImageNet was created to capture a large number of object categories, many of which are fine-grained. SUN focuses on labeling scene types and the objects that commonly occur in them. Finally, PASCAL VOC's primary application is object detection in natural images. MS COCO is designed for the detection and segmentation of objects occurring in their natural context.
The number of instances per category for all 91 categories is shown in Fig. 5(a). A summary of the datasets showing the number of object categories and the number of instances per category is shown in Fig. 5(d). While MS COCO has fewer categories than ImageNet and SUN, it has more instances per category which we hypothesize will be useful for learning complex models capable of precise localization. In comparison to PASCAL VOC, MS COCO has both more categories and instances.
An important property of our dataset is that we strive to find non-iconic images containing objects in their natural context. The amount of contextual information present in an image can be estimated by examining the average number of object categories and instances per image, Fig. 5(b,c). For ImageNet we plot the object detection validation set, since the training data only has a single object labeled. On average our dataset contains 3.5 categories and 7.7 instances per image. In comparison, ImageNet and PASCAL VOC both have fewer than 2 categories and 3 instances per image on average. Another interesting observation is that only 10% of the images in MS COCO have a single category per image; in comparison, over 60% of images contain a single object category in ImageNet and PASCAL VOC. As expected, the SUN dataset has the most contextual information since it is scene-based and uses an unrestricted set of categories.
Finally, we analyze the average size of objects in the datasets.
Generally, smaller objects are harder to recognize and require more contextual reasoning. As shown in Fig. 5(e), the average size of objects is smaller for both MS COCO and SUN. To accommodate a faster release schedule, we split the MS COCO dataset into two roughly equal parts. The first half of the dataset was released in 2014; the second half will be released in 2015. We took care to minimize the chance of near-duplicate images existing across splits by explicitly removing near duplicates (detected with [43]) and grouping images by photographer and date taken.
Following established protocol, annotations for train and validation data will be released, but not for test. We are currently finalizing the evaluation server for automatic evaluation on the test set. A full discussion of evaluation metrics will be added once the evaluation server is complete.
Note that we have limited the 2014 release to a subset of 80 categories. We did not collect segmentations for the following 11 categories: hat, shoe, eyeglasses (too many instances), mirror, window, door, street sign (ambiguous and difficult to label), plate, desk (due to confusion with bowl and dining table, respectively) and blender, hair brush (too few instances). We may add segmentations for some of these categories in the cumulative 2015 release.

Conclusion

We introduced a new dataset for detecting and segmenting objects found in everyday life in their natural environments. Utilizing over 70,000 worker hours, a vast collection of object instances was gathered, annotated and organized to drive the advancement of object detection and segmentation algorithms. Emphasis was placed on finding non-iconic images of objects in natural environments and varied viewpoints. Dataset statistics indicate the images contain rich contextual information with many objects present per image. There are several promising directions for future annotations on our dataset.
We currently only label "things", but labeling "stuff" may also provide significant contextual information that may be useful for detection. Many object detection algorithms benefit from additional annotations, such as the amount an instance is occluded [4] or the location of keypoints on the object [10]. Finally, our dataset could provide a good benchmark for other types of labels, including scene types [3], attributes [9], [8] and full sentence written descriptions [51]. We are actively exploring adding various such annotations.
To download and learn more about MS COCO please see the project website. MS COCO will evolve and grow over time; up to date information is available online.

Instance Spotting Fig. 12(b) depicts our interface for labeling all instances of a given category. The interface is initialized with a blinking icon specifying a single instance obtained from the previous category-labeling stage. Workers are then asked to spot and click on up to 10 total instances of the given category, placing a single cross anywhere within the region of each instance. In order to spot small objects, we found it crucial to include a "magnifying glass" feature that doubles the resolution of a worker's currently selected region.
Instance Segmentation Fig. 12(c) shows our user interface for instance segmentation. We modified source code from the OpenSurfaces project [16], which defines a single AMT task for segmenting multiple regions of a homogeneous material in real-scenes. In our case, we define a single task for segmenting a single object instance labeled from the previous annotation stage. To aid the segmentation process, we added a visualization of the object category icon to remind workers of the category to be segmented. Crucially, we also added zoom-in functionality to allow for efficient annotation of small objects and curved boundaries.
In the previous annotation stage, to ensure high coverage of all object instances, we used multiple workers to label all instances per image. We would like to segment all such object instances, but instance annotations across different workers may refer to different or redundant instances. To resolve this correspondence ambiguity, we sequentially post AMT segmentation tasks, ignoring instance annotations that are already covered by an existing segmentation mask.
Segmentation Verification Fig. 12(d) shows our user interface for segmentation verification. Due to the time consuming nature of the previous task, each object instance is segmented only once. The purpose of the verification stage is therefore to ensure that each segmented instance from the previous stage is of sufficiently high quality. Workers are shown a grid of 64 segmentations and asked to select poor quality segmentations. Four of the 64 segmentations are known to be bad; a worker must identify 3 of the 4 known bad segmentations to complete the task. Each segmentation is initially shown to 3 annotators. If any of the annotators indicates the segmentation is bad, it is shown to 2 additional workers. At this point, any segmentation that doesn't receive at least 4 of 5 favorable votes is discarded and the corresponding instance added back to the pool of unsegmented objects. Examples of borderline cases that either passed (4/5 votes) or were rejected (3/5 votes) are shown in Fig. 15.
Crowd Labeling Fig. 12(e) shows our user interface for crowd labeling. As discussed, for images containing ten object instances or fewer of a given category, every object instance was individually segmented. In some images, however, the number of instances of a given category is much higher. In such cases crowd labeling provided a more efficient method for annotation.
Rather than requiring workers to draw exact polygonal masks around each object instance, we allow workers to "paint" all pixels belonging to the category in question. Crowd labeling is similar to semantic segmentation as object instances are not individually identified. We emphasize that crowd labeling is only necessary for images containing more than ten object instances of a given category.

Image classification The task of object classification requires binary labels indicating whether objects are present in an image; see Fig. 1(a). Early datasets of this type comprised images containing a single object with blank backgrounds, such as the MNIST handwritten digits [25] or COIL household objects [26]. Caltech 101 [22] and Caltech 256 [23] marked the transition to more realistic object images retrieved from the internet while also increasing the number of object categories to 101 and 256, respectively. Popular datasets in the machine learning community due to the larger number of training examples, CIFAR-10 and CIFAR-100 [27] offered 10 and 100 categories from a dataset of tiny 32 × 32 images [28]. While these datasets contained up to 60,000 images and hundreds of categories, they still only captured a small fraction of our visual world.
Recently, ImageNet [1] made a striking departure from the incremental increase in dataset sizes. They proposed the creation of a dataset containing 22k categories with 500-1000 images each. Unlike previous datasets containing entry-level categories [29], such as "dog" or "chair," like [28], ImageNet used the WordNet Hierarchy [30] to obtain both entry-level and fine-grained [31] categories. Currently, the ImageNet dataset contains over 14 million labeled images and has enabled significant advances in image classification [5], [6], [7].
Object detection Detecting an object entails both stating that an object belonging to a specified class is present, and localizing it in the image.
The location of an object is typically represented by a bounding box, Fig. 1(b). Early algorithms focused on face detection [32] using various ad hoc datasets. Later, more realistic and challenging face detection datasets were created [33]. Another popular challenge is the detection of pedestrians for which several datasets have been created [24], [4]. The Caltech Pedestrian Dataset [4] contains 350,000 labeled instances with bounding boxes.\nFor the detection of basic object categories, a multiyear effort from 2005 to 2012 was devoted to the creation and maintenance of a series of benchmark datasets that were widely adopted. The PASCAL VOC [2] datasets contained 20 object categories spread over 11,000 images. Over 27,000 object instance bounding boxes were labeled, of which almost 7,000 had detailed segmentations. Recently, a detection challenge has been created from 200 object categories using a subset of 400,000 images from ImageNet [34]. An impressive 350,000 objects have been labeled using bounding boxes.\nSince the detection of many objects such as sunglasses, cellphones or chairs is highly dependent on contextual information, it is important that detection datasets contain objects in their natural environments. In our dataset we strive to collect images rich in contextual information. The use of bounding boxes also limits the accuracy for which detection algorithms may be evaluated. We propose the use of fully segmented instances to enable more accurate detector evaluation. The task of labeling semantic objects in a scene requires that each pixel of an image be labeled as belonging to a category, such as sky, chair, floor, street, etc. In contrast to the detection task, individual instances of objects do not need to be segmented, Fig. 1(c). This enables the labeling of objects for which individual instances are hard to define, such as grass, streets, or walls. Datasets exist for both indoor [11] and outdoor [35], [14] scenes. 
Some datasets also include depth information [11]. Similar to semantic scene labeling, our goal is to measure the pixel-wise accuracy of object labels. However, we also aim to distinguish between individual instances of an object, which requires a solid understanding of each object's extent.
A novel dataset that combines many of the properties of both object detection and semantic scene labeling datasets is the SUN dataset [3] for scene understanding. SUN contains 908 scene categories from the WordNet dictionary [30] with segmented objects. The 3,819 object categories span those common to object detection datasets (person, chair, car) and to semantic scene labeling (wall, sky, floor). Since the dataset was collected by finding images depicting various scene types, the number of instances per object category exhibits the long tail phenomenon. That is, a few categories have a large number of instances (wall: 20,213, window: 16,080, chair: 7,971) while most have a relatively modest number of instances (boat: 349, airplane: 179, floor lamp: 276). In our dataset, we ensure that each object category has a significant number of instances, Fig. 5.
We next describe how the object categories and candidate images are selected. The selection of object categories is a non-trivial exercise. The categories must form a representative set of all categories, be relevant to practical applications and occur with high enough frequency to enable the collection of a large dataset. Other important decisions are whether to include both "thing" and "stuff" categories [39] and whether fine-grained [31], [1] and object-part categories should be included. "Thing" categories include objects for which individual instances may be easily labeled (person, chair, car), whereas "stuff" categories include materials and objects with no clear boundaries (sky, street, grass).
Since we are primarily interested in precise localization of object instances, we decided to only include \"thing\" categories and not \"stuff.\" However, since \"stuff\" categories can provide significant contextual information, we believe the future labeling of \"stuff\" categories would be beneficial.\nThe specificity of object categories can vary significantly. For instance, a dog could be a member of the \"mammal\", \"dog\", or \"German shepherd\" categories. To enable the practical collection of a significant number of instances per category, we chose to limit our dataset to entry-level categories, i.e. category labels that are commonly used by humans when describing objects (dog, chair, person). It is also possible that some object categories may be parts of other object categories. For instance, a face may be part of a person. We anticipate the inclusion of object-part categories (face, hands, wheels) would be beneficial for many real-world applications.\nWe used several sources to collect entry-level object categories of \"things.\" We first compiled a list of categories by combining categories from PASCAL VOC [2] and a subset of the 1200 most frequently used words that denote visually identifiable objects [40]. To further augment our set of candidate categories, several children ranging in ages from 4 to 8 were asked to name every object they see in indoor and outdoor environments. The final 272 candidates may be found in the appendix. Finally, the co-authors voted on a 1 to 5 scale for each category taking into account how commonly they occur, their usefulness for practical applications, and their diversity relative to other categories. The final selection of categories attempts to pick categories with high votes, while keeping the number of categories per supercategory (animals, vehicles, furniture, etc.) balanced. Categories for which obtaining a large number of instances (greater than 5,000) was difficult were also removed. 
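The paper does not give an explicit algorithm for turning the 1-to-5 votes into the final balanced category list, so the following toy sketch shows one possible greedy interpretation. The data, the `cap` parameter, and the function name are all illustrative assumptions, not details from the paper.

```python
from collections import Counter

def select_categories(candidates, k, cap):
    """Greedy sketch of vote-based selection: repeatedly take the
    highest-voted remaining category whose supercategory has not yet
    filled its 'cap' slots, until k categories are chosen.
    'candidates' is a list of (name, supercategory, mean_vote) tuples."""
    chosen, per_super = [], Counter()
    for name, sup, vote in sorted(candidates, key=lambda c: -c[2]):
        if len(chosen) == k:
            break
        if per_super[sup] < cap:       # keep supercategories balanced
            chosen.append(name)
            per_super[sup] += 1
    return chosen

cands = [("dog", "animal", 4.8), ("cat", "animal", 4.7),
         ("horse", "animal", 4.5), ("car", "vehicle", 4.9),
         ("bus", "vehicle", 4.2), ("chair", "furniture", 4.0)]
print(select_categories(cands, k=4, cap=2))
# ['car', 'dog', 'cat', 'bus'] -- 'horse' is skipped: 'animal' is full
```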
To ensure backwards compatibility all categories from PASCAL VOC [2] are also included. Our final list of 91 proposed categories is in Fig. 5(a). Given the list of object categories, our next goal was to collect a set of candidate images. We may roughly group images into three types, Fig. 2: iconic-object images [41], iconic-scene images [3] and non-iconic images. Typical iconic-object images have a single large object in a canonical perspective centered in the image, Fig. 2(a). Iconic-scene images are shot from canonical viewpoints and commonly lack people, Fig. 2(b). Iconic images have the benefit that they may be easily found by directly searching for specific categories using Google or Bing image search. While iconic images generally provide high quality object instances, they can lack important contextual information and non-canonical viewpoints.\nOur goal was to collect a dataset such that a majority of images are non-iconic, Fig. 2(c). It has been shown that datasets containing more non-iconic images are better at generalizing [42]. We collected non-iconic images using two strategies. First as popularized by PASCAL VOC [2], we collected images from Flickr which tends to have fewer iconic images. Flickr contains photos uploaded by amateur photographers with searchable metadata and keywords. Second, we did not search for object categories in isolation. A search for \"dog\" will tend to return iconic images of large, centered dogs. However, if we searched for pairwise combinations of object categories, such as \"dog + car\" we found many more non-iconic images. Surprisingly, these images typically do not just contain the two categories specified in the search, but numerous other categories as well. To further supplement our dataset we also searched for scene/object category pairs, see the appendix. We downloaded at most 5 photos taken by a single photographer within a short time window. 
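The pairwise search strategy described above is straightforward to script. This is a minimal sketch; the exact query format ("dog + car") follows the example in the text, while everything else is an illustrative assumption.

```python
from itertools import combinations

def pairwise_queries(categories):
    """Generate 'dog + car'-style image-search queries from all pairs of
    object categories, following the observation that pairwise searches
    surface more non-iconic images than single-category searches."""
    return [f"{a} + {b}" for a, b in combinations(categories, 2)]

print(pairwise_queries(["dog", "car", "bench"]))
# ['dog + car', 'dog + bench', 'car + bench']
```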
In the rare cases in which enough images could not be found, we searched for single categories and performed an explicit filtering stage to remove iconic images. The result is a collection of 328,000 images with rich contextual relationships between objects as shown in Figs. 2(c) and 6. We next describe how we annotated our image collection. Due to our desire to label over 2.5 million object instances, the design of a cost efficient yet high quality annotation pipeline was critical. The annotation pipeline is outlined in Fig. 3. For all crowdsourcing tasks we used workers on Amazon's Mechanical Turk (AMT). Our user interfaces are described in detail in the appendix. Note that, since the original version of this work [19], we have taken a number of steps to further improve the quality of the annotations. In particular, we have increased the number of annotators for the category labeling and instance spotting stages to eight. We also added a stage to verify the instance segmentations. The first task in annotating our dataset is determining which object categories are present in each image, Fig. 3(a). Since we have 91 categories and a large number of images, asking workers to answer 91 binary classification questions per image would be prohibitively expensive. Instead, we used a hierarchical approach [18]. We group the object categories into 11 super-categories (see the appendix). For a given image, a worker was presented with each group of categories in turn and asked to indicate whether any instances exist for that super-category. This greatly reduces the time needed to classify the various categories. For example, a worker may easily determine no animals are present in the image without having to specifically look for cats, dogs, etc. If a worker determines instances from the super-category (animal) are present, for each subordinate category (dog, cat, etc.) present, the worker must drag the category's icon onto the image over one instance of the category. 
The placement of these icons is critical for the following stage. We emphasize that only a single instance of each category needs to be annotated in this stage. To ensure high recall, 8 workers were asked to label each image. A category is considered present if any worker indicated the category; false positives are handled in subsequent stages. A detailed analysis of performance is presented in §4.4. This stage took ∼20k worker hours to complete. In the next stage all instances of the object categories in an image were labeled, Fig. 3(b). In the previous stage each worker labeled one instance of a category, but multiple object instances may exist. Therefore, for each image, a worker was asked to place a cross on top of each instance of a specific category found in the previous stage. To boost recall, the location of the instance found by a worker in the previous stage was shown to the current worker. Such priming helped workers quickly find an initial instance upon first seeing the image. The workers could also use a magnifying glass to find small instances. Each worker was asked to label at most 10 instances of a given category per image. Each image was labeled by 8 workers for a total of ∼10k worker hours. Our final stage is the laborious task of segmenting each object instance, Fig. 3(c). For this stage we modified the excellent user interface developed by Bell et al. [16] for image segmentation. Our interface asks the worker to segment an object instance specified by a worker in the previous stage. If other instances have already been segmented in the image, those segmentations are shown to the worker. A worker may also indicate there are no object instances of the given category in the image (implying a false positive label from the previous stage) or that all object instances are already segmented.\nSegmenting 2,500,000 object instances is an extremely time consuming task requiring over 22 worker hours per 1,000 segmentations. 
To minimize cost we only had a single worker segment each instance. However, when first completing the task, most workers produced only coarse instance outlines. As a consequence, we required all workers to complete a training task for each object category. The training task required workers to segment an object instance. Workers could not complete the task until their segmentation adequately matched the ground truth. The use of a training task vastly improved the quality of the workers (approximately 1 in 3 workers passed the training stage) and resulting segmentations. Example segmentations may be viewed in Fig. 6.\nWhile the training task filtered out most bad workers, we also performed an explicit verification step on each segmented instance to ensure good quality. Multiple workers (3 to 5) were asked to judge each segmentation and indicate whether it matched the instance well or not. Segmentations of insufficient quality were discarded and the corresponding instances added back to the pool of unsegmented objects. Finally, some approved workers consistently produced poor segmentations; all work obtained from such workers was discarded.\nFor images containing 10 object instances or fewer of a given category, every instance was individually segmented (note that in some images up to 15 instances were segmented). Occasionally the number of instances is drastically higher; for example, consider a dense crowd of people or a truckload of bananas. In such cases, many instances of the same category may be tightly grouped together and distinguishing individual instances is difficult. After 10-15 instances of a category were segmented in an image, the remaining instances were marked as \"crowds\" using a single (possibly multipart) segment. For the purpose of evaluation, areas marked as crowds will be ignored and not affect a detector's score. Details are given in the appendix. 
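The adaptive verification vote used above (3 initial judgments, escalated to 5 when any judgment is negative, kept only with at least 4 of 5 favorable votes, as detailed in the interface description earlier) can be sketched as follows. The function name and boolean vote encoding are illustrative, not from the paper.

```python
def verify_segmentation(votes):
    """Adaptive quality check: a mask is first judged by 3 workers; if any
    of them flags it as bad it is shown to 2 more, and it is kept only
    with at least 4 of 5 favorable votes. 'votes' is the full sequence of
    True (good) / False (bad) judgments in collection order."""
    if all(votes[:3]):              # unanimously good after 3 votes
        return True
    return sum(votes[:5]) >= 4      # escalate to 5 votes, keep iff >= 4/5

print(verify_segmentation([True, True, True]))                # kept
print(verify_segmentation([True, False, True, False, True]))  # discarded (3/5)
```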
We analyzed crowd worker quality on the category labeling task by comparing to dedicated expert workers, see Fig. 4(a). We compared precision and recall of seven expert workers (co-authors of the paper) with the results obtained by taking the union of one to ten AMT workers. Ground truth was computed using majority vote of the experts. For this task recall is of primary importance as false positives could be removed in later stages. Fig. 4(a) shows that the union of 8 AMT workers, the same number as was used to collect our labels, achieved greater recall than any of the expert workers. Note that worker recall saturates at around 9-10 AMT workers.
Object category presence is often ambiguous. Indeed as Fig. 4(a) indicates, even dedicated experts often disagree on object presence, e.g. due to inherent ambiguity in the image or disagreement about category definitions. For any unambiguous example having a probability of over 50% of being annotated, the probability of all 8 annotators missing such a case is at most 0.5^8 ≈ 0.004. Additionally, by observing how recall increased as we added annotators, we estimate that in practice over 99% of all object categories not later rejected as false positives are detected given 8 annotators. Note that a similar analysis may be done for instance spotting in which 8 annotators were also used.
Finally, Fig. 4(b) re-examines precision and recall of AMT workers on category labeling on a much larger set of images. The number of workers (circle size) and average number of jobs per worker (circle color) is shown for each precision/recall range. Unlike in Fig. 4(a), we used a leave-one-out evaluation procedure where a category was considered present if any of the remaining workers named the category. Therefore, overall worker precision is substantially higher. Workers who completed the most jobs also have the highest precision; all jobs from workers below the black line were rejected.
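The back-of-the-envelope recall bound above assumes annotators act independently; it can be checked in one line. A minimal sketch (the function name is illustrative):

```python
def miss_probability(p_annotate, n_workers):
    """Chance that all n independent workers miss a category that each
    would annotate with probability p_annotate, as in the estimate above."""
    return (1 - p_annotate) ** n_workers

# The paper's bound: a category each worker tags with >50% probability
# is missed by all 8 workers with probability at most 0.5^8.
print(miss_probability(0.5, 8))  # 0.00390625, i.e. about 0.004
```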
We added five written caption descriptions to each image in MS COCO. A full description of the caption statistics and how they were gathered will be provided shortly in a separate publication.
Bounding-box detection For the following experiments we take a subset of 55,000 images from our dataset and obtain tight-fitting bounding boxes from the annotated segmentation masks. (These preliminary experiments were performed before our final split of the dataset into train, val, and test. Baselines on the actual test set will be added once the evaluation server is complete.) We evaluate models tested on both MS COCO and PASCAL, see Table 1. We evaluate two different models. DPMv5-P: the latest implementation of [44] (release 5 [45]) trained on PASCAL VOC 2012. DPMv5-C: the same implementation trained on COCO (5000 positive and 10000 negative images). We use the default parameter settings for training COCO models.
If we compare the average performance of DPMv5-P on PASCAL VOC and MS COCO, we find that average performance on MS COCO drops by nearly a factor of 2, suggesting that MS COCO does include more difficult (non-iconic) images of objects that are partially occluded, amid clutter, etc. We notice a similar drop in performance. Table 1 shows DPMv5-C still outperforms DPMv5-P in 6 out of 20 categories when testing on PASCAL VOC. In some categories (e.g., dog, cat, people), models trained on MS COCO perform worse, while on others (e.g., bus, tv, horse), models trained on our data are better.
Consistent with past observations [46], we find that including difficult (non-iconic) images during training may not always help. Such examples may act as noise and pollute the learned model if the model is not rich enough to capture such appearance variability.
Our dataset allows for the exploration of such issues.\nTorralba and Efros [42] proposed a metric to measure cross-dataset generalization which computes the 'performance drop' for models that train on one dataset and test on another. The performance difference of the DPMv5-P models across the two datasets is 12.7 AP, while the DPMv5-C models only have a 7.7 AP difference. Moreover, overall performance is much lower on MS COCO. These observations support two hypotheses: 1) MS COCO is significantly more difficult than PASCAL VOC and 2) models trained on MS COCO can generalize better to easier datasets such as PASCAL VOC given more training data. To gain insight into the differences between the datasets, see the appendix for visualizations of person and chair examples from the two datasets.\nGenerating segmentations from detections We now describe a simple method for generating object bounding boxes and segmentation masks, following prior work that produces segmentations from object detections [47], [48], [49], [50]. We learn aspect-specific pixel-level segmentation masks for different categories. These are readily learned by averaging together segmentation masks from aligned training instances. We learn different masks corresponding to the different mixtures in our DPM detector. Sample masks are visualized in Fig. 7.\nFig. 7: We visualize our mixture-specific shape masks. We paste thresholded shape masks on each candidate detection to generate candidate segments.\nFig. 8: Evaluating instance detections with segmentation masks versus bounding boxes. Bounding boxes are a particularly crude approximation for articulated objects; in this case, the majority of the pixels in the (blue) tight-fitting bounding box do not lie on the object. Our (green) instance-level segmentation allows for a more accurate measure of object detection and localization.\nDetection evaluated by segmentation Segmentation is a challenging task even assuming a detector reports correct results, as it requires fine localization of object part boundaries. To decouple segmentation evaluation from detection correctness, we benchmark segmentation quality using only correct detections. Specifically, given that the detector reports a correct bounding box, how well does the predicted segmentation of that object match the ground truth segmentation? As the criterion for correct detection, we impose the standard requirement that intersection over union between predicted and ground truth boxes is at least 0.5. We then measure the intersection over union of the predicted and ground truth segmentation masks, see Fig. 8. To establish a baseline for our dataset, we project learned DPM part masks onto the image to create segmentation masks. Fig. 9 shows results of this segmentation baseline for the DPM learned on the 20 PASCAL categories and tested on our dataset.\nOur dataset contains 91 object categories (the 2014 release contains segmentation masks for 80 of these categories). We began with a list of frequent object categories taken from WordNet, LabelMe, SUN and other sources as well as categories derived from a free recall experiment with young children. The authors then voted on the resulting 272 categories with the aim of sampling a diverse and computationally challenging set of categories; see §3 for details. The list in Table 2 enumerates those 272 categories in descending order of votes. As discussed, the final selection of 91 categories attempts to pick categories with high votes, while keeping the number of categories per super-category (animals, vehicles, furniture, etc.) balanced.\nAs discussed in §3, in addition to using object-object queries to gather non-iconic images, object-scene queries also proved effective. For this task we selected a subset of 40 scene categories from the SUN dataset that frequently co-occurred with object categories of interest.
Table 3 enumerates the 40 scene categories (evenly split between indoor and outdoor scenes). In the appendix, we provide detailed descriptions of the AMT user interfaces and the full list of 272 candidate categories (from which our final 91 were selected) and 40 scene categories (used for scene-object queries). We describe and visualize our user interfaces for collecting non-iconic images, category labeling, instance spotting, instance segmentation, segmentation verification and finally crowd labeling.\nNon-iconic Image Collection Flickr provides a rich image collection associated with text captions. However, captions might be inaccurate and images may be iconic. To construct a high-quality set of non-iconic images, we first collected candidate images by searching for pairs of object categories, or pairs of object and scene categories. We then created an AMT filtering task that allowed users to remove invalid or iconic images from a grid of 128 candidates, Fig. 10. We found the choice of instructions to be crucial, and so provided users with examples of iconic and non-iconic images. Some categories rarely co-occurred with others. In such cases, we collected candidates using only the object category as the search term, but applied a similar filtering step, Fig. 10(b).\nCategory Labeling Fig. 12(a) shows our interface for category labeling. We designed the labeling task to encourage workers to annotate all categories present in the image. Workers annotate categories by dragging and dropping icons from the bottom category panel onto a corresponding object instance. Only a single instance of each object category needs to be annotated in the image. We group icons by the super-categories from Fig. 11, allowing workers to quickly skip categories that are unlikely to be present.
}, { "title": "Anti-aliasing semantic reconstruction for few-shot semantic segmentation", "year": 2021.0, "authors": "Binghao Liu; Yao Ding; Jianbin Jiao; Xiangyang Ji; Qixiang Ye", "arxiv_di": "2106.00184", "Introduction": "Over the past few years, we have witnessed the substantial progress of object detection and semantic segmentation [45,46,28,48,1,14]. This can be attributed to convolutional neural networks (CNNs) with excellent representation capability and the availability of large datasets with concise mask annotations, especially. However, annotating a large number of object masks is expensive and infeasible in some scenarios (e.g., computer-aided diagnosis systems). Few-shot semantic segmentation, which aims to generalize a model pre-trained on base classes of suffi-Figure 1. Comparison of conventional methods and our ASR method. While conventional methods represent novel classes (e.g., cat and dog) within the feature space specified for base classes without considering the semantic aliasing, ASR implements semantic reconstruction by constructing a class-level semantic space where basis vectors are orthogonal and the semantic interference is reduced. cient data to novel classes with only a few examples, has emerged as a promising technique.\nIn few-shot segmentation, the generalization process is to utilize features learned upon base classes with sufficient training data to represent novel classes. However, for the overlapped semantics among features, the intricate manyto-many correspondence between features and classes inevitably causes semantic aliasing 1 between novel classes when they have similar compositions of semantic concepts. For example, a cat and a dog appear in the same query im-age are confused because they correspond to the similar features of the base classes for bears and sheep, which results in false segmentation, Fig. 
1(left).\nIn this paper, we reformulate the few-shot segmentation task as a semantic reconstruction problem and propose an anti-aliasing semantic reconstruction (ASR) approach. To fulfil semantic reconstruction, we first span a class-level semantic space. During the training phase, convolutional feature channels are categorized into channel groups, each of which is optimized for constructing a basis vector corresponding to a base class. This suppresses the semantic overlap between feature channels. We further introduce a contrastive loss to enhance the orthogonality of basis vectors and improve their representation capability. In this space, the semantic vectors of novel classes are represented by weighted basis vector reconstruction. Due to the potential class-level semantic similarity, a novel class will be reconstructed by its semantic-proximal base classes. In this way, novel classes inherit the orthogonality of base classes and are distinguishable, Fig. 1(middle right).\nTo suppress interfering semantics from the background or other classes within the same query image, we further propose the semantic filtering module, which projects query feature vectors onto the reconstructed support vector. As the support images have precise semantics guided by the ground-truth annotations, the projection operation divorces interfering semantics, which facilitates the activation of target object classes, Fig. 1(bottom right). In the metric learning framework, ASR implements semantic anti-aliasing between novel classes and within query images, providing a systematic solution for few-shot learning, Fig. 2.
Such anti-aliasing can be analyzed from the perspectives of vector orthogonality and sparse reconstruction, making ASR an interpretable approach.\nThe contributions of this study include:\n• We propose a systematic and interpretable anti-aliasing semantic reconstruction (ASR) approach for few-shot semantic segmentation, by converting the base class features into a series of basis vectors for semantic reconstruction.\n• We propose semantic span, which reduces the semantic aliasing between base classes for precise novel class reconstruction. Based on semantic span, we further propose semantic filtering to eliminate interfering semantics within the query image.\n• ASR improves on prior approaches by significant margins on commonly used datasets. It also achieves good performance under the two-way few-shot segmentation settings.", "Related_Work": "Semantic Segmentation. Benefiting from the superiority of fully convolutional networks, semantic segmentation [2,39,48] has progressed substantially in recent years. Relevant research has also provided some fundamental techniques, such as multi-scale feature aggregation [48] and atrous spatial pyramid pooling (ASPP) [2], which enhance few-shot semantic segmentation. However, these methods generally require large amounts of pixel-level annotations, which hinders their application in many real-world scenarios.\nFew-shot Learning. While meta-learning [36,27,8,15,44,38,21] contributed important optimization methods and data augmentation [13,35] improved performance, metric learning [32,30,11,5] with prototype models [23,4,6,41,40,18] represents the majority of few-shot learning approaches. In metric learning frameworks, prototypical models convert the spatial semantic information of objects to convolutional channels. With prototypes, metric algorithms aim to obtain a high similarity score for similar sample pairs and a low similarity score for dissimilar pairs. For example, Ref.
[3] replaced the fully connected layer with cosine similarity. Ref. [10] devised a few-shot visual learning system that performs well on both base and novel classes. DeepEMD [41] proposed a structural distance between dense image representations. Extra margin constraints [20,17] have been incorporated into metric learning to further adjust the inter-class diversity and intra-class variance. Despite the popularity of metric learning, the semantic aliasing issue caused by the feature sharing mechanism is unfortunately ignored.\nFew-shot Segmentation. Early methods generally utilized a parametric module, which uses features learned through support image(s) to segment the query image. In [26], support features were concatenated with the query image to activate features within object regions for segmentation. PGNet [42] and DAN [33] tackled semantic segmentation with graphs and used graph reasoning to propagate label information to the query image.\nFollowing few-shot classification, prototype vectors have been used as semantic representations across feature channels. In [47], masked average pooling was utilized to squeeze foreground information within the support image(s) into prototype vectors. CANet [43] consists of a two-branch model that performs feature comparison between the support image(s) and the query image guided by prototypes. PANet [34] offered highly representative prototypes for each semantic class and performed segmentation over the query image based on pixel-wise matching. CR-Net [22] proposed a cross-reference mechanism to concurrently make predictions for both the support image(s) and the query image, enforcing the co-occurrence of objects and thereby improving the semantic transfer. PMMs [37] and PPNet [24] proposed to decompose objects into parts and represent such parts with mixed prototype vectors to counter semantic mixing.
Despite the aforementioned progress, existing methods remain ignorant of the semantic aliasing issue, which causes false (or missing) segmentation of object parts. SST [49] and SimProp [9] respectively introduced self-supervised finetuning and similarity propagation, which leverage category-specific semantic constraints to reduce semantic aliasing. However, without considering the orthogonality of base class features, they remain challenged by the semantic aliasing issue.", "Methodology": "Few-shot semantic segmentation aims to learn a model (e.g., a network) which can generalize to previously unseen classes. Given two image sets $D_{base}$ and $D_{novel}$, where classes in $D_{novel}$ do not appear in $D_{base}$, the task requires training the feature representation on $D_{base}$ (which has sufficient data) and testing on $D_{novel}$ (which has only a few annotations). Both $D_{base}$ and $D_{novel}$ contain several episodes, each of which consists of a support set $\{(A_i^s, M_i^s)\}_{i=1}^{K}$ and a query set $(A^q, M^q)$, where $K$, $A_i^s$, $M_i^s$, $A^q$ and $M^q$ respectively represent the shot number, the support image, the support mask, the query image, and the query mask. For each training episode, the model is optimized to segment $A^q$ driven by the segmentation loss $L_{seg}$. Segmentation performance is evaluated on $D_{novel}$ across all the test episodes.", "Conclusion": "We proposed Anti-aliasing Semantic Reconstruction (ASR), which converts base class features to a series of basis vectors that span a semantic space. During training, ASR maximizes the orthogonality of base classes while minimizing their semantic aliasing, which facilitates novel class reconstruction. During inference, ASR further suppresses interfering semantics for precise activation of target object areas. On the large-scale MS COCO dataset, ASR improved the performance of few-shot segmentation, in striking contrast with the prior approaches.
As a systematic yet interpretable method for semantic representation and semantic anti-aliasing, ASR provides fresh insight into the few-shot learning problem.", "Experiment_and_Results": "In this section, we first describe the experimental settings. We then report the performance of ASR and compare it with state-of-the-art methods. We finally present ablation studies with experimental analysis and test the effectiveness of ASR on other few-shot learning tasks. Datasets. The experiments are conducted on the PASCAL VOC 2012 [7] and MS COCO [19] datasets. We combine PASCAL VOC 2012 with SBD [12] and separate the combined dataset into four splits. The cross-validation method is used to evaluate the proposed approach by sampling one split as the test categories $C_{test} = \{5i+1, \ldots, 5i+5\}$, where $i$ is the index of a split. The remaining three splits are used as base classes for training. The reorganized dataset is termed Pascal-$5^i$ [33,37]. Following the settings in [25,33,37] we construct the COCO-$20^i$ dataset. MS COCO is divided into four splits, each of which contains 20 categories. We follow the same scheme for training and evaluation as on Pascal-$5^i$. The category labels for the four splits are included in the supplementary material. For each split, 1000 pairs of support and query images are randomly selected for performance evaluation.\nTraining and Evaluation. We use CANet [43] without attention modules as the baseline. In training, we set the learning rate to 0.00045. The segmentation model (network) is trained for 200000 steps with the poly descent training strategy and the stochastic gradient descent (SGD) optimizer. Several data augmentation strategies including normalization, horizontal flipping, Gaussian filtering, random cropping, random rotation and random resizing are used. We adopt both the single-scale and multi-scale [43,42,22] evaluation strategies during testing.
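The cross-validation split construction described in the Datasets paragraph can be sketched in a few lines. This is an illustrative sketch (function names are ours), assuming the standard Pascal-5^i protocol of four disjoint splits of five classes each over 20 1-indexed class labels:

```python
# Illustrative construction of the Pascal-5^i cross-validation splits:
# 20 classes, 4 splits of 5 classes each; split i is held out for testing
# and the remaining 15 classes are used as base classes for training.

NUM_CLASSES = 20

def pascal5i_split(i):
    """Return (test_classes, train_classes) for split index i in {0, 1, 2, 3}."""
    test = list(range(5 * i + 1, 5 * i + 6))  # classes 5i+1 ... 5i+5
    train = [c for c in range(1, NUM_CLASSES + 1) if c not in test]
    return test, train

test, train = pascal5i_split(0)
print(test)        # [1, 2, 3, 4, 5]
print(len(train))  # 15
```

Averaging the mIoU over the four values of i gives the cross-validated score reported later.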
Our approach is implemented in PyTorch 1.3 and run on Nvidia Tesla V100 GPUs. Evaluation Metric. Following [34,25,43], we use the mean Intersection over Union (mIoU) and the binary Intersection over Union (FB-IoU) as the performance evaluation metrics. The mIoU calculates the per-class foreground IoU and averages the IoUs of all classes to obtain the final evaluation metric. The FB-IoU calculates the mean of the foreground IoU and background IoU over all images regardless of category. For category $k$, IoU is defined as $\mathrm{IoU}_k = TP_k/(TP_k + FP_k + FN_k)$, where $TP_k$, $FP_k$ and $FN_k$ are the numbers of true positives, false positives and false negatives in the segmentation masks. mIoU is the average of IoUs for all the test categories and FB-IoU is the average of IoUs for all the test categories and the background. We report the segmentation performance by averaging the mIoUs on the four cross-validation splits.", "Extra": "We propose a semantic reconstruction framework, where the semantics of novel classes are explicitly reconstructed by those of base classes, Fig. 2. Given support and query images, after extracting convolutional features through a CNN, the ground-truth mask is multiplied with the support features in a pixel-wise fashion to filter out background features [47,43,22,37,24]. With a convolutional block we reduce the number of feature channels and obtain support features $F_c^s \in \mathbb{R}^{H \times W \times (B \times D)}$ and query features $F^q \in \mathbb{R}^{H \times W \times (B \times D)}$, where $H \times W$, $B$, and $D$ respectively denote the size of the feature maps, the base class number, and the feature channel number. During the training phase, $c$ denotes a base class, while during the testing phase $c$ denotes a novel class. The convolutional block consists of pyramid convolution layers, which capture features from coarse to fine. To explicitly encode class-related semantics, we evenly partition the feature channels into $B$ groups, corresponding to the $B$ base classes.
The grouped features $F_c^s$ and $F^q$ are further spatially squeezed into two vectors $v_c^s$ and $v^q$, termed semantic vectors, by global average pooling, Fig. 2.\nCorresponding to the $B$ base classes, the semantic vectors $v_c^s$ and $v^q$ consist of $B$ sub-vectors $\{v_{c,b}^s\}_{b=1,2,...,B} \in \mathbb{R}^D$ and $\{v_b^q\}_{b=1,2,...,B} \in \mathbb{R}^D$. During the training phase, the sub-vectors are used to construct basis vectors in the $B$-dimensional class-level semantic space by the semantic span module, as explained in Section 3.3, Fig. 2. In the space, a basis vector $v_b$ corresponding to the $b$-th base class is defined as\n$v_b = v_{c,b}^s/\|v_{c,b}^s\| = v_b^q/\|v_b^q\|$.\nIn the inference phase, the semantic vector for the $c$-th class in the support branch can be linearly reconstructed [16] as\n$\tilde{v}_c^s = \sum_{b=1}^{B} w_{c,b}^s \cdot v_b$, (1)\nwhere $\tilde{v}_c^s$ denotes the reconstructed support semantic vector (reconstructed support vector for short), and $w_{c,b}^s$ is defined as the norm of the sub-vector $v_{c,b}^s$. Consistently, given a query image, the corresponding query features $F^q$ are reconstructed by regarding each location on the feature maps as a feature vector. Each location of the feature map is reconstructed as $\tilde{F}^q(x, y) = \sum_{b=1}^{B} W_b^q(x, y) \cdot v_b$, where $(x, y)$ denotes the coordinates of pixels on the feature map, and $W_b^q(x, y)$ is defined as the norm of the sub-vector $F_b^q(x, y)$. Considering that the query image contains objects belonging not only to the target class but also to other classes, we exploit the semantic filtering module, as illustrated in Section 3.4, to filter out the interfering components in the reconstructed query features for the $c$-th target class semantic segmentation. Within the original feature space, when base class features are close to each other, there can be semantic aliasing among novel classes. To minimize semantic aliasing, we propose to span a class-level semantic space in the training phase. To construct a group of basis vectors which tend to be orthogonal and representative, we propose the semantic span module (semantic span for short).
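The grouped pooling and reconstruction steps above (sub-vectors by global average pooling, unit-normalized basis vectors, and the weighted sum of Eq. 1) can be sketched in numpy. This is a shape-level illustration only; the sizes and the random features are our assumptions, not values from the paper:

```python
import numpy as np

# Illustrative sketch of the grouped pooling and reconstruction (Eq. 1).
# Shapes are assumptions for the example: H = W = 4, B = 3 base classes, D = 2.
H, W, B, D = 4, 4, 3, 2
rng = np.random.default_rng(0)
feat = rng.random((H, W, B * D))                    # support features F^s

# Partition channels into B groups and squeeze spatially (global average pooling).
sub_vectors = feat.reshape(H * W, B, D).mean(axis=0)   # {v_b}, shape (B, D)

# Basis vectors: L2-normalized sub-vectors, v_b = v / ||v||.
norms = np.linalg.norm(sub_vectors, axis=1, keepdims=True)
basis = sub_vectors / norms                            # shape (B, D), unit rows

# Reconstruction (Eq. 1): weighted sum of basis vectors, with the weights
# w_b taken as the sub-vector norms, mirroring the query-side definition.
weights = norms.squeeze(1)                             # shape (B,)
reconstructed = (weights[:, None] * basis).sum(axis=0) # shape (D,)
print(reconstructed.shape)  # (2,)
```

Note that with norm weights the weighted sum of unit basis vectors simply recombines the original sub-vectors; the interesting case is a novel class at test time, whose weights over the base-class basis are what the semantic span regularizes.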
As shown in Fig. 3, the semantic span is driven by two loss functions, i.e., the semantic decoupling and contrastive losses.\nOn the one hand, the semantic span aims to construct basis vectors by regularizing the feature maps so that each group of features is correlated to a specific object class. To fulfill this purpose, we propose the following semantic decoupling loss:\n$L_{dec} = \log(1 + e^{-w_c^s \cdot y})$, (2)\nwhere $w_c^s$ denotes the reconstruction weight vector, and $y \in \mathbb{R}^B$ denotes the one-hot class label of a support image. Obviously, minimizing $L_{dec}$ is equivalent to maximizing the reconstruction weights related to the specific class (e.g., the $c$-th class), while minimizing those unrelated to it. This defines a soft manner of converting a group of features correlated to the semantics of the specific class (e.g., the $c$-th class) to its corresponding basis vector in the class-level semantic space.\nOn the other hand, the semantic span aims to further enhance the orthogonality of basis vectors, which improves the quality of novel class reconstruction. In detail, sub-vectors in $\{v_{c,b}^s\}_{b=1...B} \cup \{v_b^q\}_{b=1...B}$ belonging to different classes are expected to be orthogonal to each other, while those corresponding to the same base class, e.g., $v_{c,b}^s$ and $v_b^q$, are expected to have a small vector angle. These two objectives are simultaneously achieved by minimizing the contrastive loss defined as\n$L_{con} = e^{1+\sum_{b' \neq b}|\cos\langle v_{b'}, v_b\rangle|}/e^{|\cos\langle v_{c,b}^s, v_b^q\rangle|}$, (3)\nwhere $\cos\langle\cdot\rangle$ denotes the cosine distance metric of two vectors. In summary, the final loss of the semantic reconstruction framework is defined as\n$L = \alpha L_{dec} + \beta L_{seg} + \gamma L_{con}$, (4)\nwhere $\alpha$, $\beta$ and $\gamma$ are weights of the loss functions. Note that $L_{con}$ is calculated during the later stage of the training phase. When multiple objects from different classes exist in the same query image, the reconstructed features of the query image contain components of all these classes.
To pick out objects belonging to the target class and suppress interfering semantics, i.e., divorcing the semantics related to the background or objects from other classes, we propose a semantic filtering module. Moreover, owing to the fact that the reconstructed vectors of different classes are non-collinear, the semantic filtering module is implemented by projecting query feature vectors onto the reconstructed support vector, as shown in Fig. 4. This is also based on the fact that the reconstructed support vector has precise semantics because the corresponding features have been multiplied with the ground-truth mask, as shown in Fig. 2.\nOn the support branch, we follow Eq. 1 to reconstruct the support features using basis vectors and obtain the reconstructed support vector $\tilde{v}_c^s$. On the query branch, we reconstruct each feature vector $F^q(x, y)$ on the feature maps in the same way and obtain the reconstructed features $\tilde{F}^q$. We then project $\tilde{F}^q$ onto $\tilde{v}_c^s$ to calculate the filtered features as\n$\tilde{F}_c^q(x, y) = \left(\tilde{F}^q(x, y) \cdot \frac{\tilde{v}_c^s}{\|\tilde{v}_c^s\|}\right) \frac{\tilde{v}_c^s}{\|\tilde{v}_c^s\|}$, (5)\nwhere $(x, y)$ denotes the coordinates of pixels on the feature maps. The intuitive effects of the filtering operation are displayed in Fig. 4, which illustrates that the support branch guides the query branch more effectively. The filtered query features $\tilde{F}_c^q$ are further enhanced by a residual convolutional module with iterative refinement optimization and fed to Atrous Spatial Pyramid Pooling (ASPP) to predict the segmentation mask, Fig. 2. For the residual convolutional module, we replace the history mask in CANet [43] with the squeezed $\tilde{F}_c^q$.\nASR can be analyzed from the perspectives of vector orthogonality and sparse reconstruction. Without loss of generality, we take the two-dimensional space as an example. Denote by $v_1, v_2 \in \mathbb{R}^2$ two unit basis vectors ($\|v_1\| = 1$, $\|v_2\| = 1$), which span the space. Denote by $\theta$ the angle between $v_1$ and $v_2$, so that $\cos\theta = v_1^{\top} v_2$.
According to the properties of linear algebra [16], any vectors, e.g., $u_1, u_2 \in \mathbb{R}^2$, in the spanned space can be linearly reconstructed as\n$u_1 = C_1(w_{11} v_1 + w_{12} v_2), \; \forall u_1 \in \mathbb{R}^2$\n$u_2 = C_2(w_{21} v_1 + w_{22} v_2), \; \forall u_2 \in \mathbb{R}^2$, (6)\nand, under the linear constraints $w_{11} + w_{12} = 1$ and $w_{21} + w_{22} = 1$, the cosine similarity between $u_1$ and $u_2$ is computed as\n$\cos\langle u_1, u_2\rangle = \frac{u_1 \cdot u_2}{|u_1| |u_2|} = \frac{(w_{11} v_1 + w_{12} v_2) \cdot (w_{21} v_1 + w_{22} v_2)}{|w_{11} v_1 + w_{12} v_2| \, |w_{21} v_1 + w_{22} v_2|} = \frac{\big(1 - (w_{11} + w_{21} - 2w_{11}w_{21})\big) + (w_{11} + w_{21} - 2w_{11}w_{21})\cos\theta}{|w_{11} v_1 + w_{12} v_2| \, |w_{21} v_1 + w_{22} v_2|}$. (7)\nOrthogonality. To reduce the semantic aliasing of any two novel classes, the angle between their semantic vectors $u_1$ and $u_2$ should be large, i.e., $\cos\langle u_1, u_2\rangle$ should be reduced. According to the last line of Eq. 7, to obtain a small $\cos\langle u_1, u_2\rangle$, the term $\cos\theta$ should approach 0, which means that the angle between the basis vectors $v_1$ and $v_2$ is large; this implies the orthogonality of basis vectors. The proposed ASR approach satisfies the orthogonality requirement by introducing the semantic span module. As shown in Fig. 5, the statistical visualization results over base classes validate the orthogonality.\nSparse Reconstruction. Referring to the last line of Eq. 7, another way to reduce $\cos\langle u_1, u_2\rangle$ is to enlarge the term $(w_{11} + w_{21} - 2w_{11}w_{21})$. According to the function characteristics, $(w_{11} + w_{21} - 2w_{11}w_{21})$ reaches its maximum when $|w_{11} - w_{21}|$ approaches 1.0, which implies that $|w_{11} - w_{12}|$ and $|w_{21} - w_{22}|$ approach 1.0 due to the linear constraints. This illustrates that, to further enhance the anti-aliasing capability and guarantee the discrimination of novel classes, the reconstruction weights for novel classes should be distinct and sparse. ASR satisfies these requirements due to the potential class-level semantic similarity, according to the statistical results shown in Fig. 6. Meanwhile, nonzero weights over multiple base classes other than the dominant one enable ASR to distinguish novel classes from base classes.\nPASCAL VOC. In Table 1, we report the performance on PASCAL VOC.
ASR outperforms the prior methods by significant margins. Under the 1-shot setting with a VGG16 backbone, it outperforms RPMMs [37] and SST [49] by 2.64% and 1.36%, respectively. Under the 1-shot setting with a ResNet50 backbone, ASR outperforms CANet [43] and RPMMs [37] by 2.76% and 1.82%. Under the 5-shot setting, ASR is comparable to the state-of-the-art method. It is worth mentioning that SST and PPNet used additional k-shot fusion strategies, while ASR uses a simple averaging strategy to obtain five-shot results. In Table 3, ASR is compared with state-of-the-art approaches with respect to FB-IoU. FB-IoU calculates the mean of the foreground IoU and background IoU over images regardless of category, which reflects how well the full object extent is activated. ASR is on par with, if not better than, the compared methods. MS COCO. In Table 2, we report the segmentation performance on MS COCO. ASR outperforms the prior methods in most settings. In particular, under the 1-shot setting it improves on RPMMs [37] by 3.27%, and under the 5-shot setting it improves on DAN [33] by 6.15%, which are significant margins. On the MS COCO dataset, which exhibits more semantic aliasing due to its larger number of object categories, semantic reconstruction demonstrates larger advantages: with more object categories, we construct a space using more orthogonal basis vectors, which have a stronger ability of representation and discrimination. According to Section 3.5, semantic aliasing among novel classes is suppressed effectively. That is why ASR achieves larger performance gains on the MS COCO dataset. We sampled 4000 images from 20 classes in PASCAL VOC and drew the confusion matrix according to the segmentation results, Fig. 7. ASR effectively reduces semantic aliasing among classes. We further visualize segmentation results and compare them with the baseline, Fig. 8.
Based on the anti-aliasing representation of novel classes and semantic filtering, ASR reduces the false positive segmentation caused by interfering semantics within the query images. Semantic Span. In Table 4, when simply introducing semantic reconstruction to the baseline method, the performance slightly drops. By using the semantic span module, we improve the performance from 53.26% to 55.98%, demonstrating the necessity of establishing orthogonal basis vectors during semantic reconstruction.\nSemantic Filtering. As shown in Table 4, directly applying semantic filtering on the baseline method harms the performance because the support features contain aliasing semantics. By combining all the modules, ASR improves the mIoU by 2.66% (58.64% vs. 55.98%). In Table 5, four filtering strategies are compared. The vector projection strategy defined in Section 3.4 achieves the best result. Vector projection utilizes the characteristics of vector operations to retain semantics related to the target class and suppress unrelated semantics to the utmost. Channel Number (D). The channel number $D$ of the features used to construct basis vectors is an important parameter which affects the orthogonality of basis vectors. From Fig. 9 we can see that the performance improves with the increase of $D$ and starts to plateau at $D = 8$, where the orthogonality of different basis vectors is sufficient for novel class reconstruction. For the MS COCO dataset, $D$ is set to 30.\nModel Size and Efficiency. The model size of ASR is 36.7M, which is slightly larger than that of the baseline method [43] (36.3M) but much smaller than that of other methods, such as OSLSM [29] (272.6M) and FWB [25] (43.0M). With an Nvidia Tesla V100 GPU, the inference speed is 30 FPS, which is comparable with that of CANet (29 FPS). Following the settings in [24], we conduct two-way one-shot segmentation experiments on PASCAL VOC. From Tab.
6 one can see that ASR outperforms PPNet [24] by a significant margin (53.13% vs. 51.65%). Because two-way segmentation requires not only segmenting target objects but also distinguishing different classes, the model is more sensitive to semantic aliasing. Our ASR approach effectively reduces the semantic aliasing between novel classes and thereby achieves superior segmentation performance." }, { "title": "Polarized self-attention: Towards high-quality pixel-wise regression", "year": 2021.0, "authors": "Huajun Liu; Fuqiang Liu; Xinyi Fan; Dong Huang", "arxiv_di": 2107.00782, "Introduction": "Recent trends from coarse-grained (such as image-wise classification [38] and bounding box detection [15]) to fine-grained computer vision tasks (such as keypoint estimation [31] and semantic segmentation [62]) have received booming advances in both the research and industrial communities.\nFigure 1 (caption fragment): ... along their counterpart dimensions, and fits output distributions of pixel-wise regression with a softmax-sigmoid composition. At minor computation-memory overheads upon the vanilla DCNNs, PSA produces significantly higher-quality person keypoint heatmaps and semantic segmentation masks (also see Table 2-3 for the boosts in metrics).\nComparing to the coarse-grained tasks, perception at the pixel-wise level is increasingly appealing in autonomous driving [42], augmented reality [7], medical image processing [29], and public surveillance [46].\nThe goal of the pixel-wise regression problem is to map every image pixel of the same semantics to the same score. For instance, mapping all the background pixels to 0 and all the foreground pixels to their class indices, respectively. Two typical tasks are keypoint heatmap regression and segmentation mask regression. Most DCNN models for regression problems take an encoder-decoder architecture.
The encoder usually consists of a backbone network, such as ResNet [18], that sequentially reduces the spatial resolution and increases the channel resolution, while the decoder usually contains de-convolution/up-sampling operations that recover the spatial resolution and decrease the channel resolution. Typically the tensor connecting the encoder and decoder has an element number smaller than both the input image tensor and the output tensor. The reduction of elements is necessary for computation/memory efficiency and stochastic optimization reasons [16]. However, the pixel appearances and patch shapes of the same semantics are highly nonlinear in nature and therefore difficult to encode with a reduced number of features. Moreover, high input-output resolutions are preferred for fine details of objects and object parts [26,40,44]. Compared to the image classification task, where an input image is collapsed to an output vector of class indices, the pixel-wise regression problem has a higher problem complexity on the order of the number of output elements. From the model design perspective, the pixel-wise regression problem faces special challenges: (1) Keeping high internal resolution at a reasonable cost; (2) Fitting output distributions such as those of the keypoint heatmaps or segmentation masks.\nBased on the tremendous success of new DCNN architectures, we focus on a plug-and-play solution that could consistently improve an existing (vanilla) network, i.e., inserting attention blocks [43] into it. Most of the above hybrids try to reach the best compromise among multiple types of tasks, for instance, image classification, object detection, as well as instance segmentation. These generalized goals are partially the reason that channel-only attentions (SE [20], GE [19] and GCNet [3]) are among the most popular blocks.
Channel-only attention blocks put the same weights on different spatial locations, such that the classification task still benefits since its spatial information eventually collapses by pooling, and the anchor displacement regression in object detection benefits since the channel-only attention unanimously highlights all foreground pixels. Unfortunately, due to critical differences in attention designs, the channel-spatial compositional attention blocks (e.g., DA [14], CBAM [48]) did not show significant overall advantages over the latest channel-only attentions such as GCNet [3].\nIn this paper, we present the Polarized Self-Attention (PSA) block (see Figure 1) for high-quality pixel-wise regression.\nTo avoid the potential loss of high-resolution information caused by pooling/downsampling in vanilla/baseline DCNNs, PSA keeps the highest internal resolution in attention computation among existing attention blocks (see also Table 1). To fit the output distribution of typical fine-grained regression, PSA fuses a softmax-sigmoid composition in both the channel-only and spatial-only attention branches. Compared to existing channel-spatial compositions [48,14] that favor particular layouts, there are only marginal metric differences between PSA layouts. This indicates PSA may have exhausted the representation capacity within its channel-only and spatial-only branches. We conducted extensive experiments to demonstrate the direct performance gain of PSA on standard baselines as well as state-of-the-arts.", "Related_Work": "Pixel-wise Regression Tasks: The advances of DCNNs for pixel-wise regression are basically pursuing higher resolution. For body keypoint estimation, Simple-Baseline [51] consists of the conventional components ResNet+deconvolution. HRnet [40] addresses the resolution challenge of Simple-Baseline with 4 parallel high-to-low resolution branches and their pyramid fusion.
Other recent variants, DARK-Pose [56] and UDP-Pose [21], both compensate for the loss of resolution due to pre-processing and post-processing, and propose techniques to achieve sub-pixel estimation of keypoints. Note that, besides the performance gains among network designs, the same models with 384 × 288 inputs are usually better than those with 256 × 192 inputs. This constantly reminds researchers of the importance of keeping high-resolution information. For semantic segmentation, [4] introduces atrous convolution in the decoder head of Deeplab for a wide receptive field on high-resolution inputs. To overcome the limitation of ResNet backbones in Deeplab, all the latest advances are based on HRnet [44]; in particular, HRNet-OCR [41] and its variants are the current state-of-the-art. There are many other multi-task architectures [17,63,6] that include pixel-wise regression as a component.\nPSA further pursues the high-resolution goals of the above efforts from the attention perspective and further boosts the above DCNNs.\nSelf-attention and its Variants. Attention mechanisms have been introduced into many visual tasks to address the weakness of standard convolutions [35][2][1][37][3]. In the self-attention mechanism, each input tensor is used to compute an attention tensor and is then re-weighted by this attention tensor. Self-attention [43][35][8] emerged as a standard component to capture long-range interactions, after its success in sequence modeling and generative modeling tasks. Cordonnier et al. [8] have proven that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. In some vision tasks, such as object detection and image classification, self-attention augmented convolution models [2] or standalone self-attention models [37] have yielded remarkable gains.
While most self-attention blocks were inserted after convolution blocks, attention-augmented convolution [2] demonstrates that parallelizing the convolution layer and the attention block is a more powerful structure to handle both short- and long-range dependency.\nPSA advances self-attention for pixel-wise regression and could also be used in other variants such as the convolution-augmented attentions.\nFull-tensor and simplified attention blocks. The basic non-local block (NL) [47] and its variants, such as a residual form [59], second-order non-local [10][50], and asymmetric non-local [64], produce full-tensor attentions and have successfully improved person re-identification, image super-resolution, and semantic segmentation tasks. To capture pair-wise similarities among all feature elements, the NL block computes an extremely large similarity matrix between the key feature maps and query feature maps, leading to huge memory and computational costs. EA [39] produces a low-rank approximation of the NL block for computation efficiency. BAM [33], DAN [14] and CBAM [48] produce different compositions of the channel-only and spatial-only attentions. Squeeze-and-Excitation (SENet) [20], Gather-Excite [19] and GCNet [3] only re-weight feature channels using signals aggregated from global context modeling. Most of the above attention blocks were designed as a compromise among multiple types of tasks, and do not address the specific challenges in fine-grained regression.\nPSA addresses the specific challenges in fine-grained regression by keeping the highest attention resolution among existing attention blocks, and by directly fitting the typical output distributions.", "Methodology": "Notations: Denote X ∈ R^{C_in×H×W} as a feature tensor of one sample (e.g., one image), where C_in, H, W are the numbers of elements along the channel, height, and width dimensions of X, respectively. X = {x_i}_{i=1}^{HW}, where x_i ∈ R^{C_in} is a feature vector along the channel dimension.
A self-attention block A(•) takes X as input and produces a tensor Z as output, where Z ∈ R^{C_out×H×W}. A DCNN block is formulated as a nonlinear mapping Ψ : X → Z. The possible operators of the network block include: the convolution layer W(•), the batch norm layer BN(•), the ReLU activation layer RU(•), and the softmax SM(•). Without loss of generality, all the convolution layers in attention blocks are (1 × 1) convolutions, denoted by W. For simplicity, we only consider the case where the input tensor X and output tensor Z of a DCNN block have the same dimension C × H × W (i.e., C_in = C_out).", "Conclusion": "We presented the Polarized Self-Attention (PSA) block towards high-quality pixel-wise regression. PSA significantly boosts all compared DCNNs thanks to two critical designs: (1) keeping high internal resolution in both the polarized channel-only and spatial-only attention branches, and (2) incorporating a nonlinear composition that fully leverages the high-resolution information preserved in the PSA branches. PSA can potentially benefit any computer vision task with pixel-wise regression.\nIt is still not clear how PSA would best benefit pixel-wise regression embedded with the classification and displacement regression in complex DCNN heads, such as those in the instance segmentation, anchor-free object detection and panoptic segmentation tasks. To our knowledge, most existing work with self-attention blocks only inserted the blocks in the backbone networks. Our future work is to explore the use of PSAs in DCNN heads.", "Experiment_and_Results": "Implementation details. For any baseline networks with the bottleneck or basic residual blocks, such as ResNet and HRnet, we add PSAs after the first 3 × 3 convolution in every residual block. For 2D pose estimation, we kept the same training strategy and hyperparameters as the baseline networks.
For semantic segmentation, we added a warming-up training phase of 5000 iterations, stretched the total training iterations by 30%, and kept all the remaining training strategies and hyper-parameters of the baseline networks. Empirically, these changes allow PSA to train smoothly on semantic segmentation.", "Extra": "A DCNN for pixel-wise regression learns a weighted combination of features along two dimensions: (1) channel-specific weighting to estimate the class-specific output scores; (2) spatial-specific weighting to detect pixels of the same semantics. The self-attention mechanism applied to the DCNN is expected to further highlight features for both of the above goals.\nIdeally, with a full-tensor self-attention Z = A(X) ⊙ X (A(X) ∈ R^{C×H×W}), the highlighting could potentially be achieved at the element-wise granularity (C × H × W elements). However, the attention tensor A is very complex and noise-prone to learn directly. In the Non-Local self-attention block [47], A is calculated as,\nA = W_z(F_sm(X^T W_k^T W_q X) W_v X).  (1)\nThere are four (1 × 1) convolution kernels, i.e., W_z, W_k, W_q, and W_v, that learn linear combinations of spatial features among different channels. Within the same channels, the HW × HW outer-product between W_k X and W_q X activates any features at different spatial locations that have a similar intensity. The joint activation mechanism of spatial features is very likely to highlight the spatial noise. The only actual weights, the W's, are channel-specific instead of spatial-specific, making the Non-Local attention exceptionally redundant at the huge memory consumption of the HW × HW matrix.
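The non-local formulation of Eq. (1) can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the paper's implementation: the tensor sizes are arbitrary, the softmax axis and the order of the final multiplication are our own simplification chosen to make the shapes compose, and the 1 × 1 convolutions are plain channel-mixing matrices. The cost calculation at the end makes the HW × HW memory/compute blowup concrete.

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local(X, Wq, Wk, Wv, Wz):
    """Simplified non-local block on X of shape (C, H, W)."""
    C, H, W = X.shape
    flat = X.reshape(C, H * W)            # treat spatial positions as columns
    q, k, v = Wq @ flat, Wk @ flat, Wv @ flat
    sim = softmax(k.T @ q, axis=0)        # (HW, HW) similarity -- the costly part
    out = Wz @ (v @ sim)                  # re-weight values, mix channels
    return out.reshape(C, H, W)

C, H, W = 4, 6, 6
rng = np.random.default_rng(1)
X = rng.normal(size=(C, H, W))
Ws = [rng.normal(size=(C, C)) for _ in range(4)]
Z = non_local(X, *Ws)
print(Z.shape)  # output keeps the input shape (C, H, W)

# Complexity from Table 1: full-tensor attention vs. the simplified blocks.
cost_full = C**2 * H * W + C * (H * W) ** 2   # NL / DA
cost_linear = C * H * W                        # GC / SE / CBAM / EA / PSA
print("cost ratio:", cost_full // cost_linear)  # equals C + HW exactly
```

Even at this toy size the full-tensor attention is C + HW = 40 times more expensive than the linear-cost blocks; at realistic feature-map sizes the ratio grows with HW, which is why the reductions of NL discussed next exist.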
For efficient computation, reductions of NL lead to many possibilities: low-rank approximation of A (EA); channel-only self-attention A^ch ∈ R^{C×1×1} that highlights the same global context for all pixels (GC [3] and SE [19]); spatial-only self-attention A^sp ∈ R^{1×H×W}, not powerful enough to be recognized as a standalone model; and channel-spatial compositions, where the parallel composition Z = A^ch ⊙_ch X + A^sp ⊙_sp X and the sequential composition Z = A^ch ⊙_ch (A^sp ⊙_sp X) introduce different orders of non-linearity. Different conclusions were empirically drawn, such as CBAM [48] (sequential > parallel) and DA [14] (parallel > sequential), which partially indicates that the intended non-linearity of the tasks is not fully modeled within the attention blocks.\nThese issues are typical examples of general attention designs that do not target the pixel-wise regression problem. With the help of Table 1, we re-visit critical design aspects of existing attention blocks and raise challenges on how to achieve both channel-specific and spatial-specific weighting for pixel-wise regression. (All the attention blocks are compared with their top-performance configurations.)\nMethod | Channel res. | Spatial res. | Non-linearity | Complexity\nNL [47] | C | [W, H] | SM | C^2 WH + CW^2 H^2\nGC [3] | C/4 | - | SM+ReLU | CWH\nSE [19] | C/4 | - | ReLU+SD | CWH\nCBAM [48] | C/16 | [W, H] | SD | CWH\nDA [14] | C/8 | [W, H] | SM | C^2 WH + CW^2 H^2\nEA [39] | d_k (≪ C) | d_v (≪ min(W, H)) | SM | CWH\nPSA (ours) | C/2 | [W, H] | SM+SD | CWH\nTable 1. Re-visiting critical design aspects in existing attention blocks. All the attention blocks are compared in their top-performance configurations. SM: SoftMax, SD: Sigmoid. Complexity is estimated assuming C < WH.\nInternal Attention Resolution. Recall that most pixel-wise regression DCNNs use the same backbone networks, e.g., ResNet, as the classification (i.e., image recognition) and coordinate regression (i.e., bbox detection, instance segmentation) tasks. For robustness and computational efficiency, these backbones produce low-resolution features, for instance 1×1×512 for classification and [W/r, H/r] for bbox detection, where r is the longest side, in pixels, of the smallest object bounding box. Pixel-wise regression cannot afford such a loss of resolution, especially because the high non-linearity in object edges and body parts is very difficult to encode in low-resolution features [4,44,40]. Using these backbones in pixel-wise regression, self-attention blocks are expected to preserve high-resolution semantics in attention computation. However, in Table 1, all the reductions of NL reach their top performance at a lower internal resolution. Since their performance metrics are far from perfect, the natural question to ask is: is there a better non-linearity that could leverage higher-resolution information in attention computation?\nOutput Distribution/Non-linearity. In DCNNs for pixel-wise regression, outputs are usually encoded as 3D tensors. For instance, the 2D keypoint coordinates are encoded as a stack of 2D Gaussian maps [#keypoint types × W × H]. The pixel-wise class indices are encoded as a stack of binary maps [#semantic classes × W × H] which follows the Binomial distribution. A non-linearity that directly fits the distribution upon linear transformations (such as convolution) could potentially alleviate the learning burden of DCNNs. The natural nonlinear functions to fit the above distributions are SoftMax for 2D Gaussian maps, and Sigmoid for 2D Binomial distributions. However, none of the existing attention blocks in Table 1 contains such a combination of nonlinear functions. Our solution to the above challenges is to conduct \"polarized filtering\" in attention computation. A self-attention block operates on an input tensor X to highlight or suppress features, which is very much like optical lenses filtering the light.
In photography, there are always random lights in transverse directions that produce glares/reflections. Polarized filtering, by only allowing the light to pass orthogonal to the transverse direction, can potentially improve the contrast of the photo. Due to the loss of total intensity, the light after filtering usually has a small dynamic range and therefore needs an additional boost, e.g., by High Dynamic Range (HDR), to recover the details of the original scene.\nWe borrow the key factors of photography, and propose the Polarized Self-Attention (PSA) mechanism: (1) Filtering: completely collapse features in one direction while preserving high resolution in its orthogonal direction; (2) HDR: increase the dynamic range of attention by Softmax normalization at the bottleneck tensor (the smallest feature tensor in the attention block), followed by tone-mapping with the Sigmoid function. Formally, we instantiate the PSA mechanism as a PSA block below (also see the diagram in Figure 2):\nChannel-only branch A^ch(X) ∈ R^{C×1×1}:\nA^ch(X) = F_SG[W_{z|θ1}(σ_1(W_v(X)) × F_SM(σ_2(W_q(X))))],  (2)\nwhere W_q, W_v and W_z are 1 × 1 convolution layers, σ_1 and σ_2 are two tensor reshape operators, F_SM(•) is a SoftMax operator, F_SM(X) = Σ_{j=1}^{N_p} (e^{x_j} / Σ_{m=1}^{N_p} e^{x_m}) x_j, and \"×\" is the matrix dot-product operation. The internal number of channels, between W_v|W_q and W_z, is C/2. The output of the channel-only branch is Z^ch = A^ch(X) ⊙_ch X ∈ R^{C×H×W}, where ⊙_ch is a channel-wise multiplication operator.\nSpatial-only branch A^sp(X) ∈ R^{1×H×W}:\nA^sp(X) = F_SG[σ_3(F_SM(σ_1(F_GP(W_q(X)))) × σ_2(W_v(X)))],  (3)\nwhere W_q and W_v are standard 1 × 1 convolution layers, θ_2 is an intermediate parameter for these channel convolutions, σ_1, σ_2 and σ_3 are three tensor reshape operators, and F_SM(•) is the SoftMax operator. F_GP(•) is a global pooling operator, F_GP(X) = (1/(H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X(:, i, j), and × is the matrix dot-product operation.
The output of the spatial-only branch is Z^sp = A^sp(X) ⊙_sp X ∈ R^{C×H×W}, where ⊙_sp is a spatial-wise multiplication operator.\nComposition: The outputs of the above two branches are composed either under the parallel layout\nPSA_p(X) = Z^ch + Z^sp = A^ch(X) ⊙_ch X + A^sp(X) ⊙_sp X,  (4)\nor under the sequential layout\nPSA_s(X) = Z^sp(Z^ch) = A^sp(A^ch(X) ⊙_ch X) ⊙_sp (A^ch(X) ⊙_ch X),  (5)\nwhere \"+\" is the element-wise addition operator.\nRelation of PSA to other Self-Attentions: We add PSA to Table 1 and make the following observations:\n• Internal Resolution vs. Complexity: Compared to existing attention blocks under their top configurations, PSA preserves the highest attention resolution for both the channel (C/2) and spatial ([W, H]) dimensions.\nMoreover, in our channel-only attention, the Softmax re-weighting is fused with squeeze-excitation, leveraging Softmax as the nonlinear activation at the bottleneck tensor of size C/2 × W × H. The channel numbers C-C/2-C follow a squeeze-excitation pattern that benefited both the GC and SE blocks. Our design conducts higher-resolution squeeze-and-excitation at a computation complexity comparable to that of the GC block.\nOur spatial-only attention not only keeps the full [W, H] spatial resolution, but also internally keeps 2 × C × C/2 learnable parameters in W_q and W_v for the nonlinear Softmax re-weighting, which is a more powerful structure than existing blocks. For instance, the spatial-only attention in CBAM is parameterized by a 7×7×2 convolution (a linear operator), and EA learns C × d_k + C × d_v parameters for linear re-weighting (d_k, d_v ≪ C).\n• Output Distribution/Non-linearity. Both the PSA channel-only and spatial-only branches use a Softmax-Sigmoid composition. Considering the Softmax-Sigmoid composition as a probability distribution function, both the multi-mode Gaussian maps (keypoint heatmaps) and the piece-wise Binomial maps (segmentation masks) can be approximated upon linear transformations, i.e.
1 × 1 convolutions in PSA.\nWe therefore expect the non-linearity to fully leverage the high-resolution information preserved within the PSA attention branches. We first add PSA blocks to standard baseline networks of the following tasks.\nTop-Down 2D Human Pose Estimation: Among the DCNN approaches for 2D human pose estimation, the top-down approaches generally dominate the top metrics. This top-down pipeline consists of a person bounding box detector and a keypoint heatmap regressor. Specifically, we use the pipelines in [51] and [40] as our baselines. An input image is first processed by a human detector [51] of 56.4 AP (Average Precision) on the MS-COCO val2017 dataset [28]. Then all the detected human image patches are cropped from the input image and resized to 384 × 288. Finally, the 384 × 288 image patches are used for keypoint heatmap regression by a single-person pose estimator. The output heatmap size is 96 × 72.\nWe add PSA on Simple-Baseline [51] with the Resnet50/152 backbones and HRnet [40] with the HRnet-w32/w48 backbones. The results on MS-COCO val2017 are shown in Table 2. PSA boosts all the baseline networks by 2.6 to 4.3 AP with minor overheads of computation (Flops) and number of parameters (mPara). Even without ImageNet pre-training, PSA with the \"Res50\" backbone gets 76.5 AP, which is not only 4.3 AP better than Simple-Baseline with the Resnet50 backbone, but also better than Simple-Baseline even with the Resnet152 backbone. A similar benefit is also observed: PSA with the HRNet-W32 backbone outperforms the baseline with the \"HR-w48\" backbone. These giant performance gains of PSA and the small overheads make PSA+HRNet-W32 the most cost-effective model among all models in Table 2.\nSemantic Segmentation. This task maps an input image to a stack of segmentation masks, one output mask for one semantic class.
In Table 3, we compare PSA with the DeepLabV3Plus [4] baseline on Pascal VOC2012 Aug [12] (21 classes, input image size 513 × 513, output mask size 513 × 513). PSA boosts all the baseline networks by 1.8 to 2.6 mIoU (mean Intersection over Union) with minor overheads of computation (Flops) and number of parameters (mPara). PSA with the \"Res50\" backbone gets 79.0 mIoU, which is not only 1.8 better than DeepLabV3Plus with the Resnet50 backbone, but also better than DeepLabV3Plus even with Resnet101. We then apply PSA to the current state-of-the-arts of the above tasks. Top-down 2D Human Pose Estimation. To our knowledge, the current state-of-the-art results by single models were achieved by UDP-HRnet with a 65.1 mAP bbox detector on the MS-COCO keypoint test-dev set. In Table 4, we add PSA to UDP-Pose with the HRnet-W48 backbone and achieve a new state-of-the-art AP of 79.5. PSA boosts UDP-Pose (baseline) by 1.7 points (see Figure 3 (a) for their qualitative comparison).\nNote that there is only a subtle metric difference between the parallel (p) and sequential (s) layouts of PSA. We believe this partially validates that our design of the channel-only and spatial-only attention blocks has exhausted the representation power along the channel and spatial dimensions.\nSemantic Segmentation. To our knowledge, the current state-of-the-art results by single models were produced by HRNet-OCR(MA) [41] on the Cityscapes validation set [9] (19 classes, input image size 1024 × 2048, output mask size 1024 × 2048). In Table 5, we add PSA to the basic configuration of HRNet-OCR and achieve a new state-of-the-art mIoU of 86.95. PSA boosts HRNet-OCR (strong baseline) by 2 points (see Figure 3 (b) for their qualitative comparison). Again, there is only a subtle metric difference between the PSA results under the parallel (p) layout and the sequential (s) layout.
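Returning to the PSA block itself, the channel-only branch (Eq. 2), the spatial-only branch (Eq. 3), and the parallel composition (Eq. 4) can be sketched compactly in NumPy. This is a toy sketch, not the released implementation: the 1 × 1 convolutions are plain channel-mixing matrices, the reshape conventions and the single-channel W_q in the channel branch are our own simplifications, and the feature sizes are arbitrary.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def psa_parallel(X, Wq_c, Wv_c, Wz, Wq_s, Wv_s):
    """Polarized Self-Attention sketch, parallel layout (Eqs. 2-4).
    X: (C, H, W); all W* act as 1x1 convolutions (channel-mixing matrices)."""
    C, H, W = X.shape
    flat = X.reshape(C, H * W)

    # Channel-only branch: collapse space, keep C/2 channel resolution,
    # Softmax (HDR) at the bottleneck then Sigmoid (tone-mapping).
    v = Wv_c @ flat                                   # (C/2, HW)
    q = softmax(Wq_c @ flat, axis=1)                  # (1, HW)
    a_ch = sigmoid(Wz @ (v @ q.T))                    # (C, 1) channel attention
    z_ch = a_ch * flat                                # channel-wise re-weighting

    # Spatial-only branch: collapse channels via global pooling,
    # keep the full [H, W] spatial resolution.
    q_s = softmax((Wq_s @ flat).mean(axis=1, keepdims=True).T, axis=1)  # (1, C/2)
    v_s = Wv_s @ flat                                 # (C/2, HW)
    a_sp = sigmoid(q_s @ v_s)                         # (1, HW) spatial attention
    z_sp = a_sp * flat                                # spatial re-weighting

    return (z_ch + z_sp).reshape(C, H, W)             # parallel composition

C, H, W = 8, 5, 5
rng = np.random.default_rng(2)
X = rng.normal(size=(C, H, W))
Wq_c = rng.normal(size=(1, C)); Wv_c = rng.normal(size=(C // 2, C))
Wz = rng.normal(size=(C, C // 2))
Wq_s = rng.normal(size=(C // 2, C)); Wv_s = rng.normal(size=(C // 2, C))
Z = psa_parallel(X, Wq_c, Wv_c, Wz, Wq_s, Wv_s)
print(Z.shape)  # output keeps the input shape (C, H, W)
```

Note that neither branch ever forms an HW × HW matrix: each attention map is either C × 1 × 1 or 1 × H × W, which is where the CWH complexity in Table 1 comes from.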
In Table 6, we conduct an ablation study of PSA configurations on Simple-Baseline (Resnet50) [51] and compare PSAs with other related self-attention methods. All the overheads, such as Flops, mPara, inference GPU memory (\"Mem.\"), and inference time (\"Time\"), are inference costs of one sample. To reduce the randomness in CUDA and Pytorch scheduling, we ran inference on MS-COCO val2017 using 4 TITAN RTX GPUs, batch size 128 (batch size 32/GPU), and averaged over the number of samples.\nFrom the results of \"PSA ablation\" in Table 6, we observe that (1) the channel-only block (A^ch) outperforms the spatial-only attention (A^sp), but can be further boosted by their parallel ([A^ch|A^sp]) or sequential (A^sp(A^ch)) compositions; (2) the parallel ([A^ch|A^sp]) and sequential (A^sp(A^ch)) compositions have similar AP, Flops, mPara, inference memory (Mem.), and inference time (Time).\nFrom the results of \"related self-attention methods\", we observe that (1) the NL block costs the most memory while producing the least boost (2.3 AP) over the baseline, indicating that NL is highly redundant. (2) The channel-only attention GC is better than SE since it includes SE. GC is even better than the channel+spatial attention CBAM because the inner-product-based attention mechanism in GC is more powerful than the convolution/MLP-based CBAM. (3) PSA A^ch is the best channel-only attention block over GC and SE. We believe PSA benefits from its highest channel resolution (C/2) and its output design. (4) The channel+spatial attention CBAM, with a relatively early design, is still better than the channel-only attention SE. (5) Under the same sequential layout of spatial and channel attention, PSA is significantly better than CBAM. Finally, (6) at similar overheads, both the parallel and sequential PSAs are better than the compared blocks.\nFigure 3. Qualitative comparisons: (a) 2D human pose estimation (UDP-Pose, Table 4) and (b) semantic segmentation (HRNetV2-OCR, Table 5). The white ellipses highlight the fine-grained details where PSAs outperform the strong baselines." 
}, { "title": "Decoupled weight decay regularization", "year": 2017.0, "authors": "Ilya Loshchilov; Frank Hutter", "arxiv_di": "1711.05101", "Introduction": "Adaptive gradient methods, such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), Adam (Kingma & Ba, 2014) and most recently AMSGrad (Reddi et al., 2018) have become a default method of choice for training feed-forward and recurrent neural networks (Xu et al., 2015;Radford et al., 2015). Nevertheless, state-of-the-art results for popular image classification datasets, such as CIFAR-10 and CIFAR-100 Krizhevsky (2009), are still obtained by applying SGD with momentum (Gastaldi, 2017;Cubuk et al., 2018). Furthermore, Wilson et al. (2017) suggested that adaptive gradient methods do not generalize as well as SGD with momentum when tested on a diverse set of deep learning tasks, such as image classification, character-level language modeling and constituency parsing. Different hypotheses about the origins of this worse generalization have been investigated, such as the presence of sharp local minima (Keskar et al., 2016;Dinh et al., 2017) and inherent problems of adaptive gradient methods (Wilson et al., 2017). In this paper, we investigate whether it is better to use L 2 regularization or weight decay regularization to train deep neural networks with SGD and Adam. We show that a major factor of the poor generalization of the most popular adaptive gradient method, Adam, is due to the fact that L 2 regularization is not nearly as effective for it as for SGD. Specifically, our analysis of Adam leads to the following observations:\nL 2 regularization and weight decay are not identical. The two techniques can be made equivalent for SGD by a reparameterization of the weight decay factor based on the learning rate; however, as is often overlooked, this is not the case for Adam. 
In particular, when combined with adaptive gradients, L 2 regularization leads to weights with large historic parameter and/or gradient amplitudes being regularized less than they would be when using weight decay.\nL 2 regularization is not effective in Adam. One possible explanation why Adam and other adaptive gradient methods might be outperformed by SGD with momentum is that common deep learning libraries only implement L 2 regularization, not the original weight decay. Therefore, on tasks/datasets where the use of L 2 regularization is beneficial for SGD (e.g., on many popular image classification datasets), Adam leads to worse results than SGD with momentum (for which L 2 regularization behaves as expected).\nWeight decay is equally effective in both SGD and Adam. For SGD, it is equivalent to L 2 regularization, while for Adam it is not.\nOptimal weight decay depends on the total number of batch passes/weight updates. Our empirical analysis of SGD and Adam suggests that the larger the runtime/number of batch passes to be performed, the smaller the optimal weight decay.\nAdam can substantially benefit from a scheduled learning rate multiplier. The fact that Adam is an adaptive gradient algorithm and as such adapts the learning rate for each parameter does not rule out the possibility to substantially improve its performance by using a global learning rate multiplier, scheduled, e.g., by cosine annealing.\nThe main contribution of this paper is to improve regularization in Adam by decoupling the weight decay from the gradient-based update. In a comprehensive analysis, we show that Adam generalizes substantially better with decoupled weight decay than with L 2 regularization, achieving 15% relative improvement in test error (see Figures 2 and3); this holds true for various image recognition datasets (CIFAR-10 and ImageNet32x32), training budgets (ranging from 100 to 1800 epochs), and learning rate schedules (fixed, drop-step, and cosine annealing; see Figure 1). 
We also demonstrate that our decoupled weight decay renders the optimal settings of the learning rate and the weight decay factor much more independent, thereby easing hyperparameter optimization (see Figure 2).\nThe main motivation of this paper is to improve Adam to make it competitive w.r.t. SGD with momentum even for those problems where it did not use to be competitive. We hope that as a result, practitioners do not need to switch between Adam and SGD anymore, which in turn should reduce the common issue of selecting dataset/task-specific training algorithms and their hyperparameters.", "Methodology": "We now discuss a justification of decoupled weight decay in the framework of Bayesian filtering for a unified theory of adaptive gradient algorithms due to Aitchison (2018). After we posted a preliminary version of our current paper on arXiv, Aitchison noted that his theory \"gives us a theoretical framework in which we can understand the superiority of this weight decay over L 2 regularization, because it is weight decay, rather than L 2 regularization that emerges through the straightforward application of Bayesian filtering.\" (Aitchison, 2018). While full credit for this theory goes to Aitchison, we summarize it here to shed some light on why weight decay may be favored over L 2 regularization.\nAitchison (2018) views stochastic optimization of n parameters θ 1 , . . . , θ n as a Bayesian filtering problem with the goal of inferring a distribution over the optimal values of each of the parameters θ i given the current values of the other parameters θ -i (t) at time step t. When the other parameters do not change this is an optimization problem, but when they do change it becomes one of \"tracking\" the optimizer using Bayesian filtering as follows. 
One is given a probability distribution P(θ_t | y_{1:t}) of the optimizer at time step t that takes into account the data y_{1:t} from the first t mini batches, a state transition prior P(θ_{t+1} | θ_t) reflecting a (small) data-independent change in this distribution from one step to the next, and a likelihood P(y_{t+1} | θ_{t+1}) derived from the mini batch at step t + 1. The posterior distribution P(θ_{t+1} | y_{1:t+1}) of the optimizer at time step t + 1 can then be computed (as usual in Bayesian filtering) by marginalizing over θ_t to obtain the one-step-ahead predictions P(θ_{t+1} | y_{1:t}) and then applying Bayes' rule to incorporate the likelihood P(y_{t+1} | θ_{t+1}). Aitchison (2018) assumes a Gaussian state transition distribution P(θ_{t+1} | θ_t) and an approximate conjugate likelihood P(y_{t+1} | θ_{t+1}), leading to the following closed-form update of the filtering distribution's mean:\nµ_post = µ_prior + Σ_post × g,  (3)\nwhere g is the gradient of the log likelihood of the mini batch at time t. This result implies a preconditioner of the gradients that is given by the posterior uncertainty Σ_post of the filtering distribution: updates are larger for parameters we are more uncertain about and smaller for parameters we are more certain about. Aitchison (2018) goes on to show that popular adaptive gradient methods, such as Adam and RMSprop, as well as Kronecker-factorized methods are special cases of this framework.\nDecoupled weight decay very naturally fits into this unified framework as part of the state-transition distribution: Aitchison (2018) assumes a slow change of the optimizer according to the following Gaussian: where Q is the covariance of Gaussian perturbations of the weights, and A is a regularizer to avoid values growing unboundedly over time.
P(θ_{t+1} | θ_t) = N((I - A)θ_t, Q),  (4)\nWhen instantiated as A = λ × I, this regularizer A plays exactly the role of decoupled weight decay as described in Equation 1, since it leads to multiplying the current mean estimate θ_t by (1 - λ) at each step. Notably, this regularization is also directly applied to the prior and does not depend on the uncertainty in each of the parameters (which would be required for L 2 regularization).", "Dataset": "Several other research groups have already successfully applied AdamW in citable works. For example, Wang et al. (2018) used AdamW to train a novel architecture for face detection on the standard WIDER FACE dataset (Yang et al., 2016), obtaining almost 10x faster predictions than the previous state-of-the-art algorithms while achieving comparable performance. Völker et al. (2018) employed AdamW with cosine annealing to train convolutional neural networks to classify and characterize error-related brain signals measured from intracranial electroencephalography (EEG) recordings.\nWhile their paper does not provide a comparison to Adam, they kindly provided us with a direct comparison of the two on their best-performing problem-specific network architecture Deep4Net and a variant of ResNet. AdamW with the same hyperparameter setting as Adam yielded higher test set accuracy on Deep4Net (73.68% versus 71.37%) and statistically significantly higher test set accuracy on ResNet (72.04% versus 61.34%). Radford et al. (2018) employed AdamW to train Transformer (Vaswani et al., 2017) architectures to obtain new state-of-the-art results on a wide range of benchmarks for natural language understanding. Zhang et al. (2018) compared L 2 regularization vs.
weight decay for SGD, Adam and the Kronecker-Factored Approximate Curvature (K-FAC) optimizer (Martens & Grosse, 2015) on the CIFAR datasets with ResNet and VGG architectures, reporting that decoupled weight decay consistently outperformed L2 regularization in cases where they differ.", "Conclusion": "Following suggestions that adaptive gradient methods such as Adam might lead to worse generalization than SGD with momentum (Wilson et al., 2017), we identified and exposed the inequivalence of L2 regularization and weight decay for Adam. We empirically showed that our version of Adam with decoupled weight decay yields substantially better generalization performance than the common implementation of Adam with L2 regularization. We also proposed to use warm restarts for Adam to improve its anytime performance.\nOur results obtained on image classification datasets must be verified on a wider range of tasks, especially ones where the use of regularization is expected to be important. It would be interesting to integrate our findings on weight decay into other methods which attempt to improve Adam, e.g., normalized direction-preserving Adam (Zhang et al., 2017). While we focused our experimental analysis on Adam, we believe that similar results also hold for other adaptive gradient methods, such as AdaGrad (Duchi et al., 2011) and AMSGrad (Reddi et al., 2018).", "Experiment_and_Results": "We now evaluate the performance of decoupled weight decay under various training budgets and learning rate schedules. Our experimental setup follows that of Gastaldi (2017), who proposed, in addition to L2 regularization, to apply the new Shake-Shake regularization to a 3-branch residual DNN, which made it possible to achieve a new state-of-the-art result of 2.86% on the CIFAR-10 dataset (Krizhevsky, 2009). We used the same model/source code based on fb.resnet.torch. We always used a batch size of 128 and applied the regular data augmentation procedure for the CIFAR datasets.
The base networks are a 26 2x64d ResNet (i.e., the network has a depth of 26, 2 residual branches, and the first residual block has a width of 64) and a 26 2x96d ResNet with 11.6M and 25.6M parameters, respectively. For a detailed description of the network and the Shake-Shake method, we refer the interested reader to Gastaldi (2017). We also perform experiments on the ImageNet32x32 dataset (Chrabaszcz et al., 2017), a downsampled version of the original ImageNet dataset with 1.2 million 32×32 pixel images. We investigated whether the use of much longer runs (1800 epochs) of \"standard Adam\" (Adam with L2 regularization and a fixed learning rate) makes the use of cosine annealing unnecessary.", "Extra": "In the weight decay described by Hanson & Pratt (1988), the weights θ decay exponentially as\nθ_{t+1} = (1 − λ)θ_t − α∇f_t(θ_t), (1)\nwhere λ defines the rate of the weight decay per step and ∇f_t(θ_t) is the t-th batch gradient to be multiplied by a learning rate α. For standard SGD, it is equivalent to standard L2 regularization:\nProposition 1 (Weight decay = L2 reg for standard SGD). Standard SGD with base learning rate α executes the same steps on batch loss functions f_t(θ) with weight decay λ (defined in Equation 1) as it executes without weight decay on\nf^reg_t(θ) = f_t(θ) + (λ'/2)‖θ‖₂², with λ' = λ/α.\nThe proofs of this well-known fact, as well as our other propositions, are given in Appendix A.\nDue to this equivalence, L2 regularization is very frequently referred to as weight decay, including in popular deep learning libraries. However, as we will demonstrate later in this section, this equivalence does not hold for adaptive gradient methods. One fact that is often overlooked even in the simple case of SGD is that for the equivalence to hold, the L2 regularizer λ' has to be set to λ/α, i.e., if there is an overall best weight decay value λ, the best value of λ' is tightly coupled with the learning rate α.
In order to decouple the effects of these two hyperparameters, we advocate decoupling the weight decay step as proposed by Hanson & Pratt (1988) (Equation 1).\nLooking first at the case of SGD, we propose to decay the weights simultaneously with the gradient-based update of θ_t in line 9 of Algorithm 1. This yields our proposed variant of SGD with momentum using decoupled weight decay (SGDW). This simple modification explicitly decouples λ and α (although some problem-dependent implicit coupling may of course remain, as for any two hyperparameters). In order to account for a possible scheduling of both α and λ, we introduce a scaling factor η_t delivered by a user-defined procedure SetScheduleMultiplier(t).\nAlgorithm 1: SGD with L2 regularization and SGD with decoupled weight decay (SGDW), both with momentum. Line 6 implements L2 regularization; line 9 implements decoupled weight decay (each variant uses one or the other).\n1: given initial learning rate α ∈ R, momentum factor β_1 ∈ R, weight decay / L2 regularization factor λ ∈ R\n2: initialize time step t ← 0, parameter vector θ_{t=0} ∈ R^n, first moment vector m_{t=0} ← 0, schedule multiplier η_{t=0} ∈ R\n3: repeat\n4: t ← t + 1\n5: ∇f_t(θ_{t−1}) ← SelectBatch(θ_{t−1}) // select batch and return the corresponding gradient\n6: g_t ← ∇f_t(θ_{t−1}) + λθ_{t−1}\n7: η_t ← SetScheduleMultiplier(t) // can be fixed, decay, or be used for warm restarts\n8: m_t ← β_1 m_{t−1} + η_t α g_t\n9: θ_t ← θ_{t−1} − m_t − η_t λ θ_{t−1}\n10: until stopping criterion is met\n11: return optimized parameters θ_t\nAlgorithm 2: Adam with L2 regularization and Adam with decoupled weight decay (AdamW). Line 6 implements L2 regularization; line 12 implements decoupled weight decay.\n1: given α = 0.001, β_1 = 0.9, β_2 = 0.999, ε = 10^{−8}, λ ∈ R\n2: initialize time step t ← 0, parameter vector θ_{t=0} ∈ R^n, first moment vector m_{t=0} ← 0, second moment vector v_{t=0} ← 0, schedule multiplier η_{t=0} ∈ R\n3: repeat\n4: t ← t + 1\n5: ∇f_t(θ_{t−1}) ← SelectBatch(θ_{t−1}) // select batch and return the corresponding gradient\n6: g_t ← ∇f_t(θ_{t−1}) + λθ_{t−1}\n7: m_t ← β_1 m_{t−1} + (1 − β_1) g_t // here and below all operations are element-wise\n8: v_t ← β_2 v_{t−1} + (1 − β_2) g_t²\n9: m̂_t ← m_t / (1 − β_1^t) // β_1 is taken to the power of t\n10: v̂_t ← v_t / (1 − β_2^t) // β_2 is taken to the power of t\n11: η_t ← SetScheduleMultiplier(t) // can be fixed, decay, or also be used for warm restarts\n12: θ_t ← θ_{t−1} − η_t (α m̂_t / (√v̂_t + ε) + λθ_{t−1})\n13: until stopping criterion is met\n14: return optimized parameters θ_t\nIn adaptive gradient methods, the gradient of the regularizer is normalized along with the gradient of f. This leads to an inequivalence of L2 and decoupled weight decay regularization for adaptive gradient algorithms:\nProposition 2 (Weight decay ≠ L2 reg for adaptive gradients). Let O denote an optimizer that has iterates θ_{t+1} ← θ_t − αM_t∇f_t(θ_t) when run on batch loss function f_t(θ) without weight decay, and θ_{t+1} ← (1 − λ)θ_t − αM_t∇f_t(θ_t) when run on f_t(θ) with weight decay, respectively, with a preconditioner M_t ≠ kI (where k ∈ R). Then, for O there exists no L2 coefficient λ' such that running O on the batch loss f^reg_t(θ) = f_t(θ) + (λ'/2)‖θ‖₂² without weight decay is equivalent to running O on f_t(θ) with decay λ ∈ R⁺.\nWe decouple weight decay and loss-based gradient updates in Adam as shown in line 12 of Algorithm 2; this gives rise to our variant of Adam with decoupled weight decay (AdamW).\nHaving shown that L2 regularization and weight decay regularization differ for adaptive gradient algorithms raises the question of how they differ and how to interpret their effects. Their equivalence for standard SGD remains very helpful for intuition: both mechanisms push weights closer to zero, at the same rate. However, for adaptive gradient algorithms they differ: with L2 regularization, the sum of the gradient of the loss function and the gradient of the regularizer (i.e., of the L2 norm of the weights) is adapted, whereas with decoupled weight decay, only the gradients of the loss function are adapted (with the weight decay step separated from the adaptive gradient mechanism).
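The difference between line 6 (L2 term folded into the gradient) and line 12 (decoupled decay) of Algorithm 2 can be illustrated in a few lines. This is a simplified single-step sketch with the schedule multiplier η_t fixed to 1, not a full AdamW implementation:

```python
import numpy as np

def adam_step(theta, m, v, t, grad, alpha=0.001, beta1=0.9, beta2=0.999,
              eps=1e-8, lam=0.0, decoupled=False):
    """One step of Algorithm 2 (eta_t = 1 for brevity).
    decoupled=False: lam*theta is added to the gradient (L2, line 6).
    decoupled=True:  lam*theta is applied directly to the weights (line 12, AdamW)."""
    g = grad + (0.0 if decoupled else lam * theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction: beta1 to the power t
    v_hat = v / (1 - beta2 ** t)
    update = alpha * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        update = update + lam * theta     # decay kept outside the adaptive machinery
    return theta - update, m, v
```

Running both variants from the same state with the same non-zero λ produces different iterates, which is exactly the inequivalence stated in Proposition 2; with λ = 0 the two coincide.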
With L2 regularization, both types of gradients are normalized by their typical (summed) magnitudes, and therefore weights θ_i with a large typical gradient magnitude s_i are regularized by a smaller relative amount than other weights. In contrast, decoupled weight decay regularizes all weights at the same rate λ, effectively regularizing weights θ_i with large s_i more than standard L2 regularization does. We demonstrate this formally for a simple special case of an adaptive gradient algorithm with a fixed preconditioner:\nProposition 3 (Weight decay = scale-adjusted L2 reg for adaptive gradient algorithm with fixed preconditioner). Let O denote an algorithm with the same characteristics as in Proposition 2, using a fixed preconditioner matrix M_t = diag(s)^{−1} (with s_i > 0 for all i). Then, O with base learning rate α executes the same steps on batch loss functions f_t(θ) with weight decay λ as it executes without weight decay on the scale-adjusted regularized batch loss\nf^sreg_t(θ) = f_t(θ) + (λ'/2)‖θ ⊙ √s‖₂², (2)\nwhere ⊙ and √· denote element-wise multiplication and square root, respectively, and λ' = λ/α.\nWe note that this proposition does not directly apply to practical adaptive gradient algorithms, since these change the preconditioner matrix at every step. Nevertheless, it can still provide intuition about the equivalent loss function being optimized in each step: parameters θ_i with a large inverse preconditioner s_i (which in practice would be caused by historically large gradients in dimension i) are regularized relatively more than they would be with L2 regularization; specifically, the regularization is proportional to √s_i. In our first experiment, we compare Adam with L2 regularization to Adam with decoupled weight decay (AdamW), using three different learning rate schedules: a fixed learning rate, a drop-step schedule, and a cosine annealing schedule (Loshchilov & Hutter, 2016).
Since Adam already adapts its parameter-wise learning rates, it is not as common to use a learning rate multiplier schedule with it as it is with SGD; but, as our results show, such schedules can substantially improve Adam's performance, and we advocate not to overlook their use for adaptive gradient algorithms.\nFor each learning rate schedule and weight decay variant, we trained a 2x64d ResNet for 100 epochs, using different settings of the initial learning rate α and the weight decay factor λ. Figure 1 shows that decoupled weight decay outperforms L2 regularization for all learning rate schedules, with larger differences for better learning rate schedules. We also note that decoupled weight decay leads to a more separable hyperparameter search space, especially when a learning rate schedule, such as step-drop or cosine annealing, is applied. The figure also shows that cosine annealing clearly outperforms the other learning rate schedules; we thus used cosine annealing for the remainder of the experiments. In order to verify our hypothesis about the coupling of α and λ, in Figure 2 we compare the performance of L2 regularization vs. decoupled weight decay in SGD (SGD vs. SGDW, top row) and in Adam (Adam vs. AdamW, bottom row). In SGD (Figure 2, top left), L2 regularization is not decoupled from the learning rate (the common way, as described in Algorithm 1), and the figure clearly shows that the basin of best hyperparameter settings (depicted by color, with the top-10 hyperparameter settings marked by black circles) is not aligned with the x-axis or y-axis but lies on the diagonal. This suggests that the two hyperparameters are interdependent and need to be changed simultaneously, while changing only one of them might substantially worsen results. Consider, e.g., the setting at the top left black circle (α = 1/2, λ = 1/8 × 0.001); changing either α or λ by itself would worsen results, while changing both of them could still yield clear improvements.
We note that this coupling of initial learning rate and L2 regularization factor might have contributed to SGD's reputation of being very sensitive to its hyperparameter settings.\nIn contrast, the results for SGD with decoupled weight decay (SGDW) in Figure 2 (top right) show that weight decay and initial learning rate are decoupled. The proposed approach renders the two hyperparameters more separable: even if the learning rate is not yet well tuned (e.g., consider the value of 1/1024 in Figure 2, top right), leaving it fixed and only optimizing the weight decay factor would yield a good value (of 1/4 × 0.001). This is not the case for SGD with L2 regularization (see Figure 2, top left).\nThe results for Adam with L2 regularization are given in Figure 2 (bottom left). Adam's best hyperparameter settings performed clearly worse than SGD's best ones (compare Figure 2, top left). While both methods used L2 regularization, Adam did not benefit from it at all: its best results, obtained for non-zero L2 regularization factors, were comparable to the best ones obtained without L2 regularization, i.e., when λ = 0. Similarly to SGD, the shape of the hyperparameter landscape suggests that the two hyperparameters are coupled.\nIn contrast, the results for our new variant of Adam with decoupled weight decay (AdamW) in Figure 2 (bottom right) show that AdamW largely decouples weight decay and learning rate. The results for the best hyperparameter settings were substantially better than the best ones of Adam with L2 regularization and rivaled those of SGD and SGDW.\nIn summary, the results in Figure 2 support our hypothesis that the weight decay and learning rate hyperparameters can be decoupled, and that this in turn simplifies the problem of hyperparameter tuning in SGD and makes Adam's performance competitive with SGD with momentum.
While the previous experiment suggested that the basin of optimal hyperparameters of AdamW is broader and deeper than that of Adam, we next investigated the results for much longer runs of 1800 epochs to compare the generalization capabilities of AdamW and Adam.\nWe fixed the initial learning rate to 0.001, which represents both the default learning rate for Adam and the one which showed reasonably good results in our experiments. Figure 3 shows the results for 12 settings of the L2 regularization of Adam and 7 settings of the normalized weight decay of AdamW (the normalized weight decay represents a rescaling formally defined in Appendix B.1; it amounts to a multiplicative factor which depends on the number of batch passes). Interestingly, while the dynamics of the learning curves of Adam and AdamW often coincided for the first half of the training run, AdamW often led to lower training loss and test errors (see Figure 3, top left and top right, respectively). Importantly, the use of L2 regularization in Adam did not yield results as good as those of decoupled weight decay in AdamW.\nIn order to improve the anytime performance of SGDW and AdamW, we extended them with the warm restarts we introduced in Loshchilov & Hutter (2016), to obtain SGDWR and AdamWR, respectively (see Section B.2 in the Appendix). As Figure 4 shows, AdamWR greatly sped up AdamW on CIFAR-10 and ImageNet32x32, up to a factor of 10 (see the results at the first restart). For the default learning rate of 0.001, AdamW achieved a 15% relative improvement in test error compared to Adam both on CIFAR-10 (also see SuppFigure 5) and ImageNet32x32 (also see SuppFigure 6).\nAdamWR achieved the same improved results but with a much better anytime performance. These improvements closed most of the gap between Adam and SGDWR on CIFAR-10 and yielded comparable performance on ImageNet32x32.
O without weight decay has the following iterates on f^sreg_t(θ) = f_t(θ) + (λ'/2)‖θ ⊙ √s‖₂²:\nθ_{t+1} ← θ_t − α∇f^sreg_t(θ_t)/s (9)\n= θ_t − α∇f_t(θ_t)/s − αλ'(θ_t ⊙ s)/s (10)\n= θ_t − α∇f_t(θ_t)/s − αλ'θ_t, (11)\nwhere the division by s is element-wise. O with weight decay has the following iterates on f_t(θ):\nθ_{t+1} ← (1 − λ)θ_t − α∇f_t(θ_t)/s (12)\n= θ_t − α∇f_t(θ_t)/s − λθ_t. (13)\nThese iterates are identical since λ' = λ/α.\nHaving discussed decoupled weight decay for improving Adam's generalization, in this section we introduce two additional components to improve Adam's performance in practice. Our preliminary experiments showed that different weight decay factors are optimal for different computational budgets (defined in terms of the number of batch passes). Relatedly, Li et al. (2017) demonstrated that a smaller batch size (for the same total number of epochs) leads to the shrinking effect of weight decay being more pronounced. Here, we propose to reduce this dependence by normalizing the values of weight decay. Specifically, we replace the hyperparameter λ by a new (more robust) normalized weight decay hyperparameter λ_norm, and use this to set λ as λ = λ_norm √(b/(BT)), where b is the batch size, B is the total number of training points, and T is the total number of epochs. Thus, λ_norm can be interpreted as the weight decay used if only one batch pass is allowed. We emphasize that our choice of normalization is merely one possibility informed by few experiments; a more lasting conclusion we draw is that using some normalization can substantially improve results. We now apply cosine annealing and warm restarts to Adam, following our recent work (Loshchilov & Hutter, 2016). There, we proposed Stochastic Gradient Descent with Warm Restarts (SGDR) to improve the anytime performance of SGD by quickly cooling down the learning rate according to a cosine schedule and periodically increasing it.
SGDR has been successfully adopted to obtain new state-of-the-art results for popular image classification benchmarks (Huang et al., 2017; Gastaldi, 2017; Zoph et al., 2017), and we therefore already tried extending it to Adam shortly after proposing it. However, while our initial version of Adam with warm restarts had better anytime performance than Adam, it was not competitive with SGD with warm restarts, precisely because L2 regularization was not working as well as in SGD. Now, having fixed this issue by means of the original weight decay regularization (Section 2) and also having introduced normalized weight decay (Section B.1), our original work on cosine annealing and warm restarts directly carries over to Adam.\nIn the interest of keeping the presentation self-contained, we briefly describe how SGDR schedules the change of the effective learning rate in order to accelerate the training of DNNs. Here, we decouple the initial learning rate α and its multiplier η_t used to obtain the actual learning rate at iteration t (see, e.g., line 8 in Algorithm 1). In SGDR, we simulate a new warm-started run/restart of SGD once T_i epochs are performed, where i is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing η_t while the old value of θ_t is used as an initial solution. The amount by which η_t is increased controls to what extent the previously acquired information (e.g., momentum) is used. Within the i-th run, the value of η_t decays according to a cosine annealing schedule (Loshchilov & Hutter, 2016) for each batch as follows:\nη_t = η^(i)_min + 0.5 (η^(i)_max − η^(i)_min) (1 + cos(π T_cur / T_i)), (14)\nwhere η^(i)_min and η^(i)_max are ranges for the multiplier and T_cur accounts for how many epochs have been performed since the last restart. T_cur is updated at each batch iteration t and is thus not constrained to integer values.
Adjusting (e.g., decreasing) η^(i)_min and η^(i)_max at every i-th restart (see also Smith (2016)) could potentially improve performance, but we do not consider that option here because it would involve additional hyperparameters. For η^(i)_max = 1 and η^(i)_min = 0, one can simplify Eq. (14) to\nη_t = 0.5 + 0.5 cos(π T_cur / T_i). (15)\nIn order to achieve good anytime performance, one can start with an initially small T_i (e.g., from 1% to 10% of the expected total budget) and multiply it by a factor of T_mult (e.g., T_mult = 2) at every restart. The (i+1)-th restart is triggered when T_cur = T_i by setting T_cur to 0. An example setting of the schedule multiplier is given in Section C.\nOur proposed AdamWR algorithm represents AdamW (see Algorithm 2) with η_t following Eq. (15) and λ computed at each iteration using the normalized weight decay described in Section B.1. We note that normalized weight decay allowed us to use a constant parameter setting across the short and long runs performed within AdamWR and SGDWR (SGDW with warm restarts). An example schedule of the schedule multiplier η_t is given in SuppFigure 1 for T_{i=0} = 100 and T_mult = 2. After the initial 100 epochs the learning rate will reach 0 because η_{t=100} = 0. Then, since T_cur = T_{i=0}, we restart by resetting T_cur = 0, causing the multiplier η_t to be reset to 1 due to Eq. (15). This multiplier will then decrease again from 1 to 0, but now over the course of 200 epochs because T_{i=1} = T_{i=0} T_mult = 200. Solutions obtained right before the restarts, when η_t = 0 (e.g., at epoch indexes 100, 300, 700 and 1500 as shown in SuppFigure 1), are recommended by the optimizer as the solutions, with more recent solutions prioritized.
We thank Patryk Chrabaszcz for help with running experiments with ImageNet32x32; Matthias Feurer and Robin Schirrmeister for providing valuable feedback on this paper in several iterations; and Martin Völker, Robin Schirrmeister, and Tonio Ball for providing us with a comparison of AdamW and Adam on their EEG data. We also thank the following members of the deep learning community for implementing decoupled weight decay in various deep learning libraries:\n• Robin Schirrmeister and Kashif Rasul for their implementations in PyTorch (see https://github.com/pytorch/pytorch/pull/4429)\n• Phil Jund for his implementation in TensorFlow described at https://www.tensorflow.org/api_docs/python/tf/contrib/opt/DecoupledWeightDecayExtension\n• Sylvain Gugger, Anand Saha, Jeremy Howard and other members of fast.ai for their implementation available at https://github.com/sgugger/Adam-experiments\n• Guillaume Lambard for his implementation in Keras available at https://github.com/GLambard/AdamW_Keras\n• Yagami Lin for his implementation in Caffe available at https://github.com/Yagami123/Caffe-AdamW-AdamWR\nThis work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no. 716721, by the German Research Foundation (DFG) under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086) and through grant no. INST 37/935-1 FUGG, and by the German state of Baden-Württemberg through bwHPC.\nA FORMAL ANALYSIS OF WEIGHT DECAY VS L2 REGULARIZATION\nThe proof of this well-known fact is straightforward. SGD without weight decay has the following iterates on f^reg_t(θ) = f_t(θ) + (λ'/2)‖θ‖₂²:\nθ_{t+1} ← θ_t − α∇f^reg_t(θ_t) = θ_t − α∇f_t(θ_t) − αλ'θ_t. (5)\nSGD with weight decay has the following iterates on f_t(θ):\nθ_{t+1} ← (1 − λ)θ_t − α∇f_t(θ_t) = θ_t − α∇f_t(θ_t) − λθ_t.\nThese iterates are identical since λ' = λ/α.
Similarly to the proof of Proposition 1, the iterates of O without weight decay on f^reg_t(θ) = f_t(θ) + (λ'/2)‖θ‖₂² and of O with weight decay λ on f_t are, respectively:\nθ_{t+1} ← θ_t − αM_t∇f_t(θ_t) − αλ'M_tθ_t and θ_{t+1} ← θ_t − αM_t∇f_t(θ_t) − λθ_t.\nThe equality of these iterates for all θ_t would imply λθ_t = αλ'M_tθ_t. This can only hold for all θ_t if M_t = kI, with k ∈ R, which is not the case for O. Therefore, no L2 regularizer (λ'/2)‖θ‖₂² exists that makes the iterates equivalent.\nSuppFigure 2 shows the results of standard Adam for a 4 by 4 logarithmic grid of hyperparameter settings (the coarseness of the grid is due to the high computational expense of runs for 1800 epochs). Even after taking the low resolution of the grid into account, the results appear to be at best comparable to the ones obtained with AdamW with 18 times fewer epochs and a smaller network (see SuppFigure 3, top row, middle). These results are not very surprising given Figure 1 in the main paper (which demonstrates both the improvements possible by using some learning rate schedule, such as cosine annealing, and the effectiveness of decoupled weight decay).\nOur experimental results with Adam and SGD suggest that the total runtime in terms of the number of epochs affects the basin of optimal hyperparameters (see SuppFigure 3). More specifically, the greater the total number of epochs, the smaller the values of the weight decay should be. SuppFigure 4 shows that our remedy for this problem, the normalized weight decay defined in Section B.1, simplifies hyperparameter selection because the optimal values observed for short runs are similar to the ones for much longer runs. We used our initial experiments on CIFAR-10 to suggest the square root normalization proposed in Section B.1 and double-checked that this is not a coincidence on the ImageNet32x32 dataset (Chrabaszcz et al., 2017), a downsampled version of the original ImageNet dataset with 1.2 million 32×32 pixel images, where an epoch is 24 times longer than on CIFAR-10.
This experiment also supported the square root scaling: the best values of the normalized weight decay observed on CIFAR-10 represented nearly optimal values for ImageNet32x32 (see SuppFigure 3). In contrast, had we used the same raw weight decay values λ for ImageNet32x32 as for CIFAR-10 and for the same number of epochs, without the proposed normalization, λ would have been roughly 5 times too large for ImageNet32x32, leading to much worse performance. The optimal normalized weight decay values were also very similar (e.g., λ_norm = 0.025 and λ_norm = 0.05) across SGDW and AdamW. These results clearly show that normalizing weight decay can substantially improve performance; while square root scaling performed very well in our experiments, we emphasize that these experiments were not very comprehensive and that even better scaling rules are likely to exist. SuppFigure 4 is the equivalent of Figure 3 in the main paper, but for ImageNet32x32 instead of CIFAR-10. The qualitative results are identical: weight decay leads to better training loss (cross-entropy) than L2 regularization, and to an even greater improvement of test error. SuppFigure 5 and SuppFigure 6 are the equivalents of Figure 4 in the main paper, but supplemented with training loss curves in the bottom row. The results show that Adam and its variants with decoupled weight decay converge faster (in terms of training loss) on CIFAR-10 than the corresponding SGD variants (the difference for ImageNet32x32 is small). As discussed in the main paper, when the same values of training loss are considered, AdamW achieves better test error than Adam. Interestingly, SuppFigure 5 and SuppFigure 6 show that the restart variants AdamWR and SGDWR also demonstrate better generalization than AdamW and SGDW, respectively."
}, { "title": "Simpler is better: Few-shot semantic segmentation with classifier weight transformer", "year": 2021.0, "authors": "Zhihe Lu; Sen He; Xiatian Zhu; Li Zhang; Yi-Zhe Song; Tao Xiang", "arxiv_di": "2108.03032", "Introduction": "Semantic segmentation has achieved remarkable progress in the past five years thanks to the availability of large-scale labeled datasets and advancements in deep learning algorithms [5,7,21]. Nonetheless, relying on many training images with exhaustive pixel-level annotation for every single class, existing methods scale poorly to new classes. Indeed, the high annotation cost has hindered the general applicability of semantic segmentation models. For instance, creating the COCO dataset [17] took over 70,000 worker hours even for only 80 common object categories. Inspired by the significant efforts in few-shot image classification [27,11,29,37], few-shot learning has been introduced into semantic segmentation recently [25,22,3,34,36,40,41]. A few-shot segmentation method eliminates the need to label a large set of training images. This is typically achieved by meta-learning, which enables the model to adapt to a new class represented by a support set consisting of as few as a single image.\nThe key question for a few-shot segmentation method is how to effectively adapt a complex image segmentation model to a new class represented by a small support set composed of only a few images. More specifically, most recent segmentation models are deep CNNs with complex architectures consisting of three parts: a CNN encoder, a CNN decoder, and a classifier. Among the three, the classifier, which labels each pixel as either a foreground class or background, is much simpler than the other two: in most cases, it is a 1 × 1 convolutional layer whose number of parameters is merely double the pixel feature dimension.\nAs shown in Figure 1(a), existing few-shot segmentation models [25,36,40,41] aim to meta-learn all three parts.
Concretely, existing few-shot segmentation methods mostly adopt an episodic training strategy. In each training episode, one training class is sampled with a small support set and query images to imitate the setting used at test time. Once learned, given a new class with a fully annotated support set and an unannotated query image, all three parts are expected to adapt to the new class so that foreground and background pixels can be separated accurately in the query image. Note that both the encoder and decoder are deep CNNs, e.g., VGG-16 [26] (15M parameters without fully-connected (FC) layers) or ResNet-50/101 [15] (24M/43M parameters without FC layers) for the encoder, and ASPP [6] (3.4M parameters) for the decoder. Effectively adapting the full model, especially the encoder and decoder, is thus a daunting task, hindering the performance of the existing models.\nTo overcome the above fundamental limitations of existing few-shot segmentation methods, we conjecture that the key is to simplify the meta-learning task so that few-shot learning becomes more tractable and hence more effective. To this end, we propose to focus meta-learning on the simplest and last stage of the three-part pipeline, the classifier, whilst leaving the training of the encoder and decoder to a pre-training stage (see Figure 1(b)). Once trained, only the classifier needs to be adapted to the new class with the rest of the model frozen, drastically reducing the complexity of model adaptation. Our assumption is that if we pre-train an off-the-shelf segmentation model over a set of diverse training classes, the encoder and decoder can already capture a rich set of discriminative segmentation features suitable for not only the training classes but also the unseen test classes, i.e., they are class-agnostic. We then need to focus on adapting the classifier part alone.
Indeed, we found that if we simply do the pre-training and use the support set of a new class to train a classifier (i.e., without meta-learning), the result is already comparable to that obtained by the state-of-the-art methods (see Sec. 4). However, this naive baseline cannot adapt to each query image, which is critical for our problem due to the large intra-class variation (Figure 2). Without sufficient training samples in the support set to accommodate this intra-class variation, a few-shot segmentation model must adapt to each query image as well. We hence further propose a novel meta-learning framework that employs a Classifier Weight Transformer (CWT) to dynamically adapt the support-set-trained classifier's weights to each query image in an inductive way, i.e., adaptation occurs independently on each query image.\nWe make the following contributions in this work: (1) We propose a novel model training paradigm for few-shot semantic segmentation. Instead of meta-learning the whole, complex segmentation model, we focus on the simplest classifier part to make new-class adaptation more tractable. (2) We introduce a novel meta-learning algorithm that leverages a Classifier Weight Transformer (CWT) for dynamically adapting the classifier weights to every query sample. (3) Extensive experiments with two popular backbones (ResNet-50 and ResNet-101) show that the proposed method yields new state-of-the-art performance, often surpassing existing alternatives by a large margin, especially in the 5-shot case. Further, under a more challenging yet practical cross-domain setting, the margin becomes even larger.", "Methodology": "A few-shot segmentation model generally consists of three modules: an encoder, a decoder, and a classifier. For learning to adapt to a new class, existing methods typically meta-learn the entire model after the encoder is pre-trained on ImageNet [23]. During the episodic training stage, all three parts of the model are meta-learned.
Once trained, given a new class with annotated support set images and query images for test, the model is expected to adapt all three parts to the new class. With only a few annotated support set images and three complex, interconnected parts to update, this adaptation is often sub-optimal.\nTo overcome these limitations we propose a simple yet effective training paradigm in two stages. In the first stage, we pre-train the encoder and decoder for a stronger feature representation with supervised learning. In the second stage, we meta-train the classifier only, keeping the encoder and decoder frozen. This is because we consider the pre-trained feature representation parts (i.e., the encoder and decoder) to be sufficiently generalizable to any unseen classes; the key to few-shot semantic segmentation thus lies in adapting the binary classifier (separating foreground and background pixels) rather than the entire model from few-shot samples. The overview of our method is depicted in Figure 3. In all existing few-shot semantic segmentation models [25,3], one of the key objectives is to learn the feature representation parts (i.e., the encoder and decoder) through meta-learning, so that they can generalize to any unseen classes. For example, the state-of-the-art RPMMs model [36] was directly meta-trained with the encoder pre-trained on ImageNet. However, in most recent few-shot learning methods for static image classification [37,38], pre-training the feature network on the whole meta-training set before episodic training starts has become a standard step. In this work, we also adopt such a pre-training step and show in our experiments that this step is vital (see Sec. 4.5).\nSpecifically, we use PSPNet [42] as our backbone segmentation model. It is then pre-trained on the whole training set D_train with the cross-entropy loss. Training details are provided in Sec. 4.2.", "Dataset": "In our experiments, two standard few-shot semantic segmentation datasets are used.
COCO-20^i is currently the largest and most challenging dataset for few-shot semantic segmentation. It provides train/val sets of 82,081/40,137 images from 80 classes, built from the popular COCO [17] benchmark. Following [22] we divide the 80 classes into 4 splits i ∈ {0, 1, 2, 3}, each of which contains 20 classes. In a single experiment, three class splits are selected as the base classes for training whilst the remaining split is used for testing. A total of four experiments are therefore conducted. PASCAL-5^i is the extension of PASCAL VOC 2012 [10] with extra annotations from the SDS dataset [14]. The train and val sets contain 5,953 and 1,449 images, respectively. There are 20 categories in both the training and validation sets. Following [25] we make 4 class splits, each with 5 classes, and design the experiments under a protocol similar to COCO-20^i.", "Conclusion": "We have presented a novel few-shot segmentation learning method based on meta-learning. Our method differs significantly from existing ones in that we only meta-learn the classifier part of a complex segmentation model whilst freezing the pre-trained encoder and decoder parts. To address the intra-class variation issue, we further propose a Classifier Weight Transformer (CWT) for adapting the classifier's weights, which are first initialized on a support set, to every query image. Extensive experiments verify the performance superiority of our proposed method over the existing state-of-the-art few-shot segmentation methods on two standard benchmarks. Besides, we investigate a more challenging and realistic setting - cross-domain few-shot segmentation - and show the advantages of the proposed method.", "Experiment_and_Results": "Pre-training. To obtain a strong encoder and decoder, we adopt standard supervised learning for semantic segmentation on base classes, i.e., 16/61 base classes (including the background class) in each split of PASCAL-5^i/COCO-20^i.
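The fold construction described above (4 splits of 20 classes for COCO-20^i, 4 splits of 5 classes for PASCAL-5^i; three splits for training, one held out for testing) can be sketched as below. The contiguous class grouping is illustrative only; the exact class-to-split assignment follows the protocols of [22] and [25].

```python
def make_splits(num_classes, num_splits=4):
    """Partition class ids 0..num_classes-1 into equally sized splits.

    Illustrative contiguous grouping; the exact class-to-split assignment
    in COCO-20^i / PASCAL-5^i follows the respective published protocols.
    """
    per_split = num_classes // num_splits
    return [list(range(i * per_split, (i + 1) * per_split))
            for i in range(num_splits)]

def train_test_classes(splits, test_split):
    """Hold out one split as novel test classes; train on the rest."""
    test = splits[test_split]
    train = [c for i, s in enumerate(splits) if i != test_split for c in s]
    return train, test
```

For example, `train_test_classes(make_splits(80), 0)` yields 60 base classes for training and 20 novel classes for testing, and the four experiments per dataset simply iterate `test_split` over 0-3.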
We choose PSPNet [42] as our baseline segmentation model. For fair comparison with existing methods, we test two common backbones, ResNet-50 and ResNet-101 [15]. We also present results with VGG-16 in the supplementary. We train the model for 100 epochs on PASCAL-5^i and 20 epochs on COCO-20^i. We set the batch size to 12 and the image size to 417. The objective function is the cross-entropy loss. We use the SGD optimizer with momentum 0.9, weight decay 1e-4, an initial learning rate of 2.5e-3, and a cosine learning rate scheduler. We set the label smoothing parameter to 0.1. For data augmentation, we only use random horizontal flipping. Episodic Training. After pre-training, we freeze the encoder and decoder in the subsequent episodic training. In this stage, we form the training data of base classes into episodes, each including a support set and a query set from a randomly selected class. We first train a new classifier for the selected class for 200 iterations on the support set, using the SGD optimizer and the cross-entropy loss function with a learning rate of 1e-1. Next, we train the proposed CWT once at a learning rate of 1e-3. We run 20 epochs in total for the above inner- and outer-loop optimization. Our transformer has a shared linear layer of dimension 512 × 2048 for projecting the inputs to a latent space, a 4-head attention module, and a fully connected layer recovering the dimension to 512. In addition, a dropout layer for stable training and layer normalization are used. It outputs query-adaptive classifier weights which are used to predict every pixel of the query images. In the meta-learning setup, the classifier resides in the inner loop whilst the transformer is in the outer loop. Evaluation Metrics. We use the class mean intersection over union (mIoU) as the evaluation metric. Specifically, mIoU is computed by averaging the IoU scores of each class. Following [20], we report all results averaged across 5 trials.
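The mIoU metric just described (per-class IoU, then averaged across classes) can be sketched with numpy; the helper names are ours.

```python
import numpy as np

def binary_iou(pred, gt):
    """IoU between two binary masks (foreground = 1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return inter / union

def mean_iou(per_class_ious):
    """mIoU: average of the per-class IoU scores."""
    return float(np.mean(per_class_ious))
```

In the few-shot protocol, the per-class IoU is accumulated over all test episodes of that class before averaging.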
For each trial, we test 1,000 episodes. We report the results for every single split and their average. In Table 1 we compare the segmentation results of our method and the latest state-of-the-art models on COCO-20^i. We consider the 1-shot and 5-shot cases, and two backbone networks (ResNet-50 and ResNet-101) for a more extensive comparison. Overall, the performance advantage of our method over all competitors is significant. For example, in the 1-shot case we obtain a 2.3% mIoU gain over the best competitor with the ResNet-50 backbone. Even larger gains (5.1%/4.6% with ResNet-50/ResNet-101) are observed in the 5-shot case. Similar to PPNet, our method benefits consistently from support set expansion. In contrast, some existing methods (e.g., RPMMs, FWB and PFENet) are clearly inferior in leveraging extra labeled samples. More importantly, all the compared SOTA baselines attempt to adapt all three parts of a segmentation model to the new class and each query image, whereas our method is much simpler, focusing on the linear classifier only. The superior results achieved by our method thus verify our assumption that the pre-trained encoder and decoder are generalizable, and that meta-learning and adaptation are only necessary for the classifier. Furthermore, we investigate the inference speed under a challenging yet practical scenario, i.e., 1,000 query samples per task in the 1-shot case, as a trained model is expected to serve many images. Our model runs at 21.7 frames per second (FPS) vs. 18.9 FPS achieved by RPMMs [36] with ResNet-50. In Table 2 we show the comparative results on PASCAL-5^i. On this less challenging dataset with a smaller number of object classes, we have similar observations as on COCO-20^i. Our proposed method again achieves the best overall performance. It is also noted that our model's advantage over the competitors is less pronounced compared to the COCO results; our method even performs worse in the 1-shot case.
A plausible reason is that with far fewer training classes and images on PASCAL-5^i, our assumption that the pre-trained encoder/decoder is class-agnostic no longer fully holds. Existing methods' strategy of adapting the encoder and decoder to the new class thus brings some benefit, narrowing the gap to our method. Beyond the standard single-domain few-shot segmentation setting, we further introduce a more challenging and more realistic cross-domain setting. In this new setting, we aim to test the generalization of a pre-trained model across previously unseen domains (datasets) with different data distributions. This setting is more difficult yet more practical - in real-world applications, new segmentation tasks often involve both new classes and new domains.\nIn this experiment, we train a few-shot segmentation model on COCO-20^i/PASCAL-5^i and then directly apply it to PASCAL-5^i/COCO-20^i without any domain-specific model re-training or fine-tuning. This transfer design is a good cross-domain test, as the two datasets present a clear domain shift in terms of instance size, instance number and categories per image [17]. Concretely, from COCO to PASCAL, we use the original COCO-20^i training class splits for model training. During testing on PASCAL-5^i, we take all the classes in the validation set as a whole and remove the training classes seen in each split of COCO-20^i to preserve the few-shot learning nature. For the PASCAL-to-COCO case, the only difference from the original PASCAL-5^i experiments is that the testing splits are from COCO, which contains all the classes of PASCAL.\nFor comparative evaluation, we select the latest state-of-the-art method RPMMs [36]. We use the models released by the RPMMs authors to achieve its optimal results. We adopt the ResNet-50 backbone. It is observed in Table 3 that our method is significantly superior on almost all the splits.
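The class filtering for the cross-domain test described above (drop every target-domain class already seen during source-domain training) amounts to a set difference; the function name is ours and classes are matched by name purely for illustration.

```python
def cross_domain_test_classes(target_classes, source_train_classes):
    """Keep only target-domain classes never seen during source training,
    preserving the few-shot nature of the cross-domain evaluation."""
    seen = set(source_train_classes)
    return sorted(c for c in set(target_classes) if c not in seen)
```

For COCO-to-PASCAL, `target_classes` would be all 20 PASCAL validation classes and `source_train_classes` the 60 base classes of the current COCO-20^i split.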
For the COCO-to-PASCAL setting, on average, our model yields gains of 9.9% and 12.7% in the 1-shot and 5-shot cases, respectively. From PASCAL to COCO, the improvements are 2.8%/4.8% for 1-shot/5-shot. This suggests that our training method is more effective than conventional whole-pipeline meta-learning for solving the domain shift problem, owing to a stronger feature representation and a simpler, more effective learning-to-learn capability focused on classifier adaptation.", "Extra": "Few-shot learning (FSL) aims to learn a model for a novel task with only a handful of labeled samples. The majority of existing FSL works adopt the meta-learning paradigm [24] and are mostly focused on image classification [33,11,27,2,29,12,19,28,13]. Which part of a classification model is meta-learned varies across works, including the feature representation [27] and distance metrics [29]. Beyond image classification, this learning paradigm can be applied to many other computer vision problems, including semantic segmentation as investigated in this work. Recently, meta-learning has been introduced into semantic segmentation to address the same few-shot learning challenge [25]. A semantic segmentation system generally consists of three parts: an encoder, a decoder and a classifier. To incorporate meta-learning, a common strategy in existing works involves two steps: first relating the support-set and query-set image features from the encoder, and then updating all three parts by minimizing a loss measuring the discrepancy between the prediction and the ground truth of query samples. In terms of how to relate the support and query images, there exist two main approaches: prototypical learning [8,34,20] and feature concatenation [3,1,40,36]. PL [8] is the first work introducing prototypical learning into few-shot segmentation, predicting foreground/background classes by similarity comparison to prototypes.
PANet [34] further introduced a prototype alignment regularization for bidirectional prototypical learning. Recently, PPNet [20] emphasized the importance of fine-grained features and proposed part-aware prototypes. By contrast, feature concatenation based methods first combine prototypes and query features, and then utilize a segmentation head, e.g., ASPP [6] or PPM [42], for the final prediction. Despite the differences in model design, existing methods share a common characteristic: they all attempt to update the whole complex model with just a few samples during meta-learning. This may cause optimization difficulties, as mentioned before. To overcome this issue, we propose to only meta-learn the classifier during meta-training.\nTo address the intra-class variation challenge, we further introduce a Classifier Weight Transformer (CWT) to adapt the support-set trained classifier to every query image. Our CWT is based on the self-attention mechanism [32]. Motivated by its great success in NLP, researchers have started to employ self-attention for vision tasks such as object detection [16,4] and image classification [35,9]. The closest work to ours is FEAT [37], which leverages a prototype Transformer to calibrate the relationships of different classes for few-shot image classification. In this work, however, we explore the Transformer differently, for tackling the intra-class variation problem in few-shot segmentation. We adopt the standard few-shot semantic segmentation setting [25,3]. Given a meta-test dataset D_test, we sample a target task with K-shot labeled images (i.e., the support set) and several test images (i.e., the query set) from one random class, and test a learned segmentation model θ. The objective is to segment all the objects of the new class in the query images. To train the model θ such that it performs well on those sampled segmentation tasks, episodic training is adopted to meta-learn the model.
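The episodic sampling just described (pick one class, then draw a K-shot support set and a disjoint query set from it) can be sketched as below; `images_by_class` and the function name are illustrative.

```python
import random

def sample_episode(images_by_class, k_shot=1, n_query=1, rng=None):
    """Sample one episode: a class c, a K-shot support set and a
    disjoint query set drawn without replacement from class c."""
    rng = rng or random.Random()
    c = rng.choice(sorted(images_by_class))
    picked = rng.sample(images_by_class[c], k_shot + n_query)
    return c, picked[:k_shot], picked[k_shot:]
```

Sampling support and query jointly without replacement guarantees S ∩ Q = ∅, matching the disjointness requirement of the episode definition.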
Concretely, a large number of such tasks are randomly sampled from a meta-training set D_train, and then used to train the model in an episodic manner.\nIn each episode, we start by sampling one class c from D_train at random, from which labeled training samples are then randomly drawn to create a support set S and a query set Q with K and Q samples, respectively. Formally, the support and query sets are defined as:\nS = {(x_i, M_i)}_{i=1}^{K}, Q = {(x_j, M_j)}_{j=1}^{Q}, (1)\nwhere M_{i/j} denotes the ground-truth mask. Note that S and Q are sample-wise disjoint (S ∩ Q = ∅) and Q = 1 in the current standard setting.\nWe conduct episodic training in a two-loop manner [27]: the support set is first used in the inner loop to construct a classifier for the sampled class, and the query set is then utilized in the outer loop to adapt the classifier with a Classifier Weight Transformer (CWT).\nThe key is to obtain a learner able to recognize any novel class with only a few labeled samples. Compared with the standard segmentation setup, this is a more challenging task due to the lack of sufficient supervision for new target classes. D_train and D_test contain base classes C_base and novel classes C_novel, which are mutually disjoint, i.e., C_base ∩ C_novel = ∅. Unlike the sparsely annotated meta-test classes, each meta-training class comes with abundant labeled training data so that sufficient episodes for model meta-training can be formed. After the pre-training stage, the encoder and decoder can simply be frozen for any few-shot task. Since any new task involves a previously unseen class, the classifier has to be learned. To this end, an intuitive and straightforward method is to optimize the classifier weights w with the labeled support set S. With the pre-trained encoder and decoder, we first extract a feature vector f ∈ R^d for every support-set image pixel, where d denotes the feature dimension.
Same as in pre-training, we then adopt the cross-entropy loss function to train the classifier weights w.\nAs the feature representation is considered class-generic, it can be used directly with the newly trained classifier at meta-test time without going through the episodic training process. In particular, after seeing sufficiently diverse classes, it works well when the task is to separate foreground and background pixels. Indeed, our experiments show that this turns out to be a surprisingly strong baseline that even outperforms the state-of-the-art PPNet [20] (see Tables 1 and 5). This verifies for the first time that good feature representation (or feature reuse) is similarly critical for few-shot semantic segmentation modeling - a finding that has been reported in recent static-image few-shot learning works [30,18].\nNonetheless, this baseline is still limited for few-shot segmentation since it cannot adapt to every query image, in which the target object may appear drastically dissimilar to the ones in the support-set images. To that end, we next introduce our meta-learning algorithm that learns a Classifier Weight Transformer (CWT) for query object adaptation.\nDuring episodic training, we aim to learn via our CWT how to adapt the classifier weights w ∈ R^{2×d} to a sampled class in each episode. Formally, the input to our transformer is in the triplet form of (Query, Key, Value). We start by extracting the features F ∈ R^{n×d} for all n pixels of the query image using the encoder and decoder. To learn discriminative query-conditioned information, the input is designed as:\nQuery = wW_q, Key = FW_k, Value = FW_v, (2)\nwhere W_q/W_k/W_v ∈ R^{d×d_a} are learnable parameters (each represented by a fully connected layer) that project the classifier weights and query features into a d_a-dimensional latent space.
To adapt the classifier weights to the query image, we form a classifier-to-query-image attention mechanism as:\nw^* = w + ψ(softmax(wW_q (FW_k)^T / √d_a)(FW_v)), (3)\nwhere softmax(·) is a row-wise softmax function for attention normalization and ψ(·) is a linear layer with input dimension d_a and output dimension d. Residual learning is adopted for more stable model convergence.\nAs written in Eq. (3), pairwise similarity defines the attention scores between the classifier weights and query image pixels, and is further used for weighted aggregation in the Value space. This adapts the classifier weights to the query image. The intuition is that the pairs involving a query image pixel from the new class often enjoy higher similarity than those with background classes, except for a few outlier instances; as a result, this attentive learning reinforces the desired proximity and adjusts the classifier weights conditioned on the query. Consequently, the intra-class variation can be mitigated.\nLearning objective. Once the classifier weights w^* are adapted to a query sample, we apply them for segmentation prediction on the corresponding features F. To train our transformer, a cross-entropy loss is derived from the query-image ground truth and the prediction as the meta-training objective.\nUnseen class adaptation. During meta-testing, given any new task/class, the proposed transformer can directly condition the classifier weights, first optimized on the support set, on any given query image. Note that both support-set and query images are used as input to our transformer to update the foreground/background classifier for the new unseen foreground class. However, the transformer parameters are fixed after meta-training and the adaptation is done for each query image independently, i.e., in an inductive manner as in most existing few-shot segmentation works. We conduct a set of ablation experiments on the more challenging dataset COCO-20^i.
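Eq. (3) can be written out in a few lines of numpy. Shapes follow the text (w ∈ R^{2×d}, F ∈ R^{n×d}, projections in R^{d×d_a}); the random matrices in the usage below merely stand in for the learned layers, and the single-head form omits the multi-head split, dropout and layer normalization used in the actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cwt_adapt(w, F, Wq, Wk, Wv, Wo):
    """Classifier-to-query-image attention of Eq. (3).

    w:  (2, d)  support-set trained classifier weights
    F:  (n, d)  query pixel features
    Wq, Wk, Wv: (d, da) projections; Wo: (da, d) plays the role of psi
    """
    d_a = Wq.shape[1]
    attn = softmax((w @ Wq) @ (F @ Wk).T / np.sqrt(d_a), axis=-1)  # (2, n)
    return w + (attn @ (F @ Wv)) @ Wo  # residual update -> adapted w*
```

The residual form means an untrained (zero) output projection leaves w unchanged, which is what makes convergence stable.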
We take ResNet-50 as the backbone and test the 1-shot case. Our method can be decomposed into two phases: model pre-training and classifier adaptation. Specifically, we pre-train the encoder and decoder (i.e., the feature representation components) on the whole training set. To validate its effectiveness, we create a baseline of pre-training only, without any meta-learning. That is, after pre-training, we go straight into testing and train a classifier on the support set for every meta-test task whilst freezing the encoder and decoder. The results in Table 5 reveal that this turns out to be a very strong baseline. For example, it even outperforms the latest model PPNet [20] by 2.9% (Table 1). This verifies the importance of model pre-training for obtaining a strong class-agnostic feature representation, and the efficacy of focusing meta-training on the classifier alone. Both simplify model optimization and ultimately improve performance. By adapting the classifier to every query sample as in our full CWT model, the performance can be further improved significantly. This validates the design of our CWT for solving the intra-class variation challenge. As shown in Figure 4, the baseline fails to detect the airplane (1st row) and the person (2nd row) due to the lack of query adaptation. These failure cases are rectified once query-image adaptation is in place using the proposed CWT.\nRecall that our CWT is designed primarily to address the intra-class variation problem by adapting the classifier weights initialized on the support set to each query image. To further validate this module, we contrast it with another transformer design that does not involve the query image whilst still retaining the attention learning ability.\nTable 6: 1-shot mIoU per split (s-0/s-1/s-2/s-3/Mean) when the CWT attends to different features. Attending to the support image: 20.1/18.9/21.9/11.9/18.2. Attending to the query image: 32.2/36.0/31.6/31.6/32.9.
Concretely, we set the support features as the Key and Value inputs of the transformer, instead of the query features. This design ignores the intra-class variation issue. The results in Table 6 show that, without conditioning on the query image, the segmentation performance degrades drastically. This indicates the essential importance of resolving the intra-class variation problem and the clear effectiveness of the proposed design. Beyond the numerical evaluations above, we further study failure cases in Figure 5. These can give us some insights and potentially inspiring directions for future investigation. Overall, we observe that our model fails when there are extreme appearance changes of the target instances between the support and query images. For instance, the support image presents only the hands of a person whilst the query image covers the whole body (see the 2nd and 3rd rows). In contrast, the failures in the 1st and 4th rows are caused mainly by extreme viewpoint differences. Dealing with such appearance variation requires better modeling of changes in viewpoint, pose and occlusion." }, { "title": "Hypercorrelation squeeze for few-shot segmentation", "year": 2021.0, "authors": "Juhong Min; Dahyun Kang; Minsu Cho", "arxiv_di": "2104.01538", "Introduction": "The advent of deep convolutional neural networks [17,20,64] has promoted dramatic advances in many computer vision tasks including object tracking [28,29,45], visual correspondence [22,44,48], and semantic segmentation [7,47,62], to name a few. Despite the effectiveness of deep networks, their demand for a heavy amount of annotated examples from large-scale datasets [9,11,35] still remains a fundamental limitation, since data labeling requires substantial human effort, especially for dense prediction tasks, e.g., semantic segmentation.
To cope with this challenge, there have been various attempts at semi- and weakly-supervised segmentation [6,26,39,66,72,77,88], which have effectively alleviated the data-hunger issue. However, given only a few annotated training examples, the poor generalization ability of deep networks remains the primary concern that many few-shot segmentation methods [10,12,13,19,33,36,37,46,54,61,63,69,70,74,75,80,83,86,87,89] struggle to address. In contrast, the human visual system easily generalizes to the appearance of new objects given extremely limited supervision. The crux of such intelligence lies in the ability to find reliable correspondences across different instances of the same class. Recent work on semantic correspondence shows that leveraging dense intermediate features [38,42,44] and processing correlation tensors with high-dimensional convolutions [30,58,71] are significantly effective in establishing accurate correspondences. However, while recent few-shot segmentation research has begun active exploration in the direction of correlation learning, most methods [36,37,46,65,73,75,80] neither exploit diverse levels of feature representations from early to late layers of a CNN nor construct pair-wise feature correlations to capture fine-grained correlation patterns. There have been some attempts [74,86] at utilizing dense correlations with multi-level features, but they are still limited in the sense that they simply employ the dense correlations for graph attention, using only a small fraction of the intermediate conv layers.\nIn this work we combine two of the most influential techniques in recent research on visual correspondence, multi-level features and 4D convolutions, and design a novel framework, dubbed Hypercorrelation Squeeze Networks (HSNet), for the task of few-shot semantic segmentation. As illustrated in Fig.
1, our network exploits diverse geometric/semantic feature representations from many different intermediate CNN layers to construct a collection of 4D correlation tensors, i.e., hypercorrelations, which represent a rich set of correspondences in multiple visual aspects. Following the work of FPN [34], we adopt a pyramidal design to capture both high-level semantic and low-level geometric cues for precise mask prediction in a coarse-to-fine manner using deeply stacked 4D conv layers. To reduce the computational burden caused by such heavy use of high-dimensional convolutions, we devise an efficient 4D kernel via reasonable weight sparsification, which enables real-time inference while being more effective and light-weight than existing ones. The improvements on the standard few-shot segmentation benchmarks PASCAL-5^i [61], COCO-20^i [35], and FSS-1000 [33] verify the efficacy of the proposed method.", "Related_Work": "Semantic segmentation. The goal of semantic segmentation is to classify each pixel of an image into one of the predefined object categories. Prevalent segmentation approaches [5,7,47,49,52,62,76] typically employ an encoder-decoder structure: the encoder aggregates features along deep convolutional pathways and provides a high-dimensional feature map at low resolution, and the corresponding decoder takes this output to predict the segmentation mask by reversing the process [49]. Although these methods clearly show the effectiveness of the encoder-decoder architecture for semantic segmentation, offering useful insights to our study, they still suffer from an apparent disadvantage of the data-driven nature of neural networks: lack of generalizability under insufficient training data. Few-shot learning. To resolve the generalization problem, many recent approaches to image classification have made various attempts at training deep networks with a few annotated examples [1,18,25,31,50,53,59,65,67,73,79,84,85]. Vinyals et al.
[73] propose matching networks for one-shot learning; the method utilizes a special kind of mini-batch called an episode to match training and testing environments, facilitating better generalization on novel classes. Snell et al. [65] introduce prototypical networks, which compute distances between representative embeddings, i.e., prototypes, for few-shot classification. With the growing interest in few-shot learning in the classification domain, the problem of few-shot segmentation has attracted a great deal of attention as well. Shaban et al. [61] propose one-shot semantic segmentation networks which (meta-)learn to generate the parameters of an FCN [62]. Inspired by prototypical networks [65], utilizing prototype representations to guide mask prediction in a query image has become a popular paradigm in the few-shot segmentation literature [10,36,37,46,63,75,80,87,89].\nWitnessing the limitations of prototypical approaches, e.g., the loss of spatial structure due to masked average pooling [89], the works of [74,86] build pair-wise feature correlations, e.g., graph attention, to retain the spatial structure of the images for fine-grained mask prediction. Note that both prototypical and graph-based methods fundamentally focus on learning to find reliable correspondences between support and query images for accurate mask prediction. In this work, we advance this idea and focus on learning to analyze correspondences using adequately designed learnable layers, e.g., 4D convolutions [58], for effective semantic segmentation. Learning visual correspondences. The task of visual correspondence aims to find reliable correspondences under challenging degrees of variation [3,14,15,43,60]. Many methods [21,22,30,38,42,44,56,58,81] are typically built upon convolutional features pre-trained on a classification task [9], showing that these serve as good transferable representations.
Recent approaches to semantic correspondence [21,38,42,44] show that efficiently exploiting different levels of convolutional features distributed over all intermediate layers clearly benefits matching accuracy. In the wide-baseline matching literature, a trending choice is to employ 4D convolutions [30,41,57,58,71] on dense feature matches to identify spatially consistent matches by analyzing local patterns in 4D space. The use of multi-level features and relational pattern analysis using 4D convs are two widely adopted techniques in the field of visual correspondence.\nIn this paper we adapt these two most influential methodologies in visual correspondence to tackle few-shot segmentation: multi-level features and 4D convolutions. Inspired by previous matching methods [42,44,27], which use multi-level features to build effective "appearance features", we construct high-dimensional "relational features" using intermediate CNN features and process them with a series of 4D convolutions. However, their quadratic complexity still remains a major bottleneck in designing cost-effective deep networks, constraining many previous matching methods [30,57,58,71] to use only a few 4D conv layers. To resolve this issue, we develop a light-weight 4D convolutional kernel that collects only a small subset of vital parameters for effective pattern recognition, which eventually leads to an efficient decomposition into a pair of 2D conv kernels with linear complexity. Our contributions can be summarized as follows:\n• We present the Hypercorrelation Squeeze Networks that analyze dense feature matches of diverse visual aspects using deeply stacked 4D conv layers. • We propose the center-pivot 4D conv kernel which is more effective than existing ones in terms of both accuracy and speed, achieving real-time inference.", "Methodology": "The goal of few-shot semantic segmentation is to perform segmentation given only a few annotated examples.
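As context for the architecture detailed next, the hypercorrelation construction sketched in the introduction (dense cosine similarities between every query and support feature position, with background support positions masked out) can be written for a single feature level as below. This is a simplified sketch, not HSNet's exact implementation; the variable names are ours, and clamping negative correlations with a ReLU follows common practice in correlation-based matching.

```python
import numpy as np

def correlation_4d(feat_q, feat_s, support_mask, eps=1e-8):
    """Build one 4D correlation tensor between query and support feature maps.

    feat_q: (c, hq, wq) query features; feat_s: (c, hs, ws) support features
    support_mask: (hs, ws) binary foreground mask of the support image
    returns: (hq, wq, hs, ws) masked, ReLU-clamped cosine similarities
    """
    fq = feat_q / (np.linalg.norm(feat_q, axis=0, keepdims=True) + eps)
    fs = feat_s / (np.linalg.norm(feat_s, axis=0, keepdims=True) + eps)
    corr = np.einsum('cij,ckl->ijkl', fq, fs)  # dense cosine similarity
    corr = np.maximum(corr, 0.0)               # keep positive matches only
    return corr * support_mask[None, None]     # zero out background support pixels
```

Stacking such tensors from many intermediate layers yields the multi-level hypercorrelation that the 4D conv encoder then squeezes.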
To avoid the risk of overfitting due to insufficient training data, we adopt the widely used meta-learning approach called episodic training [73]. Let us denote the respective training and test sets as D_train and D_test, which are disjoint with respect to object classes. Both sets consist of multiple episodes, each of which is composed of a support set S = (I^s, M^s) and a query set Q = (I^q, M^q), where I^* and M^* are an image and its corresponding mask label, respectively. During training, our model iteratively samples an episode from D_train to learn a mapping from (I^s, M^s, I^q) to the query mask M^q. Once the model is trained, it uses the learned mapping for evaluation without further optimization, i.e., the model takes a randomly sampled (I^s, M^s, I^q) from D_test to predict the query mask. In this section, we present a novel few-shot segmentation architecture, Hypercorrelation Squeeze Networks (HSNet), which captures relevant patterns in multi-level feature correlations between a pair of input images to predict a fine-grained segmentation mask for the query image. As illustrated in Fig. 2, we adopt an encoder-decoder structure in our architecture; the encoder gradually squeezes the dimension of the input hypercorrelations by aggregating their local information into a global context, and the decoder processes the encoded context to predict the query mask. In Sec. 4.1-4.3, we demonstrate each pipeline in the one-shot setting, i.e., the model predicts the query mask given I^q and S = (I^s, M^s). In Sec. 4.4, to mitigate the large resource demands of 4D convs, we present a light-weight 4D kernel which greatly improves model efficiency in terms of both memory and time. In Sec. 4.5, we demonstrate how the model can easily be extended to the K-shot setting, i.e., S = {(I^s_k, M^s_k)}_{k=1}^{K}, without loss of generality.", "Conclusion": "We have presented a novel framework that analyzes complex feature correlations in a fully-convolutional manner using light-weight 4D convolutions.
The significant performance improvements on three standard benchmarks demonstrate that learning patterns of feature relations from multiple visual aspects is effective for fine-grained segmentation under limited supervision. We also demonstrated that discarding insignificant weights leads to an efficient decomposition of a 4D kernel into a pair of 2D kernels, thus allowing extensive use of 4D conv layers at a significantly reduced cost. We believe our investigation will further facilitate the use of 4D convolutions in other domains that require learning to analyze high-dimensional correlations.", "Experiment_and_Results": "In this section we evaluate the proposed method, compare it with the recent state of the art, and provide in-depth analyses of the results with an ablation study.
Implementation details. For the backbone network, we employ VGG [64] and ResNet [17] families pre-trained on ImageNet [9], e.g., VGG16, ResNet50, and ResNet101. For the VGG16 backbone, we extract features after every conv layer in the last two building blocks, from conv4_x to conv5_x, and after the last maxpooling layer. For the ResNet backbones, we extract features at the end of each bottleneck before ReLU activation, from conv3_x to conv5_x. This feature-extraction scheme yields 3 pyramidal layers (P = 3) for each backbone. We set the spatial sizes of both support and query images to 400 × 400, i.e., H, W = 400, thus having H_1, W_1 = 50; H_2, W_2 = 25; and H_3, W_3 = 13. The network is implemented in PyTorch [51] and optimized using Adam [24] with a learning rate of 1e-3. We freeze the pre-trained backbone networks to prevent them from learning class-specific representations of the training data.
Datasets. We evaluate the proposed network on three standard few-shot segmentation datasets: PASCAL-5^i [61], COCO-20^i [35], and FSS-1000 [33].
PASCAL-5^i is created from PASCAL VOC 2012 [11] with extra mask annotations [16], consisting of 20 object classes evenly divided into 4 folds: {5^i : i ∈ {0, 1, 2, 3}}. COCO-20^i consists of mask-annotated images from 80 object classes divided into 4 folds: {20^i : i ∈ {0, 1, 2, 3}}. Following the common training/evaluation scheme [37,46,70,74,80], we conduct cross-validation over all the folds; for each fold i, samples from the remaining folds are used for training and 1,000 episodes from the target fold i are randomly sampled for evaluation. For every fold, we use the same model with the same hyperparameter setup, following the standard cross-validation protocol. FSS-1000 contains mask-annotated images from 1,000 classes divided into training, validation, and test splits of 520, 240, and 240 classes, respectively.
Table 3: Mean IoU comparison on FSS-1000 [33]. Some results are from [2,74].
Evaluation metrics. We adopt mean intersection over union (mIoU) and foreground-background IoU (FB-IoU) as our evaluation metrics. The mIoU metric averages the IoU values over all C classes in a fold: mIoU = (1/C) Σ_{c=1}^{C} IoU_c. We evaluate the proposed model on PASCAL-5^i, COCO-20^i, and FSS-1000 and compare the results with recent methods [4,37,46,54,61,63,70,74,75,86]. Table 1 summarizes 1-shot and 5-shot results on PASCAL-5^i; all of our models with three different backbones clearly set a new state of the art with the smallest number of learnable parameters. With the ResNet101 backbone, our 1-shot and 5-shot results achieve 6.1%p and 4.8%p mIoU improvements over [70] and [4] respectively, verifying the superiority of our approach in the few-shot segmentation task. As shown in Tab. 2, our model outperforms recent methods by a sizable margin on COCO-20^i as well, achieving 2.7%p (1-shot) and 6.8%p (5-shot) mIoU improvements over [70] with the ResNet101 backbone. Also on the last benchmark, FSS-1000, our method sets a new state of the art, outperforming [2,74] as shown in Tab.
3.
We also conduct experiments without support feature masking (Eqn. 1). Note that this setup is similar to the co-segmentation problem [8,68,82]. As seen in the bottom row of Tab. 1, our model without support masking still performs remarkably well, achieving a 1.4%p mIoU improvement over the previous best method [70] in the 1-shot setting, while rivaling [4,70] in the 5-shot setting. This interesting result reveals that our model is capable of identifying 'common' instances across different input images as well as predicting fine-grained segmentation masks.
Robustness to domain shift. To demonstrate the robustness of our method to domain shift, we evaluate COCO-trained HSNet on each fold of PASCAL-5^i, following the recent work of [4]. We use the same training/test folds as in [4], where object classes in training and testing do not overlap. As seen in Tab. 4, our model, trained without any data augmentation and with 18 times fewer trainable parameters than [4] (2.6M vs. 46.7M), performs robustly in the presence of the large domain gap between COCO-20^i and PASCAL-5^i, surpassing [4] by 1.0%p in the 5-shot setting, and improves further with a larger backbone, e.g., ResNet101. The results clearly show the robustness of our method to domain shift, which may increase further when trained with the data augmentations used in [4,70]. Results without support feature masking. As demonstrated in Sec. 5.1, we conduct experiments without support feature masking (Eqn. 1), similar to the co-segmentation problem, which places stronger demands on generalizability. Figure A1 visualizes some example results on the PASCAL-5^i dataset. Even without the use of support masks (in both training and testing), our model effectively segments target instances in query images.
The results indicate that learning patterns of feature correlations from multiple visual aspects is effective for fine-grained segmentation as well as for identifying 'common' instances in the support and query images. Additional qualitative results. We present additional qualitative results on the PASCAL-5^i [61], COCO-20^i [35], and FSS-1000 [33] benchmark datasets. All qualitative results are best viewed in electronic form. Example results in the presence of large scale differences, truncations, and occlusions are shown in Figs. A2, A3, and A4. Figure A5 visualizes model predictions under large illumination changes between support and query images. Figure A6 visualizes sample predictions given exceptionally small objects in either support or query images. As seen in Fig. A7, we found that our model sometimes predicts more reliable segmentation masks than the ground-truth ones. Qualitative results in the presence of large intra-class variations and noisy background clutter are shown in Figs. A8 and A9. Given only a single support image-annotation pair, our model effectively segments multiple instances in a query image, as visualized in Fig. A10. Figure A11 shows representative failure cases: our model fails to localize target objects in the presence of severe occlusions, large intra-class variations, and extremely tiny support (or query) objects. As seen in Fig. A12, the model predictions become much more reliable given multiple support image-mask pairs, i.e., K > 1.
The code and data to reproduce all experiments in this paper are available at our project page: http://cvlab.postech.ac.kr/research/HSNet/.", "Extra": "Inspired by recent semantic matching approaches [38,42,44], our model exploits a rich set of features from the intermediate layers of a convolutional neural network to capture multi-level semantic and geometric patterns of similarity between the support and query images.
Given a pair of query and support images I^q, I^s ∈ R^{3×H×W}, the backbone network produces a sequence of L pairs of intermediate feature maps {(F^q_l, F^s_l)}_{l=1}^{L}. We mask each support feature map F^s_l ∈ R^{C_l×H_l×W_l} using the support mask M^s ∈ {0, 1}^{H×W} to discard irrelevant activations for reliable mask prediction:

\hat{F}^s_l = F^s_l \odot \zeta_l(M^s), \tag{1}

where \odot is the Hadamard product and \zeta_l(\cdot) bilinearly interpolates the input tensor to the spatial size of the feature map at layer l, followed by expansion along the channel dimension, i.e., \zeta_l : R^{H×W} → R^{C_l×H_l×W_l}. For the subsequent hypercorrelation construction, each pair of query and masked support features forms a 4D correlation tensor \hat{C}_l ∈ R^{H_l×W_l×H_l×W_l} using cosine similarity:

\hat{C}_l(\mathbf{x}^q, \mathbf{x}^s) = \mathrm{ReLU}\!\left(\frac{F^q_l(\mathbf{x}^q) \cdot \hat{F}^s_l(\mathbf{x}^s)}{\lVert F^q_l(\mathbf{x}^q)\rVert \, \lVert \hat{F}^s_l(\mathbf{x}^s)\rVert}\right), \tag{2}

where \mathbf{x}^q and \mathbf{x}^s denote 2-dimensional spatial positions in the feature maps F^q_l and \hat{F}^s_l respectively, and the ReLU suppresses noisy correlation scores. From the resulting set of 4D correlations {\hat{C}_l}_{l=1}^{L}, we collect the 4D tensors that share the same spatial size and denote the subset as {\hat{C}_l}_{l∈L_p}, where L_p is a subset of the CNN layer indices {1, ..., L} at pyramidal layer p. Finally, all 4D tensors in {\hat{C}_l}_{l∈L_p} are concatenated along the channel dimension to form a hypercorrelation C_p ∈ R^{|L_p|×H_p×W_p×H_p×W_p}, where (H_p, W_p, H_p, W_p), with slight abuse of notation, denotes the spatial resolution of the hypercorrelation at pyramidal layer p. Given P pyramidal layers, we denote the hypercorrelation pyramid as C = {C_p}_{p=1}^{P}, a rich collection of feature correlations from multiple visual aspects. Our encoder network takes the hypercorrelation pyramid C and squeezes it into a condensed feature map Z ∈ R^{128×H_1×W_1}. We achieve this correlation learning with two types of building blocks: a squeezing block f^{sqz}_p and a mixing block f^{mix}_p.
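A minimal NumPy sketch of this single-layer construction may help make the shapes concrete. It substitutes nearest-neighbour mask resizing for the bilinear interpolation of Eqn. 1, and the function names are illustrative, not from the released code:

```python
import numpy as np

def mask_support_features(feat_s, mask_s):
    """Eqn (1): suppress support activations outside the object.
    feat_s: (C, Hl, Wl) support feature map; mask_s: (H, W) binary mask.
    The mask is resized (nearest neighbour here) and broadcast over C."""
    C, Hl, Wl = feat_s.shape
    rows = (np.arange(Hl) * mask_s.shape[0]) // Hl
    cols = (np.arange(Wl) * mask_s.shape[1]) // Wl
    small = mask_s[np.ix_(rows, cols)].astype(feat_s.dtype)
    return feat_s * small[None, :, :]

def correlation_4d(feat_q, feat_s_masked, eps=1e-8):
    """Eqn (2): ReLU'd cosine similarity between every pair of query
    and support positions -> 4D tensor of shape (Hl, Wl, Hl, Wl)."""
    C, H, W = feat_q.shape
    q = feat_q.reshape(C, -1)                      # (C, H*W)
    s = feat_s_masked.reshape(C, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    return np.maximum(q.T @ s, 0.0).reshape(H, W, H, W)
```

Concatenating such tensors for all layers with matching spatial size along a new channel dimension then yields the hypercorrelation C_p.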
Each block consists of three sequences of multi-channel 4D convolution, group normalization [78], and ReLU activation, as illustrated in Fig. 3. In the squeezing block f^{sqz}_p, large strides periodically squeeze the last two (support) spatial dimensions of C_p down to (H', W') while the first two (query) spatial dimensions remain (H_p, W_p), i.e., f^{sqz}_p : R^{|L_p|×H_p×W_p×H_p×W_p} → R^{128×H_p×W_p×H'×W'}, where H_p > H' and W_p > W'. Similar to the FPN [34] structure, the two outputs from adjacent pyramidal layers p and p+1 are merged by element-wise addition after upsampling the (query) spatial dimensions of the upper-layer output by a factor of 2. The mixing block f^{mix}_p : R^{128×H_p×W_p×H'×W'} → R^{128×H_p×W_p×H'×W'} then processes this mixture with 4D convolutions to propagate relevant information to lower layers in a top-down fashion. After the iterative propagation, the output tensor of the lowest mixing block f^{mix}_1 is further compressed by average-pooling its last two (support) spatial dimensions, which provides a 2-dimensional feature map Z ∈ R^{128×H_1×W_1} that serves as a condensed representation of the hypercorrelation C. The decoder network consists of a series of 2D convolutions, ReLU, and upsampling layers followed by a softmax function, as illustrated in Fig. 2. The network takes the context representation Z and predicts a two-channel map \hat{M}^q ∈ [0, 1]^{2×H×W} whose channel values indicate the probabilities of foreground and background. During training, the network parameters are optimized using the mean cross-entropy loss between the prediction \hat{M}^q and the ground truth M^q over all pixel locations. During testing, we take the maximum channel value at each pixel to obtain the final binary query mask prediction \hat{M}^q ∈ {0, 1}^{H×W} for evaluation.
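The decoder's final prediction step (pixel-wise softmax over the two channels, then the per-pixel maximum at test time) can be sketched as follows; this is a generic NumPy reimplementation of the standard operation, not the released code:

```python
import numpy as np

def predict_query_mask(logits):
    """logits: (2, H, W) decoder scores for (background, foreground).
    Softmax gives per-pixel class probabilities; taking the maximum
    channel at each pixel yields the binary query mask."""
    shifted = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    probs = e / e.sum(axis=0, keepdims=True)   # (2, H, W), channels sum to 1
    return probs.argmax(axis=0).astype(np.uint8)
```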
Such a network with a large number of 4D convolutions naturally demands a substantial amount of resources due to the curse of dimensionality, which has constrained many visual correspondence methods [22,30,32,58,71] to use only a few 4D conv layers. To address this concern, we revisit the 4D convolution operation and examine its limitations, and then show how a unique weight-sparsification scheme effectively resolves them. 4D convolution and its limitations. A typical 4D convolution parameterized by a kernel k ∈ R^{k×k×k×k} on a correlation tensor c ∈ R^{H×W×H×W} at position (\mathbf{x}, \mathbf{x}') ∈ R^4 is formulated as

(c * k)(\mathbf{x}, \mathbf{x}') = \sum_{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}(\mathbf{x}, \mathbf{x}')} c(\mathbf{p}, \mathbf{p}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{p}' - \mathbf{x}'), \tag{3}

where \mathcal{P}(\mathbf{x}, \mathbf{x}') denotes the set of neighbourhood positions within the local 4D window centered on (\mathbf{x}, \mathbf{x}'), i.e., \mathcal{P}(\mathbf{x}, \mathbf{x}') = \mathcal{P}(\mathbf{x}) × \mathcal{P}(\mathbf{x}'), as visualized in Fig. 4. Although the use of 4D convolutions on a correlation tensor has shown good empirical performance in correspondence-related domains [22,30,32,58,71], its quadratic complexity with respect to the size of the input features remains a primary bottleneck. Another limiting factor is the over-parameterization of the high-dimensional kernel: consider a single activation in an nD tensor convolved by an nD conv kernel. The number of times the kernel processes this activation grows exponentially with n. This implies that unreliable input activations with large magnitudes may introduce noise into the captured patterns as a result of their excessive exposure to the high-dimensional kernel. The work of [81] resolves the former problem (quadratic complexity) using spatially separable 4D kernels that approximate the 4D conv with two separate 2D kernels, along with additional batch normalization layers [23] that settle the latter problem (numerical instability). In this work we introduce a novel weight-sparsification scheme that addresses both issues at the same time. Center-pivot 4D convolution.
Our goal is to design a light-weight 4D kernel that is efficient in terms of both memory and time while effectively approximating the existing kernels [58,81]. We achieve this via a reasonable weight sparsification: from the set of neighborhood positions within a local 4D window of interest, our kernel disregards the large number of activations located at fairly insignificant positions in the 4D window, thereby focusing only on a small subset of relevant activations. Specifically, we consider the activations at positions that pivot either one of the 2-dimensional centers, i.e., \mathbf{x} or \mathbf{x}', as the foremost influential ones, as illustrated in Fig. 4. Given a 4D position (\mathbf{x}, \mathbf{x}'), we collect its neighbors if and only if they are adjacent to either \mathbf{x} or \mathbf{x}' in the corresponding 2D subspace, and define the two respective sets as \mathcal{P}_c(\mathbf{x}, \mathbf{x}') = \{(\mathbf{p}, \mathbf{p}') ∈ \mathcal{P}(\mathbf{x}, \mathbf{x}') : \mathbf{p} = \mathbf{x}\} and \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}') = \{(\mathbf{p}, \mathbf{p}') ∈ \mathcal{P}(\mathbf{x}, \mathbf{x}') : \mathbf{p}' = \mathbf{x}'\}. The set of center-pivot neighbours is then \mathcal{P}_{CP}(\mathbf{x}, \mathbf{x}') = \mathcal{P}_c(\mathbf{x}, \mathbf{x}') ∪ \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}'). Based on these two subsets of neighbors, the center-pivot 4D convolution can be formulated as the sum of two separate 4D convolutions:

(c * k_{CP})(\mathbf{x}, \mathbf{x}') = (c * k_c)(\mathbf{x}, \mathbf{x}') + (c * k_{c'})(\mathbf{x}, \mathbf{x}'), \tag{4}

where k_c and k_{c'} are 4D kernels convolved on \mathcal{P}_c(\mathbf{x}, \mathbf{x}') and \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}') respectively. Note that (c * k_c)(\mathbf{x}, \mathbf{x}') is equivalent to a convolution with the 2D kernel k^{2D}_c = k(\mathbf{0}, :) ∈ R^{k×k} performed on the 2D slice c(\mathbf{x}, :) of the 4D tensor. Similarly, with k^{2D}_{c'} = k(:, \mathbf{0}) ∈ R^{k×k}, we reformulate Eqn. 4 as

(c * k_{CP})(\mathbf{x}, \mathbf{x}') = \sum_{\mathbf{p}' \in \mathcal{P}(\mathbf{x}')} c(\mathbf{x}, \mathbf{p}')\, k^{2D}_c(\mathbf{p}' - \mathbf{x}') + \sum_{\mathbf{p} \in \mathcal{P}(\mathbf{x})} c(\mathbf{p}, \mathbf{x}')\, k^{2D}_{c'}(\mathbf{p} - \mathbf{x}), \tag{5}

which performs two different convolutions on separate 2D subspaces and thus has linear complexity. In Sec. 5.2, we experimentally demonstrate the superiority of the center-pivot 4D kernels over the existing ones [58,81] in terms of accuracy, memory, and time. We refer the reader to Appendix A for a complete derivation of Eqn. 5.
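The decomposition in Eqn. 5 can be sketched numerically. The snippet below applies a center-pivot 4D kernel as two 2D convolutions over slices of the correlation tensor, assuming zero padding, unit stride, and no kernel flip; the names and brute-force loops are illustrative only (a real implementation would batch these as strided 2D convolutions):

```python
import numpy as np

def conv2d_same(t2d, k2d):
    """'Same'-size 2D correlation with zero padding (no kernel flip)."""
    kh, kw = k2d.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(t2d, ((ph, ph), (pw, pw)))
    H, W = t2d.shape
    out = np.zeros_like(t2d)
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i + kh, j:j + kw] * k2d).sum()
    return out

def center_pivot_conv4d(c, k4d):
    """Eqn (5): center-pivot 4D conv as two 2D convs on slices.
    c: (H, W, H', W') correlation tensor; k4d: (k, k, k, k) kernel."""
    ctr = k4d.shape[0] // 2
    k2d_sup = k4d[ctr, ctr]          # k(0, :) -> acts on support dims
    k2d_qry = k4d[:, :, ctr, ctr]    # k(:, 0) -> acts on query dims
    H, W, Hs, Ws = c.shape
    out = np.zeros_like(c)
    for i in range(H):               # support-side 2D conv, query position fixed
        for j in range(W):
            out[i, j] += conv2d_same(c[i, j], k2d_sup)
    for a in range(Hs):              # query-side 2D conv, support position fixed
        for b in range(Ws):
            out[:, :, a, b] += conv2d_same(c[:, :, a, b], k2d_qry)
    return out
```

Note that the central tap k(0, 0) contributes to both terms, consistent with Eqn. 4 being a sum of two separately parameterized kernels; each output position now accumulates 2k² taps instead of k⁴.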
Our network is easily extended to the K-shot setting: given K support image-mask pairs S = {(I^s_k, M^s_k)}_{k=1}^{K} and a query image I^q, the model performs K forward passes to produce a set of K mask predictions {\hat{M}^q_k}_{k=1}^{K}. We perform voting at every pixel location by summing the K predictions and dividing each output score by the maximum voting score. We assign foreground labels to pixels whose values exceed a threshold τ, and classify the others as background. We set τ = 0.5 in our experiments. We conduct an extensive ablation study to investigate the impact of the major components of our model: hypercorrelations, the pyramidal architecture, and center-pivot 4D kernels. We also study how freezing the backbone network prevents overfitting and helps generalization to novel classes. All ablation experiments are performed with the ResNet101 backbone on PASCAL-5^i.
Ablation study on pyramid layers. To see the impact of the hypercorrelation C_p at each layer p, we perform experiments in the absence of each pyramidal layer. We train and evaluate our model using two reduced hypercorrelation pyramids, C^{(2:3)} = {C_2, C_3} and C^{(3)} = {C_3}, and compare the results with our full pyramid C = {C_p}_{p=1}^{3}. Figure 6 summarizes the results; given a hypercorrelation pyramid without geometric information (C^{(2:3)}), our model fails to refine object boundaries in the final mask prediction, as visualized in Fig. 7. Given a single hypercorrelation that only encodes semantic relations (C^{(3)}), the model predictions are severely damaged, providing only rough localization of the target objects. These results indicate that capturing patterns of both semantic and geometric cues is essential for fine-grained localization.
Comparison between three different 4D kernels.
We conduct an ablation study on the 4D kernel by replacing the proposed center-pivot 4D kernel with the original [58] and spatially separable [81] 4D kernels, and compare their model size, per-episode inference time (1-shot), memory consumption, and floating-point operations (FLOPs) with ours. For fair comparison, the inference times of all models are measured on a machine with an Intel i7-7820X and an NVIDIA Titan-XP. Table 5 summarizes the results. The proposed kernel records the fastest inference time with the smallest memory/FLOPs requirements while being comparably effective to the other two. The results clearly support our claim that a large portion of the parameters in a high-dimensional kernel can safely be discarded without harming prediction quality; a few relevant parameters are sufficient, and even preferable, for the purpose. While both the separable [81] and our center-pivot 4D convolutions operate via two separate 2D convolutions, the auxiliary transformation layers with multiple batch normalizations that make the separable 4D conv numerically stable in its sequential design result in twice as many parameters (4.4M vs. 2.6M) and slower inference (28.48ms vs. 25.51ms) compared to ours.
The number of 4D layers in the building blocks. We also perform experiments with a varying number of 4D conv layers in the two building blocks f^{sqz}_p and f^{mix}_p. Figure 8 plots 1-shot and 5-shot mIoU results on PASCAL-5^i against model size. In these experiments, appending additional 4D layers (each with a group norm and a ReLU activation) to the building blocks provides clear performance improvements up to three layers, after which the accuracy saturates. Hence we use a stack of three 4D layers in both blocks.
Finetuning backbone networks.
To investigate the significance of learning 'feature correlations' over learning 'feature representations' in the few-shot regime, we finetune our backbone network and compare the learning process of the finetuned model with ours (frozen backbone). Figure 9 plots the training/validation curves of the finetuned model and ours on every fold of PASCAL-5^i. The finetuned model rapidly overfits to the training data, losing the generic, comprehensive visual representations learned from a large-scale dataset [9]. Meanwhile, our model with a frozen backbone generalizes better, with a much smaller gap between training and validation accuracies. The results reveal that learning new appearances under limited supervision requires understanding their 'relations' to the diverse visual patterns acquired from a vast amount of past experience, e.g., ImageNet classification. This is analogous to human visual perception in the sense that we generalize novel concepts (what we see) by analyzing their relations to past observations (what we know) [40]. For additional experimental details, results, and analyses, we refer the reader to the Appendix. In this section, we extend Sec. 4.4 to provide a complete derivation of the center-pivot 4D convolution. Recall that a typical 4D convolution parameterized by a kernel k ∈ R^{k×k×k×k} on a correlation tensor c ∈ R^{H×W×H×W} at position (\mathbf{x}, \mathbf{x}') ∈ R^4 is formulated as

(c * k)(\mathbf{x}, \mathbf{x}') = \sum_{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}(\mathbf{x}, \mathbf{x}')} c(\mathbf{p}, \mathbf{p}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{p}' - \mathbf{x}'), \tag{6}

where \mathcal{P}(\mathbf{x}, \mathbf{x}') denotes the set of neighbourhood positions within the local 4D window centered on (\mathbf{x}, \mathbf{x}'), i.e., \mathcal{P}(\mathbf{x}, \mathbf{x}') = \mathcal{P}(\mathbf{x}) × \mathcal{P}(\mathbf{x}'), as visualized in Fig. 4.
We now design a light-weight, efficient 4D convolution via a reasonable weight sparsification: from the set of neighborhood positions within a local 4D window of interest, our kernel disregards the large number of activations located at fairly insignificant positions in the 4D window, thereby focusing only on a small subset of relevant activations for capturing complex patterns in the correlation tensor. Specifically, we consider the activations at positions that pivot either one of the 2-dimensional centers, i.e., \mathbf{x} or \mathbf{x}', as the foremost influential ones. Given a 4D position (\mathbf{x}, \mathbf{x}'), we collect its neighbors if and only if they are adjacent to either \mathbf{x} or \mathbf{x}' in the corresponding 2D subspace, and define the two respective sets as

\mathcal{P}_c(\mathbf{x}, \mathbf{x}') = \{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}(\mathbf{x}, \mathbf{x}') : \mathbf{p} = \mathbf{x}\}, \tag{7}

and

\mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}') = \{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}(\mathbf{x}, \mathbf{x}') : \mathbf{p}' = \mathbf{x}'\}. \tag{8}

The set of center-pivot neighbours \mathcal{P}_{CP}(\mathbf{x}, \mathbf{x}') is defined as the union of the two subsets:

\mathcal{P}_{CP}(\mathbf{x}, \mathbf{x}') = \mathcal{P}_c(\mathbf{x}, \mathbf{x}') \cup \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}'). \tag{9}

Based on this small subset of neighbors, the center-pivot 4D (CP 4D) convolution can be formulated as the sum of two separate 4D convolutions:

(c * k_{CP})(\mathbf{x}, \mathbf{x}') = (c * k_c)(\mathbf{x}, \mathbf{x}') + (c * k_{c'})(\mathbf{x}, \mathbf{x}'), \tag{10}

where k_c and k_{c'} are 4D kernels with their respective neighbourhoods \mathcal{P}_c(\mathbf{x}, \mathbf{x}') and \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}'). Now consider

(c * k_c)(\mathbf{x}, \mathbf{x}')
= \sum_{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}_c(\mathbf{x}, \mathbf{x}')} c(\mathbf{p}, \mathbf{p}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{p}' - \mathbf{x}')
= \sum_{\mathbf{p}' \in \mathcal{P}(\mathbf{x}')} c(\mathbf{x}, \mathbf{p}')\, k(\mathbf{x} - \mathbf{x}, \mathbf{p}' - \mathbf{x}')
= \sum_{\mathbf{p}' \in \mathcal{P}(\mathbf{x}')} c(\mathbf{x}, \mathbf{p}')\, k(\mathbf{0}, \mathbf{p}' - \mathbf{x}')
= \sum_{\mathbf{p}' \in \mathcal{P}(\mathbf{x}')} c(\mathbf{x}, \mathbf{p}')\, k^{2D}_c(\mathbf{p}' - \mathbf{x}'), \tag{11}

which is equivalent to a convolution with the kernel k^{2D}_c = k(\mathbf{0}, :) ∈ R^{k×k} performed on the 2D slice c(\mathbf{x}, :) of the 4D tensor. Similarly,

(c * k_{c'})(\mathbf{x}, \mathbf{x}')
= \sum_{(\mathbf{p}, \mathbf{p}') \in \mathcal{P}_{c'}(\mathbf{x}, \mathbf{x}')} c(\mathbf{p}, \mathbf{p}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{p}' - \mathbf{x}')
= \sum_{\mathbf{p} \in \mathcal{P}(\mathbf{x})} c(\mathbf{p}, \mathbf{x}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{x}' - \mathbf{x}')
= \sum_{\mathbf{p} \in \mathcal{P}(\mathbf{x})} c(\mathbf{p}, \mathbf{x}')\, k(\mathbf{p} - \mathbf{x}, \mathbf{0})
= \sum_{\mathbf{p} \in \mathcal{P}(\mathbf{x})} c(\mathbf{p}, \mathbf{x}')\, k^{2D}_{c'}(\mathbf{p} - \mathbf{x}), \tag{12}

where k^{2D}_{c'} = k(:, \mathbf{0}) ∈ R^{k×k}. Based on the above derivations, we rewrite Eqn.
10 as

(c * k_{CP})(\mathbf{x}, \mathbf{x}') = \sum_{\mathbf{p}' \in \mathcal{P}(\mathbf{x}')} c(\mathbf{x}, \mathbf{p}')\, k^{2D}_c(\mathbf{p}' - \mathbf{x}') + \sum_{\mathbf{p} \in \mathcal{P}(\mathbf{x})} c(\mathbf{p}, \mathbf{x}')\, k^{2D}_{c'}(\mathbf{p} - \mathbf{x}), \tag{13}

which performs two different convolutions on separate 2D subspaces and thus has linear complexity. For the backbone networks, we employ VGG [64] and ResNet [17] families pre-trained on ImageNet [9], e.g., VGG16, ResNet50, and ResNet101. For the VGG16 backbone, we extract features after every conv layer in the last two building blocks, from conv4_x to conv5_x, and after the last maxpooling layer. For the ResNet backbones, we extract features at the end of each bottleneck before ReLU activation, from conv3_x to conv5_x. This feature-extraction scheme yields 3 pyramidal layers (P = 3) for every backbone. We set the spatial sizes of both support and query images to 400 × 400, i.e., H, W = 400, thus having H_1, W_1 = 50; H_2, W_2 = 25; and H_3, W_3 = 13 for both the ResNet50 and ResNet101 backbones, and H_1, W_1 = 50; H_2, W_2 = 25; and H_3, W_3 = 12 for the VGG16 backbone. The network is implemented in PyTorch [51] and optimized using Adam [24] with a learning rate of 1e-3. We train our model with batch sizes of 20, 40, and 20 for PASCAL-5^i, COCO-20^i, and FSS-1000, respectively. We freeze the pre-trained backbone networks to prevent them from learning class-specific representations of the training data. The intermediate tensor dimensions, the number of parameters of each layer, and other additional details of the network are given in Tabs. A5, A6, and A7 for the VGG16, ResNet50, and ResNet101 backbones, respectively. Numerical comparisons of ablation study. We tabulate Figures 5 and 6, i.e., the ablation studies on hypercorrelations and pyramidal layers, in Tables A3 and A4 respectively. Achieving a 4.5%p mIoU improvement over C^{deep}_p, our method clearly benefits from diverse feature correlations from multi-level CNN layers (C_p), as seen in Tab. A3. The large performance gap between C^{(2:3)} and C^{(3)} in Tab. A4 (63.9 vs.
55.5) reveals that the intermediary second pyramidal layer (p = 2) is especially effective for robust mask prediction compared to the first pyramidal layer (p = 1).
Table A4: Numerical results of Figure 6. All experiments are performed with ResNet101 backbone [17].
Evaluation results without using ignore_label on PASCAL-5^i. The benchmarks PASCAL-5^i [61], COCO-20^i [35], and FSS-1000 [33] consist of segmentation mask annotations in which each pixel is labeled as either background or one of the predefined object categories. As pixel-wise segmentation near object boundaries is ambiguous even for human annotators, PASCAL-5^i uses a special label called ignore_label, which marks pixel regions ignored during training and evaluation to mitigate the ambiguity †. Most recent few-shot segmentation work [4,37,46,54,61,63,70,74,75,86] adopts this evaluation criterion, but we found that some methods [36,80,87,89] do not utilize ignore_label in their evaluations. These methods are therefore not evaluated on an equal footing, since fine-grained mask prediction near object boundaries is one of the most challenging parts of the segmentation problem. For fair comparison, we intentionally exclude the methods of [36,80,87,89] from Tab. 1 and instead compare the results of our model evaluated without the use of ignore_label against those methods [36,80,87,89]. The results are summarized in Tab. A1. Even without using ignore_label, the proposed method sets a new state of the art with the ResNet50 backbone, outperforming the previous best methods of [80] and [36] by 2.8%p (1-shot) and 5.4%p (5-shot) respectively. With the VGG16 backbone, our method performs comparably to the previous best method [36] while having the smallest number of learnable parameters. † The use of ignore_label was originally adopted in the PASCAL VOC dataset [11]. The same evaluation criterion was naturally carried over to PASCAL-5^i [61] as it is created from PASCAL VOC.
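To make the evaluation difference concrete, here is a small sketch of foreground IoU with and without an ignore region; 255 is the conventional ignore value in PASCAL VOC, and the function name is ours, not from any released evaluation code:

```python
import numpy as np

def fg_iou(pred, gt, ignore_label=None):
    """Foreground IoU between integer masks (1 = foreground).
    If ignore_label is given, pixels carrying that value in the
    ground truth are excluded from both intersection and union,
    mirroring the ignore_label evaluation protocol described above."""
    if ignore_label is None:
        valid = np.ones(gt.shape, dtype=bool)
    else:
        valid = gt != ignore_label
    p = (pred == 1) & valid
    g = (gt == 1) & valid
    union = np.logical_or(p, g).sum()
    return float(np.logical_and(p, g).sum() / union) if union else 1.0
```

A prediction that disagrees with the ground truth only inside the ignored boundary band thus scores perfectly under the ignore_label criterion but is penalized without it.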
" }, { "title": "Feature weighting and boosting for few-shot segmentation", "year": 2019.0, "authors": "Khoi Nguyen; Sinisa Todorovic", "arxiv_di": 1909.1314, "Introduction": "This paper is about few-shot segmentation of foreground objects in images. As Fig. 1 shows, given only a few training examples (called support images) and their ground-truth segmentation of the target object class, our goal is to segment the target class in the query image. This problem is challenging because the support and query images may differ significantly in the number of instances and the 3D poses of the target class, as illustrated in Fig. 1. This important problem arises in many applications dealing with scarce training examples of target classes.
Recently, prior work has addressed this problem by training an object segmenter on a large training set under the few-shot constraint [26,6,20]. The training set is split into many small subsets. In every subset, one image serves as the query and the other(s) as the support image(s) with known ground truth(s). As shown in Fig. 1, this framework uses a CNN (e.g., VGG [27] or ResNet [12]) to extract feature maps from the support and query images. The support's feature maps are first pooled over the known ground-truth foreground. Then, the support's masked-pooled features are used to estimate a cosine-similarity map with the query's features. The resulting similarity map and the query's features are finally passed to a few convolutional layers to segment the target object class in the query. The loss incurred between the prediction and the query's ground truth is used for training the CNN.
The above framework has two critical limitations, which we address in this paper. First, we experimentally found that the CNN has a tendency to learn non-discriminative features with high activations for different classes. To address this issue, as Fig.
2 shows, our first contribution extends prior work by efficiently estimating feature relevance so as to encourage high activations inside the ground-truth locations of the target class and low activations elsewhere in the image. This is formulated as an optimization problem for which we derive a closed-form solution.
Second, learning from few support images is prone to overfitting and poor generalization to the query in the face of the aforementioned large variations of the target class. To address this issue, as Fig. 3 shows, our second contribution is a new boosted inference, motivated by traditional ensemble learning methods, which are robust to overfitting [9,10]. We specify an ensemble of experts, where each expert adapts the features initially extracted from the support image. This feature adaptation is guided by the gradient of the loss incurred when segmenting the support image relative to its provided ground truth. The ensemble of experts produces a corresponding ensemble of object segmentations of the query image, whose weighted average is taken as our final prediction. Importantly, while we use the first contribution in both training and testing, our second contribution, like traditional ensemble learning methods, is applied only in testing to boost the performance of our CNN-based segmenter.
For the K-shot setting, both contributions naturally extend to segmenting the query image by jointly analyzing the provided support images and their ground truths, rather than treating support images independently as in prior work.
For evaluation, we compare with prior work on the benchmark PASCAL-5^i dataset [26]. Our results demonstrate that we significantly outperform the state of the art. In addition, we perform evaluation on the larger and more challenging COCO-20^i dataset [16]. To the best of our knowledge, we are the first to report results of few-shot object segmentation on COCO-20^i.
In the following, Sec.
2 reviews previous work, Sec. 3 specifies our two contributions, Sec. 4 describes our implementation details and complexity, and Sec. 5 presents our experimental results.", "Related_Work": "This section reviews related work on few-shot image classification and semantic segmentation.
Few-shot classification predicts image class labels with access to only a few training examples. Prior work can be broadly divided into three groups: transfer learning of models trained on classes similar to the target classes [28,22,29,30], meta-learning approaches that learn how to effectively learn new classes from small datasets [8,18], and generative approaches aimed at data augmentation [31,25].
Semantic segmentation labels all pixels in the image. Recently, significant advances have been made by using the fully convolutional network (FCN) [17] and its variants, including SegNet [1], UNet [23], RefineNet [15], PSPNet [33], DeepLab v1 [2], v2 [3], v3 [4], and v3+ [5], all of which are usually evaluated on the PASCAL VOC 2012 [7] and MSCOCO [16] datasets. However, these approaches typically require very large training sets, which limits their application to a wide range of domains.
Few-shot semantic segmentation labels pixels of the query image that belong to a target object class, conditioned on the ground-truth segmentation masks of a few support images. Prior work typically draws from the above-mentioned approaches to few-shot image classification and semantic segmentation. For example, the one-shot learning method OSLSM [26] and its extensions, namely Co-FCN [20,21], PL+SEG [6], and SG-One [32], consist of conditioning and segmentation branches implemented as VGG [27] and FCN-32s [17], respectively. The conditioning branch analyzes the target class in the support image and conditions the segmentation branch for object segmentation in the query image.
Co-FCN improves OSLSM by segmenting the query image based on a concatenation of pooled features from the support image and feature maps from the query image. PL+SEG first estimates distances between the query's feature maps and prototypes predicted from the support image, and then labels pixels in the query image with the class of their nearest-neighbor prototypes. SG-One also estimates the similarity between a pooled feature from the support image and the feature maps of the query to predict the query's segmentation. Our approach extends SG-One with the two contributions specified in the next section.", "Methodology": "An object class is represented by K support images with ground-truth segmentation masks. Given a query image showing the same object class in the foreground, our goal is to predict the foreground segmentation mask. For K = 1, this problem is called one-shot semantic segmentation. Below, and in the following Sec. 3.1 and Sec. 3.2, we consider the one-shot setting for simplicity. We then discuss the K-shot setting, K > 1, in Sec. 3.3. Given a large set of training images showing various object classes and the associated ground-truth segmentation masks, our approach follows the common episodic training strategy. In each training episode, we randomly sample a pair of support and query images, x_s and x_q, with binary segmentation masks, m_s and m_q, of the target object class in the foreground. Elements of the mask are set to 1, m_{s,i} = 1, for pixels i occupied by the target class; otherwise, m_{s,i} = 0. The same holds for m_q. We use m_s to condition the target class in the query image.
The standard cross-entropy loss L(\hat{m}_q, m_q) between the binary ground truth m_q and the predicted query mask \hat{m}_q = f_θ(x_s, m_s, x_q) is used for the end-to-end training of the parameters θ of our deep architecture: θ* = argmin_θ L(\hat{m}_q, m_q).
One-shot Segmentation. Tab.
4 compares our B+C1+C2 with the state of the art, ablations, and aforementioned variants in the one-shot setting on PASCAL-5 i . B+C1+C2 gives the best performance for both VGG 16 and ResNet 101, where the latter configuration significantly outperforms the state of the art with the increase in the mIoU averaged over the four folds of cross-validation by 13.49%. Relative to B, our first contribution evaluated with B+C1 gives relatively modest performance improvements. From the results for B+C2, our second contribution produces larger gains in performance relative to B and B+C1, suggesting that contribution 2 in and of itself is more critical than contribution 1. Interestingly, combining both contribution 1 and contribution 2 significantly improves the results relative to using either contribution only. We also observe that performance of our B+C1+C2 for some folds of crossvalidation (e.g., PASCAL-5 2 and PASCAL-5 3 ) comes very close to that of Upper-bound, suggesting that our approach is very effective in generalizing to new classes in testing. Fig. 4 shows the mIoU of B+C1+C2 as a function of the number of experts N in the one-shot setting on PASCAL-5 i . As can be seen, for N ≥ 10 our approach is not sensitive to a particular choice of N . We use N = 10 as a good trade-off between complexity and accuracy. Five-shot Segmentation. Tab. on PASCAL-5 i . Our-K-shot gives the best performance for both VGG 16 and ResNet 101, where the latter configuration significantly outperforms the state of the art with the increase in the mIoU averaged over the four folds of crossvalidation by 15.97%. In comparison with Average, the joint analysis of support images by Our-K-shot appears to be more effective, as Our-K-shot gives superior performance in every fold of cross-validation. Results on COCO-20 i . Tab. 6 and Tab. 7 shows our ablations' results in the one-shot and five-shot settings on COCO-20 i . The former results are obtained with B+C1+C2 and the latter, with Our-K-shot. 
The lower values of mIoU relative to those in Tab. 4 and Tab. 5 indicate that COCO-20 i is more challenging than PASCAL-5 i . Surprisingly, in fold COCO-20 0 , B+C1+C2 with VGG 16 outperforms its counterpart with ResNet 101 in the one-shot setting. The same holds for Our-K-shot in the five-shot setting. On average, using ResNet 101 gives higher results. As expected, the increased supervision in the five-shot setting in general gives higher accuracy than the one-shot setting.
Qualitative Results. Fig. 5 shows challenging examples from PASCAL-5 0 , and our segmentation results obtained with B+C1+C2 with ResNet 101 for the one-shot setting, and Our-K-shot with ResNet 101 for the five-shot setting. In the leftmost column, the bike in the support image has a different pose from the bike in the query image. While this example is challenging for B+C1+C2, our performance improves when using Our-K-shot. In the second column from the left, the query image shows a partially occluded target, a part of the bottle. With five support images, Our-K-shot improves performance by capturing the bottle's shadow. The third column from the left shows that the bike's features in the support image are insufficiently discriminative, as the person also gets segmented along with the bike. With more examples, the bike is successfully segmented by Our-K-shot. In the rightmost column, the plane in the support image is partially occluded, and thus in the query image B+C1+C2 predicts only the head of the airplane, while Our-K-shot's predicted segment covers most of the airplane.", "Conclusion": "We have addressed one-shot and few-shot object segmentation, where the goal is to segment a query image, given a support image and the support's ground-truth segmentation. We have made two contributions. First, we have formulated an optimization problem that encourages high feature responses on the foreground and low feature activations on the background for more accurate object segmentation.
Second, we have specified the gradient boosting of our model for fine-tuning to new classes in testing. Both contributions have been extended to the few-shot setting for segmenting the query by jointly analyzing the provided support images and their ground truths, rather than treating the support images independently. For evaluation, we have compared with prior work, strong baselines, ablations and variants of our approach on the PASCAL-5 i and COCO-20 i datasets. We significantly outperform the state of the art on both datasets and in both one-shot and five-shot settings. Using only the second contribution gives better results than using only the first contribution. Our integration of both contributions gives a significant gain in performance over each.", "Experiment_and_Results": "Datasets. For evaluation, we use two datasets: (a) PASCAL-5 i , which combines images from the PASCAL VOC 2012 [7] and Extended SDS [11] datasets; and (b) COCO-20 i , which is based on the MSCOCO dataset [16]. For PASCAL-5 i , we use the same 4-fold cross-validation setup as prior work [26,20,6]. Specifically, from the 20 object classes in PASCAL VOC 2012, for each fold i = 0, ..., 3, we sample five as test classes, and use the remaining 15 classes for training. Tab. 1 specifies our test classes for each fold of PASCAL-5 i . As in [26], in each fold i, we use 1000 support-query pairs of test images sampled from the selected five test classes.
We create COCO-20 i for evaluation on a more challenging dataset than PASCAL-5 i , since MSCOCO has 80 object classes and its ground-truth segmentation masks have
As in [26,20,6], we use the mean intersection-over-union (mIoU) for quantitative evaluation. The IoU of class l is defined as IoU_l = TP_l / (TP_l + FP_l + FN_l), where TP_l, FP_l and FN_l are the numbers of pixels that are true positives, false positives and false negatives of the predicted segmentation masks, respectively.
The mIoU is an average of the IoUs of the different classes, mIoU = (1/n_l) \sum_l IoU_l, where n_l is the number of test classes. We report the mIoU averaged over the four folds of cross-validation.
Baselines, Ablations, and Variants of Our Approach. As a baseline B, we consider the approach depicted in the gray box \"PRIOR WORK\" in Fig. 2 and specified in Sec. 3.1 before our contribution 1. We also consider several ablations of our approach: B+C1 extends the baseline B with contribution 1 only; B+C2 extends the baseline B with contribution 2 only; and B+C1+C2 represents our full approach. These ablations are aimed at testing the effect of each of our contributions on performance. In addition, we consider two alternative neural networks, VGG 16 and ResNet 101, as the CNN for extracting image features. VGG 16 has also been used in prior work [26,20,6]. We also compare with an approach called Upper-bound that represents a variant of our full approach B+C1+C2 trained such that both training and testing datasets consist of the same classes. As Upper-bound does not encounter new classes in testing, it represents an upper bound of our full approach. Finally, in the K-shot setting, we consider another baseline called Average. It represents our full approach B+C1+C2 that first independently predicts segmentations of the query image for each of the K > 1 support images, and then averages all of the predictions. Our approach for the K-shot setting is called Our-K-shot, and differs from Average in that we jointly analyze all of the K support images rather than treating them independently, as explained in Sec. 3.3.
Training/testing time. The training/testing time is reported in Tab. 3. We can see that contribution 1 adds only a very small computational overhead over the baseline, yet significantly outperforms it.
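The evaluation metric above reduces to a few lines of code. The following is a minimal sketch (function names are ours, not from the paper) of the per-class IoU and the mIoU average:

```python
import numpy as np

def iou(pred, gt):
    """Per-class IoU: TP / (TP + FP + FN) over binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # true positives
    fp = np.logical_and(pred, ~gt).sum()    # false positives
    fn = np.logical_and(~pred, gt).sum()    # false negatives
    return tp / float(tp + fp + fn)

def mean_iou(per_class_ious):
    """mIoU: average IoU over the n_l test classes."""
    return sum(per_class_ious) / len(per_class_ious)
```

In the cross-validation protocol above, this mIoU would then be averaged once more over the four folds.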
Additionally, although contribution 2 incurs a substantially larger testing time (about 40% longer with the VGG backbone and 35% with the ResNet backbone, compared to the baseline), it yields a more significant performance gain than contribution 1 does.", "Extra": "Fig. 2 shows the episodic training of a part of our deep architecture (without our second contribution, which is not trained) on a pair of support x_s and query x_q images. We first use a CNN to extract feature maps F_s \in R^{d \times w \times h} from x_s, and feature maps F_q \in R^{d \times w \times h} from x_q, where d is the feature dimensionality, and w and h denote the width and height of the feature map. Then, we average F_s over the known foreground locations in m_s, resulting in the average class feature vector f_s of the support image. For this masked feature averaging, m_s is down-sampled to \tilde{m}_s \in \{0, 1\}^{w \times h}, matching the size w \times h of the feature maps, and f_s is estimated as

f_s = \frac{1}{|\tilde{m}_s|} \sum_{i=1}^{wh} F_{s,i} \tilde{m}_{s,i},    (1)

where |\tilde{m}_s| = \sum_i \tilde{m}_{s,i} is the number of foreground locations in \tilde{m}_s. Next, we compute the cosine similarity between f_s and every feature vector F_{q,i} from the query feature maps F_q. This gives a similarity map, \sigma_q \in [0, 1]^{w \times h}, between the support and query image:

\sigma_{q,i} = \cos(f_s, F_{q,i}) = \frac{f_s^\top F_{q,i}}{\|f_s\|_2 \cdot \|F_{q,i}\|_2}, \quad i = 1, \ldots, wh.    (2)

We expect that \sigma_q provides informative cues for object segmentation, as high values of \sigma_{q,i} indicate likely locations of the target class in the query image.
Before we explain how to finally predict \hat{m}_q from \sigma_q and F_q, in the following, we specify our first technical contribution aimed at extending the above framework.
Contribution 1: Feature Weighting. For learning more discriminative features of the target class from a single (or very few) support image(s), we introduce a regularization that encourages high feature activations on the foreground and simultaneously low feature activations on the background of the support image.
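The masked feature averaging of Eq. (1) and the cosine-similarity map of Eq. (2) amount to a few array operations. A minimal NumPy sketch (function names are ours, and the mask is assumed already down-sampled to the feature resolution):

```python
import numpy as np

def masked_average(F_s, m_s):
    """Eq. (1): average support features over foreground locations.
    F_s: (d, w, h) feature maps; m_s: (w, h) binary foreground mask."""
    fg = m_s.reshape(-1)                        # (wh,)
    F = F_s.reshape(F_s.shape[0], -1)           # (d, wh)
    return (F * fg).sum(axis=1) / fg.sum()      # f_s: (d,)

def similarity_map(f_s, F_q, eps=1e-8):
    """Eq. (2): cosine similarity between f_s and every query feature vector."""
    d, w, h = F_q.shape
    F = F_q.reshape(d, -1)                      # (d, wh)
    num = f_s @ F                               # (wh,) inner products
    den = np.linalg.norm(f_s) * np.linalg.norm(F, axis=0) + eps
    return (num / den).reshape(w, h)            # sigma_q
```

With ReLU (non-negative) features, the resulting map lies in [0, 1], as stated above.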
This is formalized as an optimization problem for maximizing a sum of relevant differences between feature activations. Let \varphi_s \in R^{d \times 1} denote a vector of feature differences normalized over the foreground and background areas in the segmentation map of the support image:

\varphi_s = \sum_{i=1}^{wh} F_{s,i} \left( \frac{\tilde{m}_{s,i}}{|\tilde{m}_s|} - \frac{1 - \tilde{m}_{s,i}}{wh - |\tilde{m}_s|} \right).    (3)

The relevance r \in R^{d \times 1} of features in \varphi_s is estimated by maximizing a sum of the feature differences:

\underset{r}{\text{maximize}} \ \varphi_s^\top r, \quad \text{s.t.} \ \|r\|_2 = 1.    (4)

The problem in (4) has a closed-form solution:

r^* = \frac{\varphi_s}{\|\varphi_s\|_2}.    (5)

We use the estimated feature relevance r^* when computing the similarity map between the support and query images. Specifically, we modify the cosine similarity between f_s and F_q, given by (2), as

\sigma^*_{q,i} = \cos(f_s, F_{q,i}, r^*) = \frac{(f_s \odot r^*)^\top (F_{q,i} \odot r^*)}{\|f_s \odot r^*\|_2 \cdot \|F_{q,i} \odot r^*\|_2},    (6)

where \odot is the element-wise product between two vectors.
Note that we account for feature relevance in both training and testing. As r^* has a closed-form solution, it can be computed very efficiently. Also, the modification of the similarity map in (6) is quite simple and cheap to implement in modern deep learning frameworks.
As shown in Fig. 2, in the final step of our processing, we concatenate \sigma^*_q and F_q together, and pass them to a network with only two convolutional layers for predicting \hat{m}_q. In testing, the CNN is supposed to address a new object class which has not been seen in training. To improve generalization to the new class, in testing, we use a boosted inference, our second contribution, inspired by gradient boosting [10]. Alg. 1 summarizes our boosted inference in testing. Note that in testing the parameters of the CNN and convolutional layers remain fixed to the trained values.
As shown in Fig. 3, given a support image with ground truth m_s and a query image, in testing, we predict not only the query mask \hat{m}_q, but also the support mask \hat{m}_s, using the same deep architecture as specified in Sec. 3.1.
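The closed-form relevance of Eqs. (3)-(5) and the re-weighted cosine of Eq. (6) can be sketched directly in NumPy (function names are illustrative, not from the paper):

```python
import numpy as np

def feature_relevance(F_s, m_s):
    """Eqs. (3)-(5): closed-form relevance r* of the d feature channels.
    F_s: (d, w, h) support features; m_s: (w, h) binary foreground mask."""
    fg = m_s.reshape(-1).astype(float)          # (wh,)
    F = F_s.reshape(F_s.shape[0], -1)           # (d, wh)
    n_fg = fg.sum()
    n_bg = fg.size - n_fg
    # Eq. (3): per-channel foreground-minus-background contrast
    phi = F @ (fg / n_fg - (1.0 - fg) / n_bg)
    return phi / np.linalg.norm(phi)            # Eq. (5): r* = phi / ||phi||_2

def weighted_cosine(f_s, F_qi, r):
    """Eq. (6): cosine similarity with channels re-weighted by r*."""
    a, b = f_s * r, F_qi * r                    # element-wise products
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Intuitively, a channel that fires on the foreground but not the background receives a large entry in r*, so it dominates the re-weighted similarity.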
\hat{m}_s is estimated in three steps. First, we compute a similarity map, \sigma^*_s, as a dot product between f_s \odot r^* and F_{s,i} \odot r^*, as in (6). Second, we pass \sigma^*_s and F_s to the two-layer convolutional network for predicting \hat{m}_s. Third, we estimate the standard cross-entropy loss L(\hat{m}_s, m_s), and iteratively update the average class features as

f_s^{n+1} = f_s^n - \nu \, \partial L(\hat{m}_s^n, m_s) / \partial f_s^n,    (7)

where \nu is the learning rate. The f_s^n, n = 1, \ldots, N, are experts that we use for predicting the corresponding query masks \hat{m}_q^n, by first estimating the similarity map \sigma^*_{q,i} = \cos(f_s^n, F_{q,i}, r^*), i = 1, \ldots, wh, as in (6), and then passing \sigma^*_q and F_q to the two-layer network for computing \hat{m}_q^n. Finally, we fuse the ensemble \{\hat{m}_q^n : n = 1, \ldots, N\} into the final segmentation, \hat{m}_q, as

\hat{m}_q = \sum_{n=1}^{N} \hat{m}_q^n \rho_n,    (8)

where \rho_n denotes our estimate of the expert's confidence in correctly segmenting the target class, computed as the intersection-over-union score between \hat{m}_s^n and m_s:

\rho_n = IoU(\hat{m}_s^n, m_s).    (9)

Algorithm 1: Guided Ensemble Inference in Testing
Input: F_s, f_s, F_q, m_s, \nu. Output: \hat{m}_q.
// Guided ensemble of experts
1. Initialize: f_s^1 = f_s, E = \{\}, R = \{\};
2. for n = 1, \ldots, N do
   a. \sigma^*_{s,i} = \cos(f_s^n, F_{s,i}, r^*), i = 1, \ldots, wh, as in (6);
   b. \hat{m}_s^n = Conv(\sigma^*_s, F_s);
   c. \rho_n = IoU(\hat{m}_s^n, m_s);
   d. Compute the cross-entropy loss L(\hat{m}_s^n, m_s);
   e. f_s^{n+1} = f_s^n - \nu \, \partial L(\hat{m}_s^n, m_s) / \partial f_s^n;
   f. E = E \cup \{f_s^n\}, R = R \cup \{\rho_n\};
   end
// Inference using E and R
3. for n = 1, \ldots, N do
   a. \sigma^*_{q,i} = \cos(f_s^n, F_{q,i}, r^*), i = 1, \ldots, wh, as in (6);
   b. \hat{m}_q^n = Conv(\sigma^*_q, F_q);
   end
4. \hat{m}_q = \sum_{n=1}^{N} \hat{m}_q^n \rho_n

When the number of support images K > 1, prior work [20,21,6,32] predicts \hat{m}_q^k for each support image independently, and then estimates \hat{m}_q as an average over these predictions, \hat{m}_q = \frac{1}{K} \sum_{k=1}^{K} \hat{m}_q^k.
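The guided ensemble inference can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the two-layer Conv head is replaced here by a per-pixel logistic read-out so that the gradient step (e) has a closed form, whereas the actual method backpropagates the cross-entropy loss through the trained convolutional head.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def guided_ensemble(F_s, m_s, F_q, lr=0.1, n_experts=5):
    """Sketch of Alg. 1. F_s, F_q: (d, wh) flattened features; m_s: (wh,) binary mask.
    Assumption: a per-pixel logistic read-out stands in for the Conv head."""
    f = (F_s * m_s).sum(1) / m_s.sum()          # initial expert f_s^1, as in Eq. (1)
    experts, confidences = [], []
    for _ in range(n_experts):
        p_s = sigmoid(f @ F_s)                  # support prediction m_hat_s
        pred = p_s > 0.5
        inter = np.logical_and(pred, m_s.astype(bool)).sum()
        union = np.logical_or(pred, m_s.astype(bool)).sum()
        rho = inter / max(union, 1)             # step (c): IoU confidence, Eq. (9)
        experts.append(f.copy()); confidences.append(rho)
        grad = F_s @ (p_s - m_s) / m_s.size     # closed-form BCE gradient w.r.t. f
        f = f - lr * grad                       # step (e): expert update, Eq. (7)
    # confidence-weighted fusion of the experts' query masks, Eq. (8)
    m_q = sum(r * sigmoid(e @ F_q) for e, r in zip(experts, confidences))
    return m_q / max(sum(confidences), 1e-8)    # normalized here to stay in [0, 1]
```

The normalization in the last line is our choice for a readable probability map; Eq. (8) itself is an unnormalized confidence-weighted sum.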
In contrast, our two contributions can be conveniently extended to the K-shot setting so as to further improve robustness, beyond the standard averaging over K independent segmentations of the query.
Our contribution 1 is extended by estimating the relevance r \in R^{d \times 1} of a more general difference vector of feature activations, defined as

\varphi_s = \sum_{k=1}^{K} \sum_{i=1}^{wh} F_{s,i}^k \left( \frac{\tilde{m}_{s,i}^k}{|\tilde{m}_s^k|} - \frac{1 - \tilde{m}_{s,i}^k}{wh - |\tilde{m}_s^k|} \right).    (10)

Similar to (4) and (5), the optimal feature relevance has a closed-form solution, r^* = \varphi_s / \|\varphi_s\|_2. Note that we estimate r^* jointly over all K support images, rather than as an average of independently estimated feature relevances for each support image. We expect the former (i.e., our approach) to be more robust than the latter.
Our contribution 2 is extended by having a more robust update of f_s^{n+1} than in (7):

f_s^{n+1} = f_s^n - \nu \, \partial \sum_{k=1}^{K} L(\hat{m}_s^k, m_s^k) / \partial f_s^n,    (11)

where L(\hat{m}_s^k, m_s^k) is the cross-entropy loss incurred for predicting the segmentation mask \hat{m}_s^k using the single vector f_s^n, updated by (11), for every support image k, as explained in Sec. 3.2. Importantly, we do not generate K independent ensembles of experts \{f_s^n : n = 1, \ldots, N\}_{k=1,\ldots,K} for each of the K support images. Rather, we estimate a single ensemble of experts more robustly over all K support images, starting with the initial expert f_s^1 = \frac{1}{K} \sum_{k=1}^{K} f_s^k.
Implementation. The CNN we use is a modified version of either VGG-16 [27] or ResNet-101 [12]. Our CNN has the last two convolutional layers modified so as to have stride equal to 1, instead of 2 as in the original networks. This is combined with dilated convolutions to enlarge the receptive field, with rates 2 and 4, respectively. So the final feature maps of our network have stride = 8, i.e., 1/8 of the input image size.
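The K-shot extensions above, the joint relevance of Eq. (10) and the averaged initial expert, can be sketched as follows (function names are ours; features are assumed flattened to (d, wh)):

```python
import numpy as np

def kshot_relevance(supports):
    """Eq. (10): relevance r* estimated jointly over all K support images,
    rather than averaging K independent estimates.
    supports: list of (F_s, m_s) pairs, F_s: (d, wh), m_s: (wh,) binary."""
    phi = 0.0
    for F_s, m_s in supports:
        n_fg = m_s.sum()
        n_bg = m_s.size - n_fg
        phi = phi + F_s @ (m_s / n_fg - (1.0 - m_s) / n_bg)
    return phi / np.linalg.norm(phi)            # r* = phi / ||phi||_2

def kshot_initial_expert(supports):
    """f_s^1 = (1/K) sum_k f_s^k: average of the K masked class features."""
    feats = [(F * m).sum(1) / m.sum() for F, m in supports]
    return np.mean(feats, axis=0)
```

Because the contrasts are summed before normalizing, support images with strong foreground/background contrast contribute more to r* than they would under independent per-image estimates.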
For the two-layer convolutional network (Conv) aimed at producing the final segmentation, we use a 3 × 3 convolution with ReLU and 128 channels, and a 1 × 1 convolution with 2 output channels (background and foreground). It is worth noting that we do not use a CRF as a common post-processing step [14].
For implementation, we use PyTorch [19]. Following the baselines [32,20,21], we pretrain the CNN on ImageNet [24]. Training images are resized to 512 × 512, while keeping the original aspect ratio. All test images keep their original size. Training is done with SGD, learning rate 7e-3, batch size 8, and 10,000 iterations. For contribution 2, the number of experts N is analyzed in Sec. 5. For updating f_s^n in (7), we use the Adam optimizer [13] with \nu = 1e-2.
Complexity. In training, prior work [32,20,21]: (1) uses a CNN with complexity O(CNN) for extracting features from the support and query images; (2) computes the similarity map with complexity O(dwh); and (3) [32] additionally uses a convolutional network for segmenting the query with complexity O(Conv). Note that O(Conv) = O(dwh), as both the similarity map and the convolutions in the Conv network are computed over feature maps of size d × w × h. Also, note that O(dwh) is significantly smaller than O(CNN). Our contribution 1 additionally computes the feature relevance using the closed-form solution, with a linear complexity in the size of the feature maps, O(dwh). Therefore, our total training complexity is equal to that of prior work [32,20,21]: O(Train) = O(CNN) + O(dwh).
In testing, the complexity of prior work [32,20,21] is the same as O(Train). Our contribution 2 increases complexity in testing by additionally estimating the ensemble of N segmentations of the query image. Therefore, in testing, our complexity is O(Test) = O(CNN) + O(N dwh). Thus, in testing, we increase only the smaller term of the total complexity.
For small N , we have that the first term O(CNN) still dominates the total complexity. As we show in Sec. 5, for N = 10, we significantly outperform the state of the art, which justifies our slight increase in testing complexity." }, { "title": "One-shot learning for semantic segmentation", "year": 2018.0, "authors": "Amirreza Shaban; Shray Bansal; Zhen Liu; Irfan Essa; Byron Boots", "arxiv_di": 1709.0341, "Introduction": "Deep Neural Networks are powerful at solving classification problems in computer vision. However, learning classifiers with these models requires a large amount of labeled training data, and recent approaches have struggled to adapt to new classes in a data-efficient manner. There is interest in quickly learning new concepts from limited data using one-shot learning methods [21,37]. One-shot image classification is the problem of classifying images given only a single training example for each category [22,39].
We propose to undertake One-Shot Semantic Image Segmentation. Our goal is to predict a pixel-level segmentation mask for a semantic class (like horse, bus, etc.) given only a single image and its corresponding pixel-level annotation. We refer to the image-label pair for the new class as the support set here, but more generally for k-shot learning, the support set refers to the k images and labels.
A simple approach to performing one-shot semantic image segmentation is to fine-tune a pre-trained segmentation network on the labeled image [3]. This approach is prone to overfitting due to the millions of parameters being updated. It also introduces complications in optimization, where parameters like step size, momentum, number of iterations, etc. may be difficult to determine. (Figure 1: In our approach, we input S to a function g that outputs a set of parameters \theta. We use \theta to parameterize part of a learned segmentation model which produces a segmentation mask given I_q.) Recent one-shot image categorization methods [22,39], in contrast, meta-learn a classifier that, when conditioned on a few training examples, can perform well on new classes. Since Fully Convolutional Neural Networks (FCNs) [26] perform segmentation as pixel-wise classification, we could extend these one-shot methods directly to classify at the pixel level. However, thousands of dense features are computed from a single image, and one-shot methods do not scale well to this many features. We illustrate this issue by implementing an extension to the Siamese Network from [22] as a baseline in Section 6.
We take inspiration from few-shot learning and propose a novel two-branched approach to one-shot semantic image segmentation. The first branch takes the labeled image as input and produces a vector of parameters as output. The second branch takes these parameters as well as a new image as input and produces a segmentation mask of the image for the new class as output. This is illustrated in Figure 1. Unlike the fine-tuning approach to one-shot learning, which may require many iterations of SGD to learn parameters for the segmentation network, the first branch of our network computes parameters in a single forward pass. This has several advantages: the single forward pass makes our method fast; our approach for one-shot learning is fully differentiable, allowing the branch to be jointly trained with the segmentation branch of our network; finally, the number of parameters \theta is independent of the size of the image, so our method does not have problems in scaling.
To measure the performance for one-shot semantic segmentation we define a new benchmark on the PASCAL VOC 2012 dataset [11] (Section 5).
The training set contains labeled images from a subset of the PASCAL classes, and the testing set has annotations of classes that were not present in training. We show significant improvements over the baselines on this benchmark in terms of the standard meanIoU (mean intersection over union) metric, as described in Section 7.
We extend to k-shot learning by applying our one-shot approach for each of the k images independently to produce k segmentation masks. We then aggregate these masks by performing a logical OR operation at the pixel level. This approach, apart from being easy to implement and fast, requires no retraining to generalize to any number of images in the support set. We show its effectiveness in terms of increasing meanIoU accuracy per added image to the support set in Section 7.
PASCAL VOC contains only 20 classes, which is small when compared to standard datasets used for training one-shot classification methods like Omniglot (1623 classes) [24] and ImageNet (1000 classes) [10]. Simulating the one-shot task during training, even with such a limited number of classes, performs well. This is in contrast to the common notion that training models for few-shot learning requires a large number of classes. We hypothesize that part of our algorithm's ability to generalize well to unseen classes comes from the pretraining performed on ImageNet, which contains weak image-level annotations for a large number of classes.
We perform experiments on the pretraining in Section 7.1.
This paper makes the following contributions: (1) we propose a novel technique for one-shot segmentation which outperforms baselines while remaining significantly faster; (2) we show that our technique can do this without weak labels for the new classes; (3) we show that meta-learning can be effectively performed even with only a few classes having strong annotations available; and (4) we set up a benchmark for the challenging k-shot semantic segmentation task on PASCAL.", "Related_Work": "Semantic image segmentation is the task of classifying every pixel in an image into a predefined set of categories. Convolutional Neural Network (CNN) based methods have driven recent success in the field. Some of these classify super-pixels [13,15,27], others classify pixels directly [6,16,26,28]. We base our approach on the Fully Convolutional Network (FCN) for semantic segmentation [26], which showed the efficiency of pixel-wise classification. However, unlike FCN and the other approaches above, we do not assume a large set of annotated training data for the test classes. Weak Supervision. Weak and semi-supervised methods for semantic segmentation reduce the requirement on expensive pixel-level annotations, thus attracting recent interest. Weak supervision refers to training from coarse annotations like bounding boxes [9] or image labels [30,31,33]. A notable example is co-segmentation, where the goal is to find and segment co-occurring objects in images from the same semantic class [12,35]. Many co-segmentation algorithms [8,17,34] assume object visual appearances in a batch are similar, and either rely on hand-tuned low-level features or high-level CNN features trained for different tasks or objects [34]. In contrast, we meta-learn a network to produce a high-level representation of a new semantic class given a single labeled example.
Semi-supervised approaches [18,19,30] combine weak labels with a small set of pixel-level annotations. However, they assume a large set of weak labels for each of the desired objects. For instance, Pathak et al. [32] use image-level annotations for all classes and images in the PASCAL 2012 training set [11], while we exclude all annotations of the testing classes from the PASCAL training set. Few-Shot Learning algorithms seek to generalize knowledge acquired through classes seen during training to new classes with only a few training examples [25,36,39]. Discriminative methods, in which the parameters of the base classifier (learned on training classes) are adapted to the new class [1,2,14,40], are closely related to our work. The main challenge is that the adapted classifier is prone to over-fit to the newly presented training examples. Wang and Hebert [40] address this challenge by learning to predict classifiers which remain close to the base classifier. Bertinetto et al. [2] trained a two-branch network, in which one branch receives an example and predicts a set of dynamic parameters. The second branch classifies the query image using the dynamic parameters along with a set of learned static parameters. A similar approach was used by Noh et al. [29] for question answering. We draw several ideas from these papers and adapt them for the task of dense classification to design our model. Metric learning is another approach to low-shot learning [22,39]. It aims to learn an embedding space that pulls objects from the same categories close, while pushing those from different categories apart. Koch et al. [22] show that a Siamese architecture trained for a binary verification task can beat several classification baselines in k-shot image classification.
We adapt their approach for image segmentation as one of our baselines.", "Methodology": "Let the support set S = \{(I_s^i, Y_s^i(l))\}_{i=1}^{k} be a small set of k image-binary mask pairs, where Y_s^i \in L_{test}^{H \times W} is the segmentation annotation for image I_s^i and Y_s^i(l) is the mask of the i-th image for the semantic class l \in L_{test}. The goal is to learn a model f(I_q, S) that, when given a support set S and a query image I_q, predicts a binary mask \hat{M}_q for the semantic class l. An illustration of the problem for k = 1 is given in Figure 1.
During training, the algorithm has access to a large set of image-mask pairs D = \{(I_j, Y_j)\}_{j=1}^{N}, where Y_j \in L_{train}^{H \times W} is the semantic segmentation mask for training image I_j. At testing, the query images are only annotated for new semantic classes, i.e., L_{train} \cap L_{test} = \emptyset. This is the key difference from typical image segmentation, where training and testing classes are the same. While the problem is similar to k-shot learning, which has been extensively studied for image classification [36,39], applying it to segmentation requires some modification.
In this problem, unlike image classification, examples from L_{test} might appear in training images. This is handled naturally when an annotator unaware of some object class labels it as background. Annotations of L_{test} objects are excluded from the training set, while the images are included as long as there is an object from L_{train} present. State-of-the-art algorithms for image segmentation [4,5] use networks pre-trained on large-scale image classification datasets like [10]. Although these weights give the models a better starting point, they still require many segmented images and thousands of weight updates to learn a good model for pixel classification. This is true even for the classes that directly overlap. We allow similar access to weak annotations for our problem by initializing VGG with weights pre-trained on ImageNet [10].
In Section 7.1, however, we show that even excluding all the overlapping classes from pre-training does not degrade the performance of our approach. We propose an approach where the first branch receives as input a labeled image from the support set S and the second branch receives the query image I_q. In the first branch, we input the image-label pair S = (I_s, Y_s(l)) to produce a set of parameters,

w, b = g_\eta(S).    (1)

In the other branch, we extract a dense feature volume from I_q using a parametric embedding function \phi. Let F_q = \phi_\zeta(I_q) be that feature volume extracted from I_q; then F_q^{mn} is the feature vector at the spatial location (m, n). Pixel-level logistic regression is then performed on the features, using the parameters from the first branch, to get the final mask:

\hat{M}_q^{mn} = \sigma(w^\top F_q^{mn} + b).    (2)

Here, \sigma(\cdot) is the sigmoid function and \hat{M}_q^{mn} is the (m, n) location of the predicted mask for the query. This can be understood as a convolutional layer with parameters \{w, b\} followed by a sigmoid activation function, where the parameters are not fixed after training and get computed through the first branch for each image in the support set. The predicted mask is then upsampled back to the original image size using standard bilinear interpolation. The final binary mask is produced by using a threshold of 0.5 on \hat{M}_q. The overall architecture is illustrated in Figure 2. We explain each part of the architecture in more detail in the following subsections.", "Dataset": "Dataset: We create a new dataset, PASCAL-5 i , for the problem of k-shot image segmentation, using images and annotations from PASCAL VOC 2012 [11] and extended annotations from SDS [15]. From L, the set of twenty semantic classes in PASCAL VOC, we sample five and consider them as the test label-set L_{test} = \{4i + 1, \ldots, 4i + 5\}, with i being the fold number, and the remaining fifteen forming the training label-set L_{train}. Test and training class names are shown in Table 5. (Table 1: Mean IoU results on PASCAL-5 i . Top: test classes for each fold of PASCAL-5 i . The middle and bottom tables contain the semantic segmentation meanIoU on all folds for the 1-shot and 5-shot tasks, respectively.) We form the training set D_{train} by including all image-mask pairs from the PASCAL VOC and SDS training sets that contain at least one pixel in the semantic mask from the label-set L_{train}. The masks in D_{train} are modified so that any pixel with a semantic class not in L_{train} is set as the background class l_\emptyset. We follow a similar procedure to form the test set D_{test}, but here the image-label pairs are taken from the PASCAL VOC validation set and the corresponding label-set is L_{test}. Thus, apart from a few exclusions, the set of images is similar to those used in image segmentation papers, like FCN [26]. However, the annotations are different. Given the test set D_{test}, we use the same procedure that is described in Section 4.3 to sample each test example \{S, (I_q, Y_q(l))\}. We sample N = 1000 examples and use this as the benchmark for testing each of the models described in the next section. Metric: Given a set of predicted binary segmentation masks \{\hat{M}_q\}_{i=1}^{N} and the ground-truth annotated masks \{M_q\}_{i=1}^{N} for a semantic class l, we define the per-class intersection over union (IoU_l) as tp_l / (tp_l + fp_l + fn_l). Here, tp_l is the number of true positives, fp_l the number of false positives, and fn_l the number of false negatives over the set of masks. The meanIoU is just its average over the set of classes, i.e., (1/n_l) \sum_l IoU_l. This is the standard meanIoU metric defined in the image segmentation literature, adapted for our binary classification problem.", "Conclusion": "Deep learning approaches have achieved top performance in many computer vision problems. However, learning a new concept given few examples is still a very challenging task.
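The pixel-level logistic regression of Eq. (2), with (w, b) produced by the conditioning branch, is just a thresholded sigmoid over per-pixel inner products. A minimal NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def predict_mask(F_q, w, b, threshold=0.5):
    """Eq. (2): pixel-level logistic regression with parameters (w, b)
    produced by the conditioning branch g_eta(S).
    F_q: (H, W, d) query feature volume; w: (d,) weights; b: scalar bias."""
    logits = F_q @ w + b                        # (H, W) inner products w^T F + b
    probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid, the soft mask M_hat
    return probs, probs > threshold             # soft mask, binary mask
```

As noted above, this is equivalent to a 1 × 1 convolution followed by a sigmoid, with the convolution's weights supplied per-episode rather than fixed after training; the 0.5 threshold matches the binarization described in the text.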
implement it as a fully connected layer since we can keep both the hashing values and θ in memory. The weights are set as\nW(i, j) = ζ(i) δ_j(κ(i)), (S3)\nwhere δ_j(·) is the discrete Dirac delta function. The weights are set according to the random hashing functions before training and are kept fixed. This is both easier to implement and more computationally efficient than the original formulation and the one used by [29]. The output of the inner-product layer Wx is equal to θ from Equation S1.", "Experiment_and_Results": "We conduct several experiments to evaluate the performance of our approach on the task of k-shot image segmentation by comparing it to other methods. Table 1 reports the performance of our method in the 1-shot and 5-shot settings and compares it with the baseline methods.\nTo fit a 5-shot Siamese network into memory, we sampled from the features in the support set at a rate of 0.3. However, sub-sampling considerably degraded the performance of the method, and the 5-shot results were worse than the 1-shot version, so we exclude those results.\nOur method shows better generalization performance on new classes. The difference is very noticeable in 1-shot learning, as other methods overfit to the single image in the support set. Specifically, our method outperforms 1-NN and fine-tuning in one-shot image segmentation by 25% relative meanIoU. We also provide some qualitative results from our method in Figure 4. Surprisingly, the results for 1-NN are almost as good as the fine-tuning baseline, which overfits quickly to the data in the support set.\nIn Table 1, we also compare Co-segmentation by Composition [35] for 5-shot segmentation to our approach. As expected, using the strong pixel-level annotations enables our method to outperform the unsupervised co-segmentation approach, by 16%. In fact, we can outperform co-segmentation results that require 5 weakly annotated images with just a single strongly annotated image.
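The fixed fully connected implementation of the weight-hashing layer in Eq. (S3) can be sketched as follows; the concrete random draws for κ and ζ and all function names here are illustrative assumptions:

```python
import random

def hashing_matrix(d, m, seed=0):
    """Fixed weights of the weight-hashing layer (Eq. S3):
    W[i][j] = zeta(i) * delta_j(kappa(i)),
    i.e. row i has a single nonzero entry, +/-1, at column kappa(i)."""
    rng = random.Random(seed)
    kappa = [rng.randrange(m) for _ in range(d)]    # kappa(i) -> {0, ..., m-1}
    zeta = [rng.choice((-1, 1)) for _ in range(d)]  # zeta(i)  -> {-1, +1}
    W = [[0] * m for _ in range(d)]
    for i in range(d):
        W[i][kappa[i]] = zeta[i]
    return W  # set before training and kept fixed

def apply_hashing(W, x):
    """theta = W x: each coefficient of x is replicated into theta with a
    random sign, expanding an m-dim vector to d dimensions."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
```

Because each row of W has exactly one ±1 entry, the layer adds no learnable parameters, which is what avoids the overfitting a dense 1000-to-4097 mapping would cause.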
Dilated-FCN: In addition to the low-resolution version of our method, we also trained the dilated-FCN with higher resolution on PASCAL-5^0 and achieved 37.0% and 37.43% mean-IoU for 1-shot and 5-shot, respectively. We notice a 3.4% improvement over low resolution for one-shot; however, the gap between 1-shot and 5-shot is small at this resolution. We believe this is because our training is specific to the 1-shot problem. We do not use the dilated-FCN architecture for the other methods because their high computational cost or memory footprint makes this impractical. Running Time: In Table 2 we include the running time of each algorithm. All the experiments were executed on a machine with a 4GHz Intel Core-i7 CPU, 32GB RAM, and a Titan X GPU. In the one-shot setting our method is ∼3× faster than the second-fastest method, logistic regression. For 5-shot, our method is ∼10× faster than logistic regression. We include some more qualitative results of our approach for one-shot semantic segmentation in Figure S3. We see that our method is capable of segmenting a variety of classes well and can distinguish an object from others in the scene given only a single support image.\nWe illustrate the effect of conditioning by segmenting the same query image with different support sets in Figure S4. We picked an unseen query image with two unseen classes, car and cow, and sampled support image-mask pairs for each class. Figure S5 shows how increasing the size of the support set helps improve the predicted mask. Note that in both Figures S5 and S4, yellow indicates the overlap between the ground truth and the prediction.\nFigure S3: Qualitative results for 1-shot. Inside each tile, we have the support set at the top and the query image at the bottom. The support is overlaid with the ground truth in yellow and the query is overlaid with our predicted mask in red.", "Extra": "We modify the VGG-16 architecture from [38] to model the function g_η(·).\nMasking.
We chose to mask the image with its corresponding label so that it contains only the target object, instead of modifying the first layer to receive the four-channel image-mask pair as input. We do this for the following two empirical reasons. (1) Even in the presence of the mask, the network response tends to be biased towards the largest object in the image, which may not be the object we would like to segment. (2) Including the background information in the input increased the variance of the output parameters {w, b}, which prevented the network from converging.\nWeight Hashing. Inspired by Noh et al. [29], we employed the weight hashing layer from [7] to map the 1000-dimensional vector output from the last layer of VGG to the 4097 dimensions of {w, b}. This mapping avoids the overfitting that would occur due to the massive number of extra parameters a fully connected layer would introduce if used instead. We implemented it efficiently as a fully connected layer with fixed weights. This is explained in more detail in the supplementary material. We model the embedding function F_q = φ_ζ(I_q) with the FCN-32s fully convolutional architecture [26], excluding the final prediction layer. The 4096-channel feature volume at conv-fc7 is then fed to the logistic pixel classifier described above. In Section 7 we also evaluate the performance of the high-resolution dilated-FCN [41] with stride 8. We simulate the one-shot task during training by sampling a support set S, a query image I_q, and its corresponding binary mask M_q from the training set D_train at each iteration. First, an image-label pair (I_q, Y_q) is sampled uniformly at random from D_train; then we sample a class l ∈ L_train uniformly from the classes present in the semantic mask and use it to produce the binary mask Y_q(l). S is formed by picking one image-mask pair at random from D_train − {(I_q, Y_q)} with class l present. We can then predict the mask M̂_q with a forward pass through our network.
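The training-time task simulation just described (sample a query pair, a class l present in it, then a disjoint support pair containing l) can be sketched as follows, with a toy representation of D_train as (image, classes-present) pairs; all names are hypothetical:

```python
import random

def sample_task(d_train, rng=random):
    """Simulate the one-shot task: sample a query image-label pair, a class l
    present in its mask, and one support pair (from the rest of D_train)
    that also contains l."""
    q_idx = rng.randrange(len(d_train))
    image_q, classes_q = d_train[q_idx]
    l = rng.choice(sorted(classes_q))  # uniform over classes in the query mask
    # Support candidates: D_train minus the query pair, restricted to class l.
    support_pool = [i for i, (_, cls) in enumerate(d_train)
                    if i != q_idx and l in cls]
    s_idx = rng.choice(support_pool)
    return d_train[s_idx], (image_q, l)
```

A real implementation would return the binary masks Y_s(l) and Y_q(l) rather than class sets, but the sampling logic is the same.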
We maximize the log-likelihood of the ground-truth mask:\nL(η, ζ) = E_{S, I_q, M_q ∼ D_train} [ Σ_{m,n} log p_{η,ζ}(M_q^{mn} | I_q, S) ]. (3)\nHere η and ζ are the network parameters, p_{η,ζ} is the probability of the mask given the neural network output, and S, I_q, and M_q are drawn by the sampling strategy described above. We use Stochastic Gradient Descent with a fixed learning rate of 10^-10, momentum 0.99, and a batch size of 1. The VGG network overfits faster than the fully-convolutional branch; therefore, we set the learning rate multiplier to 0.1 for learning the parameters η. We stop training after 60k iterations. In the case of k-shot segmentation, the support set contains k labeled images, S = {I_s^i, Y_s^i(l)}_{i=1}^k. We use these images to produce k sets of parameters {w_i, b_i}_{i=1}^k, each of which can be understood as an independent classifier in an ensemble. We noticed that these classifiers have high precision but low recall. We believe this is because each is produced from one example in the support set, and a single image can only contain a small subset of the possible appearances of the object. So, we combine the decisions of these classifiers by including a pixel in the final mask if it is considered an object by any of them. This is implemented as a logical OR operation between the k binary masks. This approach has the benefit that it does not require any retraining and generalizes to any k. It is also much faster than the baselines, as shown in Section 7. We evaluate the performance of our method against different baselines. Since one-shot image segmentation is a new problem, we adapt previous work on dense pixel prediction to serve as baselines to compare against.\n• Base Classifiers: CNNs learn deep representations of images, so these models are an intuitive starting point for classification.
Specifically, we first fine-tune FCN-32s pretrained on ILSVRC2014 data to perform 16-way (15 training foreground classes + 1 background class) pixel-wise predictions on the PASCAL-5^i dataset. During testing, we extract dense pixel-level features from both the images in the support set and the query image. We then train classifiers to map dense fc-7 features from the support set to their corresponding labels and use them to generate the predicted mask M̂_q. We experimented with various classifiers, including 1-NN and logistic regression.\n• Fine-tuning: As suggested by [3], for each test iteration we fine-tune the trained segmentation network on the examples in the support set and test on the query image. We only fine-tune the fully connected layers (fc6, fc7, fc8) to avoid overfitting and to reduce the inference time per query. We also found that the fine-tuned network converges faster if we normalize the fc-7 features with a batch normalization layer.\n• Co-segmentation by Composition: To compare with these techniques, we include the results of the publicly available implementation of [12] on PASCAL-5^i.\n• Siamese Network for One-shot Dense Matching: Siamese networks trained for image verification, i.e. predicting whether two inputs belong to the same class, have shown good performance on one-shot image classification [22]. We adapt them by using two FCNs to extract dense features and then training them for pixel verification. A similarity metric from each pixel in the query image to every pixel in the support set is also learned, and pixels are then labeled according to their nearest neighbors in the support set. Implementation details are provided in the supplementary document. The models compared above have two sources of information: the image-level labels for the classes in ImageNet [10] through the pretraining, and the pixel-level annotation of the classes in L_train.
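The logical-OR ensembling of the k per-support classifiers described in the k-shot setup above amounts to the following minimal sketch (the nested-list mask representation is illustrative):

```python
def kshot_mask(binary_masks):
    """Combine k one-shot predictions by logical OR: a pixel is foreground in
    the final mask if any of the k classifiers {w_i, b_i} marks it as object.
    Requires no retraining and generalizes to any k."""
    H, W = len(binary_masks[0]), len(binary_masks[0][0])
    return [[int(any(m[r][c] for m in binary_masks)) for c in range(W)]
            for r in range(H)]
```

Since each per-support classifier is high-precision but low-recall, the union trades a small precision loss for a recall gain.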
Although the test classes L_test do not overlap with L_train, they have partial overlap with some ImageNet classes. To understand this effect, we use a dataset that excludes all the ImageNet classes with any overlap with PASCAL categories, called PASCAL-removed-ImageNet, as in [20]. This dataset contains only 771 classes, compared to the original 1000, since each class in PASCAL usually overlaps with multiple ImageNet classes. We use AlexNet [23] trained on ImageNet and on PASCAL-removed-ImageNet (from Huh et al. [20]), with the suffixes 1000 and 771 respectively. We replaced the VGG and FCN in both branches of our approach with AlexNet to obtain AlexNet-1000 and AlexNet-771. We also have a baseline in the form of logistic regression performed on convolutional AlexNet features fine-tuned on PASCAL, similar to the Base Classifiers described in Section 6. We refer to these as LogReg-1000 and LogReg-771. Figure 3 contains the results for these models on the first fold, i.e. PASCAL-5^0. Note that the results for the two baselines are constant because we evaluate those networks only once they converge.\nIn Figure 3 we observe that AlexNet-1000 is better initially and shows faster convergence. However, after convergence, AlexNet-771 performs on par with AlexNet-1000. The initial gap can be explained by the fact that even the L_train classes were not presented during the pre-training. AlexNet, being a simpler model, performs worse than VGG, whose meanIoU was 33.6% in Table 5. However, AlexNet-771 outperforms even our best VGG-based baseline, the Siamese network, which reaches 28.1% on PASCAL-5^0. This result shows that we can generalize to new categories without any weak supervision for them. In contrast, LogReg-1000 outperforming LogReg-771 shows its inability to learn a good representation without seeing weak labels for the test categories. This highlights the importance of meta-learning for this task.
In the paper, we used an adapted version of the Siamese Neural Network for One-shot Image Recognition by Koch et al. [22] for one-shot image segmentation. Here we explain the implementation details. The method from [22] receives as input two images that are each passed through identical convolutional networks, producing a vector as the output for each of them. These vectors are then compared using a learned L1 similarity metric, and the image is classified according to the label of its nearest neighbor in this metric space. In our case, we use an FCN that outputs a feature volume for each of the query and support images. Then the feature for every pixel in the query image is compared to every pixel in the support using a learned L1 similarity metric. We implemented this cross-similarity measurement between pixels as a Python layer for Caffe. The binary label is assigned to each pixel according to the label of the nearest pixel in the support set. The whole structure is illustrated in Figure S2. During training, we use FCNs initialized on ImageNet [10], and at each iteration we sample a pair of images from the PASCAL-5^i training set. One of them is treated as the query image and the other becomes the support image. The gradient is computed according to the cross-entropy loss between the sigmoid of the similarity metric and the true binary label. Every batch contains a subset (50%) of the pixels of a query and a support image. Both the similarity metric and the FCN feature extraction are trained jointly as different parts of the same neural network. In our Proposed Work, Section 5 of the paper, we employed the weight hashing technique from [7] to map the 1000-dimensional vector output from the last layer of VGG to the 4097 dimensions of {w, b}. This mapping (1) reduces the variance of {w, b}, as was also noted by Noh et al.
in [29], and (2) reduces the overfitting that would occur due to the massive number of extra parameters a fully connected layer would introduce if used instead. Weight hashing is explained as decompression in [7] and is performed as follows. Let x ∈ R^m and θ ∈ R^d, typically d > m, be the input and output of the layer, respectively. Weight hashing works by replicating each coefficient of x in multiple locations of θ and randomly flipping the sign to reduce the covariance of the copied coefficients. This is illustrated in Figure S1. Specifically, the i-th coefficient of θ is\nθ(i) = ζ(i) x(κ(i)),\nwhere both κ(i) → {1, . . . , m} and ζ(i) → {-1, +1} are hashing functions determined randomly. While [7] perform the hashing implicitly to keep the memory footprint small, we" }, { "title": "Prototypical networks for few-shot learning", "year": 2017.0, "authors": "Jake Snell; Kevin Swersky; Richard Zemel", "arxiv_di": 1703.05175, "Introduction": "Few-shot classification [20,16,13] is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy [16].\nTwo recent approaches have made significant progress in few-shot learning. Vinyals et al. [29] proposed matching networks, which use an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space.
Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle [22] take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM [9] to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode.\nWe attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class's prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class.\np φ (y = k|x) ∝ exp(-d(f φ (x), c k )).\nClassification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point.\nIn this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. 
We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering [4] in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning.", "Related_Work": "The literature on metric learning is vast [15,5]; we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) [8] learns a Mahalanobis distance to maximize K-nearest-neighbor's (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton [27] extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification [30] also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN [21] is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA [27] because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class's prototype representation. 
This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions.\nOur approach is also similar to the nearest class mean approach [19], where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences.\nAnother relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle [22]. The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. 
The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode.\nPrototypical networks are also related to the neural statistician [6] from the generative modeling literature, which extends the variational autoencoder [12,24] to learn generative models of datasets rather than individual points. One component of the neural statistician is the \"statistic network\" which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification.\nWith respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of [3] in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of [23] also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither [3] nor [23] uses episodic training, which allows us to help speed up training and regularize the model.", "Methodology": "Prototypical networks compute an M -dimensional representation c k ∈ R M , or prototype, of each class through an embedding function f φ : R D → R M with learnable parameters φ. 
Each prototype is the mean vector of the embedded support points belonging to its class:\nc_k = (1/|S_k|) Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i) (1)\nGiven a distance function d : R^M × R^M → [0, +∞), prototypical networks produce a distribution over classes for a query point x based on a softmax over distances to the prototypes in the embedding space:\np_φ(y = k | x) = exp(-d(f_φ(x), c_k)) / Σ_{k'} exp(-d(f_φ(x), c_{k'})) (2)\nLearning proceeds by minimizing the negative log-probability J(φ) = -log p_φ(y = k | x) of the true class k. Training episodes are formed as follows (Algorithm 1): for each selected class k, draw support examples S_k ← RANDOMSAMPLE(D_{V_k}, N_S) and query examples Q_k ← RANDOMSAMPLE(D_{V_k} \ S_k, N_Q); compute the prototype c_k ← (1/N_S) Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i) from the support examples; initialize the loss J ← 0; then, for each class k in {1, . . . , N_C} and each query (x, y) ∈ Q_k, update the loss as J ← J + (1/(N_C N_Q)) [ d(f_φ(x), c_k) + log Σ_{k'} exp(-d(f_φ(x), c_{k'})) ].\nA simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(z, z') = ‖z - z'‖², the model in Equation (2) is equivalent to a linear model with a particular parameterization [19]. To see this, expand the term in the exponent:\n-‖f_φ(x) - c_k‖² = -f_φ(x)ᵀ f_φ(x) + 2 c_kᵀ f_φ(x) - c_kᵀ c_k (7)\nThe first term in Equation (7) is constant with respect to the class k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows:\n2 c_kᵀ f_φ(x) - c_kᵀ c_k = w_kᵀ f_φ(x) + b_k, where w_k = 2 c_k and b_k = -c_kᵀ c_k (8)\nWe focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function.
Indeed, this is the approach that modern neural network classification systems currently use, e.g., [14,28].", "Conclusion": "We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning.", "Experiment_and_Results": "For few-shot learning, we performed experiments on Omniglot [16] and the miniImageNet version of ILSVRC-2012 [26] with the splits proposed by Ravi and Larochelle [22]. We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) [31]. 
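The prototype computation (Eq. 1) and distance-softmax classification (Eq. 2) can be sketched in a few lines of plain Python; lists of floats stand in for the embedded vectors f_φ(x), and the function names are illustrative:

```python
import math

def prototypes(embedded_support, labels):
    """c_k = (1/|S_k|) * sum of the embedded support points of class k (Eq. 1)."""
    by_class = {}
    for z, y in zip(embedded_support, labels):
        by_class.setdefault(y, []).append(z)
    return {k: [sum(dim) / len(pts) for dim in zip(*pts)]
            for k, pts in by_class.items()}

def classify(embedded_query, protos):
    """Distribution over classes: softmax over negative squared Euclidean
    distances to the class prototypes (Eq. 2)."""
    d2 = {k: sum((q - c) ** 2 for q, c in zip(embedded_query, c_k))
          for k, c_k in protos.items()}
    exp_neg = {k: math.exp(-d) for k, d in d2.items()}
    total = sum(exp_neg.values())
    return {k: v / total for k, v in exp_neg.items()}
```

With one support point per class the prototype is that point itself, which is the one-shot equivalence to matching networks noted later in the text.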
In Table 4 we show test classification accuracy for prototypical networks using Euclidean distance trained with 5, 20, and 60 classes per episode. Figure 3 shows a sample t-SNE visualization [18] of the embeddings learned by prototypical networks. We visualize a subset of test characters from the same alphabet in order to gain better insight, despite the fact that classes in actual test episodes are likely to come from different alphabets. Even though the visualized characters are minor variations of each other, the network is able to cluster the hand-drawn characters closely around the class prototypes. In Table 5 we show the full results for the comparison of training episode configurations in Figure 2 of the main paper.\nWe also compared Euclidean-distance prototypical networks trained with different numbers of classes per episode. Here we vary the classes per training episode from 5 up to 30 while keeping the number of query points per class fixed at 15. The results are shown in Figure 4. Our findings indicate that the construction of training episodes is an important consideration in order to achieve good results for few-shot classification. Table 6 contains the full results for this set of experiments.", "Extra": "In few-shot classification we are given a small support set of N labeled examples S = {(x_1, y_1), . . . , (x_N, y_N)}, where each x_i ∈ R^D is the D-dimensional feature vector of an example and y_i ∈ {1, . . . , K} is the corresponding label. S_k denotes the set of examples labeled with class k. For a particular class of distance functions, known as regular Bregman divergences [4], the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence d_ϕ is defined as:\nd_ϕ(z, z') = ϕ(z) - ϕ(z') - (z - z')ᵀ ∇ϕ(z'), (3)\nwhere ϕ is a differentiable, strictly convex function of the Legendre type.
Examples of Bregman divergences include squared Euclidean distance ‖z - z'‖² and Mahalanobis distance.\nPrototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown [4] for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used.\nMoreover, any regular exponential family distribution p_ψ(z|θ) with parameters θ and cumulant function ψ can be written in terms of a uniquely determined regular Bregman divergence [4]:\np_ψ(z|θ) = exp{zᵀθ - ψ(θ) - g_ψ(z)} = exp{-d_ϕ(z, µ(θ)) - g_ϕ(z)} (4)\nConsider now a regular exponential family mixture model with parameters Γ = {θ_k, π_k}_{k=1}^K:\np(z|Γ) = Σ_{k=1}^K π_k p_ψ(z|θ_k) = Σ_{k=1}^K π_k exp(-d_ϕ(z, µ(θ_k)) - g_ϕ(z)) (5)\nGiven Γ, inference of the cluster assignment y for an unlabeled point z becomes:\np(y = k|z) = π_k exp(-d_ϕ(z, µ(θ_k))) / Σ_{k'} π_{k'} exp(-d_ϕ(z, µ(θ_{k'}))) (6)\nFor an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with f_φ(x) = z and c_k = µ(θ_k). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by d_ϕ. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. Prototypical networks differ from matching networks in the few-shot case, with equivalence in the one-shot scenario. Matching networks [29] produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used.
In the case of one-shot learning, c k = x k since there is only one support point per class, and matching networks and prototypical networks become equivalent.\nA natural question is whether it makes sense to use multiple prototypes per class instead of just one.\nIf the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. [19] and Rippel et al. [25]; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods.\nVinyals et al. [29] propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. Distance metric Vinyals et al. [29] and Ravi and Larochelle [22] apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold.\nEpisode composition A straightforward way to construct episodes, used in Vinyals et al. [29] and Ravi and Larochelle [22], is to choose N c classes and N S support points per class in order to match the expected situation at test-time. 
That is, if we expect at test-time to perform 5-way classification and 1-shot learning, then training episodes could be comprised of N_c = 5, N_S = 1. We have found, however, that it can be extremely beneficial to train with a higher N_c, or "way", than will be used at test-time. In our experiments, we tune the training N_c on a held-out validation set. Another consideration is whether to match N_S, or "shot", at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same "shot" number.

Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector v_k for each class. These could be determined in advance, or they could be learned from, e.g., raw text [7]. Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define c_k = g_ϑ(v_k) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g to have unit length; however, we do not constrain the query embedding f.

Omniglot [16] is a dataset of 1,623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. [29] by resizing the grayscale images to 28 × 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1,200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. [29] and is composed of four convolutional blocks.
Each block comprises a 64-filter 3 × 3 convolution, a batch normalization layer [10], a ReLU nonlinearity, and a 2 × 2 max-pooling layer. When applied to the 28 × 28 Omniglot images, this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam [11]. We used an initial learning rate of 10^-3 and cut the learning rate in half every 2,000 episodes. No regularization was used other than batch normalization.

We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios, with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher "way") per training episode rather than fewer. We compare against various baselines, including the neural statistician [6] and both the fine-tuned and non-fine-tuned versions of matching networks [29]. We computed classification accuracy for our models averaged over 1,000 randomly generated episodes from the test set. The results are shown in Table 1 and, to our knowledge, they represent the state-of-the-art on this dataset.

The miniImageNet dataset, originally proposed by Vinyals et al. [29], is derived from the larger ILSVRC-12 dataset [26]. The splits used by Vinyals et al. [29] consist of 60,000 color images of size 84 × 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle [22] in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes.
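The episodic construction used throughout these experiments, sample N_c classes, then for each class draw support and query points that do not overlap, can be sketched as follows. The container name `examples_by_class` and the function name are our own illustrative choices:

```python
import random

def sample_episode(examples_by_class, n_way, n_shot, n_query, rng=random):
    """Sample one training episode: n_way classes, with n_shot support
    and n_query query examples per class, support and query disjoint."""
    classes = rng.sample(sorted(examples_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picked = rng.sample(examples_by_class[cls], n_shot + n_query)
        support += [(x, label) for x in picked[:n_shot]]
        query += [(x, label) for x in picked[n_shot:]]
    return support, query
```

Training "way" and "shot" are then just the `n_way` and `n_shot` arguments, which the text above tunes independently of the test-time setting.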
We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1,600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot, and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle [22], which include a simple nearest-neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieve state-of-the-art performance here by a wide margin.

We conducted further analysis to determine the effect of the distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way, and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance.
This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distance, since cosine distance is not a Bregman divergence.

In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset [31]. The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. [23] in preparing the data. We learned a simple linear mapping on top of both the 1,024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10^-4 and weight decay of 10^-5. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set.

Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE [1], SJE [2], and DS-SJE/DA-SJE [23]. We also compare to a recent clustering approach [17] which trains an SVM on a learned feature space obtained by fine-tuning AlexNet [14]. These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes).

We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions.
This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research.

Learning to Compare: Relation Network for Few-Shot Learning
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, Timothy M. Hospedales (2018). arXiv:1711.06025

Introduction

Deep learning models have achieved great success in visual recognition tasks [22,15,35]. However, these supervised learning models need large amounts of labelled data and many iterations to train their large number of parameters. This severely limits their scalability to new classes due to annotation cost, but more fundamentally limits their applicability to newly emerging (e.g., new consumer devices) or rare (e.g., rare animals) categories where numerous annotated images may simply never exist. In contrast, humans are very good at recognising objects with very little direct supervision, or none at all, i.e., few-shot [23,9] or zero-shot [24] learning. For example, children have no problem generalising the concept of "zebra" from a single picture in a book, or hearing its description as looking like a stripy horse. Motivated by the failure of conventional deep learning methods to work well on one or few examples per class, and inspired by the few- and zero-shot learning ability of humans, there has been a recent resurgence of interest in machine one/few-shot [8,39,32,18,20,10,27,36,29] and zero-shot [11,3,24,45,25,31] learning.

Few-shot learning aims to recognise novel visual categories from very few labelled examples. The availability of only one or very few examples challenges the standard 'fine-tuning' practice in deep learning [10]. Data augmentation and regularisation techniques can alleviate overfitting in such a limited-data regime, but they do not solve it.
Therefore contemporary approaches to few-shot learning often decompose training into an auxiliary meta-learning phase where transferrable knowledge is learned in the form of good initial conditions [10], embeddings [36,39] or optimisation strategies [29]. The target few-shot learning problem is then learned by fine-tuning [10] with the learned optimisation strategy [29], or computed in a feed-forward pass [36,39,4,32] without updating network weights. Zero-shot learning also suffers from a related challenge: recognisers are trained by a single example in the form of a class description (c.f. the single exemplar image in one-shot learning), making data insufficiency for gradient-based learning a challenge.

While promising, most existing few-shot learning approaches either require complex inference mechanisms [23,9], complex recurrent neural network (RNN) architectures [39,32], or fine-tuning on the target problem [10,29]. Our approach is most related to others that aim to train an effective metric for one-shot learning [39,36,20]. Where they focus on learning a transferrable embedding and pre-define a fixed metric (e.g., Euclidean [36]), we further aim to learn a transferrable deep metric for comparing the relation between images (few-shot learning), or between images and class descriptions (zero-shot learning). By expressing the inductive bias of a deeper solution (multiple non-linear learned stages at both embedding and relation modules), we make it easier to learn a generalisable solution to the problem.

Specifically, we propose a two-branch Relation Network (RN) that performs few-shot recognition by learning to compare query images against few-shot labelled sample images. First, an embedding module generates representations of the query and training images. Then these embeddings are compared by a relation module that determines if they are from matching categories or not.
Defining an episode-based strategy inspired by [39,36], the embedding and relation modules are meta-learned end-to-end to support few-shot learning. This can be seen as extending the strategy of [39,36] to include a learnable non-linear comparator, instead of a fixed linear comparator. Our approach outperforms prior approaches, while being simpler (no RNNs [39,32,29]) and faster (no fine-tuning [29,10]). Our proposed strategy also directly generalises to zero-shot learning. In this case the sample branch embeds a single-shot category description rather than a single exemplar training image, and the relation module learns to compare query image and category description embeddings.

Overall, our contribution is to provide a clean framework that elegantly encompasses both few- and zero-shot learning. Our evaluation on four benchmarks shows that it provides compelling performance across the board, while being simpler and faster than the alternatives.

Related Work

The study of one- or few-shot object recognition has been of interest for some time [9]. Earlier work on few-shot learning tended to involve generative models with complex iterative inference strategies [9,23]. With the success of discriminative deep learning-based approaches in the data-rich many-shot setting [22,15,35], there has been a surge of interest in generalising such deep learning approaches to the few-shot learning setting. Many of these approaches use a meta-learning or learning-to-learn strategy, in the sense that they extract some transferrable knowledge from a set of auxiliary tasks, which then helps them to learn the target few-shot problem well without suffering from the overfitting that might be expected when applying deep models to sparse data problems.

Learning to Fine-Tune. The successful MAML approach [10] aimed to meta-learn an initial condition (a set of neural network weights) that is good for fine-tuning on few-shot problems.
The strategy here is to search for the weight configuration of a given neural network such that it can be effectively fine-tuned on a sparse data problem within a few gradient-descent update steps. Many distinct target problems are sampled from a multiple-task training set; the base neural network model is then fine-tuned to solve each of them, and the success at each target problem after fine-tuning drives updates in the base model, thus driving the production of an easy-to-fine-tune initial condition. The few-shot optimisation approach [29] goes further, meta-learning not only a good initial condition but also an LSTM-based optimizer that is trained to be specifically effective for fine-tuning. However, both of these approaches suffer from the need to fine-tune on the target problem. In contrast, our approach solves target problems in an entirely feed-forward manner with no model updates required, making it more convenient for low-latency or low-power applications.

RNN Memory Based. Another category of approaches leverages recurrent neural networks with memories [27,32]. Here the idea is typically that an RNN iterates over the examples of a given problem and accumulates the knowledge required to solve that problem in its hidden activations, or external memory. New examples can be classified, for example, by comparing them to historic information stored in the memory. So 'learning' a single target problem can occur in unrolling the RNN, while learning-to-learn means training the weights of the RNN by learning many distinct problems. While appealing, these architectures face issues in ensuring that they reliably store all the, potentially long-term, historical information of relevance without forgetting. In our approach we avoid the complexity of recurrent networks, and the issues involved in ensuring the adequacy of their memory.
Instead, our learning-to-learn approach is defined entirely with simple and fast feed-forward CNNs.

Methodology

The prior approaches entail some complexity when learning the target few-shot problem. Another category of approach aims to learn a set of projection functions that take query and sample images from the target problem and classify them in a feed-forward manner [39,36,4]. One approach is to parameterise the weights of a feed-forward classifier in terms of the sample set [4]. The meta-learning here is to train the auxiliary parameterisation net that learns how to parameterise a given feed-forward classification problem in terms of a few-shot sample set. Metric-learning based approaches aim to learn a set of projection functions such that, when represented in this embedding, images are easy to recognise using simple nearest-neighbour or linear classifiers [39,36,20]. In this case the meta-learned transferrable knowledge is the projection functions, and the target problem is a simple feed-forward computation.

The most related methodologies to ours are the prototypical networks of [36] and the siamese networks of [20]. These approaches focus on learning embeddings that transform the data such that it can be recognised with a fixed nearest-neighbour [36] or linear [20,36] classifier. In contrast, our framework further defines a relation classifier CNN, in the style of [33,44,14] (although [33] focuses on reasoning about the relation between two objects in the same image, which addresses a different problem). Compared to [20,36], this can be seen as providing a learnable rather than fixed metric, or a non-linear rather than linear classifier.
Compared to [20], we benefit from an episodic training strategy trained end-to-end from scratch, and compared to [32], we avoid the complexity of set-to-set RNN embedding of the sample set, relying simply on pooling [33].

Zero-Shot Learning. Our approach is designed for few-shot learning, but elegantly spans the space into zero-shot learning (ZSL) by modifying the sample branch to input a single category description rather than a single training image. When applied to ZSL, our architecture is related to methods that learn to align images and category embeddings and perform recognition by predicting if an image and category embedding pair match [11,3,43,46]. Similarly to the prior metric-based few-shot approaches, most of these apply a fixed, manually defined similarity metric or linear classifier after combining the image and category embedding. In contrast, we again benefit from a deeper end-to-end architecture, including a learned non-linear metric in the form of our learned convolutional relation network, as well as from an episode-based training strategy.

We consider the task of few-shot classifier learning. Formally, we have three datasets: a training set, a support set, and a testing set. The support set and testing set share the same label space, but the training set has its own label space that is disjoint from the support/testing label space. If the support set contains K labelled examples for each of C unique classes, the target few-shot problem is called C-way K-shot.

With the support set only, we can in principle train a classifier to assign a class label ŷ to each sample x in the test set. However, due to the lack of labelled samples in the support set, the performance of such a classifier is usually not satisfactory.
Therefore we aim to perform meta-learning on the training set, in order to extract transferrable knowledge that will allow us to perform better few-shot learning on the support set and thus classify the test set more successfully.

An effective way to exploit the training set is to mimic the few-shot learning setting via episode-based training, as proposed in [39]. In each training iteration, an episode is formed by randomly selecting C classes from the training set with K labelled samples from each of the C classes to act as the sample set S = {(x_i, y_i)}_{i=1}^m (m = K × C), as well as a fraction of the remainder of those C classes' samples to serve as the query set Q = {(x_j, y_j)}_{j=1}^n. This sample/query split is designed to simulate the support/test split that will be encountered at test time. A model trained on the sample/query sets can be further fine-tuned using the support set, if desired. In this work we adopt such an episode-based training strategy. In our few-shot experiments (see Section 4.1) we consider one-shot (K = 1, Figure 1) and five-shot (K = 5) settings. We also address the K = 0 zero-shot learning case, as explained in Section 3.3.

One-Shot. Our Relation Network (RN) consists of two modules: an embedding module f_ϕ and a relation module g_φ, as illustrated in Figure 1. Samples x_j in the query set Q and samples x_i in the sample set S are fed through the embedding module f_ϕ, which produces feature maps f_ϕ(x_i) and f_ϕ(x_j). The feature maps f_ϕ(x_i) and f_ϕ(x_j) are combined with the operator C(f_ϕ(x_i), f_ϕ(x_j)). In this work we take C(·,·) to be depth-wise concatenation of feature maps, although other choices are possible.

The combined feature map of the sample and query is fed into the relation module g_φ, which produces a scalar in the range 0 to 1 representing the similarity between x_i and x_j, called the relation score.
Thus, in the C-way one-shot setting, we generate C relation scores r_{i,j} for the relation between one query input x_j and the training sample set examples x_i:

$$r_{i,j} = g_\varphi(\mathcal{C}(f_\phi(x_i), f_\phi(x_j))), \quad i = 1, 2, \ldots, C \tag{1}$$

K-shot. For K-shot where K > 1, we element-wise sum over the embedding module outputs of all samples from each training class to form this class's feature map. This pooled class-level feature map is combined with the query image feature map as above. Thus, the number of relation scores for one query is always C in both the one-shot and few-shot settings.

Objective function. We use mean square error (MSE) loss (Eq. (2)) to train our model, regressing the relation score r_{i,j} to the ground truth: matched pairs have similarity 1 and mismatched pairs have similarity 0.

$$\phi, \varphi \leftarrow \underset{\phi, \varphi}{\operatorname{argmin}} \sum_{i=1}^{m} \sum_{j=1}^{n} \left(r_{i,j} - \mathbf{1}(y_i == y_j)\right)^2 \tag{2}$$

The choice of MSE is somewhat non-standard. Our problem may seem to be a classification problem with a label space {0, 1}. However, conceptually we are predicting relation scores, which can be considered a regression problem, despite the fact that for ground truth we can only automatically generate {0, 1} targets.

Related prior few-shot work uses fixed, pre-specified distance metrics such as Euclidean or cosine distance to perform classification [39,36]. These studies can be seen as distance metric learning, but where all the learning occurs in the feature embedding, and a fixed metric is used given the learned embedding. Also related are conventional metric learning approaches [26,7] that focus on learning a shallow (linear) Mahalanobis metric for a fixed feature representation. In contrast to prior work's fixed metrics, or fixed features with a shallow learned metric, Relation Network can be seen as both learning a deep embedding and learning a deep non-linear metric (similarity function). These are mutually tuned end-to-end to support each other in few-shot learning. Why might this be particularly useful?
By using a flexible function approximator to learn similarity, we learn a good metric in a data-driven way and do not have to manually choose the right metric (Euclidean, cosine, Mahalanobis). Fixed metrics like [39,36] assume that features are solely compared element-wise, and the most related [36] assumes linear separability after the embedding. These are thus critically dependent on the efficacy of the learned embedding network, and hence limited by the extent to which the embedding networks generate inadequately discriminative representations. In contrast, by deep learning a non-linear similarity metric jointly with the embedding, Relation Network can better identify matching/mismatching pairs.

Conclusion

We proposed a simple method called the Relation Network for few-shot and zero-shot learning. The Relation Network learns an embedding and a deep non-linear distance metric for comparing query and sample items. Training the network end-to-end with episodic training tunes the embedding and distance metric for effective few-shot learning. This approach is far simpler and more efficient than recent few-shot meta-learning approaches, and produces state-of-the-art results. It further proves effective in both conventional and generalised zero-shot settings.

Experiments and Results

We evaluate our approach on two related tasks: few-shot classification on Omniglot and miniImageNet, and zero-shot classification on Animals with Attributes (AwA) and Caltech-UCSD Birds-200-2011 (CUB). All the experiments are implemented based on PyTorch [1].

Zero-shot learning is analogous to one-shot learning in that one datum is given to define each class to recognise. However, instead of being given a support set with a one-shot image for each of C training classes, it contains a semantic class embedding vector v_c for each.
Modifying our framework to deal with the zero-shot case is straightforward: as a different modality of semantic vectors is used for the support set (e.g., attribute vectors instead of images), we use a second, heterogeneous embedding module f_{ϕ2} besides the embedding module f_{ϕ1} used for the image query set. Then the relation net g_φ is applied as before. Therefore, the relation score for each query input x_j will be:

$$r_{i,j} = g_\varphi(\mathcal{C}(f_{\phi_1}(v_c), f_{\phi_2}(x_j))), \quad i = 1, 2, \ldots, C \tag{3}$$

The objective function for zero-shot learning is the same as that for few-shot learning.

As most few-shot learning models utilise four convolutional blocks for the embedding module [39,36], we follow the same architecture setting for fair comparison; see Figure 2. More concretely, each convolutional block contains a 64-filter 3 × 3 convolution, a batch normalisation layer, and a ReLU nonlinearity. The first two blocks also contain a 2 × 2 max-pooling layer, while the latter two do not, because we need the output feature maps for further convolutional layers in the relation module. The relation module consists of two convolutional blocks and two fully-connected layers. Each of its convolutional blocks is a 3 × 3 convolution with 64 filters, followed by batch normalisation, a ReLU non-linearity, and 2 × 2 max-pooling. The output size of the last max-pooling layer is H = 64 for Omniglot and H = 64 × 3 × 3 = 576 for miniImageNet. The two fully-connected layers are 8- and 1-dimensional, respectively. All fully-connected layers use ReLU activations except the output layer, which uses a Sigmoid in order to generate relation scores in a reasonable range for all versions of our network architecture.

The zero-shot learning architecture is shown in Figure 3. In this architecture, the DNN subnet is an existing network (e.g., Inception or ResNet) pretrained on ImageNet. Few-shot learning in all experiments uses Adam [19] with an initial learning rate of 10^-3, annealed by half every 100,000 episodes.
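Equations (1)–(2) and the K-shot sum-pooling rule can be sketched with flat feature vectors standing in for feature maps. This is a hedged NumPy illustration, not the paper's convolutional relation module: `g` here is any toy scoring function mapping a concatenated pair to a scalar in (0, 1), and the function names are ours:

```python
import numpy as np

def relation_scores(support_feats, support_y, query_feat, g, n_classes):
    """C relation scores for one query (Eq. (1)); for K-shot, each class's
    support embeddings are element-wise summed before concatenation."""
    scores = []
    for k in range(n_classes):
        class_feat = support_feats[support_y == k].sum(axis=0)  # K-shot pooling
        pair = np.concatenate([class_feat, query_feat])         # operator C(.,.)
        scores.append(g(pair))
    return np.array(scores)

def mse_loss(scores, query_label):
    """MSE objective of Eq. (2): target 1 for the matched class, 0 otherwise."""
    target = np.zeros_like(scores)
    target[query_label] = 1.0
    return float(((scores - target) ** 2).sum())
```

Note that the number of scores per query is always `n_classes`, regardless of K, exactly as the K-shot paragraph above states.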
All our models are end-to-end trained from scratch with no additional dataset.

Baselines. We compare against various state-of-the-art baselines for few-shot recognition, including the neural statistician [8], Matching Nets with and without fine-tuning [39], MANN [32], Siamese Nets with Memory [18], Convolutional Siamese Nets [20], MAML [10], Meta Nets [27], Prototypical Nets [36], and Meta-Learner LSTM [29]. This means, for example, that there are 19 × 5 + 1 × 5 = 100 images in one training episode/mini-batch for the 5-way 1-shot experiments.

Results. Following [36], we computed few-shot classification accuracies on Omniglot by averaging over 1,000 randomly generated episodes from the testing set. For the 1-shot and 5-shot experiments, we batch one and five query images per class respectively for evaluation during testing. The results are shown in Table 1. We achieved state-of-the-art performance under all experimental settings, with higher average accuracies and lower standard deviations, except 5-way 5-shot, where our model is 0.1% lower in accuracy than [10]. This is despite the fact that many alternatives have significantly more complicated machinery [27,8], or fine-tune on the target problem [10,39], while we do not.

Dataset. The miniImageNet dataset, originally proposed by [39], consists of 60,000 colour images with 100 classes, each having 600 examples. We followed the split introduced by [29], with 64, 16, and 20 classes for training, validation and testing, respectively. The 16 validation classes are used for monitoring generalisation performance only.

Training. Following the standard setting adopted by most existing few-shot learning work, we conducted 5-way 1-shot and 5-shot classification. Besides the K sample images, the 5-way 1-shot setting contains 15 query images, and the 5-way 5-shot setting has 10 query images, for each of the C sampled classes in each training episode. This means, for example, that there are 15 × 5 + 1 × 5 = 80 images in one training episode/mini-batch for the 5-way 1-shot experiments.
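The episode/mini-batch sizes quoted in the text (19 × 5 + 1 × 5 = 100 for Omniglot, 15 × 5 + 1 × 5 = 80 for miniImageNet, both 5-way 1-shot) all follow one formula, sketched here for clarity:

```python
def episode_size(n_way, n_shot, n_query):
    """Images per training episode: (n_shot support + n_query query)
    images for each of the n_way sampled classes."""
    return n_way * (n_shot + n_query)
```

For instance, `episode_size(5, 1, 19)` gives the 100-image Omniglot episode and `episode_size(5, 1, 15)` the 80-image miniImageNet episode.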
We resize input images to 84 × 84. Our model is trained end-to-end from scratch, with random initialisation, and no additional training set.

Results. Following [36], we batch 15 query images per class in each episode for evaluation in both 1-shot and 5-shot scenarios, and the few-shot classification accuracies are computed by averaging over 600 randomly generated episodes from the test set.

From Table 2, we can see that our model achieved state-of-the-art performance on the 5-way 1-shot setting and competitive results on 5-way 5-shot. However, the 1-shot result reported by prototypical networks [36] required training on 30-way episodes with 15 queries per class, and the 5-shot result was trained on 20-way episodes with 15 queries per class. When trained with 5-way episodes with 15 queries per class, [36] only got 46.14 ± 0.77% for 1-shot evaluation, clearly weaker than ours. In contrast, all our models are trained on 5-way episodes, with 1 query for 1-shot and 5 queries for 5-shot per training episode, i.e., with far fewer training queries than [36].

[Displaced table caption: accuracies reported following [36]. For each task, the best-performing method is highlighted, along with any others whose confidence intervals overlap. '-': not reported.]

Datasets and settings. We follow two ZSL settings: the old setting and the new GBU setting provided by [42] for training/test splits. Under the old setting, adopted by most existing ZSL works before [42], some of the test classes also appear in the ImageNet 1K classes, which have been used to pretrain the image embedding network, thus violating the zero-shot assumption. In contrast, the new GBU setting ensures that none of the test classes of the datasets appear in the ImageNet 1K classes. Under both settings, the test set can comprise only the unseen class samples (the conventional test set setting) or a mixture of seen and unseen class samples. The latter, termed generalised zero-shot learning (GZSL), is more realistic in practice.
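GZSL results in this paper are summarised by the harmonic mean (H) of seen- and unseen-class accuracy, which, unlike an arithmetic mean, stays near zero whenever a classifier collapses onto the seen classes. A minimal sketch of the metric:

```python
def gzsl_harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean H of seen- and unseen-class accuracy, the summary
    metric for generalised zero-shot learning under the GBU protocol."""
    if acc_seen + acc_unseen == 0.0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)
```

A model with 90% seen but 0% unseen accuracy scores H = 0, making the metric hard to game by ignoring unseen classes.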
Two widely used ZSL benchmarks are selected for the old setting: AwA (Animals with Attributes) [24] consists of 30,745 images of 50 classes of animals. It has a fixed split for evaluation, with 40 training classes and 10 test classes. CUB (Caltech-UCSD Birds-200-2011) [40] contains 11,788 images of 200 bird species, with 150 seen classes and 50 disjoint unseen classes. Three datasets [42] are selected for the GBU setting: AwA1, AwA2 and CUB. The newly released AwA2 [42] consists of 37,322 images of 50 classes and is an extension of AwA, while AwA1 is the same as AwA but under the GBU setting. For AwA, we use the continuous 85-dimensional class-level attribute vectors from [24], which have been used by all recent works. For CUB, continuous 312-dimensional class-level attribute vectors are used.

Implementation details. Two different embedding modules are used for the two input modalities in zero-shot learning. Unless otherwise specified, we use Inception-V2 [38,17] as the query image embedding DNN in the old and conventional setting, and ResNet-101 [16] for the GBU and generalised setting, taking the top pooling units as the image embedding, with dimension D = 1024 and 2048 respectively. This DNN is pre-trained on the ILSVRC 2012 1K classification task without fine-tuning, as in recent deep ZSL works [25,30,45]. An MLP network is used for embedding semantic attribute vectors. The size of hidden layer FC1 (Figure 3) is set to 1024 and 1200 for AwA and CUB respectively, and the output size of FC2 is set to the same dimension as the image embedding for both datasets. For the relation module, the image and semantic embeddings are concatenated before being fed into MLPs with hidden layer FC3 of size 400 and 1200 for AwA and CUB, respectively.

We add weight decay (L2 regularisation) to FC1 & FC2, as there is a hubness problem [45] in cross-modal mapping for ZSL which is best alleviated by mapping the semantic feature vector to the visual feature space with regularisation.
After that, FC3 & 4 (the relation module) are used to compute the relation between the semantic representation (in the visual feature space) and the visual representation. Since the hubness problem does not exist in this step, no L2 regularisation/weight decay is needed. All the ZSL models are trained with weight decay 10^-5 in the embedding network. The learning rate is initialised to 10^-5 with Adam [19] and then annealed by half every 200,000 iterations.\nResults under the old setting The conventional evaluation for ZSL followed by the majority of prior work is to assume that the test data all come from unseen classes. We evaluate this setting first. We compare 15 alternative approaches in Table 3. With only the attribute vector used as the sample class embedding, our model achieves a competitive result on AwA and state-of-the-art performance on the more challenging CUB dataset, outperforming the most related alternative, prototypical networks [36], by a large margin. Note that only inductive methods are considered. [Table 3 caption: CUB (hit@1 accuracy over all samples) under the old and conventional setting. SS: semantic space; A: attribute space; W: semantic word vector space; D: sentence description (only available for CUB). F: how the visual feature space is computed; for non-deep models: F_O if OverFeat [34] is used; F_G for GoogLeNet [38]; and F_V for VGG net [35]. For neural-network-based methods, all use Inception-V2 (GoogLeNet with batch normalisation) [38,17] as the DNN image embedding subnet, indicated as N_G.]\nSome recent methods [48,12,13] are transductive in that they use all test data at once for model training, which gives them a big advantage at the cost of making a very strong assumption that may not be met in practical applications, so we do not compare with them here.\nResults under the GBU setting We follow the evaluation setting of [42]. We compare our model with 11 alternative ZSL models in Table 4. 
The 10 shallow model results are from [42] and the result of the state-of-the-art method DEM [45] is from the authors' GitHub page 1 . We can see that on AwA2 and CUB, our model is particularly strong under the more realistic GZSL setting measured using the harmonic mean (H) metric, while on AwA1, our method is only outperformed by DEM [45]. To illustrate the previous point about the adequacy of learned input embeddings, we show a synthetic example where existing approaches definitely fail and our Relation Network can succeed thanks to its deep relation module. Assuming 2D query and sample input embeddings to a relation module, Fig. 4(a) shows the space of 2D sample inputs for a fixed 2D query input. Each sample input (pixel) is colored according to whether it matches the fixed query or not. In a real problem the difficulty of comparing embeddings may not be this extreme, but it can still be challenging. We qualitatively illustrate the challenge of matching two example Omniglot query images (embeddings projected to 2D, Figure 5(left)) by showing an analogous plot of real sample images colored by match (cyan) or mismatch (magenta) to two example queries (yellow). Under standard assumptions [39,36,26,7] the cyan matching samples should be nearest neighbours to the yellow query image under some metric (Euclidean, Cosine, Mahalanobis). But we can see that the match relation is more complex than this. In Figure 5(right), we instead plot the same two example queries in terms of a 2D PCA representation of each query-sample pair, as represented by the relation module's penultimate layer. We can see that the relation network has mapped the data into a space where the (mis)matched pairs are linearly separable." 
}, { "title": "Prior guided feature enrichment network for few-shot segmentation", "year": 2020.0, "authors": "H Tian; M Zhao; Z Shu; R Yang; Li; Jia", "arxiv_di": "2008.01449", "Introduction": "R APID development of deep learning has brought sig- nificant improvement to semantic segmentation. The iconic frameworks [60], [3] have profited a wide range of applications of automatic driving, robot vision, medical image, etc. The performance of these frameworks, however, worsens quickly without sufficient fully-labeled data or when working on unseen classes. Even if additional data is provided, fine-tuning is still time-and resource-consuming.\nTo address this issue, few-shot segmentation was proposed [33] where data is divided into a support set and a query set. As shown in Figure 1, images from both support and query sets are first sent to the backbone network to extract features. Feature processing can be accomplished by generating weights for the classifier [33], [41], cosinesimilarity calculation [5], [45], [23], or convolutions [15], [54], [49], [9], [1] to generate the final prediction.\nThe support set provides information about the target class that helps the model to make accurate segmentation prediction on the query images. This process mimics the scenario where a model makes the prediction of unseen classes on testing images (query) with few labeled data (support). Therefore, a few-shot model needs to quickly adapt to the new classes. However, the common problems of existing few-shot segmentation methods include generalization loss due to misuse of high-level features and spatial inconsistency between the query and support samples. In this paper, we mainly tackle these two difficulties.", "Methodology": "In this section, we first briefly describe the few-shot segmentation task in Section 3.1. Then, we present the prior generation method and the Feature Enrichment Module (FEM) in Sections 3.2 and 3.3 respectively. 
Finally, in Section 3.4, details of our proposed Prior Guided Feature Enrichment Network (PFENet) are discussed. Based on the proposed prior generation method and the feature enrichment module (FEM), we propose the Prior Guided Feature Enrichment Network (PFENet) as shown in Figure 3. The ImageNet [32] pre-trained CNN is shared by support and query images to extract features. The extracted middle-level support and query features are processed by 1×1 convolution to reduce the channel number to 256. After feature extraction and channel reduction, the feature enrichment module (FEM) enriches the query feature with the support feature and prior mask. On the output feature of FEM, we apply a convolution block (Figure 7(a)) followed by a classification head to yield the final prediction. Classification head is composed of one 3×3 convolution and 1×1 convolution with Softmax function as shown in Figure 7(b). For all backbone networks, we use the outputs of the last layers of conv3 x and conv4 x as middlelevel features M to generate the query and support features by concatenation, and take the output of the last layer of conv5 x as high-level features H to produce the prior mask.\nIn the 5-shot setting, we simply take the average of 5 pooled support features as the new support feature before concatenation with the query feature. Similarly, the final prior mask before the concatenation in FEM is also obtained by averaging five prior masks produced by one query feature with different support features. Parameters The parameters of our backbone network are fixed as those in [45], [54], [53]. Four parts in the baseline model are learnable: two 1×1 convolutions for reducing dimension number of the query and support features, FEM, one convolution block and one classification head. As shown in Table 6, our best model (Baseline + FEM + Prior) only has 10.8M trainable parameters that are much fewer than other methods shown in Table 1. 
The prior generation does not bring additional parameters to the model, and FEM with spatial sizes {60, 30, 15, 8} only brings 6.3M additional learnable parameters to the baseline (4.5M → 10.8M). To prove that the improvement brought by FEM is not due to more learnable parameters, we show results of the model with FEM ‡ that has more parameters (12.9M) but it yields even worse results than FEM (10.8M).\nSpeed PFENet based on ResNet-50 yields the best performance with 15.9 and 5.1 FPS in 1-and 5-shot setting respectively on an NVIDIA Titan V GPU. During evaluation, test images are resized to 473 × 473. As shown in Table 6, FEM does not affect the inference speed much (from 17.7 to 17.3 FPS). Though the proposed prior generation process slows down the baseline from 17.7 to 16.5 FPS, the final model is still efficient with 15+ FPS. Note that we include the processing time of the last block of ResNet in these experiments for a fair comparison.", "Conclusion": "We have presented the prior guided feature enrichment network (PFENet) with the proposed prior generation method and the feature enrichment module (FEM). The prior generation method boosts the performance by leveraging the cosine-similarity calculation on pre-trained high-level features. The prior mask encourages the model to localize the query target better without losing generalization power. FEM helps solve the spatial inconsistency by adaptively merging the query and support features at multiple scales with intermediate supervision and conditioned feature selection. With these modules, PFENet achieves new state-ofthe-art results on both PASCAL-5 i and COCO datasets without much model size increase and notable efficiency loss. Experiments in the zero-shot scenario further demonstrate the robustness of our work. 
Possible future work includes extending these two designs to few-shot object detection and few-shot instance segmentation.", "Experiment_and_Results": "Following [54], [28], we adopt the class mean intersection over union (mIoU) as our major evaluation metric for the ablation study, since class mIoU is more reasonable than the foreground-background IoU (FB-IoU), as stated in [54]. The formulation is mIoU = (1/C) Σ_{i=1}^{C} IoU_i, where C is the number of classes in each fold (e.g., C = 20 for COCO and C = 5 for PASCAL-5 i ) and IoU_i is the intersection-over-union of class i. We also report FB-IoU results for comparison with other methods. For FB-IoU calculation on each fold, only foreground and background are considered (C = 2). We take the average of results on all folds as the final mIoU/FB-IoU. As shown in Tables 1, 2 and 3, we build our models on three backbones (VGG-16, ResNet-50 and ResNet-101) and report the mIoU/FB-IoU results respectively. By incorporating the proposed prior mask and FEM, our model significantly outperforms previous methods, reaching a new state-of-the-art on both the PASCAL-5 i and COCO datasets. PFENet even outperforms other methods on COCO by more than 10 points in terms of class mIoU. Our performance advantage in FB-IoU compared to PANet is relatively smaller than in class mIoU on COCO, because FB-IoU is biased towards the background and towards classes that cover a large part of the foreground area. It is worth noting that our PFENet achieves the best performance with the fewest learnable parameters (10.4M for the VGG-based model and 10.8M for the ResNet-based models). Qualitative results are shown in Figure 8. As mentioned in the implementation details, evaluating 1,000 query-support pairs on PASCAL-5 i and COCO may cause instability in results. In this section, we show the analysis of result stability by conducting multiple experiments with different support samples. 
Results in Table 9 show that the values of standard deviation are lower than 0.5 in both the 1-shot and 5-shot settings, which shows the stability of our results on PASCAL-5 i with 1,000 pairs for evaluation. However, 1,000 pairs are not sufficient to provide reliable results for comparison as shown in Table 10, since the COCO validation set contains 40,137 images and 1,000 pairs could not even cover the entire 20 test classes. Based on this observation, we instead randomly sample 20,000 query-support pairs to evaluate our models on four folds, and the results in Table 10 show that 20,000 pairs bring much more stable results than 1,000 pairs. As shown in Table 11, our base structure achieves 53.2 class mIoU on unseen classes without support samples, which even outperforms some models with five support samples on PASCAL-5 i in the few-shot setting of OSLSM [33]. Also, the proposed FEM tackles the spatial inconsistency in the zero-shot setting and brings a 1.0-point mIoU improvement (from 53.2 to 54.2) over the baseline.", "Extra": "• Z. Tian (tianzhuotao@link.cuhk.edu.hk), H. Zhao\nCommon semantic segmentation models rely heavily on high-level features with semantic information. Experiments of CANet [54] show that simply adding high-level features during feature processing in a few-shot model causes a performance drop. Thus the way to utilize semantic information in the few-shot setting is not straightforward. Unlike previous methods, we use ImageNet [32] pre-trained high-level features of the query and support images to produce 'priors' for the model. These priors help the model better identify targets in query images. Since the prior generation process is training-free, the resulting model does not lose its generalization ability to unseen classes, despite the frequent use of high-level information of seen classes during training. 
Besides, due to the limited samples, the scale and pose of each support object may vary greatly from its query target, which we call spatial inconsistency. To tackle this problem, we propose a new module named the Feature Enrichment Module (FEM) to adaptively enrich query features with the support features. The ablation study in Section 4.3 shows that merely incorporating a multi-scale scheme to tackle the spatial inconsistency is sub-optimal: FEM additionally provides conditioned feature selection that helps retain essential information passed across different scales. FEM achieves superior performance to other multi-scale structures, such as HRNet [44], PPM [60], ASPP [4] and GAU [53].\nFinally, based on the proposed prior generation method and Feature Enrichment Module (FEM), we build a new network, the Prior Guided Feature Enrichment Network (PFENet). The ResNet-50 based PFENet only contains 10.8M learnable parameters, yet achieves new state-of-the-art results on both the PASCAL-5 i [33] and COCO [21] benchmarks at 15.9 and 5.1 FPS in the 1-shot and 5-shot settings respectively. Moreover, we demonstrate its effectiveness by applying our model to the zero-shot scenario where no labeled data is available. The result is surprising: PFENet still achieves decent performance without major structural modification.\nOur contribution in this paper is threefold:\n• We leverage high-level features and propose training-free prior generation to greatly improve prediction accuracy and retain high generalization.\n• By incorporating the support feature and prior information, our FEM helps adaptively refine the query feature with conditioned inter-scale information interaction.\n• PFENet achieves new state-of-the-art results on both the PASCAL-5 i and COCO datasets without compromising efficiency.\nSemantic segmentation is a fundamental task of predicting the label for each pixel. 
The Fully Convolutional Network (FCN) [34] is developed for semantic segmentation by replacing the fully-connected layer in a classification framework with convolutional layers. Following approaches, such as DeepLab [3], DPN [24] and CRF-RNN [62], utilize CRF/MRF to help refine coarse prediction. The receptive field is important for semantic segmentation; thus DeepLab [3] and Dilation [50] introduce the dilated convolution to enlarge the receptive field. Encoder-decoder structures [31], [10], [20] are adopted to help reconstruct and refine segmentation in steps. Contextual information is vital for complex scene understanding. ParseNet [22] applies global pooling for semantic segmentation. PSPNet [60] utilizes a Pyramid Pooling Module (PPM) for context information aggregation over different regions, which is very effective. DeepLab [3] develops atrous spatial pyramid pooling (ASPP) with filters in different dilation rates. Attention models are also introduced. PSANet [61] develops point-wise spatial attention with a bi-directional information propagation paradigm. Channelwise attention [55] and non-local style attention [56], [8], [51], [16] are also effective for segmentation. These methods work well on large-sample classes. They are not designed to deal with rare and unseen classes. They also cannot be easily adapted without fine-tuning. Few-shot learning aims at image classification when only a few training examples are available. There are meta-learning based methods [2], [11], [7] and metric-learning ones [43], [40], [37], [52]. Data is essential to deep models; therefore, several methods improve performance by synthesizing more training samples [57], [13], [47]. Different from fewshot learning where prediction is at the image-level, fewshot segmentation makes pixel-level predictions, which is much more challenging.\nOur work closely relates to metric-learning based fewshot learning methods. 
Prototypical network [37] is trained to map input data to a metric space where classes are represented as prototypes. During inference, classification is achieved by finding the closest prototype for each input image, because data belonging to the same class should be close to the prototype. Another representative metricbased work is the relation network [40] that projects query and support images to 1×1 vectors and then performs classification based on the cosine similarity between them. Few-shot segmentation places the general semantic segmentation in a few-shot scenario, where models perform dense pixel labeling on new classes with only a few support samples. OSLSM [33] first tackles few-shot segmentation by learning to generate weights of the classifier for each class. PL [5] applies prototyping [37] to the segmentation task. It learns a prototype for each class and calculates the cosine similarity between pixels and prototypes to make the prediction. More recently, CRNet [48] processes query and support images through a Siamese Network followed by a Cross-Reference Module to mine cooccurrent features in two images. PANet [45] introduces prototype alignment regularization that encourages the model to learn consistent embedding prototypes for better performance, and CANet [54] uses the iterative optimization module on the merged query and support feature to iteratively refine results.\nSimilar to CANet [54], we use convolution to replace the cosine similarity that may not well tackle complex pixelwise classification in the segmentation task. However, different from CANet, our baseline model uses fewer convolution operations and still achieves decent performance.\nAs discussed before, these few-shot segmentation methods do not sufficiently consider generalization loss and spatial inconsistency. 
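The prototype-plus-cosine-similarity scheme described above (prototyping as in [37], applied per pixel as in PL [5]) can be sketched as follows. The shapes and function names are illustrative assumptions, not the original implementations.

```python
import numpy as np

def masked_average_prototype(support_feat, support_mask):
    """Class prototype: average of support feature vectors inside the mask.
    support_feat: (h, w, c) feature map; support_mask: (h, w) binary mask."""
    weights = support_mask[..., None]
    total = (support_feat * weights).sum(axis=(0, 1))
    count = support_mask.sum() + 1e-7  # avoid division by zero
    return total / count

def cosine_score_map(query_feat, prototype):
    """Per-pixel cosine similarity between query features and the prototype,
    used as a (h, w) foreground score map."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-7)
    p = prototype / (np.linalg.norm(prototype) + 1e-7)
    return q @ p
```

Thresholding the score map (or comparing against a background prototype) yields the segmentation prediction in such prototype-based methods.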
Unlike PGNet [53] that uses a graphbased pyramid structure to refine results via Graph Attention Unit (GAU) followed by three residual blocks and an ASPP [4], we instead incorporate a few basic convolution operations with the proposed prior masks and FEM in a multi-scale structure to accomplish decent performance. A few-shot semantic segmentation system has two sets, i.e., the query set Q and support set S. Given K samples from support set S, the goal is to segment the area of unseen class C test from each query image I Q in the query set. Models are trained on classes C train (base) and tested on previously unseen classes C test (novel) in episodes (C train ∩ C test = ∅). The episode paradigm was proposed in [43] and was first applied to few-shot segmentation in [33]. Each episode is formed by a support set S and a query set Q of the same class c. The support set S consists of K samples S = {S 1 , S 2 , ..., S K } of class c, which we call 'K-shot scenario'. The i-th support sample S i is a pair of {I Si , M Si } where I Si and M Si are the support image and label of c respectively. For the query set, Q = {I Q , M Q } where I Q is the input query image and M Q is the ground truth mask of class c. The query-support pair {I Q , S} = {I Q , I S1 , M S1 , I S2 , M S2 , ..., I S K , M S K } forms the input data batch to the model. The ground truth M Q of the query image is invisible to the model and is used to evaluate the prediction on the query image in each episode. CANet [54] outperforms previous work by a large margin on the benchmark PASCAL-5 i dataset by extracting only middle-level features from the backbone (e.g., conv3 x and conv4 x of ResNet-50). Experiments in CANet also show that the high-level (e.g., conv5 x of ResNet-50) features lead to performance reduction. 
It is explained in [54] that the middle-level feature performs better since it constitutes object parts shared by unseen classes, but our alternative explanation is that the semantic information contained in the highlevel feature is more class-specific than the middle-level feature, indicating that the former is more likely to negatively affect model's generalization power to unseen classes. In addition, higher-level feature directly provides semantic information of the training classes C train , contributing more in identifying pixels belonging to C train and reducing the training loss than the middle-level information. Consequently, such behavior results in a preference for C train . The lack of generalization and the preference for the training classes are both harmful for evaluation on unseen test classes C test .\nIt is noteworthy that contrary to the finding that highlevel feature adversely affects performance in few-shot segmentation, prior segmentation frameworks [59], [31] exploit these features to provide semantic cues for final prediction. This contradiction motivates us to find a way to make use of high-level information in a training-class-insensitive way to boost performance in few-shot segmentation. In our work, we transform the ImageNet [32] pre-trained high-level feature containing semantic information into a prior mask that tells the probability of pixels belonging to a target class as shown in Figure 2. During training, the backbone parameters are fixed as those in [45], [54]. Therefore, the prior generation process does not bias towards training classes C train and upholds class-insensitivity during the evaluation on unseen test classes C test . Let I Q , I S denote the input query and support images, M S denote the binary support mask, F denote the backbone network, and X Q , X S denote the high-level query and support features. 
We have\nX_Q = F(I_Q), X_S = F(I_S) ⊙ M_S, (1)\nwhere ⊙ is the Hadamard product; the sizes of X_Q and X_S are both [h, w, c]. Note that the output of F is processed with a ReLU function, so the binary support mask M_S removes the background in the support feature by setting it to zero. Specifically, we define the prior Y_Q of query feature X_Q as the mask that reveals the pixel-wise correspondence between X_Q and X_S. A pixel of query feature X_Q with a high value in Y_Q has a high correspondence with at least one pixel in the support feature, and thus is very likely to be in the target area of the query image. By setting the background of the support feature to zero, pixels of the query feature yield no correspondence with the background of the support feature: they only correlate with the foreground target area. To generate Y_Q, we first calculate the pixel-wise cosine similarity cos(x_q, x_s) ∈ R between feature vectors x_q ∈ X_Q and x_s ∈ X_S as\ncos(x_q, x_s) = (x_q^T x_s) / (||x_q|| ||x_s||), q, s ∈ {1, 2, ..., hw}. (2)\nFor each x_q ∈ X_Q, we take the maximum similarity among all support pixels as the correspondence value c_q ∈ R:\nc_q = max_{s ∈ {1, 2, ..., hw}} cos(x_q, x_s), (3)\nC_Q = [c_1, c_2, ..., c_hw] ∈ R^{hw×1}. (4)\nThen we produce the prior mask Y_Q by reshaping C_Q ∈ R^{hw×1} into Y_Q ∈ R^{h×w×1}. We process Y_Q with a min-max normalization (Eq. (5)) to normalize the values to between 0 and 1, as shown in Figure 2. In Eq. (5), ε is set to 1e-7 in our experiments.\nY_Q = (Y_Q - min(Y_Q)) / (max(Y_Q) - min(Y_Q) + ε). (5)\nThe key point of our proposed prior generation method lies in the use of fixed high-level features to yield the prior mask by taking the maximum value from a similarity matrix of size hw × hw, as given in Eqs. (2) and (3), which is rather simple and effective. The ablation study in Section 4.4, comparing alternative methods used in [45], [28], [58], demonstrates the superiority of our method. 
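A minimal numpy sketch of the prior generation in Eqs. (1)-(5), assuming pre-extracted high-level feature maps; variable names are ours, not from the released code.

```python
import numpy as np

def prior_mask(query_feat, support_feat, support_mask, eps=1e-7):
    """Training-free prior mask from high-level features.
    query_feat, support_feat: (h, w, c); support_mask: (h, w) binary."""
    h, w, c = query_feat.shape
    xq = query_feat.reshape(-1, c)                                # (hw, c)
    # Eq. (1): zero out the support background with the mask
    xs = (support_feat * support_mask[..., None]).reshape(-1, c)  # (hw, c)
    # Eq. (2): pixel-wise cosine similarity, giving an hw x hw matrix
    qn = xq / (np.linalg.norm(xq, axis=1, keepdims=True) + eps)
    sn = xs / (np.linalg.norm(xs, axis=1, keepdims=True) + eps)
    sim = qn @ sn.T
    # Eqs. (3)-(4): max correspondence over all support pixels
    cq = sim.max(axis=1)
    yq = cq.reshape(h, w)
    # Eq. (5): min-max normalization to [0, 1]
    return (yq - yq.min()) / (yq.max() - yq.min() + eps)
```

Because the backbone is fixed and this computation has no learnable parameters, the prior stays class-insensitive, as argued above.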
Existing few-shot segmentation frameworks [54], [33], [15], [45], [28], [30], [35], [5] use masked global average pooling for extracting class vectors from support images before further processing. However, global pooling on support images results in spatial information inconsistency since the area of query target may be much larger or smaller than support samples. Therefore, using a global pooled support feature to directly match each pixel of the query feature is not ideal.\nA natural alternative is to add PPM [60] or ASPP [4] to provide multi-level spatial information to the feature. PPM and ASPP help the baseline model yield better performance (as demonstrated in our later experiments). However, these two modules are suboptimal in that: 1) they provide spatial information to merged features without specific refinement process within each scale; 2) the hierarchical relations across different scales are ignored.\nTo alleviate these issues, we disentangle the multi-scale structure and propose the feature enrichment module (FEM) to 1) horizontally interact the query feature with the support features and prior masks in each scale, and 2) vertically leverage the hierarchical relations to enrich coarse feature maps with essential information extracted from the finer feature via a top-down information path. After horizontal and vertical optimization, features projected into different scales are then collected to form the new query feature. Details of FEM are as follows. As shown in Figure 3, the feature enrichment module (FEM) takes the query feature, prior mask and support feature as input. It outputs the refined query feature with enriched information from the support feature. 
The enrichment process can be divided into three sub-processes: 1) inter-source enrichment, which first projects the input to different scales and then interacts the query feature with the support feature and prior mask in each scale independently; 2) inter-scale interaction, which selectively passes essential information between merged query-support features across different scales; and 3) information concentration, which merges features at different scales to finally yield the refined query feature. An illustration of FEM with four scales and a top-down path for inter-scale interaction is shown in Figure 4.\nInter-Source Enrichment In FEM, B = [B_1, B_2, ..., B_n] denotes n different spatial sizes for average pooling, in descending order B_1 > B_2 > ... > B_n. The input query feature X_Q ∈ R^{h×w×c} is first processed with adaptive average pooling to generate n sub-query features X_Q^FEM = [X_Q^1, X_Q^2, ..., X_Q^n] of n different spatial sizes, X_Q^i ∈ R^{B_i×B_i×c}. Accordingly, the global-average-pooled support feature X_S ∈ R^{1×1×c} is expanded to n feature maps X_S^FEM = [X_S^1, X_S^2, ..., X_S^n] (X_S^i ∈ R^{B_i×B_i×c}), and the prior Y_Q ∈ R^{h×w×1} is resized to Y_Q^FEM = [Y_Q^1, Y_Q^2, ..., Y_Q^n] (Y_Q^i ∈ R^{B_i×B_i×1}). Then, for i ∈ {1, 2, ..., n}, we concatenate X_Q^i, X_S^i and Y_Q^i, and process each concatenated feature with convolutions to generate the merged query features X_{Q,m}^i ∈ R^{B_i×B_i×c} as\nX_{Q,m}^i = F_1×1(X_Q^i ⊕ X_S^i ⊕ Y_Q^i), (6)\nwhere F_1×1 represents the 1×1 convolution that yields the merged feature with c = 256 output channels. It is worth noting that tiny objects may not exist in the down-sampled feature maps. A top-down path adaptively passing information from finer features to the coarse ones is conducive to building a hierarchical relationship within our feature enrichment module. 
Now the interaction is not only between the query and support features in each scale (horizontal), but also between the merged features of different scales (vertical), which is beneficial to the overall performance. The circled M in Figure 4 represents the inter-scale merging module M that interacts between different scales by selectively passing useful information from the auxiliary feature to the main feature to generate the refined feature X_{Q,new}^i. This process can be written as\nX_{Q,new}^i = M(X_{Q,m}^{Main,i}, X_{Q,m}^{Aux,i}), (7)\nwhere X_{Q,m}^{Main,i} is the main feature and X_{Q,m}^{Aux,i} is the auxiliary feature for the i-th scale B_i. For example, in an FEM with a top-down path for inter-scale interaction, the finer feature (auxiliary) X_{Q,m}^{i-1} needs to provide additional information to the coarse feature (main) X_{Q,m}^i (B_{i-1} > B_i, i ≥ 2). In this case, X_{Q,m}^{Aux,i} = X_{Q,m}^{i-1} and X_{Q,m}^{Main,i} = X_{Q,m}^i.\nOther alternatives for inter-scale interaction include the bottom-up path that enriches finer features (main) with information coming from the coarse ones (auxiliary), and the bidirectional variants, i.e., a top-down path followed by a bottom-up path, and a bottom-up path followed by a top-down path. The top-down path shows its superiority in Section 4.3.1.\nThe specific structure of the inter-scale merging module M is shown in Figure 5. We first resize the auxiliary feature to the same spatial size as the main feature. Then we use a 1×1 convolution α to extract useful information from the auxiliary feature conditioned on the main feature. Two 3×3 convolutions β are then used to finish the interaction and output the refined feature. The residual link within the inter-scale merging module M is used to keep the integrity of the main feature in the output feature X_{Q,new}^i. 
For those features that do not have auxiliary features (e.g., the first merged feature X_{Q,m}^1 in the top-down path and the last merged feature X_{Q,m}^n in the bottom-up path), we simply skip the concatenation with the auxiliary feature in M: the refined feature is produced only from the main feature. After inter-scale interaction, n refined feature maps X_{Q,new}^i, i ∈ {1, 2, ..., n}, are obtained. Finally, the output query feature X_{Q,new} ∈ R^{h×w×c} is formed by interpolation and concatenation of the n refined feature maps followed by a 1×1 convolution F_1×1:\nX_{Q,new} = F_1×1(X_{Q,new}^1 ⊕ X_{Q,new}^2 ⊕ ... ⊕ X_{Q,new}^n). (8)\nIn this way, FEM horizontally enriches the query feature with information coming from the support feature at each location under the guidance of the prior mask and the supervision of the ground truth. Moreover, the vertical inter-scale interaction supplements the main feature with conditioned information provided by the auxiliary feature. Therefore, FEM yields a greater performance gain over the baseline than other feature enhancement designs (e.g., PPM [60], ASPP [4] and GAU [53]). Experiments in Section 4.3 provide more details.\nWe select the cross-entropy loss as our loss function. As shown in Section 3.3.2 and Figure 3, for an FEM with n different spatial sizes, the intermediate supervision on X_{Q,new}^i (i ∈ {1, 2, ..., n}) generates n losses L_1^i (i ∈ {1, 2, ..., n}). The final prediction of PFENet generates the second loss L_2. The total loss L is the weighted sum of L_1^i and L_2:\nL = (σ/n) Σ_{i=1}^{n} L_1^i + L_2, (9)\nwhere σ is used to balance the effect of intermediate supervision. We empirically set σ to 1.0 in all experiments.\nDatasets We use the PASCAL-5 i [33] and COCO [21] datasets in evaluation. PASCAL-5 i is composed of PASCAL VOC 2012 [6] and extended annotations from the SDS [12] dataset. Its 20 classes are evenly divided into 4 folds i ∈ {0, 1, 2, 3}, and each fold contains 5 classes. Following OSLSM [33], we randomly sample 1,000 query-support pairs in each test. 
Following [28], we also evaluate our model on COCO by splitting its 80 classes into four folds, so each fold has 20 classes. The set of class indexes contained in fold i is written as {4x - 3 + i}, where x ∈ {1, 2, ..., 20}, i ∈ {0, 1, 2, 3}. Note that the COCO validation set contains 40,137 images (80 classes), far more than PASCAL-5 i . Therefore, the 1,000 randomly sampled query-support pairs used in previous work are not enough for producing reliable testing results on 20 test classes. We instead randomly sample 20,000 query-support pairs during the evaluation on each fold, making the results more stable than testing on the 1,000 query-support pairs used in previous work. Stability statistics are shown in Section 4.7.\nFor both PASCAL-5 i and COCO, when testing the model on one fold, we use the other three folds to train the model for cross-validation. We take the average of five testing results with different random seeds for comparison, as shown in Tables 9 and 10.\nExperimental Setting Our framework is constructed on PyTorch. We select VGG-16 [36], ResNet-50 [14] and ResNet-101 [14] as our backbones for fair comparison with other methods. The ResNet we use is the dilated version used in previous work [28], [54], [15]. The VGG we use is the original version [36]. All backbone networks are initialized with ImageNet [32] pretrained weights. Other layers are initialized by the default setting of PyTorch. We use SGD as our optimizer. The momentum and weight decay are set to 0.9 and 0.0001 respectively. We adopt the 'poly' policy [3] to decay the learning rate by multiplying it by (1 - current_iter / max_iter)^power, where power equals 0.9.\nOur models are trained on PASCAL-5 i for 200 epochs as in [54], with learning rate 0.0025 and batch size 4. For experiments on COCO, models are trained for 50 epochs with learning rate 0.005 and batch size 8. Parameters of the backbone network are not updated. 
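The 'poly' schedule above amounts to a one-line rule; this is a sketch with an assumed signature, not the authors' training code.

```python
def poly_lr(base_lr, current_iter, max_iter, power=0.9):
    """'Poly' decay [3]: multiply the base LR by (1 - current_iter/max_iter)**power."""
    return base_lr * (1.0 - current_iter / max_iter) ** power
```

For example, with the PASCAL-5 i setting (base learning rate 0.0025), the rate starts at 0.0025 and decays monotonically towards zero at the final iteration.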
During training, samples are processed with a mirror operation and random rotation from −10 to 10 degrees. Finally, we randomly crop 473 × 473 patches from the processed images as training samples. During evaluation, each input sample is resized to the training patch size while preserving its original aspect ratio by zero padding, and the prediction is then resized back to the original label size. Finally, we directly output the single-scale results without fine-tuning or any additional post-processing (such as multi-scale testing and DenseCRF [18]). Our experiments are conducted on an NVIDIA Titan V GPU and an Intel Xeon CPU E5-2620 v4 @ 2.10GHz. The code and trained models will be made publicly available.

The proposed feature enrichment module (FEM) adaptively enriches the query feature by merging it with support features at different scales, and utilizes an inter-scale path to vertically transfer useful information from the auxiliary features to the main features. To verify the effectiveness of FEM, we first compare different strategies for inter-scale interaction. The comparison shows that the top-down information path brings a decent performance gain to the baseline without increasing the model size much. Then experiments with different designs for inter-source enrichment are presented, followed by comparison with other feature enrichment designs: HRNet [44], ASPP [4] and PPM [60]. We also compare with the Graph Attention Unit (GAU) used in the recent state-of-the-art few-shot segmentation method PGNet [53] to refine the query feature. In these experiments, since the input images are resized to 473 × 473, the input feature map of the module (e.g., FEM, GAU) has spatial size 60 × 60.
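The evaluation-time resizing described above (scale the longer side to the 473-pixel training patch size, keep the aspect ratio, zero-pad the rest) can be sketched as follows. This is our own illustrative helper, not the authors' preprocessing code:

```python
# Compute the resized shape and zero-padding amounts for evaluation:
# the longer side is scaled to `patch` pixels, the aspect ratio is kept,
# and the remainder is padded with zeros.
def eval_resize_shape(h, w, patch=473):
    scale = patch / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    pad_h, pad_w = patch - new_h, patch - new_w  # zero-padding amounts
    return (new_h, new_w), (pad_h, pad_w)
```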
In this section, we show experimental results and analysis on different vertical inter-scale interaction strategies to explain the rationale behind our design of FEM.

As mentioned in Section 3.3, there are four alternatives for the inter-scale interaction: top-down path (TD), bottom-up path (BU), top-down + bottom-up path (TD+BU), and bottom-up + top-down path (BU+TD). Our experimental results in Table 4 show that TD and TD+BU help the basic FEM structure without (W/O) the information path achieve better results than both BU and BU+TD. The model with TD+BU contains more learnable parameters (16.0M) than TD (10.8M), yet yields only comparable performance. We thus choose TD for inter-scale interaction.

These experiments show that using the finer feature (auxiliary) to provide additional information to the coarse feature (main) is more effective than using the coarse feature (auxiliary) to refine the finer feature (main). This is because the coarse features are not sufficient for targeting the query classes during the later information concentration stage if the target object disappears at small scales.

Different from common semantic segmentation, where contextual information is the key to good performance, the way query information is represented and acquired matters more in few-shot segmentation. Our motivation for designing FEM is to match the query and support features at different scales to tackle the spatial inconsistency between the query and support samples. Thus, a downsampled coarse query feature without target information is less helpful for improving the quality of the final prediction, as shown in the experiments comparing TD and BU.

PPM [60] and ASPP [4] are two popular feature enrichment modules for semantic segmentation that provide multi-resolution context, and HRNet [44], [39], [38] provides a new feature enrichment module for the segmentation task; it achieved state-of-the-art results on semantic segmentation benchmarks.
In few-shot segmentation, the Graph Attention Unit (GAU) has been used in PGNet [53] to refine the query feature with contextual information. We note that the proposed FEM yields even better few-shot segmentation performance.

The improvement brought by FEM stems from: 1) the fusion of query and support features at different spatial sizes (inter-source enrichment), which encourages the following convolution blocks to process the concatenated features independently at different spatial resolutions and is beneficial for predicting query targets of various scales; 2) the inter-scale interaction, which selectively passes useful information from the auxiliary feature to supplement the main feature. The model without the vertical top-down information path (marked W/O) yields worse results in Table 5.

We implement ASPP with dilation rates {1, 6, 12, 18}, and it achieves results close to PPM; dilated convolution is less effective than adaptive average pooling for few-shot segmentation [53]. In the following, we first compare with PPM and GAU, since they both use adaptive pooling to provide multi-scale information, and then discuss the module proposed in HRNet.

As shown in Table 5, the model with spatial sizes {60, 30, 15, 8} achieves better performance than the baseline (original size, with spatial size {60}) and the models that replace FEM with PPM and ASPP. Experiments of PSPNet [60] show that the Pyramid Pooling Module (PPM) with spatial sizes {6, 3, 2, 1} yields the best performance. When such small spatial sizes are applied to FEM, it still outperforms PPM. But small spatial sizes are not optimal in FEM, because features pooled to spatial sizes like {6, 3, 2, 1} are too coarse for the interaction and fusion of query and support features. Similarly, with the additional small spatial size 4, the FEM with {60, 30, 15, 8, 4} yields inferior performance compared to the model with spatial sizes {60, 30, 15, 8}.
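The pooling step of the inter-source enrichment, shrinking the query feature to each target spatial size before merging, can be illustrated in pure Python. This is a toy stand-in for adaptive average pooling on a single-channel 2-D map (the function name and binning are ours, mirroring the usual PyTorch-style bins), not the authors' implementation:

```python
import math

# Adaptive average pooling of a 2-D list `feat` (h x w) to out_size x out_size:
# each output cell averages the input cells in its bin.
def adaptive_avg_pool(feat, out_size):
    h, w = len(feat), len(feat[0])
    out = []
    for oi in range(out_size):
        r0, r1 = oi * h // out_size, math.ceil((oi + 1) * h / out_size)
        row = []
        for oj in range(out_size):
            c0, c1 = oj * w // out_size, math.ceil((oj + 1) * w / out_size)
            vals = [feat[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

In FEM this would be applied per channel to produce the {60, 30, 15, 8} pyramid from the 60 × 60 input feature.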
Hence, we select {60, 30, 15, 8} as the feature scales for the inter-source enrichment of FEM.

PGNet [53] uses the graph attention mechanism to establish the element-to-element correspondence between the query and support features at each scale. Pixels of the support feature are weighted by the GAU, and the new support feature is the weighted sum of the original support feature. The new support feature is then concatenated with the query feature for further processing. We directly replace FEM with GAU on our baseline and keep the other settings for a fair comparison; GAU is implemented with the code provided by the authors. Our baseline with GAU achieves class mIoU 55.4 and 56.1 in the 1- and 5-shot evaluations respectively. Noticing that the original feature scales in GAU are {60, 8, 4}, we also implement it with the scales {60, 30, 15, 8} used in our FEM (denoted as GAU+). GAU+ yields a smaller mIoU than GAU (54.9 in 1-shot and 55.4 in 5-shot). Though GAU also forms a pyramid structure via adaptive pooling to capture multi-level semantic information, it is less competitive than the proposed FEM (59.2 in 1-shot and 60.4 in 5-shot), because it misses the hierarchical inter-scale relationship that adaptively provides information extracted from other levels to help refine the merged feature.

HRNet has shown its superiority on many vision tasks by maintaining a high-resolution feature throughout the network and gradually fusing multi-scale features to enrich the high-resolution features. The proposed FEM can be deemed a variant of HRB tailored to the few-shot segmentation problem. The inter-source enrichment of FEM is analogous to the multi-resolution parallel convolution in HRB, as shown in Figure 9.
However, the inter-scale interaction in FEM passes conditioned information from large to small scales, rather than the dense, unselective interaction among all scales used in HRB.

For comparison, we experiment with replacing the FEM in PFENet with HRB and generate the feature maps in HRB at the same scales as those in FEM ({60, 30, 15, 8}). Results are listed in Table 6. Directly applying HRB to the baseline (Baseline + HRB) does yield better results than PPM and ASPP. However, densely passing information without selection causes redundancy in the target feature and yields suboptimal results. Our solution is, in the multi-resolution fusion stage of HRB, to apply the proposed inter-scale merging module M to extract essential information from the auxiliary features, as shown in Figure 10. The model with conditioned feature selection (HRB-Cond) accomplishes better performance.

As shown in Table 4, passing features from coarse to fine levels (in a bottom-up order) adversely affects inter-scale interaction. We accordingly remove all bottom-up paths in HRB and only allow top-down ones (denoted as HRB-TD). It is not surprising that HRB-TD achieves better performance than HRB, and adding conditioned feature selection (HRB-TD-Cond) brings further improvement.

The best variant of HRB (i.e., HRB-TD-Cond) yields results comparable to FEM, yet it brings many more learnable parameters (7.5M). Therefore, for few-shot segmentation, the conditioned feature selection mechanism of the proposed inter-scale merging module M is essential for improving the performance of multi-resolution structures.

[Table 5 caption: Comparison with PPM [60] and ASPP [4] on PASCAL-5^i. The backbone is ResNet-50. '{60, 30, 15, 8}': the input query feature is average-pooled into four scales {60, 30, 15, 8} and concatenated with the expanded support features respectively, as shown in Figure 4. W/O: without inter-scale interaction.]
Experimental results in Table 6 show that the prior improves models both with and without FEM.

Cosine similarity is widely used for tackling few-shot segmentation. PANet [45] uses cosine similarity to yield the intermediate and final prediction masks; SG-One [58] and [28] both utilize the cosine-similarity mask computed from the mask-pooled support feature to provide additional guidance. However, these methods overlook two factors. First, the mask generation process contains trainable components, so the generated mask is biased towards the base classes seen during training. Second, discriminative information is lost through the masked average pooling on support features, since the most relevant information in the support feature may be overwhelmed by irrelevant information during the pooling operation. For example, the discriminative regions for "cat & dog" are mainly around their heads; the main bodies share similar characteristics (e.g., tailed quadrupeds), so the representation produced by masked global average pooling loses the discriminative information contained in the support samples.

In the following, we first show the rationale behind our prior generation, which uses the fixed high-level feature and takes the maximum pixel-wise correspondence value from the similarity matrix. Then we compare with other methods to demonstrate the superiority of our strategy. We also include an analysis of the generalization ability on unseen objects outside the ImageNet [32] dataset to further demonstrate the robustness of our method.

In our design, we select the fixed high-level feature for prior generation because it provides sufficient semantic information for accurate segmentation without sacrificing generalization ability. The proposed prior generation is independent of the training process, so it does not lead to loss of generalization power.
The prior masks provide bias-free prior information from high-level features for both seen and unseen data during evaluation, while masks produced by learnable feature maps (e.g., [45], [58], [28]) are affected by parameter learning during training. As a result, a preference for the training classes is inevitable in these latter masks during inference. To show the superiority of our choice, we conduct experiments on different sources of features for generating prior masks. Table 7 shows that the masks generated by either learnable or fixed middle-level features (Prior_LM or Prior_FM) bring less improvement than our Prior_FH, since the middle-level feature is less effective at revealing the semantic correspondence between the query and support features. Moreover, the results of the mask obtained by the learnable high-level feature (Prior_LH) are even significantly worse than those of our baseline, because the learnable high-level feature severely overfits to the base classes: the model relies on the accurate prior masks produced by the learnable high-level feature for locating the target region of base classes during training, and therefore hardly generalizes to previously unseen classes during inference.

Qualitative Analysis. Generated prior masks are shown in Figure 11. Masks of unseen classes generated by learnable high-level feature maps (L-H) cannot clearly reveal the potential region of interest, while those generated using fixed high-level feature maps (F-H) keep the general integrity of the target region. Compared to high-level features, prior masks produced by middle-level ones (L-M and F-M) are more biased towards the background region.

[Table 7 caption: LH: Learnable high-level features. FM: Fixed middle-level features. FH: Fixed high-level features. Prior: prior mask obtained by taking the maximum similarity value. Prior-A: prior mask obtained by the average similarity value. Prior-P: prior mask generated with the mask-pooled support feature. Prior-FW: prior mask obtained by the feature weighting mechanism proposed in [28].]

To help explain the quantitative results and those in Figure 11, an embedding visualization is shown in Figure 12, where 1,000 samples of base classes (gray) and 1,000 samples of novel classes (colored in green, red, purple, blue and orange) are processed by the backbone followed by t-SNE [42]. Based on the overlapping area between the clusters of the base and novel classes, we draw two conclusions. First, the middle-level features in Figures 12(a)…

In our model, the prior mask acts as a pixel-wise indicator for each query image. As given in Eq. (3), taking the maximum correspondence value from the pixel-wise similarity between the query and support features means that a high prior value indicates at least one pixel/area in the support image with a close semantic relation to the query pixel. This is beneficial for revealing most of the potential targets in query images. Other alternatives include using the mask-pooled support feature to generate the similarity mask, as in [45], [58], [28], and taking the average rather than the maximum value of the pixel-wise similarity.

To verify the effectiveness of our design, we train two additional models in Table 7: one with prior masks generated by averaging similarities (Prior-A_FH), and another whose prior masks are obtained from the mask-pooled support feature (Prior-P_FH). Both perform worse than the proposed strategy (Prior_FH).

We note the following fact. Although our prior generation method takes the maximum value from a similarity matrix of size hw × hw to generate the prior mask of size h × w (Eq. (3)), in contrast to Prior-P, which forms the mask from a similarity matrix of size hw × 1, the difference in speed is rather small, because the computational complexities of the two mask generation methods are much smaller than that of the rest of the network.
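The max-versus-average choice discussed above can be made concrete with a toy sketch: given an hw × hw pixel-wise similarity matrix between query and support features, take the maximum (Prior) or the mean (Prior-A) over support pixels for each query pixel, then min-max normalise. The similarity values below are made up for illustration; this is not the paper's implementation:

```python
# sim[q][s]: similarity between query pixel q and support pixel s.
def prior_mask(sim, reduce="max"):
    if reduce == "max":
        vals = [max(row) for row in sim]          # Prior: best single match
    else:
        vals = [sum(row) / len(row) for row in sim]  # Prior-A: average match
    lo, hi = min(vals), max(vals)                 # min-max normalise to [0, 1]
    return [(v - lo) / (hi - lo + 1e-7) for v in vals]

sim = [[0.1, 0.9, 0.2],   # query pixel 0: one strongly matching support pixel
       [0.2, 0.3, 0.1],   # query pixel 1: background-like, weak matches
       [0.8, 0.7, 0.6]]   # query pixel 2: broadly but weakly matching
```

On this toy matrix, the max rule ranks query pixel 0 highest (it has at least one strong match), while the average rule ranks pixel 2 highest, illustrating why taking the maximum better reveals pixels with a single close support correspondence.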
The FPS values of Prior…

Some other methods also use the similarity mask as intermediate guidance for improving performance (e.g., [45], [58], [28]). Their masks are obtained from the learnable mask-pooled support feature and the learnable query feature, and are then used for further processing and making the final prediction. The strategy of this type of method is similar to Prior-P_LM.

In [28], the good discrimination ability of features makes activation high on the foreground and low elsewhere. We follow Eqs. (3)-(6) in [28] to implement the feature weighting mechanism on both the query and support features used for prior mask generation. In [28], the weighting mechanism is directly applied to learnable features, and we offer two choices in our model: the learnable middle- and high-level features. However, neither Prior-FW_LM nor Prior-FW_LH performs better. The results of Prior-FW_FH demonstrate the effectiveness of our feature selection strategy (with fixed high-level features) for prior generation; our feature selection strategy is complementary to the weighting mechanism of [28].

Many objects of PASCAL-5^i and COCO are included in ImageNet [32], which is used for backbone pre-training. For previously unseen objects, the backbone still provides strong semantic cues that help identify the target area in query images with the information provided by the support images. The class 'Person' in PASCAL-5^i is not contained in ImageNet, and the baseline with the prior mask achieves 15.81 IoU on it, better than without the prior mask (14.38). However, the class 'Person' is not rare in ImageNet samples even if their labels are not 'Person'.

To further demonstrate the generalization ability to totally unseen objects, we conduct experiments on the recently proposed FSS-1000 [19] dataset, where foreground IoU is used as the evaluation metric. FSS-1000 is composed of 1,000 classes, among which 486 classes are not included in any other existing dataset [19]^1.
We train our models with the ResNet-50 backbone on the seen classes for 100 epochs with batch size 16 and initial learning rate 0.01, and then test them on the unseen classes. The number of query-support pairs sampled for testing equals five times the number of unseen samples.

As shown in Table 8, the baseline with the prior mask achieves 80.8 and 81.4 foreground IoU in the 1- and 5-shot evaluations respectively, outperforming the vanilla baseline (79.7 and 80.1) by more than 1.0 foreground IoU in both settings. A visual illustration is given in Figure 13, where the target regions can still be highlighted in the prior masks even though these objects were never seen by the ImageNet pre-trained backbone.

In OSLSM [33], two backbone networks are trained to achieve few-shot segmentation. However, backbone parameters in recent work [54], [45] are kept fixed to prevent overfitting, and no experiment has shown what effect backbone training has. To reach a better understanding of how the backbone affects our method, the results of four models trained with all backbone parameters updated are shown in the last four rows of Table 6.

1. In practice, 420 unseen classes are filtered out. The authors of FSS-1000 clarified by email that they "have made incremental changes to the dataset to improve class balance and label quality so the number may have changed. Please do experiments according to the current version."

The additional trainable backbone parameters cause a significant performance reduction due to overfitting to the training classes. Moreover, backbone training nearly doubles the training time of each batch because an additional parameter update is required; it does not, however, affect the inference speed. As shown in the results, the improvement that FEM and the prior mask bring to models with trainable backbones is less significant than for those with fixed backbones.
We note that the prior masks in this section are produced by learnable high-level features, because the whole backbone is trainable. The learnable high-level features bring worse performance with a fixed backbone, as shown in Table 7, but they are beneficial with a trainable backbone. In the 5-shot evaluation, the prior yields a higher performance gain than FEM, because the prior is averaged over five support samples, providing a more accurate prior mask for query images than in the 1-shot setting and helping combat overfitting. Finally, the model with both FEM and the prior still outperforms the baseline model, which demonstrates the robustness of our proposed design even with all parameters learnable.

Zero-shot learning aims at learning a model that is robust even when no labeled data is given; it is an extreme case of few-shot learning. To further demonstrate the robustness of the proposed PFENet in this extreme case, we modify our model by replacing the pooled support features with class label embeddings. Note that our proposed prior generation method requires support features; therefore the prior is not applicable, and we only verify FEM on the baseline with the VGG-16 backbone in the zero-shot setting. Embeddings of Word2Vec [27] and FastText [25] are trained on Google News [46] and Common Crawl [26] respectively. The concatenated Word2Vec and FastText embedding directly replaces the pooled support feature in the original model without normalization. Therefore, the only structural change to the model is in the first learnable 1×1 convolution that reduces the support feature channels: its input channel number, 768 (512 + 256) in the original few-shot model (VGG-16 backbone), becomes 600 (300 + 300) in the zero-shot model.

[Table caption: Analysis of the mean and std. of five test results (class mIoU) on COCO with different numbers of test query-support pairs (1,000 and 20,000). The model is based on VGG-16 [36].]
20,000 query-support pairs yield more stable results with a lower standard deviation than 1,000 query-support pairs.

Attention Is All You Need (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, Illia Polosukhin; 2017; arXiv:1706.03762)

Introduction

Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state-of-the-art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_{t−1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19].
In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Related Work

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as the basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence.
Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].

Methodology

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next. The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

The Transformer uses multi-head attention in three different ways:

• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence.
This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].

• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the … . We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.

In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.

In Table 3 rows (B), we observe that reducing the attention key size d_k hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting.
In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [9], and observe nearly identical results to the base model.

Dataset

We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences, and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

Conclusion

In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.

For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.

We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.

The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.

Extra

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers.
The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v.
We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

Attention(Q, K, V) = softmax(QK^T / √d_k) V    (1)

The two most commonly used attention functions are additive attention [2] and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. To counteract this effect, we scale the dot products by 1/√d_k.

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values.
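Eq. (1) can be rendered in a few lines of pure Python for tiny matrices. This is an illustration of scaled dot-product attention, not an efficient or official implementation:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

# Q, K, V: lists of row vectors. For each query row, score all keys by the
# scaled dot product, softmax the scores, and take the weighted sum of V rows.
def attention(Q, K, V):
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

With identical keys the weights are uniform and the output is the average of the value rows, which is exactly the "averaging" effect that multi-head attention is introduced to counteract.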
These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v} and W^O ∈ R^{h·d_v×d_model}.

In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

FFN(x) = max(0, xW_1 + b_1)W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner layer has dimensionality d_ff = 2048.

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30].
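The scaled dot-product attention of Eq. (1) and the multi-head projection scheme above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the random weight initialization and the `multi_head_attention` helper are assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Eq. (1): Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def multi_head_attention(X, head_weights, W_O):
    # project h times, attend in parallel, concatenate, project once more
    heads = [scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
             for Wq, Wk, Wv in head_weights]
    return np.concatenate(heads, axis=-1) @ W_O

rng = np.random.default_rng(0)
n, d_model, h = 10, 512, 8
d_k = d_model // h                      # d_k = d_v = 64 as in the paper
X = rng.standard_normal((n, d_model))
head_weights = [tuple(rng.standard_normal((d_model, d_k)) * 0.02
                      for _ in range(3)) for _ in range(h)]
W_O = rng.standard_normal((h * d_k, d_model)) * 0.02
out = multi_head_attention(X, head_weights, W_O)
```

Each head works in 64 dimensions rather than 512, which is why the total cost stays comparable to a single full-dimensional head.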
In the embedding layers, we multiply those weights by √d_model.

Table 1: Per-layer complexity, minimum number of sequential operations, and maximum path lengths for different layer types.

Layer Type                  | Complexity per Layer | Sequential Ops | Maximum Path Length
Self-Attention              | O(n²·d)              | O(1)           | O(1)
Recurrent                   | O(n·d²)              | O(n)           | O(n)
Convolutional               | O(k·n·d²)            | O(1)           | O(log_k(n))
Self-Attention (restricted) | O(r·n·d)             | O(1)           | O(n/r)

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9].

In this work, we use sine and cosine functions of different frequencies:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000·2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_pos.

We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden layer in a typical sequence transduction encoder or decoder.
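The sinusoidal positional encodings above can be generated directly from the two formulas. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(n_positions)[:, None]          # (n_positions, 1)
    i = np.arange(d_model // 2)[None, :]           # (1, d_model / 2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(50, 512)
```

The output has the same dimension d_model as the token embeddings, so the two can simply be summed before the first encoder or decoder layer.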
Motivating our use of self-attention, we consider three desiderata. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.

As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.

A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions.
Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k·n·d + n·d²). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.

As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.

This section describes the training regime for our models. We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models, using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described in the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days). We used the Adam optimizer [20] with β_1 = 0.9, β_2 = 0.98 and ϵ = 10^-9. We varied the learning rate over the course of training, according to the formula:

lrate = d_model^(-0.5) · min(step_num^(-0.5), step_num · warmup_steps^(-1.5))    (3)

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
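Eq. (3) translates directly into code. A one-function sketch (the function name is ours):

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    # Eq. (3): linear warmup for the first warmup_steps steps,
    # then inverse-square-root decay in the step number
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * warmup_steps ** -1.5)
```

The two branches of the min intersect exactly at step_num = warmup_steps, which is where the learning rate peaks; before that, the warmup branch grows linearly in step_num.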
We employ three types of regularization during training.

Residual Dropout: We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.

Label Smoothing: During training, we employed label smoothing of value ϵ_ls = 0.1 [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.

6 Results

On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.

On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate P_drop = 0.1, instead of 0.3.

For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [38]. These hyperparameters were chosen after experimentation on the development set.
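The label smoothing used above (ϵ_ls = 0.1) can be sketched as a transformation of one-hot targets. One common form, assumed here, spreads the smoothing mass uniformly over the non-target classes; the function name and shape are ours, not the paper's:

```python
import numpy as np

def smoothed_targets(labels, vocab_size, eps=0.1):
    # place 1 - eps on the gold token and spread eps uniformly
    # over the remaining vocab_size - 1 classes
    t = np.full((len(labels), vocab_size), eps / (vocab_size - 1))
    t[np.arange(len(labels)), labels] = 1.0 - eps
    return t

# two target tokens over a toy 5-word vocabulary
t = smoothed_targets(np.array([2, 0]), vocab_size=5, eps=0.1)
```

Training against these soft targets raises perplexity (the model is never fully confident) but, as noted above, improves accuracy and BLEU.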
We set the maximum output length during inference to input length + 50, but terminate early when possible [38].

Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.

To evaluate if the Transformer can generalize to other tasks, we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37].

We trained a 4-layer transformer with d_model = 1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkleyParser corpora with approximately 17M sentences [37]. We used a vocabulary of 16K tokens for the WSJ-only setting and a vocabulary of 32K tokens for the semi-supervised setting.

We performed only a small number of experiments to select the dropout (both attention and residual, Section 5.4), learning rates and beam size on the Section 22 development set; all other parameters remained unchanged from the English-to-German base translation model. During inference, we increased the maximum output length to input length + 300.
We used a beam size of 21 and α = 0.3 for both the WSJ-only and the semi-supervised setting.

Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8]. In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.

[Figure: attention visualizations. Top: full attentions for head 5. Bottom: isolated attentions from just the word 'its' for attention heads 5 and 6; note that the attentions are very sharp for this word.]

PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment (Wang, Liew, Zou, Zhou, Feng; 2019; arXiv:1908.06391)

Introduction: Deep learning has greatly advanced the development of semantic segmentation with a number of CNN based architectures like FCN [13], SegNet [1], DeepLab [2] and PSPNet [29]. However, training these models typically requires large numbers of images with pixel-level annotations, which are expensive to obtain. Semi- and weakly-supervised learning methods [26,3,9,15] alleviate such requirements but still need many weakly annotated training images. Besides their hunger for training data, these models also suffer rather poor generalizability to unseen classes. To deal with the aforementioned challenges, few-shot learning, which learns new concepts from a few annotated examples, has been actively explored, mostly concentrating on image classification [25,23,24,18,6,20,12,14] and a few targeting segmentation tasks [21,17,4,28,8].

Figure 1: Overview of our model (PANet) for few-shot segmentation. PANet first maps the support and query images into embedding features (circles and triangles respectively) and learns prototypes for each class (blue and yellow solid circles). Segmentation over the query is then performed by matching its features to the nearest prototype within the embedding space (dashed lines). PANet further introduces a prototype alignment regularization during training to align the prototypes from support and query images within the embedding space by performing few-shot segmentation reversely from query to support (right panel). Segmentation masks with dashed borders denote ground-truth annotations.

Existing few-shot segmentation methods generally learn from a handful of support images and then feed the learned knowledge into a parametric module for segmenting the query. However, such schemes have two drawbacks and thus generalize unsatisfactorily. First, they do not differentiate the knowledge extraction and segmentation processes, which may be problematic since the segmentation model representation is mixed with the semantic features of the support. We therefore propose to separate these two parts into prototype extraction and non-parametric metric learning. The prototypes are optimized to be compact and robust representations for each semantic class, and the non-parametric metric learning performs segmentation through pixel-level matching within the embedding space. Moreover, instead of using the annotations of the support only for masking as in previous methods, we propose to leverage them also for supervising the few-shot learning process. To this end, we introduce a novel prototype alignment regularization by performing the few-shot segmentation in a reverse direction. Namely, the query image together with its predicted mask is considered as a new support set and used to segment the previous support images.
In this way, the model is encouraged to generate more consistent prototypes between support and query, offering better generalization performance.

Accordingly, we develop a Prototype Alignment Network (PANet) to tackle few-shot segmentation, as shown in Figure 1. PANet first embeds different foreground objects and the background into different prototypes via a shared feature extractor. In this way, each learned prototype is representative of the corresponding class and meanwhile is sufficiently distinguishable from other classes. Then, each pixel of the query image is labeled by referring to the class-specific prototype nearest to its embedding representation. We find that even with only one support image per class, PANet can provide satisfactory segmentation results, outperforming the state of the art. Furthermore, it imposes a prototype alignment regularization by forming a new support set with the query image and its predicted mask and performing segmentation on the original support set. We find this indeed encourages the prototypes generated from the queries to align well with those of the supports. Note that the model is regularized only during training, and the query images should not be confused with the testing images.

The structural design of the proposed PANet has several advantages. First, it introduces no extra learnable parameters and thus is less prone to over-fitting. Second, within PANet, the prototype embedding and prediction are performed on the computed feature maps, and therefore segmentation requires no extra passes through the network. In addition, as the regularization is only imposed during training, the computation cost for inference does not increase.

Our few-shot segmentation model is a generic one. Any network with a fully convolutional structure can be used as the feature extractor. It also learns well from weaker annotations, e.g., bounding boxes or scribbles, as shown in experiments.
To sum up, the contributions of this work are:

• We propose a simple yet effective PANet for few-shot segmentation. The model exploits metric learning over prototypes, which differs from most existing works that adopt a parametric classification architecture.
• We propose a novel prototype alignment regularization to fully exploit the support knowledge to improve the few-shot learning.
• Our model can be directly applied to learning from a few examples with weak annotations.
• Our PANet achieves mIoU of 48.1% and 55.7% on PASCAL-5^i for 1-shot and 5-shot settings, outperforming the state of the art by a margin of up to 8.6%.

Our model builds on the prototypical network [23] and can be seen as an extension of it to dense prediction tasks, enjoying a simple design yet high performance.

Related Work: Few-shot segmentation is receiving increasing interest recently. Shaban et al. [21] first proposed a model for few-shot segmentation using a conditioning branch to generate a set of parameters θ from the support set, which is then used to tune the segmentation process of the query set. Rakelly et al. [16] concatenated extracted support features with query ones and used a decoder to generate segmentation results. Zhang et al. [28] used masked average pooling to better extract foreground/background information from the support set. Hu et al. [8] explored guiding at multiple stages of the networks. These methods typically adopt a parametric module, which fuses information extracted from the support set and generates the segmentation. Dong et al. [4] also adopted the idea of prototypical networks and tackled few-shot segmentation using metric learning. However, their model is complex, involving three training stages and complicated training configurations. Besides, their method extracts prototypes based on an image-level loss and uses prototypes as guidance to tune the segmentation of the query set rather than obtaining segmentation directly from metric learning.
Comparatively, our model has a simpler design and is more similar to the Prototypical Network [23]. Besides, we adopt late fusion [17] to incorporate the annotation masks, making it easier to generalize to cases with sparse or updating annotations.

Methodology: We aim at obtaining a segmentation model that can quickly learn to perform segmentation from only a few annotated images over new images from the same classes. As in previous works [21], we adopt the following model training and testing protocols. Suppose we are provided with images from two non-overlapping sets of classes C_seen and C_unseen. The training set D_train is constructed from C_seen and the test set D_test is constructed from C_unseen. We train the segmentation model M on D_train and evaluate it on D_test.

Both the training set D_train and the testing set D_test consist of several episodes. Each episode is composed of a set of support images S (with annotations) and a set of query images Q. Namely, D_train = {(S_i, Q_i)}_{i=1}^{N_train} and D_test = {(S_i, Q_i)}_{i=1}^{N_test}, where N_train and N_test denote the number of episodes for training and testing respectively.

Each training/testing episode (S_i, Q_i) instantiates a C-way K-shot segmentation learning task. Specifically, the support set S_i has K image–mask pairs per semantic class, and there are in total C different classes, drawn from C_seen for training and from C_unseen for testing, i.e. S_i = {(I_{c,k}, M_{c,k})}, where k = 1, 2, ..., K and c ∈ C_i with |C_i| = C. The query set Q_i contains N query image–mask pairs from the same set of classes C_i as the support set. The model first extracts knowledge about the C classes from the support set and then applies the learned knowledge to perform segmentation on the query set. As each episode contains different semantic classes, the model is trained to generalize well.
After obtaining the segmentation model M from the training set D_train, we evaluate its few-shot segmentation performance on the test set D_test across all the episodes. In particular, for each testing episode the segmentation model M is evaluated on the query set Q_i given the support set S_i. Different from existing few-shot segmentation methods, which fuse the extracted support features with the query features to generate the segmentation results in a parametric way, our proposed model aims to learn and align compact and robust prototype representations for each semantic class in an embedding space. It then performs segmentation within the embedding space via non-parametric metric learning.

As shown in Figure 2, our model learns to perform segmentation as follows. For each episode, it first embeds the support and query images into deep features by a shared backbone network. Then it applies masked average pooling to obtain prototypes from the support set, as detailed in Section 3.3. Segmentation over the query images is performed by labeling each pixel as the class of the nearest prototype. A novel prototype alignment regularization (PAR), introduced in Section 3.5, is applied over the learning procedure to encourage the model to learn consistent embedding prototypes for the support and query.

We adopt a VGG-16 [22] network as the feature extractor following conventions. The first 5 convolutional blocks in VGG-16 are kept for feature extraction and the other layers are removed. The stride of the maxpool4 layer is set to 1 to maintain a large spatial resolution. To increase the receptive field, the convolutions in the conv5 block are replaced by dilated convolutions with dilation set to 2. As the proposed PAR introduces no extra learnable parameters, our network is trained end-to-end to optimize the weights of VGG-16 for learning a consistent embedding space.

Conclusion: We propose a novel PANet for few-shot segmentation based on metric learning.
PANet is able to extract robust prototypes from the support set and performs segmentation using non-parametric distance calculation. With the proposed PAR, our model can further exploit the support information to assist training. Without any decoder structure or post-processing step, our PANet outperforms previous work by a large margin.

Experiments and Results: We adopt a non-parametric metric learning method to learn the optimal prototypes and perform segmentation accordingly. Since segmentation can be seen as classification at each spatial location, we calculate the distance between the query feature vector at each spatial location and each computed prototype. Then we apply a softmax over the distances to produce a probability map M̃_q over semantic classes (including background). Concretely, given a distance function d, let P = {p_c | c ∈ C_i} ∪ {p_bg} and let F_q denote the query feature map. For each p_j ∈ P we have

M̃_{q;j}^{(x,y)} = exp(-α d(F_q^{(x,y)}, p_j)) / Σ_{p_j ∈ P} exp(-α d(F_q^{(x,y)}, p_j)).    (3)

The predicted segmentation mask is then given by M̂_q^{(x,y)} = arg max_j M̃_{q;j}^{(x,y)}.    (4)

The distance function d commonly adopts the cosine distance or the squared Euclidean distance. Snell et al. [23] claimed that using squared Euclidean distance greatly outperforms using cosine distance. However, Oreshkin et al. [14] attributed the improvement to the interaction of the different scaling of the metrics with the softmax function: multiplying the cosine distance by a factor α can achieve performance comparable to using squared Euclidean distance. Empirically, we find that using cosine distance is more stable and gives better performance, possibly because it is bounded and thus easier to optimize.
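Eq. (3) and the per-pixel arg max can be sketched as follows. This is an illustrative sketch only: the function names are ours, and taking the cosine distance as 1 minus cosine similarity is an assumption, since the paper does not spell out the exact form.

```python
import numpy as np

def cosine_distance(features, prototype):
    # features: (H, W, D); prototype: (D,)
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    p = prototype / np.linalg.norm(prototype)
    return 1.0 - f @ p          # assumed form: 1 - cosine similarity

def prototype_probability_map(features, prototypes, alpha=20.0):
    # Eq. (3): softmax over -alpha * d(F_q^{(x,y)}, p_j) across prototypes
    d = np.stack([cosine_distance(features, p) for p in prototypes], axis=-1)
    logits = -alpha * d
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    mask = probs.argmax(axis=-1)   # Eq. (4): per-pixel nearest prototype
    return probs, mask

rng = np.random.default_rng(1)
F_q = rng.standard_normal((4, 4, 8))                      # toy query features
prototypes = [rng.standard_normal(8) for _ in range(3)]   # p_bg + 2 classes
probs, mask = prototype_probability_map(F_q, prototypes)
```

The pixel whose feature equals a prototype gets distance 0 to it and is therefore assigned that prototype's class by the arg max.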
The multiplier α is fixed at 20 since we find learning it yields little performance gain. After computing the probability map M̃_q for the query image via metric learning, we calculate the segmentation loss L_seg as follows:

L_seg = -(1/N) Σ_{x,y} Σ_{p_j ∈ P} 1[M_q^{(x,y)} = j] log M̃_{q;j}^{(x,y)},    (5)

where M_q is the ground-truth segmentation mask of the query image and N is the total number of spatial locations. Optimizing the above loss derives suitable prototypes for each class.

Our model learns representative and well-separated prototype representations for each semantic class, including the background, based on the prototypical network [23]. Instead of averaging over the whole input image [23], PANet leverages the mask annotations over the support images to learn prototypes for the foreground and background separately. There are two strategies to exploit the segmentation masks, i.e., early fusion and late fusion [17]. Early fusion masks the support images before feeding them into the feature extractor [21,8,4]. Late fusion directly masks over the feature maps to produce foreground/background features separately [28,16]. In this work, we adopt the late fusion strategy since it keeps the input consistent for the shared feature extractor. Concretely, given a support set S_i = {(I_{c,k}, M_{c,k})}, let F_{c,k} be the feature map output by the network for the image I_{c,k}. Here c indexes the class and k = 1, ..., K indexes the support image. The prototype of class c is computed via masked average pooling [28]:

p_c = (1/K) Σ_k [ Σ_{x,y} F_{c,k}^{(x,y)} 1[M_{c,k}^{(x,y)} = c] ] / [ Σ_{x,y} 1[M_{c,k}^{(x,y)} = c] ],    (1)

where (x, y) indexes the spatial locations and 1(·) is an indicator function, outputting 1 if the argument is true and 0 otherwise. In addition, the prototype of the background is computed by

p_bg = (1/CK) Σ_{c,k} [ Σ_{x,y} F_{c,k}^{(x,y)} 1[M_{c,k}^{(x,y)} ∉ C_i] ] / [ Σ_{x,y} 1[M_{c,k}^{(x,y)} ∉ C_i] ].    (2)

The above prototypes are optimized end-to-end through the non-parametric metric learning explained above.
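The masked average pooling of Eq. (1) can be sketched in a few lines. This is a minimal sketch with a toy 1-shot input; it assumes each support image contains at least one pixel of the target class, so the denominator of Eq. (1) is nonzero, and the function name is ours.

```python
import numpy as np

def masked_average_pooling(feature_maps, masks, class_id):
    # Eq. (1): for each of the K support images, average the feature
    # vectors of pixels labeled class_id, then average over the K images
    per_image = []
    for F, M in zip(feature_maps, masks):   # F: (H, W, D); M: (H, W)
        sel = (M == class_id)
        per_image.append(F[sel].mean(axis=0))
    return np.mean(per_image, axis=0)

# toy 1-shot example: a 2x2 feature map whose single foreground pixel is known
F = np.zeros((2, 2, 3))
F[0, 0] = [1.0, 2.0, 3.0]
M = np.array([[1, 0], [0, 0]])              # class 1 occupies pixel (0, 0)
p_c = masked_average_pooling([F], [M], class_id=1)
```

With a single foreground pixel, the prototype is simply that pixel's feature vector, which makes the pooling easy to check by hand.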
In previous works, the support annotations are used only for masking, which does not adequately exploit the support information for few-shot learning. In this subsection, we elaborate on the prototype alignment regularization (PAR), which better exploits support information to guide the few-shot learning procedure and helps enhance the generalizability of the resulting model from a few examples.

Intuitively, if the model can predict a good segmentation mask for the query using prototypes extracted from the support, the prototypes learned from the query set based on the predicted masks should be able to segment the support images well. Thus, PAR encourages the resulting segmentation model to perform few-shot learning in the reverse direction, i.e., taking the query and the predicted mask as the new support to learn to segment the support images. This imposes a mutual alignment between the prototypes of the support and query images and learns richer knowledge from the support. Note that all the support and query images here are from the training set D_train.

Figure 2 illustrates PAR in detail. After obtaining a segmentation prediction for the query image, we perform masked average pooling accordingly on the query features and obtain another set of prototypes P̄ = {p̄_c | c ∈ C_i} ∪ {p̄_bg}, following Eqns. (1) and (2). Next, the non-parametric method introduced in Section 3.4 is used to predict the segmentation masks for the support images. The predictions are compared with the ground-truth annotations to calculate a loss L_PAR. The entire procedure for implementing PAR can be seen as swapping the support and query sets.
Concretely, within PAR, the segmentation probability of the support image I_{c,k} is given by

M̃_{c,k;j}^{(x,y)} = exp(-α d(F_{c,k}^{(x,y)}, p̄_j)) / Σ_{p̄_j ∈ {p̄_c, p̄_bg}} exp(-α d(F_{c,k}^{(x,y)}, p̄_j)),    (6)

and the loss L_PAR is computed by

L_PAR = -(1/CKN) Σ_{c,k,x,y} Σ_{p̄_j ∈ P̄} 1[M_{c,k}^{(x,y)} = j] log M̃_{c,k;j}^{(x,y)}.    (7)

Without PAR, the information flows only one way, from the support set to the query set. By flowing the information back to the support set, we force the model to learn a consistent embedding space that aligns the query and support prototypes. The aligning effect of the proposed PAR is validated by experiments in Section 4.3.

The total loss for training our PANet model is thus

L = L_seg + λ L_PAR,

where λ controls the regularization strength and λ = 0 reduces to the model without PAR. In our experiments, we keep λ at 1 since different values give little improvement. The whole training and testing procedures for PANet on few-shot segmentation are summarized in Algorithm 1.

Our model is generic and is directly applicable to other types of annotations. First, it accepts weaker annotations on the support set, such as scribbles and bounding boxes indicating the foreground objects of interest. Experiments in Section 4.4 show that even with weak annotations, our model is still able to extract robust prototypes from the support set and give comparably good segmentation results for the query images. Compared with pixel-level dense annotations, weak annotations are easier and cheaper to obtain [9]. Second, by adopting late fusion [17], our model can quickly adapt to updated annotations with little computation overhead and thus can be applied to interactive segmentation. We leave this for future work.

Datasets: We follow the evaluation scheme proposed in [21] and evaluate our model on the PASCAL-5^i [21] dataset. The dataset is created from PASCAL VOC 2012 [5] with SBD [7] augmentation. The 20 categories in PASCAL VOC are evenly divided into 4 splits, each containing 5 categories.
Models are trained on 3 splits and evaluated on the remaining one in a cross-validation fashion. The categories in each split can be found in [21].

Algorithm 1: Training and testing of PANet.
for each episode (S_i, Q_i) ∈ D_train do
    Extract prototypes P from the support set S_i using Eqns. (1) and (2)
    Predict the segmentation probabilities and masks for the query image using Eqns. (3) and (4)
    Compute the loss L_seg as in Eqn. (5)
    Extract prototypes P̄ from the query set Q_i using Eqns. (1) and (2)
    Predict segmentation probabilities for the support images using Eqn. (6)
    Compute the loss L_PAR as in Eqn. (7)
    Compute the gradient and optimize via SGD
end
for each episode (S_i, Q_i) ∈ D_test do
    Extract prototypes P from the support set S_i using Eqns. (1) and (2)
    Predict the segmentation probabilities and masks for the query image using Eqns. (3) and (4)
end

During testing, previous methods randomly sample 1,000 episodes for evaluation, but we find this is not enough to give stable results. In our experiments, we average the results from 5 runs with different random seeds, each run containing 1,000 episodes.

Following [8], we also evaluate our model on a more challenging dataset built from MS COCO [11]. Similarly, the 80 object classes in MS COCO are evenly divided into 4 splits, each containing 20 classes. We follow the same scheme for training and testing as on PASCAL-5^i. N_query = 1 is used for all experiments.

Evaluation metrics: We adopt two metrics for model evaluation, mean-IoU and binary-IoU. Mean-IoU measures the Intersection-over-Union (IoU) for each foreground class and averages over all the classes [21,28]. Binary-IoU treats all object categories as one foreground class and averages the IoU of foreground and background [16,4,8]. We mainly use the mean-IoU metric because it considers the differences between foreground categories and therefore more accurately reflects the model performance. Results w.r.t. the binary-IoU are also reported for clear comparisons with some previous methods.
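The two metrics described above can be sketched as follows. This is an illustrative sketch under the stated definitions (function names are ours); it assumes label 0 denotes background and that the foreground/background unions are nonempty.

```python
import numpy as np

def mean_iou(pred, gt, class_ids):
    # mean-IoU: compute IoU per foreground class, then average over classes
    ious = []
    for c in class_ids:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def binary_iou(pred, gt):
    # binary-IoU: merge all object categories into one foreground class,
    # then average the foreground and background IoUs
    fg_pred, fg_gt = pred > 0, gt > 0
    fg = np.logical_and(fg_pred, fg_gt).sum() / np.logical_or(fg_pred, fg_gt).sum()
    bg = np.logical_and(~fg_pred, ~fg_gt).sum() / np.logical_or(~fg_pred, ~fg_gt).sum()
    return (fg + bg) / 2.0

# toy 2x2 masks: one pixel of class 2 is mispredicted as background
pred = np.array([[0, 1], [2, 2]])
gt   = np.array([[0, 1], [2, 0]])
```

Because binary-IoU credits background pixels as well, it tends to give noticeably higher numbers than mean-IoU on the same predictions, which is why the two metrics are reported separately.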
We initialize the VGG-16 network with weights pre-trained on ILSVRC [19] as in previous works [21,4,28]. Input images are resized to (417, 417) and augmented using random horizontal flipping. The model is trained end-to-end by SGD with a momentum of 0.9 for 30,000 iterations. The learning rate is initialized to 1e-3 and reduced by 0.1 every 10,000 iterations. The weight decay is 0.0005 and the batch size is 1.

[Table 1: Results of 1-way 1-shot and 1-way 5-shot segmentation on the PASCAL-5^i dataset using the mean-IoU metric, per split (split-1 to split-4) and on average, together with parameter counts (#Params). ∆ denotes the difference between 1-shot and 5-shot. †: the results of co-FCN in the mean-IoU metric are reported by [28].]

Method           | 1-shot | 5-shot | ∆
FG-BG [16]       | 55.0   | -      | -
Fine-tuning [16] | 55.1   | 55.6   | 0.5
OSLSM [21]       | 61.3   | 61.5   | 0.2
co-FCN [16]      | 60.1   | 60.2   | 0.1
PL [4]           | 61.2   | 62.3   | 1.1
A-MCG [8]        | 61.2   | 62.2   | 1.0
SG-One [28]      | 63.9   | 65.9   | 2.0
PANet-init       | 58.9   | 65.7   | 6.8
PANet            | 66.5   | 70.7   | 4.2

Table 2: Results of 1-way 1-shot and 1-way 5-shot segmentation on the PASCAL-5^i dataset using the binary-IoU metric. ∆ denotes the difference between 1-shot and 5-shot.

Baselines: We set a baseline model which is initialized with the weights pre-trained on ILSVRC [19] but not further trained on PASCAL-5^i, denoted PANet-init. We also compare our PANet with two baseline models, FG-BG and fine-tuning, from [16]. FG-BG trains a foreground-background segmentor which is independent of the support, and fine-tuning is used to tune a pre-trained foreground-background segmentor on the support.

PASCAL-5^i: Table 1 compares our model with other methods on the PASCAL-5^i dataset in the mean-IoU metric. Our model outperforms the state-of-the-art methods in both 1-shot and 5-shot settings while using fewer parameters. In the 5-shot task, our model achieves a significant improvement of 8.6%. Using the binary-IoU metric, as shown in Table 2, our model also achieves the highest performance.
It is worth noting that our method does not use any decoder module or post-processing techniques to refine the results.

As Tables 1 and 2 show, the performance gap between the 1-shot and 5-shot settings is small for other methods (less than 3.1% in mean-IoU), implying that these methods obtain little improvement from more support information. In contrast, our model yields a much more significant performance gain (up to 7.6% in mean-IoU) since it learns more effectively from the support set. The evaluation results of our baseline models are also included in Tables 1 and 2.

As in [4,28], we evaluate our model on multi-way few-shot segmentation tasks. Without loss of generality, we perform evaluations on 2-way 1-shot and 2-way 5-shot segmentation tasks. Table 3 summarizes the results. Our PANet outperforms previous works by a large margin of more than 20%.

Qualitative results for 1-way and 2-way segmentation are shown in Figure 3 and Figure 4. Without any decoder structure or post-processing, our model gives satisfying segmentation results on unseen classes with only one annotated support image. This demonstrates the strong learning and generalization abilities of our model. Note that the prototype extracted from the same support image can be used to successfully segment query images with appearance variations. For example, in Figure 3 row 1, our model successfully segments bicycles that are cluttered with other objects (1st example), viewed from a different perspective (2nd example), or only partially visible (3rd example). On the other hand, prototypes extracted from one part of an object can be used to segment whole objects of the same class (row 2 in Figure 3), demonstrating the robustness of the proposed PANet.

We also present some challenging cases that fail our model. As the first failure case in Figure 3 shows, our model tends to give segmentation results with unnatural patches, possibly because it predicts independently at each location. But this can be alleviated by post-processing.
From the second failure case, we find our model is unable to distinguish between chairs and tables since they have similar prototypes in the embedding space.

MS COCO. Table 4 shows the evaluation results on the MS COCO dataset. Our model outperforms the previous A-MCG [8] by 7.2% in the 1-shot setting and 8.2% in the 5-shot setting. Compared to PASCAL VOC, MS COCO has more object categories, making the differences between the two evaluation metrics more significant. Qualitative results on MS COCO are shown in Figure 3.

The proposed PAR encourages the model to learn a consistent embedding space which aligns the support and query prototypes. Apart from minimizing the distances between the support and query prototypes, the models trained with PAR achieve better results (shown in Table 5) as well as faster convergence during training.

Aligning embedding prototypes. By flowing information from the query set back to the support set via PAR, our model can learn a consistent embedding space and align the prototypes extracted from the support and query sets. To verify this, we randomly choose 1,000 episodes from PASCAL-5 i split-1 in the 1-way 5-shot task. Then for each episode we calculate the Euclidean distance between the prototypes extracted from the query set and the support set. The averaged distance computed by models with PAR is 32.2, much smaller than 42.6 by models without PAR. With PAR, our model is able to extract prototypes that are better aligned in the embedding space.

Speeding up convergence. In our experiments, we observe that models trained with PAR converge faster than models without it, as reflected in the training loss curves in Figure 5. This shows that PAR accelerates convergence and helps the model reach a lower loss, especially in the 5-shot setting, because with PAR the information from the support set can be better exploited.

We further evaluate our model with scribble and bounding box annotations.
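For the bounding-box variant of these weaker annotations, a box-level "mask" can be derived automatically from a dense instance mask along the following lines. This is a sketch: `mask_to_bbox_mask` is a hypothetical helper name, and scribble generation is omitted.

```python
import numpy as np

def mask_to_bbox_mask(mask):
    """Replace a dense instance mask by the mask of its bounding box,
    simulating the weaker box-level annotation used at test time."""
    ys, xs = np.nonzero(mask)
    box = np.zeros_like(mask)
    if ys.size:  # leave all-background masks unchanged
        box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
    return box
```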
During testing, the pixel-level annotations of the support set are replaced by scribbles or bounding boxes which are generated automatically from the dense segmentation masks. Each bounding box is obtained from one randomly chosen instance mask in each support image. As Table 6 shows, our model works well with very sparse annotations and is robust to the noise introduced by bounding boxes. In the 1-shot case, the model performs comparably well with the two annotation types, but in 5-shot learning, using scribbles outperforms using bounding boxes by 2%. A possible reason is that with more support information, scribbles give more representative prototypes while bounding boxes introduce more noise. Qualitative results of using scribble and bounding box annotations are shown in Figure 6.

Learning Meta-Class Memory for Few-Shot Semantic Segmentation (Wu, Shi, Lin, Cai, 2021)

Introduction

With the development of convolutional neural networks (CNNs), fully supervised image semantic segmentation [12,3] has achieved great success in both speed and accuracy. However, state-of-the-art image segmentation methods usually require abundant pixel-level annotations, which demand huge human labeling effort. If we want to segment a new class that has not been seen in the training set, we usually need to label thousands of images for it.

The top part of Fig. 1 shows the typical pipeline of the state-of-the-art (SOTA) few-shot image segmentation methods [34,33]. Firstly, a pre-trained CNN network is used to extract the features of both support and query images. Then, the two features are typically processed by convolutional layers and compared for similarity so as to generate the segmentation map for the query image. Essentially, these methods treat the few-shot segmentation task as a conditional binary foreground-background segmentation problem, i.e.
to find and segment the most relevant regions in the query image based on the given support images and their masks, regardless of the class information.

The class-agnostic design in SOTA methods is understandable: it allows the interaction/comparison between query and support features learned on base classes to be transferred to novel classes. However, we argue that although different classes of objects are quite different, there are still some common attributes or middle-level knowledge shareable among them, which we call meta-class information. Similar observations have been made in [36,10,31] for classification and detection tasks, where some low-level information (e.g. circle, dot) and middle-level information (e.g. wings, limbs) is shared among different classes.

Motivated by this, in this paper, we propose a novel Meta-class Memory Module (MMM) to learn middle-level meta-class embeddings shareable between base and novel classes for few-shot segmentation. As shown in the lower part of Fig. 1, a set of meta-class memory embeddings is introduced into the SOTA pipeline, which can be learned through back-propagation during base-class training. The meta-class memory embeddings are then used to attend the middle-level features of query and support images to obtain meta-class activation maps. This can be considered as aligning both query and support middle-level features to the meta-class embeddings. Based on the obtained meta-class activation maps of query and support images, we then perform interaction/comparison between them to propagate the support mask information from the support activation maps to the query activation maps, and the fused query activation maps are finally used for the query mask prediction.

In addition, when it comes to k-shot scenarios, where more than one support image is given, previous methods usually apply an average operation [33] on the few support image features to obtain the class prototype feature.
However, we observe that some support images are of low quality and can hardly represent the support class. Thus, we further propose a Quality Measurement Module (QMM) to obtain a quality measure for each support image. Based on the quality measure, the features from all support images are fused via a weighted sum to get a better class prototype feature.

In our experiments, we follow the design of [25] to perform training and testing for a fair comparison. We evaluate our proposed method on four different splits with 1-shot and 5-shot settings on the PASCAL-5 i [23] and COCO [13] datasets. Our method is able to achieve state-of-the-art results on both datasets under both 1-shot and 5-shot settings.

Our main contributions can be summarized as follows:

• For few-shot semantic image segmentation, to our knowledge, we are the first to introduce a set of learnable embeddings to memorize meta-class information during base-class training that can be transferred to novel classes during testing. Specifically, a Meta-class Memory Module (MMM) is proposed to generate the meta-class activation maps for both support and query images, which is helpful for the final query mask prediction.

• For k-shot scenarios, a Quality Measurement Module (QMM) is proposed to measure the quality of all the support images so as to effectively fuse all the support features. With QMM, our model is able to pay more attention to high-quality support samples for better query image segmentation.

• Extensive experiments on the PASCAL-5 i and COCO datasets show that our proposed method performs the best in all settings. Specifically, our method significantly outperforms SOTA on the large-scale COCO dataset, with a 5.1% mean mIoU gain, as our memory embeddings are able to learn a universal meta-class representation.

Methodology

Fig. 2 gives an overview of our proposed meta-class memory based network (MM-Net) for few-shot semantic segmentation.
It consists of three major modules, the meta-class memory module (MMM), the activation propagation module (APM) and the foreground confidence module (FCM), as well as two off-the-shelf modules, the feature extraction backbone and the feature enrichment module (FEM) [25]. MMM is particularly novel: it learns the meta-class features that can be shared among all base and novel classes and generates meta-class activation maps for support and query images, respectively. APM is used to propagate support mask information to the query activation maps for query mask generation. FCM retains the conventional interactions of the high-level features of the query and support images. Moreover, we also propose an additional Quality Measurement Module (QMM) to measure the quality of different support images in k-shot settings so as to better fuse the information from different support images. In the following, we describe the major modules of our MM-Net in detail.

Conclusion

In this paper, we have proposed a novel Meta-class Memory based few-shot semantic segmentation method (MM-Net) with the major components of MMM, APM, FCM and QMM. The key novelty of our method lies in MMM, where we introduced a set of learnable meta-class embeddings to allow common knowledge transfer between base classes and novel classes. Another novelty is QMM, which can measure the quality of each support image so as to better fuse the support features. With all these components, our MM-Net has significantly improved the SOTA results on both the PASCAL-5 i and COCO datasets.

Related Work

Semantic segmentation [29,35,15,14] is the task of classifying each pixel in an image into a specified category and has been applied in various fields [30,24]. State-of-the-art segmentation methods are usually based on the Fully Convolutional Network (FCN) [18], which uses a classification network as the backbone and replaces fully connected layers with convolutional layers to predict the dense segmentation map.
Later, to obtain higher-resolution predictions and a larger receptive field, DeepLab [1,2] proposed to use dilated convolutions, which insert holes into the convolutional filters instead of using conventional convolution with downsampling. Recently, Chen et al. further explored the effect of atrous convolutions, multi-grid, atrous spatial pyramid pooling, different backbones, and different training sizes in DeepLab V3 [3] and DeepLab V3+ [4]. These methods usually require abundant pixel-level annotations for all the classes during training and cannot generalize to novel classes with only a few labelled images.

[Figure 2 caption: MMM (orange) is introduced to learn the meta-class features that can be shared among all base and novel classes and to generate meta-class activation maps for support and query images, respectively. Then, the Activation Propagation Module (APM) (purple) is used to propagate support mask information to the query activation maps for query mask generation. Meanwhile, the foreground confidence module (FCM) (yellow) is used to obtain a confidence map from the high-level image features. Finally, the fused query activation maps are concatenated with the foreground confidence map and fed into FEM [25] for the final query segmentation mask prediction (green).]

Few-shot semantic segmentation [6,8,27,21,23,5] aims to give a dense segmentation prediction for new-class query images with only a few labeled support images. CANet [34] proposed a Dense Comparison Module (DCM) and an Iterative Optimization Module (IOM) to give a dense prediction and iteratively refine it. Similarly, a prototype alignment regularization was used in PANet [28], which encourages the model to learn more consistent embedding prototypes. Later, PGNet [33] used a Graph Attention Unit (GAU) to build local similarity between support and query images. Liu et al. [16] proposed to use a Siamese network on query and support images to get the co-occurring features between the two images.
More recently, following the practice in PGNet [33], which uses a pyramid structure to refine results, PFENet [25] used a multi-scale decoder, the Feature Enrichment Module (FEM), to incorporate the prior masks and query features for a better segmentation map prediction.

Unlike all existing few-shot semantic segmentation methods, we introduce the concept of a meta-class memory that learns a set of shareable meta-class representations among base and novel classes.

Most state-of-the-art recognition methods require a large number of training images with abundant annotations, which often need tremendous human labeling effort. Meta-learning methods [7], also known as learning to learn, have been introduced to better transfer existing knowledge to novel classes or to train faster on newly given data. One popular set of approaches [11,22] learns a meta-learner that helps deep neural networks optimize faster when given new data on unseen classes. Another set of meta-learning approaches [7] learns a better parameter initialization that can be optimized quickly with less training data. Metric-learning methods [26] belong to another type that uses a certain similarity measurement to obtain classification results over different classes. On the other hand, Munkhdalai et al. [19] proposed an external memory model that works across different tasks and can shift its inductive biases via fast parameterization for rapid generalization on new tasks.

Different from these meta-learning approaches, we do not have a meta-learner. Instead, we construct a meta-class memory to capture representative middle-level features for better transfer between base and novel classes in few-shot semantic segmentation.

The Meta-class Memory Module (MMM) aims to learn the meta-class information that can be shared among all classes and use it to encode / classify the middle-level features of the query or support image. As shown in Fig.
2, the input to MMM includes the features of either the query image I_Q or the support image I_S, and a set of meta-class embeddings M_1 ... M_N, each with dimension D, where we set N = 50 and D = 256. These meta-class embeddings can be learned during network training through back-propagation. The outputs of MMM are the meta-class activation maps corresponding to the given image. In particular, we first use ResNet-50 as a feature extractor to extract the features of both support and query images. As in previous few-shot segmentation works [34,33], the feature extractor is pre-trained on the image classification task. We choose the features from the 2nd and 3rd levels, since middle-level features are better for transfer, in line with the observation in [34]. Then, we apply a channel-wise concatenation of the 2nd- and 3rd-level features, followed by a 3 × 3 convolution layer, to get the feature maps F_Q and F_S for the query and support images, respectively. Then, we compute the similarity between the meta-class memory M and the image feature maps F to get the meta-class activation maps Act_Q and Act_S for the query and support images, respectively:

Act_n(x, y) = σ(F(x, y)^T M_n),    (1)

where Act_n(x, y) indicates the n-th meta-class activation map at spatial location (x, y), obtained with the n-th embedding, and σ(·) is the Sigmoid function normalizing the value between 0 and 1.

With the two memory activation maps Act_S and Act_Q encoded by the N meta-class embeddings, which are of dimension H × W × N, where H × W are the spatial dimensions, the purpose of the activation propagation module (APM) is to propagate the label information (the support mask) from the support image to the unlabeled query image for mask generation. For APM, we adopt an approach similar to [33] but using our unique memory activation maps, where we treat the vector at each spatial location of the activation maps as a node. Fig. 3 illustrates the process of APM.
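Eq. (1) above amounts to a per-location dot product between the feature map and each memory embedding, squashed by a Sigmoid. A minimal NumPy sketch, where `meta_class_activation` is a hypothetical helper name:

```python
import numpy as np

def meta_class_activation(F, M):
    """Eq. (1): Act_n(x, y) = sigmoid(F(x, y)^T M_n).
    F: (H, W, D) feature map; M: (N, D) meta-class embeddings.
    Returns (H, W, N) activation maps with values in (0, 1)."""
    logits = np.einsum('hwd,nd->hwn', F, M)
    return 1.0 / (1.0 + np.exp(-logits))
```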
In particular, we denote h_q ∈ Act_Q as a query node and h_s ∈ Act_S as a support node, where h ∈ R^N and q, s ∈ {1, 2, ..., HW}. Then, we calculate the cosine similarity e_{q,s} = cos(h_q, h_s) between all node pairs, with one from Act_Q and the other from Act_S:

e_{q,s} = (h_q^T h_s) / (||h_q|| ||h_s||),  q, s ∈ {1, 2, ..., HW}.    (2)

For each query node h_q, we obtain an H × W similarity map e_q, which is element-wise multiplied with the support mask to keep the similarity of the foreground support nodes while setting the similarity of the background support nodes to -∞, followed by Softmax to generate the weights:

w_{q,s} = exp(e_{q,s}) / Σ_{k=1}^{HW} exp(e_{q,k}).    (3)

We then take a weighted sum over all support node features and multiply it with the original query node feature:

v_q = Σ_{s=1}^{HW} w_{q,s} h_s,    (4)

ĥ_q = h_q ⊙ v_q,    (5)

where ⊙ denotes the element-wise product. Combining all the fused query features ĥ_q, we obtain the fused activation map Âct_Q ∈ R^{H×W×N}. Here, (4) essentially selects the most similar foreground support nodes, and (5) highlights the query nodes matched with foreground support nodes while suppressing the query nodes matched with background support nodes, all in the context of the meta-class representation.

PFENet [25] concludes that high-level features can provide a guidance mask indicating the probability of each pixel belonging to the target class. Inspired by this, we further introduce a foreground confidence module (FCM) to produce a high-level foreground confidence map. Compared with the previous process based on MMM and APM, which facilitates the interactions of the query and support images via middle-level meta-class features, FCM facilitates their interactions via high-level within-class features, i.e. the 4th-level features of the pre-trained ResNet-50. For simplicity, we reuse F_Q and F_S for the high-level backbone feature maps of the query and support images, respectively.
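Working on flattened (HW, N) node features, the APM computation of Eqs. (2)-(5) can be sketched as follows. This is an illustration under assumptions: `apm_fuse` is a hypothetical helper name, and the background support nodes are masked to -∞ before the Softmax, as in Eq. (3).

```python
import numpy as np

def apm_fuse(act_q, act_s, support_mask):
    """APM sketch, Eqs. (2)-(5). act_q, act_s: (HW, N) node features;
    support_mask: (HW,) binary mask with at least one foreground node."""
    qn = act_q / (np.linalg.norm(act_q, axis=1, keepdims=True) + 1e-8)
    sn = act_s / (np.linalg.norm(act_s, axis=1, keepdims=True) + 1e-8)
    e = qn @ sn.T                                        # Eq. (2): cosine e_{q,s}
    e = np.where(support_mask[None, :] > 0, e, -np.inf)  # drop background nodes
    w = np.exp(e - e.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                 # Eq. (3): Softmax weights
    v = w @ act_s                                        # Eq. (4): weighted sum
    return act_q * v                                     # Eq. (5): element-wise product
```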
To generate the foreground confidence map C_Q, we first update F_S by element-wise multiplying it with the support mask. Then, similarly to APM, we compute the cosine similarity cos(f_q, f_s) between all pairs of feature nodes f_q ∈ F_Q and f_s ∈ F_S:

cos(f_q, f_s) = (f_q^T f_s) / (||f_q|| ||f_s||),  q, s ∈ {1, 2, ..., HW}.    (6)

For each f_q, we take the maximum similarity among all support nodes as the foreground probability value c_q ∈ R:

c_q = max_{s ∈ {1, 2, ..., HW}} cos(f_q, f_s).    (7)

We then reshape all the probability values c_q into the foreground confidence map C_Q ∈ R^{H×W}. Finally, we normalize all the values in C_Q by a min-max normalization:

C_Q = (C_Q - min(C_Q)) / (max(C_Q) - min(C_Q) + ε),    (8)

where ε is set to 10^{-7}. With the fused query activation map Âct_Q from APM and the foreground confidence map C_Q from FCM, we apply a channel-wise concatenation of the two maps and pass the result to the feature enrichment module (FEM) [25] to generate the final segmentation mask.

The diagram in Fig. 2 is only for the one-shot setting. In K-shot (K > 1) settings, more than one support image is given. A common way is to average the features [33] extracted from the support images and then pass the averaged feature on for further processing. Such simple average fusion might not be good, since some support images could be of poor quality for generating the class prototype features. Thus, we further propose a Quality Measurement Module (QMM) to select high-quality support features.

Specifically, we make use of the cosine similarity e_{q,s} in (2) with -∞ on the background regions, the same as in Section 3.2. For the k-th support image, we have e^k_{q,s}. Then, we compute the quality measure for the k-th support image and the q-th query node:

p^k_q = Σ_{s=1}^{HW} σ(e^k_{q,s}),  k ∈ {1, 2, ..., K},  q ∈ {1, 2, ..., HW},    (9)

where σ(·) is the Sigmoid function.
Essentially, (9) suggests that for the q-th query node, the larger the similarity sum from the k-th support image, the higher the quality / weight we assign to that support image. After that, we reshape p^k_q, q ∈ {1, 2, ..., HW}, into a map P^k_Q ∈ R^{H×W} aligned with the fused activation map Âct_Q. With the obtained K maps P^k_Q, k ∈ {1, 2, ..., K}, we further apply Softmax over the k dimension to normalize the quality maps across different support images. Finally, we treat each quality map P^k_Q as a weight map, multiply it with the corresponding fused activation map Âct^k_Q, and sum them together. In this way, we obtain the final weighted average map Âct_Q, which is then passed to FEM for segmentation prediction.

Image segmentation loss is used to supervise the segmentation mask generation. Specifically, following PFENet [25], we apply multiple cross-entropy losses, with L_seg2 on the final segmentation prediction Ŷ_Q and L^i_seg1 (i ∈ {1, 2, ..., L}) on the intermediate masks Ŷ^i_Q.

Memory Reconstruction Loss. To prevent the meta-class memory from learning similar embeddings, we propose a memory reconstruction loss function that encourages learning meaningful and diverse meta-class embeddings. Specifically, we first apply a channel-wise Softmax function over all the activation maps Act_n(x, y) obtained in (1):

Âct_n(x, y) = exp(Act_n(x, y)) / Σ_{k=1}^{N} exp(Act_k(x, y)).    (10)

Then, we use Âct_n(x, y) and the meta-class embeddings M to reconstruct the original image features F(x, y):

F̂(x, y) = Σ_{n=1}^{N} Âct_n(x, y) M_n.    (11)

Essentially, (10) and (11) select the most similar meta-class embeddings to obtain the reconstructed feature F̂. Reshaping F̂ into D × HW, we then compute the correlation matrix C_f ∈ R^{HW×HW}:

C_f = F̂^T F.    (12)

Finally, we define the reconstruction loss L_Recon as a cross-entropy loss that maximizes the log-likelihood of the diagonal elements in C_f. This reconstruction loss encourages different meta-class embeddings to be different.
This is because, if all M_n were similar, they would not be able to reconstruct the diverse original feature F well. The overall loss function can be summarized as

L = (α / L) Σ_{i=1}^{L} L^i_Seg1 + β L_Seg2 + γ L_Recon,    (13)

where α, β and γ are trade-off parameters, set to 1, 1 and 0.1, respectively.

Datasets. We follow PFENet [25] to conduct experiments on the PASCAL-5 i [23] and COCO [13] datasets. PASCAL-5 i combines PASCAL VOC 2012 with external annotations from the SDS dataset [9]. It contains 20 classes, divided into 4 folds with 5 classes per fold. We randomly sample 5,000 support-query pairs for testing. For COCO, following [25], we split its 80 classes into 4 folds with 20 classes per fold. The class indexes in fold i are selected according to 4x - 3 + i, where x ∈ [1, 20] and i ∈ [1, 4]. We randomly select 20,000 support-query pairs for testing.

Experiment setting. For fair comparisons with previous methods, we consider multiple backbones, including VGG-16, ResNet-50 and ResNet-50-v2. Here, VGG-16 and ResNet-50 are the commonly used backbone networks, and ResNet-50-v2 is a modified version by PFENet [25], where the standard 7 × 7 convolution layer is replaced with a few 3 × 3 convolutional layers. All the backbone networks are pre-trained on the ImageNet classification task and fixed during our model training. We use SGD for training the rest of the network layers, with momentum and weight decay set to 0.9 and 10^{-4}, respectively. In addition, we use a learning rate of 0.0025 and a batch size of 4 to train our model in both 1-shot and 5-shot settings. All of our experiments are conducted on one NVIDIA RTX 2080Ti GPU.

Evaluation metrics. Following previous works [34,16], we adopt the class mean intersection over union (mIoU) as our evaluation metric for ablation studies and final comparisons.

Tables 1 and 2 show the 1-shot and 5-shot mIoU results of different methods on the PASCAL-5 i and COCO datasets, respectively.
We list the training size and the backbone used by the previous methods. CANet, PGNet, CRNet and PMM all use an image size of 321 × 321 with the standard ResNet-50 backbone to extract features, whereas PPNet and PFENet use larger image sizes. As observed in [3], a larger image size usually gives better segmentation performance. Moreover, PFENet also uses the more powerful ResNet-50-v2 backbone. For a fair comparison, we report the performance of our method under different image sizes and backbones. We can see that our MM-Net achieves the best results under all training conditions.

For the experiments on the COCO dataset, PFENet uses an image size of 641 × 641 (as specified in their released code) with ResNet-101-v2 as the backbone. Due to our GPU memory restriction, we still use ResNet-50-v2 as our model backbone and 473 × 473 as our training size. Despite this, our 1-shot results still outperform PFENet by 5.1%, as shown in Table 2. In addition, Fig. 4 gives a few qualitative testing results on fold-0 of the PASCAL-5 i dataset.

Regarding inference speed and GPU memory consumption, our proposed MM-Net consumes 2466 MiB of GPU memory (17 FPS), comparable to the previous SOTA PFENet's 1920 MiB (42 FPS). The slightly higher memory consumption and running time of MM-Net are due to the introduced Meta-class Memory Module.

[Table 1 caption: 1-shot and 5-shot mIoU results on the PASCAL-5 i dataset. We list the backbone and training size used by each method. Our MM-Net outperforms the state-of-the-art under all the experiment settings.]

Number of meta-class embeddings. Table 3 shows that 50 embeddings yield the best performance. This suggests that the network learns more meaningful and effective features with 50 meta-classes. Thus, we use 50 meta-class memory embeddings in the following experiments.

Effect of Meta-class Memory Module. We construct two baselines to show the effectiveness of our proposed MMM.

Figure 5. Visual results of the meta-class activation maps Act.
We randomly select 5 of all 50 activation maps; all activation maps are obtained from the same meta-class memory. In the 1st row, we can see the meta-class memory highlights the human's head, torso, edges, etc.

The first baseline uses only the foreground confidence map for mask decoding (denoted as 'FCM' in Table 4). The second baseline adds the middle-level image features for mask prediction: we extract the middle-level features (2nd and 3rd levels) of the support and query images and, instead of computing the meta-class activation maps, directly pass the image features to APM and FEM to predict the query mask. This baseline is denoted as 'FCM+Feat' in Table 4.

As can be seen in Table 4, our proposed method 'FCM+MMM' improves the segmentation performance by 2.9% in mean mIoU compared with directly using the middle-level features for decoding ('FCM+Feat'). This suggests that our proposed meta-class memory module is able to give better class prototype information for the query mask prediction. Fig. 5 gives some examples of the computed meta-class activation maps Act, where we randomly select 5 of all 50 meta-class activation maps, and all the maps are obtained from the same learned meta-class memory. As we can see, different meta-class embeddings memorize different meta-class features and capture different patterns in the images, e.g., the human head, torso, edges, etc., in the first row.

Features for memory learning. We conduct ablation experiments to analyze which level of features is better for memory learning. Table 5 shows that fusing the 2nd- and 3rd-level features ('2+3') yields the best performance. Our conjecture is that because the 2nd-level features capture more edge information and the 3rd-level features capture more part and object information, a combination of them leads to learning better meta-class memory embeddings.

Effect of Activation Propagation Module.
With the meta-class activation maps of the query and support images, instead of using APM, one simple way to propagate the support mask information to the query activation maps is to apply global average pooling within the foreground region of the support activation maps to get a global average foreground representation vector, and then element-wise multiply it with each node in the query activation maps. This is denoted as 'MMM(2+3)+Global' in Table 5. It can be seen that APM improves the mIoU by 0.6%.

Effect of Memory Reconstruction Loss. Table 6 shows the ablation study on our proposed memory reconstruction loss. The results with the loss are much better than those without it, clearly demonstrating its effectiveness. Note that the reconstruction loss can be applied to the reconstruction of different features, including query features, support features, or both. We can see that applying the loss to support features leads to the overall best performance.

Effect of Quality Measurement Module. Table 7 shows the effectiveness of our proposed Quality Measurement Module. For the baseline method (ours w/o QMM), we use APM to obtain 5 fused activation maps independently from 5 different support images, and then follow the conventional way of averaging the 5 maps and passing the averaged map to FEM for segmentation mask generation. Our proposed QMM improves the mIoU by 0.8%. This suggests that high-quality support samples are more helpful for the query segmentation mask prediction.

A. Effect of Foreground Confidence Module. Table A shows the effectiveness of the Foreground Confidence Module (FCM). The baseline method directly passes the fused meta-class activation maps to FEM for query mask prediction (denoted as 'Ours w/o FCM'). As shown, FCM improves the mIoU by 2.9%, which suggests that high-level features are helpful for the query segmentation mask prediction.
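The 'MMM(2+3)+Global' baseline from the APM ablation above can be sketched as masked global average pooling followed by a broadcast multiply. A sketch on flattened (HW, N) activation maps; `global_fuse` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def global_fuse(act_q, act_s, support_mask):
    """Global-pooling baseline: average the foreground support activations
    into one vector and element-wise multiply it with every query node,
    instead of the pairwise propagation done by APM."""
    fg = support_mask > 0
    g = act_s[fg].mean(axis=0)     # (N,) global foreground representation
    return act_q * g[None, :]      # broadcast over the HW query nodes
```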
B. Comparison with the attention mechanism of CANet. We further implement the attention mechanism of CANet in our method. Specifically, as in CANet, we concatenate the support and query features and pass them through two convolutional layers and a global average pooling layer to obtain the importance weights. The weights are used to fuse the class activation maps and generate the final weighted activation map for the final mask prediction. As shown in Table B, with the attention mechanism of CANet (denoted as "Ours w/ Attn"), our model obtains an mIoU of 61.7% on the PASCAL-5 i dataset, which is lower than ours with QMM (63.4%). This indicates that QMM is more suitable for our method.

C. Visualization results for the ablation study "Features for memory learning". We visualize the meta-class activation maps obtained from different-level features in Fig. A. As shown in the figure, the memory learned from 2nd-level features has high activation on local information (e.g. edges) and the one from 3rd-level features has high activation on whole objects or parts. A combination of them leads to learning better meta-class memory embeddings, which have high activation on meta-class regions.

SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers (Xie, Wang, Yu, Anandkumar, Alvarez, Luo, 2021)

Related Work

Semantic Segmentation. Semantic segmentation can be seen as an extension of image classification from the image level to the pixel level. In the deep learning era [12-16], FCN [1] is the fundamental work of semantic segmentation, a fully convolutional network that performs pixel-to-pixel classification in an end-to-end manner.
After that, researchers focused on improving FCN in different aspects, such as enlarging the receptive field [17-19, 5, 2, 4, 20] and refining the contextual information [21-29].

Methodology. This section introduces SegFormer, our efficient, robust, and powerful segmentation framework without hand-crafted and computationally demanding modules. As depicted in Figure 2, SegFormer consists of two main modules: (1) a hierarchical Transformer encoder to generate high-resolution coarse features and low-resolution fine features; and (2) a lightweight All-MLP decoder to fuse these multi-level features and produce the final semantic segmentation mask.

Given an image of size $H \times W \times 3$, we first divide it into patches of size 4 × 4. Contrary to ViT, which uses patches of size 16 × 16, smaller patches favor the dense prediction task. We then use these patches as input to the hierarchical Transformer encoder to obtain multi-level features at {1/4, 1/8, 1/16, 1/32} of the original image resolution, and pass these multi-level features to the All-MLP decoder to predict the segmentation mask at $\frac{H}{4} \times \frac{W}{4} \times N_{cls}$ resolution, where $N_{cls}$ is the number of categories. In the rest of this section, we detail the proposed encoder and decoder designs and summarize the main differences between our approach and SETR.

We now compare our results with existing approaches on the ADE20K [72], Cityscapes [71] and COCO-Stuff [73] datasets. ADE20K and Cityscapes: Table 2 summarizes our results, including parameters, FLOPs, latency, and accuracy, for ADE20K and Cityscapes. In the top part of the table, we report real-time approaches, including state-of-the-art methods and our results using the MiT-B0 lightweight encoder.
In the bottom part, we focus on performance and report the results of our approach and related works using stronger encoders.

As shown, on ADE20K, SegFormer-B0 yields 37.4% mIoU using only 3.8M parameters and 8.4G FLOPs, outperforming all other real-time counterparts in terms of parameters, FLOPs, and latency. For instance, compared to DeepLabV3+ (MobileNetV2), SegFormer-B0 is 7.4 FPS faster while achieving 3.4% better mIoU. Moreover, SegFormer-B5 outperforms all other approaches, including the previous best SETR, and establishes a new state of the art of 51.8%, which is 1.6% mIoU better than SETR while being significantly more efficient.

Cityscapes test-set results (Method | Encoder | Pre-training | mIoU):
[43] | ResNet-101 | IM-1K | 80.1
CCNet [41] | ResNet-101 | IM-1K | 81.9
OCNet [21] | ResNet-101 | IM-1K | 80.1
Axial-DeepLab [74] | Axial-ResNet-XL | IM-1K | 79.9
SETR [7] | ViT | IM-22K | 81.0
SETR [7] | ViT | IM-22K, Coarse | 81.

On the Cityscapes test set, we follow the common setting [20], merge the validation images into the train set, and report results using ImageNet-1K pre-training and also using Mapillary Vistas [76]. The results are reported in Table 3.

Conclusion. In this paper, we present SegFormer, a simple, clean yet powerful semantic segmentation method with a positional-encoding-free, hierarchical Transformer encoder and a lightweight All-MLP decoder. It avoids the common complex designs of previous methods, leading to both high efficiency and high performance. SegFormer not only achieves new state-of-the-art results on common datasets, but also shows strong zero-shot robustness. We hope our method can serve as a solid baseline for semantic segmentation and motivate further research. One limitation is that although our smallest model, with 3.7M parameters, is smaller than known CNN models, it is unclear whether it can work well on an edge-device chip with only 100K of memory.
We leave it for future work.

• S_i: the stride of the overlapping patch embedding in Stage i;
• P_i: the padding size of the overlapping patch embedding in Stage i;
• C_i: the channel number of the output of Stage i;
• L_i: the number of encoder layers in Stage i;
• R_i: the reduction ratio of the Efficient Self-Attention in Stage i;
• N_i: the head number of the Efficient Self-Attention in Stage i;
• E_i: the expansion ratio of the feed-forward layer [78] in Stage i.

Table 6 shows the detailed information of our MiT series. For ease of discussion, we assign the code names B0 to B5 to the MiT encoders, where B0 is the smallest model, designed for real-time use, and B5 is the largest model, designed for high performance.

Experiments and Results. Datasets: We used three publicly available datasets: Cityscapes [71], ADE20K [72] and COCO-Stuff [73]. ADE20K is a scene parsing dataset covering 150 fine-grained semantic concepts and consisting of 20,210 images. Cityscapes is a driving dataset for semantic segmentation consisting of 5,000 fine-annotated high-resolution images with 19 categories. COCO-Stuff covers 172 labels and consists of 164K images: 118K for training, 5K for validation, 20K for test-dev and 20K for the test-challenge.

Implementation details: We used the mmsegmentation codebase and trained on a server with 8 Tesla V100 GPUs. We pre-trained the encoder on the ImageNet-1K dataset and randomly initialized the decoder. During training, we applied data augmentation through random resizing with ratio 0.5-2.0, random horizontal flipping, and random cropping to 512 × 512, 1024 × 1024, and 512 × 512 for ADE20K, Cityscapes and COCO-Stuff, respectively. Following [9], we set the crop size to 640 × 640 on ADE20K for our largest model, B5. We trained the models using the AdamW optimizer for 160K iterations on ADE20K and Cityscapes, and 80K iterations on COCO-Stuff. Exceptionally, for the ablation studies, we trained the models for 40K iterations.
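The per-stage MiT hyper-parameters listed earlier (K_i through E_i) could be collected into a single configuration. The following is a hypothetical sketch of a B0-like model: the K/S/P values follow the patch-merging settings quoted in the text and R follows [64, 16, 4, 1], but the C/L/N/E numbers are illustrative assumptions, not the paper's Table 6.

```python
# Hypothetical MiT-B0-style configuration; only K, S, P, R are taken from
# the text, the rest are assumed for illustration.
mit_b0 = {
    "K": [7, 3, 3, 3],        # patch size of the overlapping patch embedding
    "S": [4, 2, 2, 2],        # stride of the overlapping patch embedding
    "P": [3, 1, 1, 1],        # padding size
    "C": [32, 64, 160, 256],  # output channels per stage (assumed)
    "L": [2, 2, 2, 2],        # encoder layers per stage (assumed)
    "R": [64, 16, 4, 1],      # Efficient Self-Attention reduction ratio
    "N": [1, 2, 5, 8],        # attention heads per stage (assumed)
    "E": [8, 8, 4, 4],        # FFN expansion ratio per stage (assumed)
}

# Scaling from B0 to B5 then amounts to growing C_i and L_i while keeping
# the same four-stage structure.
```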
We used a batch size of 16 for ADE20K and COCO-Stuff, and a batch size of 8 for Cityscapes. The learning rate was set to an initial value of 0.00006 with a "poly" LR schedule with factor 1.0 by default. For simplicity, we did not adopt widely used tricks such as OHEM, auxiliary losses or class-balance loss. During evaluation, we rescale the short side of the image to the training crop size and keep the aspect ratio for ADE20K and COCO-Stuff. For Cityscapes, we do inference using a sliding-window test with 1024 × 1024 crops. We report semantic segmentation performance using mean Intersection over Union (mIoU). In Figure 5, we present more qualitative results on Cityscapes, ADE20K and COCO-Stuff, compared with SETR and DeepLabV3+.

Compared to SETR, SegFormer predicts masks with significantly finer details near object boundaries, because our Transformer encoder captures much higher-resolution features than SETR, preserving more detailed texture information. Compared to DeepLabV3+, SegFormer reduces long-range errors, benefiting from the larger effective receptive field of the Transformer encoder compared to a ConvNet.

Figure 2: The proposed SegFormer framework consists of two main modules: a hierarchical Transformer encoder to extract coarse and fine features; and a lightweight All-MLP decoder to directly fuse these multi-level features and predict the semantic segmentation mask. "FFN" indicates feed-forward network.

Other directions include introducing boundary information [30-37], designing various attention modules [38-46], and using AutoML technologies [47-51].
These methods significantly improve semantic segmentation performance at the expense of introducing many empirical modules, making the resulting frameworks computationally demanding and complicated. More recent methods have proved the effectiveness of Transformer-based architectures for semantic segmentation [7, 46]; however, these methods are still computationally demanding.

Transformer backbones. ViT [6] is the first work to prove that a pure Transformer can achieve state-of-the-art performance in image classification. ViT treats each image as a sequence of tokens and feeds them to multiple Transformer layers for classification. Subsequently, DeiT [52] further explores a data-efficient training strategy and a distillation approach for ViT. More recent methods such as T2T-ViT [53], CPVT [54], TNT [55], CrossViT [56] and LocalViT [57] introduce tailored changes to ViT to further improve image classification performance.

Beyond classification, PVT [8] is the first work to introduce a pyramid structure into Transformers, demonstrating the potential of a pure Transformer backbone compared to CNN counterparts in dense prediction tasks. After that, methods such as Swin [9], CvT [58], CoaT [59], LeViT [60] and Twins [10] enhance the local continuity of features and remove the fixed-size position embedding to improve the performance of Transformers in dense prediction tasks.

Transformers for specific tasks. DETR [52] is the first work to use Transformers to build an end-to-end object detection framework without non-maximum suppression (NMS). Other works have also used Transformers for a variety of tasks such as tracking [61, 62], super-resolution [63], ReID [64], colorization [65], retrieval [66] and multi-modal learning [67, 68]. For semantic segmentation, SETR [7] adopts ViT [6] as a backbone to extract features, achieving impressive performance. However, these Transformer-based methods have very low efficiency and are thus difficult to deploy in real-time applications.
We design a series of Mix Transformer encoders (MiT), MiT-B0 to MiT-B5, with the same architecture but different sizes. MiT-B0 is our lightweight model for fast inference, while MiT-B5 is the largest model, for the best performance. Our design for MiT is partly inspired by ViT but tailored and optimized for semantic segmentation.

Hierarchical Feature Representation. Unlike ViT, which can only generate a single-resolution feature map, the goal of this module is, given an input image, to generate CNN-like multi-level features. These features provide high-resolution coarse features and low-resolution fine-grained features that usually boost the performance of semantic segmentation. More precisely, given an input image with a resolution of $H \times W \times 3$, we perform patch merging to obtain a hierarchical feature map $F_i$ with a resolution of $\frac{H}{2^{i+1}} \times \frac{W}{2^{i+1}} \times C_i$, where $i \in \{1, 2, 3, 4\}$ and $C_{i+1}$ is larger than $C_i$.

Overlapped Patch Merging. Given an image patch, the patch merging process used in ViT unifies an $N \times N \times 3$ patch into a $1 \times 1 \times C$ vector. This can easily be extended to unify a $2 \times 2 \times C_i$ feature patch into a $1 \times 1 \times C_{i+1}$ vector to obtain hierarchical feature maps. Using this, we can shrink our hierarchical features from $F_1$ ($\frac{H}{4} \times \frac{W}{4} \times C_1$) to $F_2$ ($\frac{H}{8} \times \frac{W}{8} \times C_2$), and then iterate for any other feature map in the hierarchy. This process was initially designed to combine non-overlapping image or feature patches; therefore, it fails to preserve the local continuity around those patches. Instead, we use an overlapping patch merging process. To this end, we define K, S, and P, where K is the patch size, S is the stride between two adjacent patches, and P is the padding size. In our experiments, we set K = 7, S = 4, P = 3, and K = 3, S = 2, P = 1 to perform overlapping patch merging that produces features of the same size as the non-overlapping process.

Efficient Self-Attention.
The main computation bottleneck of the encoder is the self-attention layer. In the original multi-head self-attention process, each of the heads Q, K, V has the same dimensions $N \times C$, where $N = H \times W$ is the length of the sequence, and self-attention is estimated as:

$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(\frac{QK^{\top}}{\sqrt{d_{head}}}\right)V.$    (1)

The computational complexity of this process is $O(N^2)$, which is prohibitive for large image resolutions. Instead, we use the sequence reduction process introduced in [8]. This process uses a reduction ratio R to reduce the length of the sequence as follows:

$\hat{K} = \mathrm{Reshape}\left(\frac{N}{R}, C \cdot R\right)(K), \quad K = \mathrm{Linear}(C \cdot R, C)(\hat{K}),$    (2)

where K is the sequence to be reduced, $\mathrm{Reshape}(\frac{N}{R}, C \cdot R)(K)$ reshapes K into a tensor of shape $\frac{N}{R} \times (C \cdot R)$, and $\mathrm{Linear}(C_{in}, C_{out})(\cdot)$ refers to a linear layer taking a $C_{in}$-dimensional tensor as input and generating a $C_{out}$-dimensional tensor as output. Therefore, the new K has dimensions $\frac{N}{R} \times C$. As a result, the complexity of the self-attention mechanism is reduced from $O(N^2)$ to $O(\frac{N^2}{R})$. In our experiments, we set R to [64, 16, 4, 1] from stage-1 to stage-4.

Mix-FFN. ViT uses positional encoding (PE) to introduce location information. However, the resolution of the PE is fixed; therefore, when the test resolution differs from the training one, the positional code needs to be interpolated, which often leads to dropped accuracy. To alleviate this problem, CPVT [54] uses a 3 × 3 Conv together with the PE to implement a data-driven PE. We argue that positional encoding is actually not necessary for semantic segmentation. Instead, we introduce Mix-FFN, which considers the effect of zero padding to leak location information [69] by directly using a 3 × 3 Conv in the feed-forward network (FFN). Mix-FFN can be formulated as:

$x_{out} = \mathrm{MLP}(\mathrm{GELU}(\mathrm{Conv}_{3\times 3}(\mathrm{MLP}(x_{in})))) + x_{in},$    (3)

where $x_{in}$ is the feature from the self-attention module. Mix-FFN mixes a 3 × 3 convolution and an MLP into each FFN.
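To make the complexity reduction of Eqs. (1)-(2) concrete, here is a NumPy shape-level sketch; the Reshape and Linear operators are modelled with a reshape and a random weight matrix (an assumption for illustration), so only the shapes and the N × N/R attention matrix are meaningful:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reduced_attention(Q, K, V, R):
    """Eq. (2): fold R consecutive tokens of K (and V) into the channel
    dimension, project back to C channels, then apply Eq. (1). The score
    matrix becomes (N, N/R) instead of (N, N)."""
    N, C = K.shape
    W = np.random.randn(C * R, C) / np.sqrt(C * R)   # stand-in Linear(C*R, C)
    K_red = K.reshape(N // R, C * R) @ W             # (N/R, C)
    V_red = V.reshape(N // R, C * R) @ W             # (N/R, C)
    scores = softmax(Q @ K_red.T / np.sqrt(C))       # (N, N/R)
    return scores @ V_red                            # (N, C)

N, C, R = 64 * 64, 32, 16
Q, K, V = (np.random.randn(N, C) for _ in range(3))
out = reduced_attention(Q, K, V, R)
```

With R = 16 the attention matrix shrinks from 4096 × 4096 to 4096 × 256, matching the stated $O(N^2) \to O(N^2/R)$ reduction.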
In our experiments, we will show that a 3 × 3 convolution is sufficient to provide positional information for Transformers. In particular, we use depth-wise convolutions to reduce the number of parameters and improve efficiency.

SegFormer incorporates a lightweight decoder consisting only of MLP layers, thus avoiding the hand-crafted and computationally demanding components typically used in other methods. The key to enabling such a simple decoder is that our hierarchical Transformer encoder has a larger effective receptive field (ERF) than traditional CNN encoders.

The proposed All-MLP decoder consists of four main steps. First, the multi-level features $F_i$ from the MiT encoder go through an MLP layer to unify the channel dimension. Second, the features are up-sampled to 1/4 resolution and concatenated. Third, an MLP layer fuses the concatenated features F. Finally, another MLP layer takes the fused feature to predict the segmentation mask M at $\frac{H}{4} \times \frac{W}{4} \times N_{cls}$ resolution, where $N_{cls}$ is the number of categories. This lets us formulate the decoder as:

$\hat{F}_i = \mathrm{Linear}(C_i, C)(F_i), \ \forall i$
$\hat{F}_i = \mathrm{Upsample}\left(\frac{H}{4} \times \frac{W}{4}\right)(\hat{F}_i), \ \forall i$
$F = \mathrm{Linear}(4C, C)(\mathrm{Concat}(\hat{F}_i)), \ \forall i$
$M = \mathrm{Linear}(C, N_{cls})(F),$    (4)

where M refers to the predicted mask, and $\mathrm{Linear}(C_{in}, C_{out})(\cdot)$ refers to a linear layer with $C_{in}$ and $C_{out}$ as input and output vector dimensions, respectively.

Effective Receptive Field Analysis. For semantic segmentation, maintaining a large receptive field to include context information has been a central issue [5, 19, 20]. Here, we use the effective receptive field (ERF) [70] as a toolkit to visualize and interpret why our MLP decoder design is so effective on Transformers. In Figure 3, we visualize the ERFs of the four encoder stages and the decoder heads for both DeepLabv3+ and SegFormer.
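The four decoder steps of Eq. (4) can be sketched at the shape level; in this NumPy sketch, random matrices stand in for the learned Linear layers and nearest-neighbour repetition stands in for Upsample (both assumptions for illustration):

```python
import numpy as np

def mlp_decoder(features, C=256, n_cls=19):
    """features: list of (H_i, W_i, C_i) maps from the four encoder stages,
    ordered from the 1/4-resolution stage downwards."""
    unified = []
    for F in features:
        Hi, Wi, Ci = F.shape
        W1 = np.random.randn(Ci, C) / np.sqrt(Ci)
        Fu = F @ W1                              # unify channels -> (H_i, W_i, C)
        scale = features[0].shape[0] // Hi       # upsample to the 1/4 resolution
        Fu = Fu.repeat(scale, axis=0).repeat(scale, axis=1)
        unified.append(Fu)
    F_cat = np.concatenate(unified, axis=-1)     # (H/4, W/4, 4C)
    W2 = np.random.randn(4 * C, C) / np.sqrt(4 * C)
    W3 = np.random.randn(C, n_cls) / np.sqrt(C)
    return (F_cat @ W2) @ W3                     # (H/4, W/4, n_cls)

# Stage outputs at 1/4, 1/8, 1/16, 1/32 of a 512x512 input
feats = [np.random.randn(128 // 2**i, 128 // 2**i, c)
         for i, c in enumerate([32, 64, 160, 256])]
logits = mlp_decoder(feats)
```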
We can make the following observations:

• The ERF of DeepLabv3+ is relatively small even at Stage-4, the deepest stage.
• SegFormer's encoder naturally produces local attentions that resemble convolutions at lower stages, while being able to output highly non-local attentions that effectively capture context at Stage-4.
• As shown with the zoomed-in patches in Figure 3, the ERF of the MLP head (blue box) differs from that of Stage-4 (red box) in having significantly stronger local attention in addition to the non-local attention.

The limited receptive field of CNNs requires resorting to context modules such as ASPP [18], which enlarge the receptive field but inevitably become heavy. Our decoder design benefits from the non-local attention in Transformers and leads to a larger receptive field without being complex. The same decoder design, however, does not work well on CNN backbones, since the overall receptive field is upper-bounded by the limited one at Stage-4, as we will verify later in Table 1d. More importantly, our decoder design essentially takes advantage of a Transformer-induced feature that produces both highly local and non-local attention at the same time. By unifying them, our MLP decoder renders complementary and powerful representations while adding few parameters. This is another key reason that motivated our design: taking the non-local attention from Stage-4 alone is not enough to produce good results, as will be verified in Table 1d.

SegFormer contains several designs that are more efficient and powerful than SETR [7]:

• We only use ImageNet-1K for pre-training, whereas ViT in SETR is pre-trained on the larger ImageNet-22K.
• SegFormer's encoder has a hierarchical architecture, which is smaller than ViT and can capture both high-resolution coarse and low-resolution fine features.
In contrast, SETR's ViT encoder can only generate a single low-resolution feature map.

• We remove the positional embedding in the encoder, while SETR uses a fixed-shape positional embedding, which decreases accuracy when the resolution at inference differs from the training one.
• Our MLP decoder is more compact and less computationally demanding than the one in SETR, leading to a negligible computational overhead. In contrast, SETR requires heavy decoders with multiple 3 × 3 convolutions.

Influence of the model size. We first analyze the effect of increasing the size of the encoder on performance and model efficiency. Figure 1 shows performance vs. model efficiency on ADE20K as a function of the encoder size, and Table 1a summarizes the results for the three datasets.

The first thing to observe here is the size of the decoder compared to the encoder. As shown, for the lightweight model, the decoder has only 0.4M parameters; for the MiT-B5 encoder, the decoder takes up only 4% of the total number of parameters in the model. In terms of performance, we can observe that, overall, increasing the size of the encoder yields consistent improvements on all datasets. Our lightweight model, SegFormer-B0, is compact and efficient while maintaining competitive performance, showing that our method is very convenient for real-time applications. On the other hand, our largest model, SegFormer-B5, achieves state-of-the-art results on all three datasets, showing the potential of our Transformer encoder.

Influence of C, the MLP decoder channel dimension. We now analyze the influence of the channel dimension C in the MLP decoder (see Section 3.2). In Table 1b we show performance, FLOPs, and parameters as a function of this dimension. We can observe that setting C = 256 provides very competitive performance and computational cost. The performance increases as C increases; however, this leads to larger and less efficient models.
Interestingly, this performance plateaus for channel dimensions wider than 768. Given these results, we choose C = 256 for our real-time models SegFormer-B0 and B1, and C = 768 for the rest.

Mix-FFN vs. Positional Encoding (PE). In this experiment, we analyze the effect of removing the positional encoding in the Transformer encoder in favor of the proposed Mix-FFN. To this end, we train Transformer encoders with positional encoding (PE) and with the proposed Mix-FFN, and perform inference on Cityscapes at two different image resolutions: 768 × 768 using a sliding window, and 1024 × 2048 using the whole image.

Table 1c shows the results of this experiment. As shown, for a given resolution, our approach using Mix-FFN clearly outperforms the one using positional encoding. Moreover, our approach is less sensitive to differences in test resolution: accuracy drops 3.3% when using positional encoding at a lower resolution, whereas with the proposed Mix-FFN the drop is reduced to only 0.7%. From these results, we conclude that the proposed Mix-FFN produces better and more robust encoders than positional encoding.

Effective receptive field evaluation. In Section 3.2, we argued that our MLP decoder benefits from the Transformer's larger effective receptive field compared to CNN models. To quantify this effect, in this experiment we compare the performance of our MLP decoder when used with CNN-based encoders such as ResNet or ResNeXt. As shown in Table 1d, coupling our MLP decoder with a CNN-based encoder yields significantly lower accuracy than coupling it with the proposed Transformer encoder. Intuitively, as a CNN has a smaller receptive field than the Transformer (see the analysis in Section 3.2), the MLP decoder is not enough for global reasoning. In contrast, coupling our Transformer encoder with the MLP decoder leads to the best performance.
Moreover, for the Transformer encoder, it is necessary to combine low-level local features with high-level non-local features instead of using only high-level features.

Model robustness is important for many safety-critical tasks such as autonomous driving [77]. In this experiment, we evaluate the robustness of SegFormer to common corruptions and perturbations. To this end, we follow [77] and generate Cityscapes-C, which expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from the noise, blur, weather and digital categories. We compare our method with variants of DeepLabV3+ and other methods as reported in [77]. The results of this experiment are summarized in Table 5.

Our method significantly outperforms previous methods, yielding a relative improvement of up to 588% on Gaussian noise and up to 295% on snow weather. The results indicate the strong robustness of SegFormer, which we envision will benefit safety-critical applications where robustness matters.

In Figure 6, we show some representative images and the effective receptive fields (ERFs) of DeepLabV3+ and SegFormer. Beyond being larger, SegFormer's ERF is more sensitive to the context of the image: it learned the patterns of roads, cars, and buildings, while DeepLabV3+'s ERF shows a relatively fixed pattern. The results also indicate that our Transformer encoder has a stronger feature extraction ability than ConvNets.

In this section, we show in detail the zero-shot robustness of SegFormer compared with DeepLabV3+. Following [77], we test 3 severities for the 4 kinds of "Noise" and 5 severities for the remaining 12 kinds of corruptions and perturbations. As shown in Figure 7, as severity increases, DeepLabV3+ shows considerable performance degradation, whereas the performance of SegFormer is relatively stable.
Moreover, SegFormer has significant advantages over DeepLabV3+ on all corruptions/perturbations and all severities, demonstrating excellent zero-shot robustness.

We thank Ding Liang, Zhe Chen and Yaojun Liu for insightful discussions, without which this paper would not be possible.

In this section, we list some important hyper-parameters of our Mix Transformer (MiT) encoder. By changing these parameters, we can easily scale the encoder from B0 to B5. In summary, the hyper-parameters of our MiT are listed as follows:

• K_i: the patch size of the overlapping patch embedding in Stage i;

Prototype mixture models for few-shot semantic segmentation (Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, Qixiang Ye; 2020; arXiv:2008.03898)

Introduction. Substantial progress has been made in semantic segmentation [1-9]. This has been broadly attributed to the availability of large datasets with mask annotations and of convolutional neural networks (CNNs) capable of absorbing the annotation information. However, annotating object masks for large-scale datasets is laborious, expensive, and can be impractical [10-12]. It is also not consistent with cognitive learning, which can build a model upon few-shot supervision [13].

Given a few examples, termed support images, and the related segmentation masks [14], few-shot segmentation aims to segment query images based on a feature representation learned on training images. It remains a challenging problem when we consider that the target category is not included in the training data, while objects within the support and query images differ significantly in appearance and pose.

By introducing the metric learning framework, Shaban et al. [15], Zhang et al. [16], and Dong et al. [17] contributed early few-shot semantic segmentation methods.
They also introduced the concept of a "prototype", which refers to a weight vector calculated with global average pooling, guided by the ground-truth masks, over the embedded feature maps. Such a vector, which squeezes discriminative information across feature channels, is used to guide the feature comparison between support image(s) and query images for semantic segmentation.

Despite clear progress, we argue that the commonly used prototype model is problematic because the spatial layout of objects is completely dropped by global average pooling, Fig. 1 (upper). A single prototype causes semantic ambiguity around various object parts and deteriorates the distribution of features [18]. Recent approaches have alleviated this issue by prototype alignment [19], feature boosting [14], and iterative mask refinement [20]. However, the semantic ambiguity problem caused by global average pooling remains unsolved.

In this paper, we propose prototype mixture models (PMMs) and focus on solving the semantic ambiguity problem in a systematic manner. During the training procedure, the prototypes are estimated using an Expectation-Maximization (EM) algorithm, which treats each deep pixel (a feature vector) within the mask region as a positive sample. PMMs are primarily concerned with representing the diverse foreground regions by estimating mixed prototypes for various object parts, Fig. 1 (lower). They also enhance the discriminative capacity of features by modeling background regions.

The few-shot segmentation procedure is implemented in a metric learning framework with two network branches (a support branch and a query branch), Fig. 2. In this framework, PMMs are utilized in a duplex manner to segment a query image. On the one hand, they are regarded as spatially squeezed representations, which are matched (P-Match) with query features to activate feature channels related to the object class.
On the other hand, each vector is regarded as a C-dimensional linear classifier, which multiplies (P-Conv) with the query features in an element-wise manner to produce a probability map. In this way, the channel-wise and spatial semantic information of PMMs is fully exploited to segment the query image.

The contributions of our work are summarized as follows:
- We propose prototype mixture models (PMMs), with the goal of enhancing few-shot semantic segmentation by fully leveraging the semantics of limited support image(s). PMMs are estimated using an Expectation-Maximization (EM) algorithm, which is integrated with feature learning in a plug-and-play manner.
- We propose a duplex strategy, which treats PMMs as both representations and classifiers, to activate spatial and channel-wise semantics for segmentation.
- We assemble PMMs into RPMMs using a residual structure and significantly improve upon the state of the art.

Related Work. Semantic Segmentation. Semantic segmentation, which performs per-pixel classification of a class of objects, has been extensively investigated. State-of-the-art methods, such as UNet [2], PSPNet [1], and DeepLab [3-5], are based on fully convolutional networks (FCNs) [21]. Semantic segmentation has been extended to instance segmentation [8] and panoptic segmentation [22], which share useful modules, e.g., Atrous Spatial Pyramid Pooling (ASPP) [4] and multi-scale feature aggregation [1], with few-shot segmentation. The clustering method used in SegSort [7], which partitioned objects into parts using a divide-and-conquer strategy, provides insight for this study.

Few-shot Learning. Existing methods can be broadly categorized as metric learning [23, 24, 18], meta-learning [25-28], or data augmentation. Metric-learning-based methods train networks to predict whether two images/regions belong to the same category.
Meta-learning-based approaches specify optimization or loss functions that force faster adaptation of the parameters to new categories with few examples. Data augmentation methods learn to generate additional examples for unseen categories [29, 30].

In the metric learning framework, the effect of prototypes for few-shot learning has been demonstrated. With a simple prototype, e.g., a linear layer learned on top of a frozen CNN [31], state-of-the-art results can be achieved from a simple baseline. This motivates applying prototypes to capture representative and discriminative features.

Few-shot Segmentation. Existing few-shot segmentation approaches largely follow the metric learning framework: learning knowledge, e.g., a prototype vector, from a set of support images, and then feeding the learned knowledge to a metric module to segment query images [19].

In OSLSM [15], a two-branch network consisting of a support branch and a query branch was proposed for few-shot segmentation. The support branch is devoted to generating a model from the support set, which is then used to tune the segmentation of an image in the query branch. In PL [17], the idea of prototypical networks was employed to tackle few-shot segmentation with metric learning. SG-One [16] also used a prototype vector to guide the semantic segmentation procedure: to obtain a squeezed representation of the support image, a masked average pooling strategy produces the prototype vector, and a cosine similarity metric is then applied to build the relationship between the guidance features and the features of pixels from the query image. PANet [19] further introduced a prototype alignment regularization between the support and query branches to fully exploit knowledge from support images for better generalization. CANet [20] introduced a dense comparison module, which effectively exploits multiple levels of feature discriminativeness from CNNs to make dense feature comparisons.
With this approach comes an iterative optimization module that refines the segmentation masks. The FWB approach [14] focused on the discriminativeness of prototype vectors (support features) by leveraging foreground-background feature differences of support images. It also used an ensemble of prototypes and similarity maps to handle the diversity of object appearances.

As the core of metric learning in few-shot segmentation, the prototype vector is commonly calculated by global average pooling. However, such a strategy typically disregards the spatial extent of objects and tends to mix semantics from various parts. This unintended mixing seriously deteriorates the diversity of prototype vectors and the feature representation capacity. Recent approaches alleviated this problem using iterative mask refinement [20] or model ensembles [14]. However, issues remain when using single prototypes to represent object regions, and the semantic ambiguity problem remains unsolved.

Our research is inspired by the prototypical network [32], which learns a metric space where classification is performed using distances to the prototype of each class. The essential differences are twofold: (1) a prototype in the prototypical network [32] represents a class of samples, while a prototype in our approach represents an object part; (2) the prototypical network does not involve mixing prototypes for a single sample or a class of samples.

3 The Proposed Approach

Methodology. During training, the features $S \in \mathbb{R}^{W \times H \times C}$ of the support image are considered as a sample set with $W \times H$ C-dimensional samples. S is spatially partitioned into foreground samples $S^+$ and background samples $S^-$, where $S^+$ corresponds to feature vectors within the mask of the support image and $S^-$ to those outside the mask. $S^+$ is used to learn foreground PMMs$^+$ corresponding to object parts, Fig. 3, and $S^-$ to learn background PMMs$^-$.
Without loss of generality, the models and the learning procedure are defined for PMMs, which represent either PMMs^+ or PMMs^−. Models. PMMs are defined as a probability mixture model which linearly combines probabilities from base distributions, as\np(s_i|θ) = Σ_{k=1}^{K} w_k p_k(s_i|θ), (1)\nwhere w_k denotes the mixing weights satisfying 0 ≤ w_k ≤ 1 and Σ_{k=1}^{K} w_k = 1. θ denotes the model parameters, which are learned when estimating PMMs. s_i ∈ S denotes the i-th feature sample and p_k(s_i|θ) denotes the k-th base model, which is a probability model based on a kernel distance function, as\np_k(s_i|θ) = β(θ) e^{Kernel(s_i, µ_k)}, (2)\nwhere β(θ) is the normalization constant and µ_k ∈ θ is one of the parameters. For Gaussian mixture models (GMMs) with fixed co-variance, the kernel function is a radial basis function (RBF), Kernel(s_i, µ_k) = −||s_i − µ_k||_2^2. For the von Mises-Fisher (VMF) model [33], the kernel function is defined as a cosine distance, Kernel(s_i, µ_k) = µ_k^T s_i / (||µ_k||_2 ||s_i||_2), where µ_k is the mean vector of the k-th model. Considering the metric learning framework used, the vector distance function is more appropriate in our approach, as validated in experiments. Based on the vector distance, PMMs are defined as\np_k(s_i|θ) = β_c(κ) e^{κ µ_k^T s_i}, (3)\nwhere θ = {µ, κ}, β_c(κ) = κ^{c/2−1} / ((2π)^{c/2} I_{c/2−1}(κ)) is the normalization coefficient, and I_ν(·) denotes the Bessel function. κ denotes the concentration parameter, which is empirically set to κ = 20 in experiments.\nModel Learning. PMMs are estimated using the EM algorithm, which iterates E-steps and M-steps. In each E-step, given the model parameters and the extracted sample features, we calculate the expectation of sample s_i as\nE_{ik} = p_k(s_i|θ) / Σ_{k'=1}^{K} p_{k'}(s_i|θ) = e^{κ µ_k^T s_i} / Σ_{k'=1}^{K} e^{κ µ_{k'}^T s_i}. (4)\nIn each M-step, the expectation is used to update the mean vectors of PMMs. Algorithm 1 (excerpt): Estimate µ^− upon S^− by iterative EM steps defined by Eqs.
4 and 5; Activate Q using P-Match and P-Conv defined on µ^+ and µ^−, Fig. 4; Predict a segmentation mask and calculate the segmentation loss; Update α to minimize the cross-entropy loss at the query branch, Fig. 2. end for. Here N = W × H denotes the number of samples, and\nµ_k = Σ_{i=1}^{N} E_{ik} s_i / Σ_{i=1}^{N} E_{ik}. (5)\nAfter model learning, the mean vectors µ^+ = {µ^+_k, k = 1, ..., K} and µ^− = {µ^−_k, k = 1, ..., K} are used as prototype vectors to extract convolutional features for the query image. The mixture coefficients w_k are ignored so that each prototype vector has the same importance for semantic segmentation. Obviously, each prototype vector is the mean of a cluster of samples. Such a prototype vector can represent a region around an object part in the original image due to the receptive field effect, Fig. 3. To further enhance the representative capacity of the model, we implement a model ensemble by stacking multiple PMMs, Fig. 5. Stacked PMMs, termed residual PMMs (RPMMs), leverage the residual from the previous query branch to supervise the next query branch for fine-grained segmentation. This is computationally easier, as it pursues the minimization of residuals between branches rather than struggling to combine multiple models to fit a ground-truth mask.\nRPMMs not only further improve the performance but also define a new model ensemble strategy. This incorporates the advantages of model residual learning, which is inspired by the idea of side-output residual [34,35] but has the essential difference of handling models instead of features. This is also different from the ensemble of experts [14], which generates an ensemble of the support features guided by the gradient of the loss. Fig. 6. Activation maps by PMMs and CANet [20] (a single prototype). PMMs produce multiple probability maps and fuse them into a mixed map, which facilitates activating and segmenting the complete object extent (first two rows) or multiple objects (last row).
CANet, which uses a single prototype to segment the object, tends to miss object parts. (Best viewed in color)\nDatasets. We evaluate our model on Pascal-5^i and COCO-20^i. Pascal-5^i is a dataset specified for few-shot semantic segmentation in OSLSM [15], which consists of the Pascal VOC 2012 dataset with extra annotations from extended SDS [36]. The 20 object categories are partitioned into four splits, with three for training and one for testing. At test time, 1000 support-query pairs were randomly sampled in the test split [20]. Following FWB [14], we create COCO-20^i from the MSCOCO 2017 dataset. The 80 classes are divided into 4 splits, each containing 20 classes, and the val dataset is used for performance evaluation. The other settings are the same as those for Pascal-5^i.\nEvaluation Metric. Mean intersection-over-union (mIoU), defined as the mean of the IoUs over all image categories, was employed as the metric for performance evaluation. For each category, the IoU is calculated by IoU = TP / (TP + FP + FN), where TP, FP and FN respectively denote the number of true positive, false positive and false negative pixels of the predicted segmentation masks. In Fig. 6, we visualize the probability maps produced by positive prototypes of PMMs. We also visualize and compare the activation maps and segmentation masks produced by PMMs and CANet. PMMs produce multiple probability maps and fuse them into a mixed probability map, which facilitates activating the complete object extent (first two rows). The advantage in terms of representation capacity is that PMMs perform better than CANet when segmenting multiple objects within the same image (last row). By comparison, CANet, using a single prototype to activate the object, tends to miss object parts or whole objects. The probability maps produced by PMMs validate our idea, i.e., that prototypes correlate to multiple regions and alleviate semantic ambiguity. Table 1. Ablation study. 'Mean' denotes mean mIoU on Pascal-5^i with PMMs using three prototypes.
The first row is the baseline method, which uses neither PMMs nor RPMMs. In Fig. 7, we compare the segmentation results of the baseline method and the proposed modules. The segmentation results show that PMMs^+ (P-Match) can improve the recall rate by segmenting more target pixels. By introducing background prototypes, PMMs reduce false positive pixels, which validates that the background mixture models can improve the discriminative capability of the model. RPMMs further improve the segmentation results by refining object boundaries around hard pixels.", "Conclusion": "We proposed prototype mixture models (PMMs), which correlate diverse image regions with multiple prototypes to solve the semantic ambiguity problem. During training, PMMs incorporate rich channel-wise and spatial semantics from limited support images. During inference, PMMs are matched with query features in a duplex manner to perform accurate semantic segmentation. On the large-scale MS COCO dataset, PMMs improved the performance of few-shot segmentation, in striking contrast with state-of-the-art approaches. As a general method to capture the diverse semantics of object parts given few support examples, PMMs provide fresh insight into the few-shot learning problem.", "Experiment_and_Results": "Implementation Details. Our approach utilizes CANet [20] without iterative optimization as the baseline, which uses VGG16 or ResNet50 as the backbone CNN for feature extraction. During training, four data augmentation strategies, including normalization, horizontal flipping, random cropping and random resizing, are used [20]. Our approach is implemented upon PyTorch 1.0 and run on Nvidia 2080Ti GPUs. The EM algorithm iterates 10 rounds to calculate PMMs for each image. The network with a cross-entropy loss is optimized by SGD with an initial learning rate of 0.0035 and a momentum of 0.9 for 200,000 iterations, with 8 pairs of support-query images per batch.
The learning rate decays following the \"poly\" policy defined in DeepLab [4]. For each training step, the categories in the train split are randomly selected, and then the support-query pairs are randomly sampled from the selected categories.", "Extra": "The few-shot segmentation task is to classify pixels in query images into foreground objects or background by solely referring to a few labeled support images containing objects of the same categories. The goal of the training procedure is to learn a segmentation model from a large number of images whose categories differ from the query image categories of the task. The training image set is split into many small subsets, and within every subset one image serves as the query and the other(s) as the support image(s) with known ground-truth(s). Once the model is trained, the segmentation model is fixed and requires no optimization when tested on a new dataset [20]. The proposed few-shot segmentation model follows a metric learning framework, Fig. 2, which consists of two network branches, i.e., the support branch (above) and the query branch (below). Over the support branch, PMMs are estimated for the support image(s). In the support and query branches, two CNNs with shared weights are used as the backbone to extract features. Let S ∈ R^{W×H×C} denote the features of the support image, where W × H denotes the resolution of the feature maps and C the number of feature channels. The features of a query image are denoted as Q ∈ R^{W×H×C}. Without loss of generality, the network architecture and models are illustrated for the 1-shot setting, which can be extended to the 5-shot setting by feeding five support images to the PMMs to estimate prototypes.\nDuring inference, the learned prototype vectors µ^+ = {µ^+_k, k = 1, ..., K} and µ^− = {µ^−_k, k = 1, ..., K} are duplexed to activate query features for semantic segmentation, Fig. 2.\nPMMs as Representation (P-Match).
Each positive prototype vector squeezes representative information about an object part, and all prototypes together incorporate representative information about the complete object extent. Therefore, the prototype vectors can be used to match and activate the query features Q, as\nQ' = P-Match(µ^+_k, Q), k = 1, ..., K, (6)\nwhere P-Match refers to an activation operation consisting of prototype upsampling, feature concatenation, and semantic segmentation using convolution, Fig. 4. The convolution operation on the concatenated features implements a channel-wise comparison, which activates feature channels related to the foreground while suppressing those associated with the background. With P-Match, semantic information about the extent of the complete object is incorporated into the query features for semantic segmentation.\nPMMs as Classifiers (P-Conv). On the other hand, each prototype vector, incorporating discriminative information across feature channels, can be seen as a classifier, which produces probability maps M_k = {M^+_k, M^−_k} using the P-Conv operation, as\nM_k = P-Conv(µ^+_k, µ^−_k, Q), k = 1, ..., K. (7)\nAs shown in Fig. 4, P-Conv first multiplies each prototype vector with the query feature Q in an element-wise manner. The output maps are then converted to probability maps M_k by applying Softmax across channels. After P-Conv, the produced probability maps M^+_k, k = 1, ..., K and M^−_k, k = 1, ..., K are respectively summarized into two probability maps, as\nM^+_p = Σ_k M^+_k, M^−_p = Σ_k M^−_k, (8)\nwhich are further concatenated with the query features to activate objects of interest, as\nQ' = M^+_p ⊕ M^−_p ⊕ Q, (9)\nwhere ⊕ denotes the concatenation operation.\nAfter the P-Match and P-Conv operations, the semantic information across channels and the discriminative information related to object parts are collected from the support feature S to activate the query feature Q in a dense comparison manner [20].
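A rough NumPy sketch of the two operations may help fix ideas. It is an illustration under simplifying assumptions of ours, not the authors' implementation: the Softmax here is taken per prototype over the foreground/background pair, and the learned convolution head that follows the P-Match concatenation is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def p_conv(mu_pos, mu_neg, Q):
    """P-Conv sketch: each prototype acts as a 1x1 classifier on Q.

    mu_pos, mu_neg: (K, C) prototypes; Q: (H, W, C) query features.
    Returns the two summed probability maps M+_p and M-_p (Eq. 8).
    """
    # Score every pixel against each foreground/background prototype pair.
    scores = np.stack([Q @ mu_pos.T, Q @ mu_neg.T], axis=-1)  # (H, W, K, 2)
    probs = softmax(scores, axis=-1)    # per-prototype fg/bg softmax
    M_pos = probs[..., 0].sum(axis=-1)  # sum over the K prototypes
    M_neg = probs[..., 1].sum(axis=-1)
    return M_pos, M_neg

def p_match_concat(mu_pos, Q):
    """P-Match sketch: upsample (tile) each positive prototype over the
    spatial grid and concatenate with Q; a conv head (omitted here) would
    then perform the channel-wise comparison."""
    H, W, C = Q.shape
    tiled = [np.broadcast_to(mu, (H, W, C)) for mu in mu_pos]
    return np.concatenate([Q] + tiled, axis=-1)  # (H, W, (K+1)*C)

Q = np.random.rand(16, 16, 64)
mu_pos, mu_neg = np.random.rand(3, 64), np.random.rand(3, 64)
Mp, Mn = p_conv(mu_pos, mu_neg, Q)
feat = p_match_concat(mu_pos, Q)
```

In this toy version the fused foreground and background maps sum to K at every pixel, since each of the K prototype pairs contributes one unit of probability mass.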
The activated query features Q are further enhanced with Atrous Spatial Pyramid Pooling (ASPP) and fed to a convolutional module to predict the segmentation mask, Fig. 2.\nSegmentation Model Learning. The segmentation model is implemented as an end-to-end network, Fig. 2. The learning procedure for the segmentation model is described in Algorithm 1. In the feed-forward procedure, the support features are partitioned into the foreground and background sample sets S^+ and S^− according to the ground-truth mask. PMMs are learned on S^+ and S^−, and the learned prototype vectors µ^+ and µ^− are leveraged to activate the query features to predict the segmentation mask of the query image. In the back-propagation procedure, the network parameters θ are updated to optimize the segmentation loss at the query branch. With multiple training iterations, rich feature representation about diverse object parts is absorbed into the backbone network. During the inference procedure, the learned feature representation together with the PMMs of the support image(s) is used to segment the query image. PMMs. In Table 1, with P-Match modules, PMMs improve segmentation performance by 2.70% (54.63% vs. 51.93%), which validates that the prototypes generated by PMMs perform better than the prototype generated by global average pooling. By introducing the duplex strategy, PMMs further improve the performance by 0.64% (55.27% vs. 54.63%), which validates that the probability maps generated by the combination of foreground and background prototypes can suppress backgrounds and reduce false segmentation. In total, PMMs improve the performance by 3.34% (55.27% vs. 51.93%), which is a significant margin in semantic segmentation. This clearly demonstrates the superiority of the proposed PMMs over previous prototype methods.\nRPMMs. RPMMs further improve the performance by 1.07% (56.34% vs. 55.27%), which validates the effectiveness of the residual ensemble strategy.
Residuals from the query prediction output of the previous branch of PMMs can be used to supervise the next branch of PMMs, enforcing the stacked PMMs to reduce errors step by step.\nNumber of Prototypes. In Table 2, an ablation study is carried out to determine the number of prototypes using PMMs^+ with P-Match. K = 2 significantly outperforms K = 1, which validates the plausibility of introducing mixture prototypes. The best Pascal-5^i performance occurs at K = 2, 3, 4. When K = 3, the best mean performance is obtained. When K = 4, 5, the performance slightly decreases. One reason is that the PMMs are estimated on a single support image, which includes a limited number of samples. Increasing K substantially decreases the number of samples per prototype and increases the risk of over-fitting.\nKernel Functions. In Table 3, we compare the Gaussian and VMF kernels for sample distance calculation when estimating PMMs. The better results from the VMF kernel show that the cosine similarity defined by the VMF kernel is preferable.\nInference Speed. The size of the PMMs model is 19.5M, which is slightly larger than that of the baseline CANet [20] (19M) but much smaller than that of OSLSM [15] (272.6M). Because the prototypes are 1×1×C-dimensional vectors, they do not significantly increase the model size or computational cost. In the 1-shot setting, with K = 3, our inference speed on a single 2080Ti GPU is 26 FPS. With the 5-shot setting and a ResNet50 backbone, RPMMs achieve a 1.50% (57.30% vs. 55.80%) performance improvement over the state-of-the-art, which is also significant. With the VGG16 backbone, our approach is comparable with the state-of-the-art methods. Note that PANet and FWB used an additional k-shot fusion strategy, while we do not use any post-processing strategy to fuse the predicted results from five shots. MS COCO. Table 6 displays the evaluation results on the MS COCO dataset following the evaluation metric on COCO-20^i [14].
The baseline is achieved by running CANet without iterative optimization. PMMs and RPMMs again outperform state-of-the-art methods in both 1-shot and 5-shot settings. For the 1-shot setting, RPMMs improve the baseline by 4.47% and outperform the PANet and FWB methods by 9.68% and 9.39%, respectively.\nFor the 5-shot setting, RPMMs improve the baseline by 7.66% and outperform PANet and FWB by 5.82% and 11.87%, respectively, which are large margins for the challenging few-shot segmentation problem. Compared to PASCAL VOC, MS COCO has more categories and images for training, which facilitates learning richer representation related to various object parts and backgrounds. Thereby, the improvement on MS COCO is larger than that on Pascal VOC." }, { "title": "Self-guided and cross-guided learning for few-shot segmentation", "year": 2021.0, "authors": "Bingfeng Zhang; Jimin Xiao; Terry Qin", "arxiv_di": "2103.16129", "Introduction": "Semantic segmentation has been making great progress with recent advances in deep neural networks, especially the Fully Convolutional Network (FCN) [18]. Given sufficient and accurate pixel-level annotated data, state-of-the-art semantic segmentation approaches can produce satisfying segmentation masks. However, these approaches heavily rely on massive annotated data. Their performance drops dramatically on unseen classes or with insufficient annotated data [33]. Figure 1. Motivation of our approach. Even using the same image as both support and query input, previous approaches cannot generate accurate segmentation under the guidance of its ground-truth mask.\nFew-shot segmentation [8,14,20,24] is a promising method to tackle this issue. Compared to fully supervised semantic segmentation [3,5,11,13], which can solely segment the classes present in the training set, the objective of few-shot segmentation is to utilize one or a few annotated samples to segment new classes.
Specifically, the data in few-shot segmentation is divided into two sets: a support set and a query set. This task requires segmenting images from the query set given one or several annotated images from the support set. Thus, the key challenge of this task is how to leverage the information from the support set. Most approaches [6,17,30,35,32,26] adopt a Siamese Convolutional Neural Network (SCNN) to encode both support and query images. In order to apply the information from support images, they mainly use masked Global Average Pooling (GAP) [38] or other strengthened methods [19] to extract the whole foreground [30,35,16] or background [30] as one feature vector, which is used as a prototype to compute cosine distance [36] or make dense comparison [35] on query images.\nUsing a support feature vector extracted from the support image does facilitate query image segmentation, but it does not carry sufficient information. Fig. 1 shows an extreme example where the support image and query image are exactly the same. However, even the existing best-performing approaches fail to accurately segment the query image. We argue that when masked GAP or other methods [19] are used to encode a support image into a feature vector, it is unavoidable to lose some useful information due to the averaging operation. Using such a feature vector to guide the segmentation cannot make a precise prediction for pixels which need the lost information as support. Furthermore, for the multiple-shot case such as 5-shot segmentation, the common practice is to use the average of the predictions from 5 individual support images as the final prediction [36] or the average of 5 support vectors as the final support vector [30].
However, the quality of different support images differs; using an averaging operation forces all support images to share the same contribution.\nIn this paper, we propose a simple yet effective Self-Guided and Cross-Guided Learning approach (SCL) to overcome the above-mentioned drawbacks. Specifically, we design a Self-Guided Module (SGM) to extract comprehensive support information from the support set. By making an initial prediction for the annotated support image with the initial prototype, the covered and uncovered foreground regions are encoded into the primary and auxiliary support vectors using masked GAP, respectively. By aggregating both primary and auxiliary support vectors, better segmentation performance is obtained on query images.\nEnlightened by our proposed SGM, we propose a Cross-Guided Module (CGM) for multiple-shot segmentation, where we can evaluate the prediction quality from each support image using the other annotated support images, such that a high-quality support image contributes more to the final fusion, and vice versa. Compared to other complicated approaches such as the attention mechanism [35,34], our CGM does not need to re-train the model, and directly applying it during inference improves the final performance. Extensive experiments show that our approach achieves new state-of-the-art performance on the PASCAL-5^i and COCO-20^i datasets.\nOur contributions are summarized as follows:\n• We observe that it is unavoidable to lose some critical information when using the averaging operation to obtain the support vector. To mitigate this issue, we propose a self-guided mechanism to mine more comprehensive support information by reinforcing such easily lost information, so that accurate segmentation masks can be predicted for query images.\n• We propose a cross-guided module to fuse multiple predictions from different support images for the multiple-shot segmentation task.
Without re-training the model, it can be directly used during inference to improve the final performance.\n• Our approach can be applied to different baselines to improve their performance directly. Using our approach achieves new state-of-the-art performance on the PASCAL-5^i (mIoU for 1-shot: 61.8%, 5-shot: 62.9%) and COCO-20^i (mIoU for 1-shot: 37.0%, 5-shot: 39.9%) datasets for this task.", "Methodology": "The purpose of few-shot segmentation is to learn a segmentation model which can segment unseen objects provided with a few annotated images of the same class. We need to train a segmentation model on a dataset D_train with a query set {(I^i_q, M^i_q)}_{i=1}^N, where M^i_q is only used for training. Figure 2. The framework of our SCL approach for 1-shot segmentation. We firstly use an encoder to generate feature maps F_s and F_q from a support image and a query image, respectively. Then masked GAP is used to generate the initial support vector v_s. After that, our proposed self-guided module (SGM) takes v_s and F_s as input and outputs two new support vectors v_pri and v_aux, which are then used as the support information to segment the query image. Encoders for support and query images share the same weights.\nFor clarity, we use S_train and Q_train to represent the training support set and query set, and S_test and Q_test for the test set. A model is learned using the training support set S_train and query set Q_train. Then the model is evaluated on D_test using the test support set S_test and query set Q_test.", "Dataset": "We evaluate our approach on the PASCAL-5^i and COCO-20^i datasets. PASCAL-5^i is proposed in OSLSM [23], which is built based on the PASCAL VOC 2012 [7] and SBD [9] datasets.
COCO-20^i is proposed in FWB [19], which is built based on the MS-COCO [15] dataset.\nIn PASCAL-5^i, 20 classes are divided into 4 splits, of which 3 splits are for training and 1 for evaluation. During evaluation, 1000 support-query pairs are randomly sampled from the evaluation set. For more details, please refer to OSLSM [23]. In COCO-20^i, the only difference with PASCAL-5^i is that it divides 80 classes into 4 splits. For more details, please refer to FWB [19]. For PASCAL-5^i, we evaluate our approach using both CANet [35] and PFENet [27] as baselines. For COCO-20^i, we evaluate our approach based on PFENet [27].\nFollowing [30], mean intersection-over-union (mIoU) and foreground-background intersection-over-union (FB-IoU) are used as evaluation metrics.", "Conclusion": "We propose a self-guided learning approach for few-shot segmentation. Our approach enables extracting comprehensive support information using our proposed self-guided module. Besides, in order to remedy the drawbacks of average fusion for multiple support images, we propose a new cross-guided module to make high-quality support images contribute more to the final prediction, and vice versa. Extensive experiments show the effectiveness of our proposed modules. In the future, we will try to use background information as extra support to improve our approach.", "Extra": "Fully supervised semantic segmentation, which requires pixel-level prediction, has been boosted by recent advances in Convolutional Neural Networks (CNNs), especially FCN [18]. Many network frameworks have been designed based on FCN. For example, UNet [21] adopted a multi-scale strategy and a convolution-deconvolution architecture to improve the performance of FCN [18], while PSPNet [37] used a pyramid pooling module to generate object details. Deeplab [3,5] designed an Atrous Spatial Pyramid Pooling (ASPP) [4] module and applied dilated convolution [2] to the FCN architecture.
Most previous approaches adopt a metric learning strategy [10,28,25,1,12] for few-shot segmentation. For example, in PL [6], a two-branch prototypical network was proposed to segment objects using metric learning. SG-One [36] proposed to compute a cosine similarity between the generated single support vector and the query feature maps to guide the segmentation process. CANet [35] designed a dense comparison module to make comparisons between the support vector and the query feature maps. PANet [30] introduced a module that uses the predicted query mask to segment the support images, where it still relied on the generated support vector. FWB [19] tried to enhance the feature representation of the generated support vector using feature weighting, while CRNet [16] focused on utilizing co-occurrent features from both query and support images to improve the prediction, and it still used a support vector to guide the final prediction. PPNet [17] tried to generate prototypes for different parts as support information. PFENet [27] designed a multi-scale module as the decoder to utilize the generated single support vector.\nHowever, most approaches use masked GAP [38] or more advanced methods such as FWB [19] to fuse all foreground or background features into a single vector, which unavoidably loses some useful information. Our proposed method tries to provide comprehensive support information using a self-guided approach. Fig. 2 shows our framework for 1-shot segmentation, which can be divided into the following steps:\n1) Both support and query images are input to the same encoder to generate their feature maps.
After that, an initial support vector is generated using masked GAP from all foreground pixels of the support image.\n2) With the supervision of the support image mask, our SGM produces two new feature vectors, the primary and auxiliary support vectors, using the initial support vector and the support feature map as input.\n3) In this step, the primary and auxiliary support vectors are concatenated with the query feature map to guide the segmentation of query images. Through a query Feature Processing Module (FPM) and a decoder, the segmentation mask for the query image is generated. Note that all encoders and decoders are shared. The Self-Guided Module (SGM) is proposed to provide comprehensive support information to segment the query image. The details of our SGM can be found in Fig. 3.\nSuppose the support image is I_s; after passing through the encoder, its feature map is F_s. Then we use masked GAP to generate the initial support vector following previous approaches [35,36,39]:\nv_s = Σ_{i=1}^{hw} F_s(i) · [M_s(i) = 1] / Σ_{i=1}^{hw} [M_s(i) = 1], (1)\nwhere i is the index of the spatial position, and h and w are the height and width of the feature map, respectively. [·] equals 1 if the condition inside is true, and 0 otherwise. M_s is a binary mask, and M_s(i) = 1 indicates that the i-th pixel belongs to class c. Note that M_s needs to be downsampled to the same height and width as F_s.\nBoth F_s and v_s are input to our proposed self-guided module (SGM). The initial support vector v_s is firstly duplicated and expanded to the same size as F_s following [27,35], represented as V_s, which is then concatenated with F_s to generate a new feature map:\nF_sv = Concat([F_s, V_s, V_s]), (2)\nwhere Concat(·) is the concatenation operator. Then, the probability map for the support image is generated after passing through the support FPM and the decoder:\nP_s1 = softmax(D(FPM_s(F_sv))), (3)\nwhere P_s1 is the predicted probability map, i.e., P_s1 ∈ R^{h×w×2}.
D(·) denotes the decoder, whose details can be found in Sec. 5.1, and softmax is the softmax layer. FPM_s(·) is the support FPM, as shown in Fig. 3. According to the requirements of different decoders, we design two kinds of support FPMs: one providing single-scale input to the decoder [35,32] and the other providing multi-scale input to the decoder [27].\nThen the predicted mask is generated from P_s1:\nM̂_s = argmax(P_s1), (4)\nwhere M̂_s is a binary mask, in which 0 indicates the background and 1 indicates class c.\nUsing the predicted mask M̂_s and the ground-truth mask M_s, we can generate the primary support vector v_pri and the auxiliary support vector v_aux:\nv_pri = Σ_{i=1}^{hw} F_s(i) · [M_s(i) = 1] · [M̂_s(i) = 1] / Σ_{i=1}^{hw} [M_s(i) = 1] · [M̂_s(i) = 1], (5)\nv_aux = Σ_{i=1}^{hw} F_s(i) · [M_s(i) = 1] · [M̂_s(i) = 0] / Σ_{i=1}^{hw} [M_s(i) = 1] · [M̂_s(i) = 0]. (6)\nIn Eq. (5), [M_s(i) = 1] · [M̂_s(i) = 1] indicates the correctly predicted foreground mask using the initial support vector v_s as support. In Eq. (6), [M_s(i) = 1] · [M̂_s(i) = 0] indicates the missed foreground mask. From Eq. (5) and Eq. (6), it can be found that v_pri keeps the main support information, as it focuses on aggregating correctly predicted information, while v_aux focuses on collecting the lost critical information which cannot be predicted using v_s. Fig. 4 shows more examples of the masks used to produce v_pri and v_aux. It can be seen that v_pri unavoidably ignores some useful information, while v_aux collects all the information lost by v_pri.\nIn order to guarantee that v_pri can collect most of the information from the support feature map, a cross-entropy loss is applied to P_s1 predicted in Eq. (3):\nL^s1_ce = −(1/hw) Σ_{i=1}^{hw} Σ_{c_j ∈ {0,1}} [M_s(i) = c_j] log(P^{c_j}_s1(i)), (7)\nwhere 0 is the background class and 1 is the indicator for a specific foreground class c. P^{c_j}_s1(i) denotes the predicted probability of pixel i belonging to class c_j.
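The masked GAP of Eq. (1) and the split into primary and auxiliary support vectors of Eqs. (5)-(6) can be sketched in a few lines of NumPy. The helper names and the toy masks below are hypothetical, for illustration only.

```python
import numpy as np

def masked_gap(F, M):
    """Masked global average pooling (Eq. 1): average features over M == 1."""
    M = M.astype(bool)
    return F[M].sum(axis=0) / max(M.sum(), 1)

def split_support_vectors(F_s, M_s, M_hat):
    """Sketch of Eqs. (5)-(6): v_pri averages the correctly predicted
    foreground pixels, v_aux averages the foreground pixels that the
    initial prediction missed."""
    covered = (M_s == 1) & (M_hat == 1)  # recovered by the initial vector
    missed = (M_s == 1) & (M_hat == 0)   # lost information
    v_pri = masked_gap(F_s, covered)
    v_aux = masked_gap(F_s, missed)
    return v_pri, v_aux

# Toy support feature map, ground-truth mask, and initial prediction:
F_s = np.random.rand(16, 16, 64)
M_s = np.zeros((16, 16), int); M_s[4:12, 4:12] = 1     # ground truth
M_hat = np.zeros((16, 16), int); M_hat[4:12, 4:8] = 1  # initial prediction
v_s = masked_gap(F_s, M_s)                  # initial support vector (Eq. 1)
v_pri, v_aux = split_support_vectors(F_s, M_s, M_hat)
```

Each vector is simply the mean feature over its own region, so v_pri and v_aux together cover exactly the ground-truth foreground.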
Then we duplicate and expand v_pri and v_aux to the same height and width as F_s, represented as V^pri_s and V^aux_s, respectively. Following the previous process, F_s, V^pri_s and V^aux_s are concatenated to generate a new feature map F^A_s:\nF^A_s = Concat([F_s, V^pri_s, V^aux_s]). (8)\nAfter that, the predicted probability map P_s2 is generated based on the new feature map F^A_s:\nP_s2 = softmax(D(FPM_s(F^A_s))). (9)\nSimilar to Eq. (7), we use a cross-entropy loss to ensure that aggregating v_pri and v_aux together can produce an accurate segmentation mask for the support image:\nL^s2_ce = −(1/hw) Σ_{i=1}^{hw} Σ_{c_j ∈ {0,1}} [M_s(i) = c_j] log(P^{c_j}_s2(i)). (10)\nWe only use foreground pixels to produce support vectors, since the background is more complicated than the foreground; therefore, we cannot guarantee that a support vector from the background is far away from that of the foreground. Using our proposed SGM, we generate the primary support vector v_pri and the auxiliary support vector v_aux, where v_pri contains the primary information of the support image and v_aux collects the information lost by v_pri.\nUsing the same encoder as for I_s, we also generate the query feature map F_q; then v_pri and v_aux are duplicated and expanded to the same height and width as F_q, and both are concatenated with F_q to generate a new feature map:\nF^A_q = Concat([F_q, V^pri_q, V^aux_q]), (11)\nwhere F_q is the feature map of the query image I_q, generated using the same encoder as for the support image I_s. V^pri_q and V^aux_q correspond to the expanded results of v_pri and v_aux, respectively.\nThen F^A_q is input to a query FPM followed by a decoder to obtain the final prediction:\nP_q = softmax(D(FPM_q(F^A_q))), (12)\nwhere FPM_q(·) is the query FPM and P_q is the predicted probability map. (More details about the query FPM and decoder can be found in Sec.
5.1 and our supplementary material.)
We use a cross-entropy loss to supervise the segmentation of the query image:
$$\mathcal{L}^{q}_{ce} = -\frac{1}{hw}\sum_{i=1}^{hw}\sum_{c_j\in\{0,1\}} [M_q(i)=c_j]\log\!\big(P^{c_j}_{q}(i)\big), \tag{13}$$
where $P^{c_j}_{q}(i)$ denotes the predicted probability of class $c_j$ for pixel $i$.
The overall training loss is defined as:
$$\mathcal{L} = \mathcal{L}^{s1}_{ce} + \mathcal{L}^{s2}_{ce} + \mathcal{L}^{q}_{ce}, \tag{14}$$
where $\mathcal{L}^{s1}_{ce}$ and $\mathcal{L}^{s2}_{ce}$ are the loss functions defined in Eq. (7) and Eq. (10) of Sec. 4.2. Enlightened by our SGM for 1-shot segmentation, we extend it to a Cross-Guided Module (CGM) for the K-shot (K > 1) segmentation task. Among the K support images, each annotated support image can guide the query segmentation individually. Based on this principle, we design our CGM, in which the final mask is fused from the predictions of multiple annotated samples, with high-quality support images contributing more and vice versa.
For the K-shot segmentation task, there are K support images in one episode, i.e., the support set $S = \{(I^1_s, M^1_s), (I^2_s, M^2_s), \ldots, (I^K_s, M^K_s)\}$. For the $k$th support image $I^k_s$, we first use it as the support image and all K support images as query images for our 1-shot segmentation model $G$. The predicted mask for the $i$th support image $I^i_s$ is:
$$\hat{M}^{i|k}_s = \arg\max\big(G(I^i_s \mid I^k_s)\big), \tag{15}$$
where $\hat{M}^{i|k}_s$ is the predicted mask of $I^i_s$ under the support of $I^k_s$, and $G(I^i_s \mid I^k_s)$ outputs the predicted score map of $I^i_s$ using $I^k_s$ as the support image and $I^i_s$ as the query image. Since the ground-truth mask $M^i_s$ of image $I^i_s$ is available, we can evaluate the confidence score of $I^k_s$ based on the IoU between the predicted masks and their ground-truth masks:
Figure 5. Architecture of the query FPM and decoder in CANet [35].
CANet uses the predicted probability map $P_{q(t-1)}$ from the previous iteration in its query FPM, and its decoder adopts single-scale residual layers followed by an ASPP module [4].
$$U^k_s = \frac{1}{K}\sum_{i=1}^{K} \mathrm{IoU}\big(\hat{M}^{i|k}_s, M^i_s\big), \tag{16}$$
where $\mathrm{IoU}(\cdot,\cdot)$ computes the intersection-over-union score. The final predicted score map for a given query image $I_q$ is then:
$$\hat{P}_q = \mathrm{softmax}\Big(\frac{1}{K}\sum_{k=1}^{K} U^k_s\, G(I_q \mid I^k_s)\Big). \tag{17}$$
A support image with a larger $U^k_s$ contributes more to the final prediction, as its generated support vector is more likely to provide sufficient information to segment query images, and vice versa.
CGM does not require re-training a new model: we can directly use the segmentation model from the 1-shot task to make predictions. Thus, CGM improves performance purely at inference time. Our SCL approach can be easily integrated into many existing few-shot segmentation approaches; we evaluate its effectiveness with two baselines, CANet [35] and PFENet [27], both of which use masked GAP to generate one support vector per support image. All decoders in our SGM share their weights with the decoder of the baseline.
We use the single-scale support FPM in our SGM when CANet [35] is the baseline, since its decoder adopts a single-scale architecture. Besides, the query FPM in CANet [35] uses the probability map $P_{q(t-1)}$ from the previous iteration, kept in a cache, to refine the prediction. Fig. 5 shows details of the query FPM and decoder in CANet [35].
We use the multi-scale support FPM in our SGM when PFENet [27] is the baseline, since its decoder adopts a multi-scale architecture. Additionally, the query FPM in PFENet [27] uses a prior mask from a model pre-trained on ImageNet [22] as extra support. More details can be found in our supplementary material.
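The CGM fusion of Eqs. (16)–(17) reduces to an IoU-weighted average of the per-support score maps. A minimal NumPy sketch, where the helper names are illustrative and `score_maps` stands in for the outputs of $G(I_q \mid I^k_s)$:

```python
import numpy as np

def iou(a, b, eps=1e-6):
    """Intersection-over-union of two binary masks (Eq. 16 inner term)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / (union + eps)

def cgm_fuse(score_maps, confidences):
    """Weight each support image's score map by its confidence U_k and
    average, then softmax over classes (Eq. 17).
    score_maps: (K, C, h, w); confidences: length-K sequence of U_k."""
    w = np.asarray(confidences, dtype=float)[:, None, None, None]
    fused = (w * np.asarray(score_maps)).sum(0) / len(score_maps)
    e = np.exp(fused - fused.max(0, keepdims=True))  # stable softmax
    return e / e.sum(0, keepdims=True)
```

Because the weights come from held-out IoU on the support images themselves, no gradient step is needed, which is why CGM works at inference time.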
Note that neither $P_{q(t-1)}$ nor the prior mask is used in the support FPM of our SGM.
All training settings are the same as those in CANet [35] or PFENet [27]. The channel size $d$ in Fig. 2 and Fig. 3 is set to 256. The batch size is 4 and training lasts 200 epochs.
The learning rate is $2.5\times10^{-4}$ and the weight decay is $5\times10^{-4}$ when CANet [35] is the baseline; the learning rate is $2.5\times10^{-3}$ and the weight decay is $1\times10^{-4}$ when PFENet [27] is the baseline.
During inference on the 1-shot task, we follow the same settings as CANet [35] or PFENet [27]. For 5-shot segmentation, we directly use the segmentation model trained on the 1-shot task. Following [30], we average the results from 5 runs with different random seeds as the final performance. All experiments are run on an Nvidia RTX 2080Ti. In Table 1, we compare our approach with other state-of-the-art approaches on PASCAL-5$^i$. It can be seen that our approach achieves new state-of-the-art performance on both 1-shot and 5-shot tasks. Additionally, our approach significantly improves both baselines on the 1-shot segmentation task, with mIoU increases of 2.1% and 1.0% for CANet [35] and PFENet [27], respectively. For the 5-shot segmentation task, our approach achieves 59.2% and 62.9% mIoU using CANet [35] and PFENet [27], respectively, both improvements obtained directly without re-training the model.
In Table 2, we compare our approach with others on the COCO-20$^i$ dataset. Our approach outperforms other approaches by a large margin, with mIoU gains of 4.6% and 1.4% for the 1-shot and 5-shot tasks, respectively.
Table 3 shows the comparison between our approach and the two baselines using FB-IoU on PASCAL-5$^i$. Our approach with PFENet [27] as the baseline achieves new state-of-the-art performance. Besides, adopting our approach on CANet [35] obtains 4.1% and 1.1% FB-IoU increases for the 1-shot and 5-shot tasks, respectively.
In Fig. 6, we report some qualitative results generated
Table 1.
Comparison with other state-of-the-art methods using mIoU (%) as the evaluation metric on PASCAL-5$^i$ for 1-shot and 5-shot segmentation. "P." means PASCAL. "ours-SCL (CANet)" and "ours-SCL (PFENet)" mean that CANet [35] and PFENet [27] are used as baselines, respectively.
by our approach using PFENet [27] as the baseline. It can be seen that our approach produces integral segmentation masks covering object details. More experimental and qualitative results can be found in our supplementary material. In this section, we conduct ablation studies on PASCAL-5$^i$ using CANet [35] as the baseline; all results are average mIoU across the 4 splits.
We first conduct an ablation study on the influence of the proposed SGM and CGM in Table 4. For 1-shot, compared with the baseline, using SGM improves performance by a large margin: 2.1% mIoU and 4.1% FB-IoU. For 5-shot, using SGM and CGM together obtains a 59.2% mIoU score, 3.3% higher than the baseline with the average method. Compared with the average method, our CGM directly increases the mIoU score by 0.5% when SGM is adopted. It is worth noting that our CGM does not need to re-train the model; the gain is obtained at inference.
Table 5 shows the influence of the support vectors in the proposed SGM for 1-shot segmentation.
Table 5. Ablation study of the support vectors in our proposed SGM on PASCAL-5$^i$ for 1-shot segmentation. $v_s$, $v_{pri}$ and $v_{aux}$ are the initial, primary and auxiliary feature vectors generated by our SGM, respectively. Note that $\mathcal{L}^{s1}_{ce}$ is used for $v_s$.
If only $v_s$ is adopted, the mIoU and FB-IoU scores are 55.6% and 67.3%, respectively. Using SGM (with both $v_{pri}$ and $v_{aux}$) achieves 57.5% mIoU and 70.3% FB-IoU, a significant gain of 1.9% and 3.0%, respectively.
It can also be seen that using $v_{pri}$ or $v_{aux}$ individually achieves only 56.6% and 51.4% mIoU, respectively, both much lower than using them jointly; solely using $v_{aux}$ even performs worse than the baseline (only using $v_s$). Furthermore, we also evaluate using all support vectors ($v_s$, $v_{pri}$ and $v_{aux}$) together, which does not improve the results; this shows that $v_{pri}$ and $v_{aux}$ already provide sufficient support information and demonstrates the effectiveness of our SGM. Note that when using all support vectors, the channels of $F^A_q$ must be increased to $4d$. Table 6 studies the influence of the loss functions $\mathcal{L}^{s1}_{ce}$ and $\mathcal{L}^{s2}_{ce}$ in SGM. Using both $\mathcal{L}^{s1}_{ce}$ and $\mathcal{L}^{s2}_{ce}$ significantly outperforms the baseline. If only $\mathcal{L}^{s1}_{ce}$ is adopted without $\mathcal{L}^{s2}_{ce}$, the obtained mIoU score is 55.6%, 1.9% lower than using both loss functions together. This is because $\mathcal{L}^{s2}_{ce}$ provides one more training step by treating the support image as a query image, in which both support vectors $v_{pri}$ and $v_{aux}$ are deployed. Similarly, if only $\mathcal{L}^{s2}_{ce}$ is adopted without $\mathcal{L}^{s1}_{ce}$, the performance is also lower than using both together. This is because $\mathcal{L}^{s1}_{ce}$ ensures that the primary support vector $v_{pri}$ focuses on extracting the main information while $v_{aux}$ focuses on the lost information. Without $\mathcal{L}^{s1}_{ce}$, the roles of $v_{pri}$ and $v_{aux}$ get mixed and vague." }, { "title": "Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning", "year": 2019.0, "authors": "Chi Zhang; Guosheng Lin; Fayao Liu; Rui Yao; Chunhua Shen", "arxiv_di": 1903.02351, "Introduction": "Deep Convolutional Neural Networks have made significant breakthroughs in many visual understanding tasks including image classification [13,9,30], object detection [27,8,26], and semantic segmentation [16,2,20].
One crucial reason is the availability of large-scale datasets such as ImageNet [4] that enable the training of deep models. However, data labeling is expensive, particularly for dense prediction tasks, e.g., semantic segmentation and instance segmentation. In addition to that, after a model is trained, it is very difficult to apply the model to predict new classes. In contrast to machine learning algorithms, humans are able In this paper, we undertake the task of few-shot semantic segmentation that only uses a few annotated training images to perform segmentation on new classes. Previous work [29,24,5] on this task follows the design of two-branch structure which includes a support branch and a query branch. The support branch aims to extract information from the support set to guide segmentation in the query branch. We also adopt the two-branch design in our framework to solve the few-shot segmentation problem.\nOur network includes a two-branch dense comparison module, in which a shared feature extractor extracts representations from the query set and the support set for comparison. The design of the dense comparison module takes inspiration from metric learning [37,31] on image classification tasks where a distance function evaluates the similarity between images. However, different from image classification where each image has a label, image segmentation needs to make predictions on data with structured rep-resentation. It is difficult to directly apply metric learning to dense prediction problems. To solve this, one straightforward approach is to make comparisons between all pairs of pixels. However, there are millions of pixels in an image and comparison of all pixel pairs takes enormous computational cost. Instead, we aim to acquire a global representation from the support image for comparison. 
Global image features prove to be useful in segmentation tasks [19,40,3], which can be easily achieved by global average pooling.\nHere, to only focus on the assigned category, we use global average pooling over the foreground area to filter out irrelevant information. Then the global feature is compared with each location in the query branch, which can be seen as a dense form of the metric learning approach.\nUnder the few-shot setting, the network should be able to handle new classes that are never seen during training. Thus we aim to mine transferable representations from CNNs for comparison. As is observed in feature visualization literature [39,38], features in lower layers relate to low-level cues, e.g., edges and colors while features in higher layers relate to object-level concepts such as categories. We focus on middle-level features that may constitute object parts shared by unseen classes. For example, if the CNN learns a feature that relates to wheel when the model is trained on the class car, such feature may also be useful for feature comparison on new vehicle classes, e.g., truck and bus. We extract multiple levels of representations in CNNs for dense comparison.\nAs there exist variances in appearance within the same category, objects from the same class may only share a few similar features. Dense feature comparison is not enough to guide segmentation of the whole object area. Nevertheless, this gives an important clue of where the object is. In semi-automatic segmentation literature, weak annotations are given for class-agnostic segmentation, e.g., interactive segmentation with click or scribble annotations [36,14] and instance segmentation with bounding box or extreme point priors [10,21]. Transferable knowledge to locate the object region is learned in the training process. Inspired by semi-automatic segmentation tasks, we hope to gradually differentiate the objects from the background given the dense comparison results as priors. 
We propose an iterative optimization module (IOM) that learns to iteratively refine the predicted results. The refinement is performed in a recurrent form that the dense comparison result and the predicted masks are sent to an IOM for optimization, and the output is sent to the next IOM recurrently. After a few iterations of refinement, our dense comparison module is able to generate fine-grained segmentation maps. Inside each IOM, we adopt residual connections to efficiently incorporate the predicted masks in the last iteration step. Fig. 1 shows an overview of our network for one-shot segmentation.\nPrevious methods for k-shot segmentation is based on the 1-shot model.", "Related_Work": "Semantic Segmentation. Semantic segmentation is the task of classifying each pixel in an image to a set of predefined categories [16,2,20,15,17]. State-of-the-art methods are based on Fully Convolutional Networks (FCNs), which often employ a convolutional neural network (CNN) pre-trained for classification as the backbone architecture. To fit the task of dense prediction, fully connected layers are replaced by a convolutional layer that predicts the label of each pixel. In order to capture abstract feature representations, CNNs adopt consecutive pooling operations or convolution striding to decrease the spatial resolution of feature maps. However, this conflicts with dense prediction tasks where the output should be of high resolution. In order to balance the output resolution and receptive field of the network, dilated convolutions [2] are often used in dense prediction tasks. Dilation removes downsampling operations in the last few layers and inserts holes to convolutional filters to enlarge the receptive field. In our model, we also adopt dilated convolutions to maintain spatial resolution. 
In fully supervised segmentation, training an FCN model requires a large number of expensive pixel-level annotated images, and once a model is trained, it can not perform segmentation on new categories. Our model, on the other hand, can be generalized to any new categories with only a few annotated examples.\nFew-shot Learning. Few-shot learning aims to learn transferable knowledge that can be generalized to new classes with scarce labeled training data. There exist many formulations on few-shot classification, including recurrent neural network with memories [28,23], learning to finetune models [6,25], network parameter prediction [1,35], and metric learning [31,37,11]. Metric learning based methods achieve state-of-the-art performance in the fewshot classification tasks and they have the trait of being fast and predicting in a feed-forward manner. Our work is most related to Relation Network [37]. Relation Network meta-learns a deep distance metric to compare images and compute the similarity score for classification. The network consists of an embedding module which generates the representations of the images and a relation module that compares the embeddings and outputs a similarity score. Both modules are in the form of convolutional operations. The dense comparison module in our network can be seen as an extension of Relation Network in a dense form to tackle the task of segmentation.\nFew-shot Semantic Segmentation. Previous work on few-shot semantic segmentation employs two-branch structures. Shaban et al. [29] first adopt few-shot learning on semantic segmentation. The support branch directly predicts the weights of the last layer in the query branch for segmentation. In [24], the support branch generates an embedding which is fused to the query branch as additional features. Our network also follows the two-branch design. 
However, different from previous work where two branches have different structures, the two branches in our network share the same backbone network. The models in previous methods focus on the 1-shot setting, and when extending 1shot to k-shot, they apply 1-shot method independently to each support example and use non-learnable fusion methods to fuse individual predicted results at the image level or feature level. For example, Shaban et al. [29] propose to use logic OR operation to fuse individual predicted masks and Rakelly et al. [24] average the embedding in the support branch generated by different support examples. Instead, we adopt a learnable method through an attention mechanism to effectively fuse information from multiple support examples.", "Methodology": "We propose a new framework that solves the few-shot semantic segmentation problem. We begin with the illustration of our model in the 1-shot setting first without loss of generality. Our network consists of two modules: the dense comparison module (DCM) and the iterative optimization module (IOM). The DCM performs dense feature comparison between the support example and the query example, while IOM performs iterative refinement of predicted results. Fig. 2 (a) shows an overview of our framework. To generalize our network from 1-shot learning to k-shot learning, we adopt an attention mechanism to fuse information from different support examples. Moreover, we propose a new test setting that uses support images with bounding box annotations for few-shot segmentation, which is described subsequently. We compare our model with the state-of-the-art methods in Table 1. Table 1 (a) shows the results evaluated under the meanIoU evaluation metric and Table 1 (b) shows the results under the FB-IoU metric. For the performance of [29] under the FB-IoU metric, we quote the result reproduced in [24]. Our model significantly outperforms the state-ofthe-art methods under both evaluation metrics. 
Particularly, our meanIoU score outperforms the state-of-the-art results by 14.6% for the 1-shot task and 13.2% for the 5-shot task.\nQualitative Results. Fig. 5 shows some qualitative examples of our segmentation results. Note that given the same query image, our model is able to segment different classes when different support examples are presented (See the 5th and the 6th examples in Fig. 5).", "Conclusion": "We have presented CANet, a novel class-agnostic segmentation network with few-shot learning. The dense comparison module exploits multiple levels of feature in CNNs to perform dense feature comparison and the iterative optimization module learns to iteratively refines the predicted results. Our attention mechanism for solving the k-shot problem turns out to be more effective than non-learnable methods. Comprehensive experiments show the effectiveness of our framework, and the performance significantly outperforms all previous work.", "Experiment_and_Results": "To evaluate the performance of our proposed method, we conduct extensive experiments on the PASCAL VOC 2012 dataset and COCO dataset. Our network is trained end-toend. The loss function is the mean of cross-entropy loss over all spatial locations in the output map. Our network is trained using SGD for 200 epochs with the PyTorch library on Nvidia Tesla P100 GPUs. We set the learning rate to 0.0025 and set probability p r to 0.7. We use a mini-batch of 4 episodes for training on PASCAL-5 i and 8 on COCO. At inference time, we iteratively optimize the predicted results for 4 times after the initial prediction.\nEvaluation Metric. There is a minor difference of evaluation metrics in previous work. Shaban et al. [29] measure the per-class foreground Intersection-over-Union (IoU) and use the average IoU over all classes (meanIoU) to report the results. While in [24,5], they ignore the image categories and calculate the mean of foreground IoU and background IoU over all test images (FB-IoU). 
We choose the meanIoU evaluation metric for our analysis experiments due to the following reasons: 1) The numbers of test samples in different classes are not balanced (e.g., 49 of class sheep vs. 378 of class person). Ignoring the image categories may lead to a biased result towards the class with more images. Also, we can observe the effectiveness of our model in different classes with the meanIoU evaluation metric. 2) As most objects are small relative to the whole image, even though the model fails to segment any objects, the background IoU can still be very high, thus failing to reflect the capability of the model. 3) Foreground IoU is more often used in binary segmentation literature (e.g., video segmentation and interactive segmentation). Nevertheless, we still compare our results with previous work under both evaluation metrics.", "Extra": "Suppose that our model is trained on a dataset with the class set C train , our goal is to use the trained model to make the prediction on a different dataset with new classes C test where only a few annotated examples are available. Intuitively, we train the model to have the ability that for a new class c ∈ C train , our model is able to segment the class from the images when only sees a few pictures of this class. Once the model is trained, the parameters are fixed and require no optimization when tested on a new dataset.\nWe align training and testing with the episodic paradigm [33] to handle the few-shot scenario. Specifically, given a k-shot learning task, each episode is constructed by sampling 1) a support (training) set S = {(x i s , y i s (c))} k i=1 , where x i s ∈ R Hi×Wi×3 is an RGB image and y i s (c) ∈ R Hi×Wi is a binary mask for class c in the support image; and 2) a query (test) set Q = {x q , y q (c)} where x q is the query image and y q (c) is the ground-truth mask for class c in the query image. 
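The two metrics contrasted above can be sketched as follows. This is a simplified NumPy illustration with our own function names; it does not claim to reproduce the exact benchmark protocol:

```python
import numpy as np

def mean_iou(preds, gts, classes, eps=1e-6):
    """Per-class foreground IoU averaged over classes (meanIoU).
    preds/gts: lists of binary masks; classes: the class id of each episode."""
    per_class = {}
    for p, g, c in zip(preds, gts, classes):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        per_class.setdefault(c, []).append(inter / (union + eps))
    return float(np.mean([np.mean(v) for v in per_class.values()]))

def fb_iou(preds, gts, eps=1e-6):
    """Mean of aggregated foreground IoU and background IoU over all
    images, ignoring class labels (FB-IoU)."""
    ious = []
    for cls in (1, 0):
        inter = sum(np.logical_and(p == cls, g == cls).sum()
                    for p, g in zip(preds, gts))
        union = sum(np.logical_or(p == cls, g == cls).sum()
                    for p, g in zip(preds, gts))
        ious.append(inter / (union + eps))
    return float(np.mean(ious))
```

The sketch makes the paper's point concrete: `fb_iou` pools pixels across all images, so large backgrounds and frequent classes dominate, whereas `mean_iou` averages per class first.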
The input to the model is the support set S and the query image x q , and the output is the predicted mask ŷq (c) for class c in the query image. As there may be multiple classes in one query image x q , the ground truth query mask is different when a different label c is assigned. Fig. 1 shows an illustration of the task when k = 1. We develop a two-branch dense comparison module that densely compares each position in the query image with the support example, as shown in Fig. 2 (b). The module consists of two sub-modules: a feature extractor that extracts representations and a comparison module that performs feature comparison. Feature Extractor. The feature extractor aims to harvest different levels of representations from CNNs for feature matching. We use a ResNet-50 [9] as the backbone of the feature extractor. As done in previous few-shot segmentation work, the backbone model is pre-trained on Imagenet [4]. As is observed in CNN feature visualization literature [39,38], features in lower layers often relate to lowlevel cues, e.g., edges and colors while features in higher layers relate to object-level concepts such as object categories. In the few-shot scenario, our model should adapt to any unseen classes. Thus we can not assume that a feature corresponding to an unseen category is learned during training. Instead, we focus on middle-level features that may constitute object parts shared by unseen classes. The layers in ResNet are divided into 4 blocks based on the spatial resolution which naturally correspond to 4 different levels of representation. We choose features generated by block2 and block3 for feature comparison and abandon layers after block3. We use dilated convolutions [2] Dense Comparison. As there may be multiple object categories and cluttered backgrounds in the support image, we want to acquire an embedding that only corresponds to the target category for comparison. 
Here, we use global average pooling over the foreground area to squeeze the feature maps to a feature vector. Global image features turn out to be useful in segmentation tasks [19,40,3], which can be easily achieved by global average pooling. In our network, we only average features over the foreground area to filter out irrelevant areas. After we obtain the global feature vector from the support set, we concatenate the vector with all spatial locations in the feature map generated by the query branch. This operation aims to compare all the spatial locations in the query branch to the global feature vector from the support branch. Then, the concatenated feature maps go through another convolutional block with 256 3 × 3 convolutional filters for comparison.\nFor efficient implementation, we first bilinearly downsample the binary support mask to the same spatial size of the feature maps and then apply element-wise multiplication with the feature maps. As a result, features belonging to the background area become zero. Then we adopt global sum pooling and divide the resulting vector by the foreground area to obtain the average feature vector. We upsample the vector to the same spatial size of query features and concatenate them for dense comparison. As there exist variances in appearance within the same category, dense comparison can only match a part of the object, which may not be sufficiently powerful to accurately segment the whole object in the image. We observe that the initial prediction is an important clue about the rough position of the objects. We propose an iterative optimization module to optimize the predicted results iteratively. The structure is shown in Fig. 2 (c). The module's input is the feature maps generated by the dense comparison module and predicted masks from the last iteration. 
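The masked average pooling described above (zero out background features, global sum pool, divide by the foreground area) can be sketched as follows, assuming the binary mask has already been resized to the feature resolution; names are illustrative:

```python
import numpy as np

def masked_gap(feat, mask, eps=1e-6):
    """Masked global average pooling.
    feat: (d, h, w) feature map; mask: (h, w) binary foreground mask."""
    masked = feat * mask[None]               # background features become zero
    return masked.sum(axis=(1, 2)) / (mask.sum() + eps)
```

This single vector is what gets tiled over the query feature map for the dense comparison step.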
Directly concatenating feature maps with predicted masks as additional channels causes a mismatch in the feature distribution, as there is no predicted mask for the first forward pass. Instead, we propose to incorporate the predicted masks in a residual form:
$$M_t = x + F(x, y_{t-1}), \tag{1}$$
where $x$ is the output feature of the dense comparison module, $y_{t-1}$ is the predicted mask from the last iteration step, and $M_t$ is the output of the residual block. The function $F(\cdot)$ is the concatenation of the feature $x$ and the predicted mask $y_{t-1}$, followed by two 3 × 3 convolution blocks with 256 filters. Then we add two vanilla residual blocks with the same number of convolutional filters. On top of that, we use the Atrous Spatial Pyramid Pooling (ASPP) module proposed in DeepLab V3 [3] to capture multi-scale information.
The module consists of four parallel branches: three 3 × 3 convolutions with atrous rates of 6, 12, and 18, respectively, and a 1 × 1 convolution. The 1 × 1 convolution operates on the image-level feature obtained by global average pooling, and the resulting vector is bilinearly upsampled to the original spatial size. The output features from the 4 branches are concatenated and fused by another convolution. During training, the predicted mask $y_{t-1}$ is reset to empty masks with a probability of $p_r$. This can be seen as dropout of the whole mask, an extension of standard dropout [32]. In comparison to previous iterative refinement methods in the segmentation literature [14,34,22], our method integrates the refinement scheme into the model with residual connections so that the whole model runs in a feed-forward manner and is trained end-to-end. To efficiently merge information in the k-shot setting, we use an attention mechanism to fuse the comparison results generated by different support examples. Specifically, we add an attention module parallel to the dense comparison convolution in DCM (see Fig. 3). The attention branch consists of two convolutional blocks.
The first one has 256 3 × 3 filters, followed by 3 × 3 max pooling. The second has one 3 × 3 convolution followed by global average pooling. The result of the attention branch serves as the weight $\lambda$. The weights from all support examples are then normalized by a softmax function:
$$\hat{\lambda}_i = \frac{e^{\lambda_i}}{\sum_{j=1}^{k} e^{\lambda_j}}. \tag{2}$$
The final output is the weighted sum of the features generated by the different support samples.
Table 1 - Results on the PASCAL-5$^i$ dataset; (b) shows the 1-shot and 5-shot results under the FB-IoU evaluation metric. Our proposed method outperforms all previous methods under both evaluation metrics and sets a new state-of-the-art performance (bold).
As the essence of our dense comparison module is to densely compare each location in the query image to the global representation provided by the support example, we explore a new form of support-set annotation that uses bounding boxes. Compared with pixel-wise annotation, a bounding box uses a rectangular box to denote the object area, as is common in object detection, and is much cheaper to label. We relax the support set by treating the whole bounding box area as the foreground, and we test our model under this setting to evaluate the capability of our framework. The comparison of the two test settings is shown in Fig. 4. PASCAL-5$^i$ is a dataset for few-shot semantic segmentation proposed in [29]. It is built on images from PASCAL VOC 2012 and extra annotations from SDS [7]. The 20 object categories of PASCAL VOC are evenly divided into 4 splits, with three splits for training and one for testing. At test time, 1000 support-query pairs are randomly sampled in the test split. More details of PASCAL-5$^i$ can be found in [29].
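The attention fusion of Eq. (2) is a softmax over the per-support logits followed by a weighted sum of the comparison features. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def attention_fuse(features, logits):
    """Softmax-normalize per-support attention logits (Eq. 2) and return
    the weighted sum of the K comparison features.
    features: (K, ...) array-like; logits: length-K sequence."""
    logits = np.asarray(logits, dtype=float)
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w = w / w.sum()
    feats = np.asarray(features)
    return np.tensordot(w, feats, axes=1)
```

With equal logits this degrades gracefully to the non-learnable feature-average baseline, which is why the learned weights can only help.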
We implement extensive ablation experiments on the PASCAL-5$^i$ dataset to inspect the effectiveness of the different components in our network. All results are average meanIoU over the 4 splits of PASCAL-5$^i$.
Features for Comparison. In Table 3, we compare model variants that use different levels of features in ResNet-50 for feature comparison. In all cases, we encode the features to 256 dimensions before comparison and do not adopt iterative optimization. We experiment with feature comparison using a single block and using multiple blocks. When a single block is used, block3 performs best. When multiple blocks are used, the combination of block2 and block3 achieves the best result. The reason is that block2 corresponds to relatively low-level cues, which alone are not enough to match object parts, while block4 corresponds to high-level features, e.g., categories, and incorporates a great number of parameters (2048 channels), which makes it hard to optimize under the few-shot setting. The combination of block2 and block3 is best for matching class-agnostic object parts.
We also run experiments with VGG16 as the feature extractor, choosing features of stages 2, 3, and 4 (out of 5). The final multi-scale test result with VGG as the backbone is 54.3%. Compared with the ResNet-50 version (55.4%), the performance only drops by 1.1% and still significantly outperforms the state-of-the-art results.
Iterative Optimization Module. To validate the effectiveness of our proposed iterative optimization module, we compare our network with a baseline model that does not employ the additional IOM for optimization, i.e., the initial prediction from CANet (CANet-Init). We also compare our iterative optimization scheme with DenseCRF [12], a post-processing method widely used in the segmentation literature to refine segmentation maps. Table 4 shows the results of the different model variants.
As shown, iterative optimization yields a 2.8% improvement over the initial prediction. DenseCRF does not significantly improve the few-shot segmentation prediction. We visualize the results and find that for predicted masks that successfully locate most of the object region, DenseCRF can effectively improve the segmentation, particularly at object boundaries. However, for failure masks, e.g., false localization of objects, DenseCRF expands the false-positive regions, which deteriorates the IoU score. Our IOM, on the other hand, can effectively fill the object region and remove irrelevant areas in a learnable way. We visualize the intermediate results of our iterative optimization process in Fig. 6.

Attention vs. Feature Fusion vs. Mask Fusion. In the k-shot setting, we compare our attention mechanism to several solutions from previous work: 1) Feature-level average fusion. We experiment with the method in [24], which averages the features generated by different support examples. 2) Logic OR fusion for masks. Shaban et al. [29] use the 1-shot model to make a prediction with each support example and fuse the individual predicted masks with a logic OR operation, i.e., a position is predicted as foreground if any support example predicts it as foreground. 3) Average fusion for masks. We also experiment with averaging the individual 1-shot predicted confidence maps. We report the results of CANet with the different fusion solutions in Table 5. Our attention mechanism performs the best and brings the largest increase over the 1-shot baseline. This indicates that a learned attention module can be more effective at fusing information from different support examples than non-learnable fusion at the feature or mask level. Using the logic OR operation to fuse predicted masks shows no improvement over the 1-shot result.

Multi-scale Evaluation.
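The two non-learnable mask-fusion baselines compared above (logic OR and averaging) can be sketched as follows; this is an illustrative numpy sketch, not the original implementation.

```python
import numpy as np

def fuse_or(masks):
    """Logic-OR fusion: a pixel is foreground if any 1-shot prediction
    marks it as foreground."""
    return np.any(np.stack(masks) > 0, axis=0).astype(np.uint8)

def fuse_avg(conf_maps, thresh=0.5):
    """Average fusion: mean the per-support confidence maps, then threshold."""
    return (np.mean(np.stack(conf_maps), axis=0) > thresh).astype(np.uint8)

m1 = np.array([[1, 0], [0, 0]], dtype=float)
m2 = np.array([[1, 1], [0, 0]], dtype=float)
or_mask = fuse_or([m1, m2])    # OR keeps every predicted foreground pixel
avg_mask = fuse_avg([m1, m2])  # averaging demands agreement above the threshold
```

The contrast is visible even in this toy case: OR accumulates false positives from any single support example, which matches the observation that OR fusion shows no improvement over the 1-shot result.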
We also experiment with multi-scale evaluation, as is commonly done in the segmentation literature. Specifically, we re-scale the query image by [0.7, 1, 1.3] and average the predicted results. Multi-scale evaluation brings 1.4% and 1.3% mean-IoU improvements in the 1-shot and 5-shot settings, respectively.

COCO 2014 [18]. For the 1-shot task, we compare our network with the baseline model that does not employ additional iterative optimization (CANet-Init), and for the 5-shot task, we compare our attention mechanism with the three non-learnable fusion methods described in Section 5.1.3. The results are shown in Table 6. In the 1-shot setting, our iterative optimization scheme brings a 4.1% mean-IoU improvement, and multi-scale evaluation adds an extra 3.3%. In the 5-shot setting, our attention mechanism outperforms all non-learnable methods, and multi-scale evaluation obtains another 1.9% gain.

Acknowledgments. G. Lin's participation was partly supported by the National Research Foundation Singapore under its AI Singapore Programme [AISG-RP-2018-003] and a MOE Tier-1 research grant [RG126/17 (S)]. R. Yao's participation was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61772530. We would like to thank NVIDIA for GPU donation.

Few-Shot Segmentation via Cycle-Consistent Transformer (2021)
Gengwei Zhang, Guoliang Kang, Yi Yang, Yunchao Wei

Introduction

Recent years have witnessed great progress in semantic segmentation [19,4,47]. The success can be largely attributed to large amounts of annotated data [48,17]. However, labeling dense segmentation masks is very time-consuming [45]. Semi-supervised segmentation [15,39,38] has been broadly explored to alleviate this problem, as it assumes a large amount of unlabeled data is accessible. However, semi-supervised approaches may fail to generalize to novel classes with very few exemplars.
In the extreme low-data regime, few-shot segmentation [26,35] is introduced to train a segmentation model that can quickly adapt to novel categories.

[Figure 1: (a) class-wise mean pooling; (b) clustering; (c) foreground pixel attention; (d) our Cycle-Consistent TRansformer (CyCTR) framework, which enables all beneficial support pixel-level features (foreground and background) to be considered.]

Most few-shot segmentation methods follow a learning-to-learn paradigm where predictions for query images are made conditioned on the features and annotations of support images. The key to the success of this training paradigm lies in how to effectively utilize the information provided by the support images. Previous approaches extract semantic-level prototypes from support features and follow a metric-learning pipeline [29,7,35] extending PrototypicalNet [28]. According to the granularity with which support features are utilized, these methods can be categorized into two groups, as illustrated in Figure 1: 1) Class-wise mean pooling [35,46,44] (Figure 1(a)): support features within regions of different categories are averaged to serve as prototypes that facilitate the classification of query pixels. 2) Clustering [18,41] (Figure 1(b)): recent works generate multiple prototypes via the EM algorithm or K-means clustering [41,18] in order to extract more abundant information from support images. These prototype-based methods need to "compress" the support information into prototypes (class-wise or cluster-wise), which may lead to various degrees of loss of beneficial support information and thus harm segmentation of the query image. Rather than using prototypes to abstract the support information, [43,34] (Figure 1(c)) propose to employ the attention mechanism to extract information from support foreground pixels for segmenting the query.
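Class-wise mean pooling (Figure 1(a)) can be sketched as follows: a masked average of the support features forms a single prototype, which is compared to every query pixel by cosine similarity. This is a minimal numpy sketch with illustrative names and shapes.

```python
import numpy as np

def masked_average_prototype(support_feat, support_mask):
    """Class-wise mean pooling: average the support features inside the
    foreground mask to obtain a single class prototype.

    support_feat: (H, W, C) support feature map.
    support_mask: (H, W) binary foreground mask.
    """
    m = support_mask.astype(np.float64)[..., None]
    return (support_feat * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def cosine_score(query_feat, prototype):
    """Cosine similarity between each query pixel feature and the prototype."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return q @ p  # (H, W) similarity map

feat = np.zeros((2, 2, 3))
feat[0, :, 0] = 1.0                   # top-row pixels have feature [1, 0, 0]
mask = np.array([[1, 1], [0, 0]])     # top row is foreground
proto = masked_average_prototype(feat, mask)   # -> [1, 0, 0]
score = cosine_score(feat, proto)
```

The "compression" criticized in the text is visible here: however varied the masked support features are, the query is compared against a single averaged vector.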
However, such methods ignore all the background support pixels, which can also be beneficial for segmenting the query image, and incorrectly consider some foreground support pixels that are quite different from the query ones, leading to sub-optimal results. Many pixel-level support features are quite different from the query ones and may thus confuse the attention; we incorporate cycle consistency into the attention to filter out such confusing support features. Note that the confusing support features may come from both the foreground and the background.

In this paper, we focus on equipping each query pixel with relevant information from the support images to facilitate query pixel classification. Inspired by the transformer architecture [32], which performs feature aggregation through attention, we design a novel Cycle-Consistent Transformer (CyCTR) module (Figure 1(d)) to aggregate pixel-wise support features into query ones. Specifically, our CyCTR consists of two types of transformer blocks: the self-alignment block and the cross-alignment block. The self-alignment block encodes the query image features by aggregating their relevant context information, while the cross-alignment block aggregates the pixel-wise features of support images into the pixel-wise features of the query image. Different from self-alignment, where Query, Key, and Value come from the same embedding, cross-alignment takes features from query images as Query and those from support images as Key and Value. In this way, CyCTR provides abundant pixel-wise support information for the pixel-wise features of query images to make predictions.

Moreover, we observe that due to the differences between support and query images, e.g., scale, color, and scene, only a small proportion of support pixels can be beneficial for the segmentation of the query image. In other words, some pixel-level information in the support image may confuse the attention in the transformer.
Figure 2 provides a visual example of a support-query pair together with their label masks. The confusing support pixels may come from both foreground and background. For instance, point p1 in the support image lies on the plane far away and is indicated as foreground by the support mask. However, the nearest point p2 in the query image (i.e., p2 has the largest feature similarity with p1) belongs to a different category, i.e., background. That means there exists no query pixel that has both high similarity to and the same semantic label as p1. Thus, p1 is likely to be harmful for segmenting the "plane" and should be ignored when performing the attention. To overcome this issue, in CyCTR we equip the cross-alignment block with a novel cycle-consistent attention operation. Specifically, as shown in Figure 2, starting from the feature of one support pixel, we find its nearest neighbor among the query features. In turn, this nearest neighbor finds its most similar support feature. If the starting and ending support features come from the same category, a cycle-consistency relationship is established. We incorporate this operation into the attention so that query features only attend to cycle-consistent support features when extracting information. In this way, support pixels that are far away from the query ones are not considered. Meanwhile, cycle-consistent attention enables us to utilize the information from background support pixels more safely, without introducing much bias into the query features.

In a nutshell, our contributions are summarized as follows: (…)

2 Related Work

Methodology

In Table 1 and Table 2, we compare our method with other state-of-the-art few-shot segmentation approaches on Pascal-5^i and COCO-20^i, respectively.
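The two-step nearest-neighbor check behind the cycle-consistency relationship described above can be sketched as follows. This is an illustrative numpy sketch; the affinity matrix stands in for the query-support feature similarities.

```python
import numpy as np

def cycle_consistent(affinity, support_labels):
    """For each support token j, find its nearest query token i*, then
    i*'s nearest support token j*. Token j is cycle-consistent iff the
    labels of j and j* agree.

    affinity:       (HqWq, Ns) query-support similarity matrix.
    support_labels: (Ns,) binary support labels (1 = foreground).
    """
    i_star = affinity.argmax(axis=0)          # nearest query per support token
    j_star = affinity[i_star].argmax(axis=1)  # nearest support per those queries
    return support_labels == support_labels[j_star]

# Toy affinity between 2 query pixels and 3 support pixels. Support token 2
# is background but cycles back to foreground token 0, so it is filtered out.
A = np.array([[0.9, 0.1, 0.8],
              [0.2, 0.7, 0.3]])
labels = np.array([1, 0, 0])
consistent = cycle_consistent(A, labels)  # -> [True, True, False]
```

Tokens failing this check, like p1 in the plane example, are exactly the ones the attention is told to ignore.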
It can be seen that our approach achieves new state-of-the-art performance on both Pascal-5^i and COCO-20^i.

Table 3: Comparison with other methods using FB-IoU (%) on Pascal-5^i for 1-shot and 5-shot segmentation.

Method        Backbone  1-shot  5-shot
A-MCG [13]    Res-101   61.2    62.2
DAN [34]      Res-101   71.9    72.3
PFENet [30]   Res-101   72.9    73.5
CyCTR (Ours)  Res-101   73.0    75.4

Specifically, on Pascal-5^i, to make fair comparisons with other methods, we report results with both ResNet-50 and ResNet-101. Our CyCTR achieves 64.0% mIoU with the ResNet-50 backbone and 63.7% mIoU with the ResNet-101 backbone for 1-shot segmentation, significantly outperforming the previous state-of-the-art results by 3.2% and 3.6%, respectively. For 5-shot segmentation, our CyCTR surpasses state-of-the-art methods by 5.6% and 6.0% mIoU with the ResNet-50 and ResNet-101 backbones, respectively. For the COCO-20^i results in Table 2, our method also outperforms other methods by a large margin, owing to the capability of the transformer to fit more complex data. Besides, Table 3 shows the comparison using FB-IoU on Pascal-5^i for 1-shot and 5-shot segmentation; our method again obtains state-of-the-art performance. The results with different numbers of encoders (denoted L) and hidden dimensions (denoted d) are shown in Tables 5a and 5b. When increasing L or d within a certain range, CyCTR achieves better results. We chose L = 2 as our default for the accuracy-efficiency trade-off.

Dataset

We conduct experiments on two commonly used few-shot segmentation datasets, Pascal-5^i [10] (combined with the SBD [11] dataset) and COCO-20^i [17], to evaluate our method. For Pascal-5^i, 20 classes are separated into 4 splits. For each split, 15 classes are used for training and 5 classes for testing. At test time, 1,000 pairs belonging to the test classes are sampled from the validation set for evaluation.
In COCO-20^i, we follow the data split settings of FWB [23] to divide the 80 classes evenly into 4 splits, training on 60 classes and testing on the remaining 20, with 5,000 validation pairs sampled from the 20 test classes for evaluation. Detailed data split settings can be found in the supplementary materials. Following common practice [30,35,46], the mean intersection-over-union (mIoU) is adopted as the evaluation metric, i.e., the average IoU over all test classes. We also report the foreground-background IoU (FB-IoU) for comparison. In Table 6 and Table 7, we provide the detailed split settings for the datasets (Pascal-5^i and COCO-20^i) used in our experiments, which follow the split settings proposed in [23].

Conclusion

In this paper, we design a CyCTR module to deal with the few-shot segmentation problem. Different from previous practices that either adopt semantic-level prototype(s) from support images or only use foreground support features to encode query features, our CyCTR utilizes all pixel-level support features and, with the proposed novel cycle-consistent attention, effectively avoids aggregating confusing and harmful support features. We conduct extensive experiments on two popular benchmarks, and our CyCTR outperforms previous state-of-the-art methods by a significant margin. We hope this work can motivate researchers to utilize pixel-level support features and design more effective algorithms to advance few-shot segmentation research.

A More Details

The overall network architecture used in our experiments is shown in Figure 5. Following common practice [30,35,44], the query and support images are first fed into a shared backbone network to obtain general image features. Similar to [30], the backbone network is pretrained on ImageNet [25] and then kept completely fixed during few-shot segmentation training. Following [18,30,41], we use a dilated version of ResNet [12] as the backbone network.
Besides, the middle-level features are processed by a 1×1 convolution to reduce the hidden dimension, and the high-level features are used to generate a prior map that is concatenated with the middle-level features. In detail, the middle-level feature is the concatenation of features from the 3rd and 4th blocks of ResNet (5 blocks in total, including the stem block), with shape H × W × (512 + 1024), and is fed into a 1×1 convolution to reduce the dimension to H × W × d, where d is the hidden dimension that can be adjusted in our experiments. The high-level feature (from the 5th block of ResNet), with shape H × W × 2048, is used to generate the prior mask as in [30]: we compute the pixel-wise similarity between the query and support high-level features, keep the maximum similarity at each pixel, and normalize the similarity map to the range [0, 1] using min-max normalization. To enable the pixel-wise comparison, we also concatenate the mask-averaged support feature to both the query and support features and process them with a 1×1 convolution before inputting them into the transformer. The final segmentation result is obtained by reshaping the output sequence back to its spatial dimensions and predicting with a small convolutional head consisting of one 3×3 convolution, a ReLU activation, and a 1×1 convolution. Dice loss [21] is used as the training objective.

Baseline setup: For the baseline of our method, we use two residual blocks [12] to merge the query features. The support information comes from the concatenated support global feature and the prior map. During training, the foreground middle-level query feature from the backbone is averaged and concatenated with the middle-level support feature to predict the support mask for feature alignment. This auxiliary supervision is included in all of our experiments.

Experiments and Results

In Figure 4, we show some qualitative results generated by our model on Pascal-5^i.
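The prior-map computation described above can be sketched as follows. This is a minimal numpy sketch on flattened features; the shapes and names are illustrative.

```python
import numpy as np

def prior_map(query_feat, support_feat, support_mask):
    """For each query pixel, take the maximum cosine similarity to any
    masked support pixel, then min-max normalize the map to [0, 1].

    query_feat:   (Hq*Wq, C) flattened high-level query features.
    support_feat: (Hs*Ws, C) flattened high-level support features.
    support_mask: (Hs*Ws,) binary support mask.
    """
    q = query_feat / (np.linalg.norm(query_feat, axis=1, keepdims=True) + 1e-8)
    s = support_feat / (np.linalg.norm(support_feat, axis=1, keepdims=True) + 1e-8)
    sim = q @ s.T                                             # pixel-wise similarities
    sim = np.where(support_mask[None, :] == 1, sim, -np.inf)  # masked pixels only
    best = sim.max(axis=1)                                    # keep the maximum per pixel
    return (best - best.min()) / (best.max() - best.min() + 1e-8)

rng = np.random.default_rng(0)
prior = prior_map(rng.normal(size=(6, 4)),
                  rng.normal(size=(5, 4)),
                  np.array([1, 0, 1, 1, 0]))
```

The resulting map gives each query pixel a rough, training-free estimate of how likely it is to belong to the support class, which is why it is concatenated with the middle-level features.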
Our cycle-consistent attention can improve the segmentation quality by suppressing possibly harmful information from the support. For instance, without cycle consistency, the model misclassifies trousers as "cow" in the first row, the baby's hair as "cat" in the second row, and a fraction of the mountain as "car" in the third row, while our model rectifies these parts as background. However, in the first row, our CyCTR still segments part of the trousers as "cow", and the right boundary of the segmentation mask is slightly worse than that of the model without cycle consistency. The reason lies in the extreme difference between query and support: the support image shows cattle while the query image contains a milk cow, so the cycle consistency may over-suppress positive regions in the support image. Solving this issue is a potential direction for further improving our method.

Related Work

Few-shot segmentation [26] is established to perform segmentation with very few exemplars. Recent approaches formulate few-shot segmentation from the view of metric learning [29,7,35]. For instance, [7] first extends PrototypicalNet [28] to perform few-shot segmentation. PANet [35] simplifies the framework with an efficient prototype-learning scheme. SG-One [46] leverages the cosine similarity map between a single support prototype and the query features to guide the prediction. CANet [44] replaces the cosine similarity with an additive alignment module and iteratively refines the network output. PFENet [30] further designs an effective feature pyramid module and leverages a prior map to achieve better segmentation performance. Recently, [41,18,43] point out that a single support prototype is insufficient to represent a given category; they therefore obtain multiple prototypes via the EM algorithm to represent the support objects and compare the prototypes with the query image based on cosine similarity [18,41].
Besides, [43,34] attempt to use graph attention networks [33,40] to utilize all foreground support pixel features. However, they ignore all pixels in the background region by default, and, due to the large difference between support and query images, not all support pixels benefit the final query segmentation. Recently, some concurrent works propose to learn dense matching through Hypercorrelation Squeeze Networks [22] or to mine latent classes [42] from the background region. Our work likewise aims at mining information from the whole support image, but explores the transformer architecture and takes a different perspective, i.e., reducing the noise in the pixel-level support features.

Transformers and self-attention were first introduced in the fields of machine translation and natural language processing [6,32] and have recently received increasing interest in computer vision. Early works utilize self-attention as an additional module on top of existing convolutional networks, e.g., Non-local [36] and CCNet [14]. ViT [8] and its follow-up work [31] demonstrate that a pure transformer architecture can achieve state-of-the-art results for image recognition. On the other hand, DETR [3] builds an end-to-end framework with a transformer encoder-decoder on top of backbone networks for object detection, and its deformable variant [51] improves the performance and training efficiency. Besides, in natural language processing, several works [2,5,27] process long documents with sparse transformers, in which each Query token only attends to a pre-defined subset of Key positions.

Our work is partially inspired by cycle-consistency learning [50,9], which has been explored in various computer vision areas. For instance, in image translation, CycleGAN [50] uses cycle consistency to align image pairs. It is also effective for learning 3D correspondence [49], consistency between video frames [37], and associations between different domains [16].
These works typically construct a cycle-consistency loss between aligned targets (e.g., images). However, such a training loss cannot be directly applied to few-shot segmentation, because the test categories are unseen during training and no fine-tuning is involved during testing. In this work, we instead incorporate the idea of cycle consistency into the transformer to eliminate the negative effect of confusing or irrelevant support pixels.

3 Methodology

Following [30,35,44], episode training is adopted in this work for few-shot segmentation. Each episode is composed of k support images I_s and a query image I_q, forming a k-shot episode {{I_s}^k, I_q} in which all {I_s}^k and I_q contain objects of the same category. The training and test sets are then represented by D_train = {{I_s}^k, I_q}^{N_train} and D_test = {{I_s}^k, I_q}^{N_test}, where N_train and N_test are the numbers of episodes in the training and test sets. During training, both the support masks M_s and the query masks M_q are available; during testing, only the support masks are accessible.

Following the general form in [32], a transformer block is composed of alternating layers of multi-head attention (MHA) and a multi-layer perceptron (MLP). LayerNorm (LN) [1] and residual connections [12] are applied at the end of each block. Specifically, an attention layer is formulated as

\mathrm{Atten}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d}}\right)V,  (1)

where [Q; K; V] = [W_q Z_q; W_k Z_{kv}; W_v Z_{kv}], in which Z_q is the input Query sequence, Z_{kv} is the input Key/Value sequence, W_q, W_k, W_v ∈ R^{d×d} are learnable parameters, and d is the hidden dimension of the input sequences (we assume all sequences have the same dimension d by default).

For each Query element, the attention layer computes its similarities with all Key elements. The computed similarities are normalized via softmax and used to weight the Value elements, yielding the aggregated outputs.
When Z_q = Z_{kv}, it functions as a self-attention mechanism. The multi-head attention layer is an extension of the attention layer that performs h attention operations and concatenates the results:

\mathrm{MHA}(Q, K, V) = [\mathrm{head}_1, \ldots, \mathrm{head}_h],  (2)

where head_m = Atten(Q_m, K_m, V_m) and the inputs [Q_m, K_m, V_m] are the m-th group of [Q, K, V] with dimension d/h.

Our framework is illustrated in Figure 3(a). Generally, an encoder of our Cycle-Consistent Transformer (CyCTR) consists of a self-alignment transformer block for encoding the query features and a cross-alignment transformer block that enables the query features to attend to the informative support features. The whole CyCTR module stacks L encoders.

Figure 3: Framework of our proposed Cycle-Consistent TRansformer (CyCTR). Each encoder of CyCTR consists of two transformer blocks, i.e., the self-alignment block for utilizing global context within the query feature map and the cross-alignment block for aggregating information from support images. In the cross-alignment block, we introduce the multi-head cycle-consistent attention (shown on the right, with the number of heads h = 1 for simplicity). The attention operation is guided by the cycle consistency between query and support features.

Specifically, for a given query feature X_q ∈ R^{Hq×Wq×d} and support feature X_s ∈ R^{Hs×Ws×d}, we first flatten them into 1D sequences (with shape HW × d) as inputs for the transformer, in which a token is represented by the feature z ∈ R^d at one pixel location. The self-alignment block only takes the flattened query feature as input. As the context information of each pixel has been proven beneficial for segmentation [4,47], we apply the self-alignment block to the pixel-wise features of the query image to aggregate their global context. We do not pass the support images through the self-alignment block, as we mainly focus on the segmentation performance of the query images.
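Equations (1) and (2) can be sketched in numpy as follows. This is a minimal sketch using the row-vector convention Z W instead of the paper's W Z, which is equivalent up to transposition; the shapes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Eq. (1): scaled dot-product attention."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def multi_head_attention(Zq, Zkv, Wq, Wk, Wv, h):
    """Eq. (2): project, split the d dims into h groups of d/h,
    attend per head, and concatenate the head outputs."""
    Q, K, V = Zq @ Wq, Zkv @ Wk, Zkv @ Wv
    heads = [attention(q, k, v)
             for q, k, v in zip(np.split(Q, h, axis=-1),
                                np.split(K, h, axis=-1),
                                np.split(V, h, axis=-1))]
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
d, h = 8, 2
Zq = rng.normal(size=(5, d))    # 5 query tokens
Zkv = rng.normal(size=(7, d))   # 7 key/value tokens (e.g., support pixels)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = multi_head_attention(Zq, Zkv, Wq, Wk, Wv, h)  # shape (5, 8)
```

With Zq = Zkv this reduces to self-attention (the self-alignment block); with support tokens as Zkv it matches the cross-alignment block.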
Passing the support images, which do not coordinate with the query mask, through this block may harm the self-alignment of the query images. In contrast, the cross-alignment block performs attention between query and support pixel-wise features to aggregate relevant support features into the query ones. It takes the flattened query feature as the Query sequence and a subset of the support features (the sampling procedure is discussed later), of size N_s ≤ H_s W_s, as the Key/Value sequence Z_{kv}. With these two blocks, the query features are expected to be better encoded to facilitate the subsequent pixel-wise classification. When stacking L encoders, the output of the previous encoder is fed into the self-alignment block, and the outputs of the self-alignment block together with the sampled support features are then fed into the cross-alignment block.

According to the aforementioned discussion, pure pixel-level attention may be confused by excessive irrelevant support features. To alleviate this issue, as shown in Figure 3(b), we propose a cycle-consistent attention operation. We first present the approach for the 1-shot case for simplicity, and then discuss the multi-shot setting.

Formally, an affinity map A = QK^T / \sqrt{d}, A ∈ R^{H_q W_q × N_s}, is first calculated to measure the correspondence between all query and support pixels. Then, for an arbitrary support pixel/token j (j ∈ {0, 1, ..., N_s - 1}, where N_s is the number of sampled support pixels), its most similar query pixel/token i* is obtained by

i^* = \arg\max_i A_{(i,j)},  (3)

where i ∈ {0, 1, ..., H_q W_q - 1} denotes the spatial index of query pixels. Since the query mask is not accessible, the label of query pixel i* is unknown.
However, we can in turn find its most similar support pixel j* in the same way:

j^* = \arg\max_j A_{(i^*,j)}.  (4)

Given the sampled support labels M_s ∈ R^{N_s}, cycle consistency is satisfied if M_{s(j)} = M_{s(j^*)}.

Previous work [16] encourages feature similarity between cycle-consistent pixels to improve the model's generalization ability within the same set of categories. In few-shot segmentation, however, the goal is to enable the model to adapt quickly to novel categories rather than to fit the training categories better. Thus, we incorporate the cycle consistency into the attention operation itself. First, by traversing all support tokens, an additive bias B ∈ R^{N_s} is obtained by

B_j = 0 if M_{s(j)} = M_{s(j^*)},  and  B_j = -\infty if M_{s(j)} \neq M_{s(j^*)},

where j ∈ {0, 1, ..., N_s - 1}. Then, for a single query token Z_{q(i)} ∈ R^d at location i, the support information is aggregated by

\mathrm{CyCAtten}(Q_i, K, V) = \mathrm{softmax}(A_{(i)} + B)\,V,  (5)

where i ∈ {0, 1, ..., H_q W_q - 1} and A is obtained by QK^T / \sqrt{d}. In the forward pass, B is added element-wise to the affinity A_{(i)} of Z_{q(i)} before aggregating the support features. In this way, the attention weights of the cycle-inconsistent support features become zero, so this irrelevant information is not considered. Moreover, through backpropagation, the cycle-consistent attention implicitly encourages consistency between the most relevant query and support pixel-wise features. Note that our method aims at removing support pixels with certain inconsistency, rather than forcing all support pixels to form cycle consistency, which would be impossible without knowing the query ground-truth labels.

When performing self-attention in the self-alignment block, the same issue may also arise, i.e., a query token may attend to irrelevant or even harmful features (especially when the background is complex).
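Putting Eqs. (3)-(5) together, the cycle-consistent attention can be sketched as follows. This is an illustrative single-head numpy sketch; V is set to an identity matrix in the toy example so that the output rows expose the attention weights directly.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cyc_attention(Q, K, V, support_labels):
    """Eqs. (3)-(5): build the additive bias B from the two-step
    nearest-neighbor cycle, then attend only to cycle-consistent
    support tokens."""
    d = Q.shape[-1]
    A = Q @ K.T / np.sqrt(d)            # (HqWq, Ns) affinity map
    i_star = A.argmax(axis=0)           # Eq. (3): nearest query per support token
    j_star = A[i_star].argmax(axis=1)   # Eq. (4): nearest support token in turn
    B = np.where(support_labels == support_labels[j_star], 0.0, -np.inf)
    return softmax(A + B, axis=1) @ V   # Eq. (5): inconsistent weights -> 0

Q = np.array([[1.0, 0.0], [0.0, 1.0]])              # 2 query tokens
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]])  # 3 support tokens
labels = np.array([1, 0, 0])          # support token 2 breaks the cycle
out = cyc_attention(Q, K, np.eye(3), labels)
```

In the toy example, support token 2 cycles back to a token with a different label, so its attention weight is zeroed for every query token while the remaining weights still sum to one.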
According to our cycle-consistent attention, each query token should receive information from consistent pixels rather than aggregating from all pixels. Due to the lack of the query mask M_q, however, it is impossible to establish cycle consistency among query pixels/tokens. Inspired by Deformable Attention [51], the consistent pixels can instead be obtained in a learnable way as ∆ = f(Q + Coord) and A' = g(Q + Coord), where ∆ ∈ R^{H_q W_q × P} contains the predicted consistent pixels, in which each element δ ∈ R^P of ∆ represents a relative offset from each pixel and P is the number of pixels to aggregate; A' ∈ R^{H_q W_q × P} are the attention weights; Coord ∈ R^{H_q W_q × d} is the positional encoding [24] that makes the prediction aware of absolute position; and f(·) and g(·) are two fully connected layers that predict the offsets and attention weights. The self-attention within the self-alignment block is thus represented as

\mathrm{PredAtten}(Q_r, V_r) = \sum_{g=1}^{P} \mathrm{softmax}(A')_{(r,g)}\, V_{r+\Delta_{(r,g)}},  (6)

where r ∈ {0, 1, ..., H_q W_q - 1} indexes the flattened query feature, and both Q and V are obtained by multiplying the flattened query feature with learnable parameters.

Generally speaking, the cycle-consistent transformer effectively prevents the attention from being biased by irrelevant features, which benefits the training of few-shot segmentation.

Mask-guided sparse sampling and the K-shot setting: Our proposed cycle-consistent transformer can easily be extended to the K-shot setting with K > 1. When multiple support feature maps are provided, all support features are flattened and concatenated together as input. As the attention is performed at the pixel level, the computational load becomes high when the number of support pixels/tokens is large, which is usually the case under the K-shot setting. In this work, we apply a simple mask-guided sampling strategy to reduce the computational complexity and make our method more scalable.
Concretely, given the k-shot support sequence Z_s ∈ R^{k H_s W_s × d} and the flattened support masks M_s ∈ R^{k H_s W_s}, the support pixels/tokens are obtained by uniformly sampling N_fg tokens (N_fg ≤ N_s/2, where N_s ≤ k H_s W_s) from the foreground regions and N_s - N_fg tokens from the background regions of all support images. With a proper N_s, this sampling reduces the computational complexity and makes our algorithm more scalable as the spatial size of the support images grows. Additionally, this strategy helps balance the foreground-background ratio and implicitly accounts for the varying sizes of object regions in support images.

Following previous works [30,35,44], both query and support images are first fed into a shared backbone (e.g., ResNet [12]) initialized with ImageNet-pretrained [25] weights to obtain general image features. Similar to [30], the middle-level query features (the concatenation of query features from the 3rd and 4th blocks of ResNet) are processed by a 1×1 convolution to reduce the hidden dimension. The high-level query features (from the 5th block) are used to generate a prior map (computed from the pixel-wise similarity between query and support features; details can be found in the supplementary materials) and are then concatenated with the middle-level query features. The mask-averaged support feature is also concatenated to provide global support information, and the concatenated features are processed by a 1×1 convolution.

The resulting query features are then fed into our CyCTR encoders, whose output is passed to a classifier to obtain the final segmentation result. The classifier consists of a 3×3 convolutional layer, a ReLU layer, and a 1×1 convolutional layer. More details about our network structure can be found in the supplementary materials.
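The mask-guided sparse sampling can be sketched as follows. This is an illustrative numpy sketch: the uniform sampling and the N_fg ≤ N_s/2 budget follow the description above, while the function signature and tie-breaking details are ours.

```python
import numpy as np

def sample_support_tokens(tokens, mask, n_s, seed=None):
    """Uniformly sample up to n_s // 2 foreground tokens and fill the
    remaining budget from the background, keeping the pixel-level
    attention tractable and roughly class-balanced.

    tokens: (k*Hs*Ws, d) flattened support features.
    mask:   (k*Hs*Ws,) flattened binary support masks.
    n_s:    total number of support tokens to keep.
    """
    rng = np.random.default_rng(seed)
    fg = np.flatnonzero(mask == 1)
    bg = np.flatnonzero(mask == 0)
    n_fg = min(len(fg), n_s // 2)            # foreground budget (<= n_s / 2)
    n_bg = min(len(bg), n_s - n_fg)          # the rest goes to background
    idx = np.concatenate([rng.choice(fg, n_fg, replace=False),
                          rng.choice(bg, n_bg, replace=False)])
    return tokens[idx], mask[idx]

tokens = np.arange(20.0).reshape(10, 2)      # 10 toy support tokens
mask = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
sampled, sampled_mask = sample_support_tokens(tokens, mask, n_s=6, seed=0)
```

When the foreground is small, all of its tokens are kept and the background fills the remaining slots, which is what balances the foreground-background ratio in practice.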
In our experiments, the training strategies follow the settings in [30]: training for 50 epochs on COCO-20^i and 200 epochs on Pascal-5^i. Images are resized and cropped to 473 × 473 for both datasets, and we use random rotation from -10° to 10° as data augmentation. We use an ImageNet-pretrained [25] ResNet [12] as the backbone network, whose parameters (including BatchNorms) are frozen. For the parameters outside the transformer layers, we use an initial learning rate of 2.5 × 10^-3, momentum 0.9, weight decay 1 × 10^-4, and the SGD optimizer with a poly learning-rate decay [4]. The mini-batch size on each GPU is set to 4. Experiments are carried out on Tesla V100 GPUs: for Pascal-5^i, one model is trained on a single GPU, while for COCO-20^i, one model is trained with 4 GPUs.

We construct our baseline as follows: as stated in Section 3.4, the middle-level query features from the network are concatenated and merged with the global support feature and the prior map. This feature is processed by two residual blocks and fed into the same classifier as our method. Dice loss [21] is used as the training objective. Besides, the middle-level query feature is averaged using the ground truth and concatenated with the support feature to predict the support segmentation map, which produces an auxiliary loss for aligning features. The same settings are used in our method, except that we use our cycle-consistent transformer to process the features rather than the residual blocks. For the proposed cycle-consistent transformer, we set the number of sampled support tokens N_s to 600 for the 1-shot setting and 5 × 600 for the 5-shot setting. The number of sampled tokens is chosen according to the average number of foreground pixels in the Pascal-5^i training set. For the self-attention block, the number of points P is set to 9. For the other transformer hyper-parameters, we use L = 2 transformer encoders.
We set the hidden dimension of the MLP layer to 3×256 and that of the input to 256. The number of heads for all attention layers is set to 8 for Pascal-5^i and 1 for COCO-20^i. Parameters in the transformer blocks are optimized with the AdamW [20] optimizer following other transformer works [3,8,31], with learning rate 1 × 10^−4 and weight decay 1 × 10^−2. Besides, we use Dropout with probability 0.1 in all attention layers. To provide a deeper understanding of our proposed method, we present ablation studies in this section.
The experiments are performed on the Pascal-5^i 1-shot setting with ResNet-50 as the backbone network, and results are reported in terms of mIoU. We perform ablation studies regarding each component of our CyCTR in Table 4. The first line is the result of our baseline, where we use two residual blocks to merge features as stated in Section 4.2. For all ablations in Table 4, the hidden dimension is set to 128 and two transformer encoders are used.
The mIoU results are averaged over four splits. Firstly, we only use the self-alignment block, which only encodes query features. The support information in this case comes from the concatenated global support feature and the prior map used in [44]. This already brings decent results, showing that the transformer encoder is effective at modeling context for few-shot segmentation. Then, we utilize the cross-alignment block but only with the vanilla attention operation in Equation 1. The mIoU increases by 0.4%, indicating that pixel-level features from the support can provide an additional performance gain. By using our proposed cycle-consistent attention module, the performance is further improved by a large margin, i.e., 0.6% mIoU compared to vanilla attention. This result demonstrates our cycle-consistent attention's capability to suppress possibly harmful information from the support.
Besides, we assume some background support features may also benefit the query segmentation and therefore use the cycle-consistent transformer to aggregate pixel-level information from background support features as well. Comparing the last two lines in Table 4, we show that our way of utilizing beneficial background pixel-level support information brings a 0.5% mIoU improvement, validating our assumption and the effectiveness of the proposed cycle-consistent attention operation.
Besides, one may wonder whether the noise can also be removed by predicting the aggregation position, like the way in Equation 6 for aggregating support features to the query. Therefore, we use predicted aggregation instead of the cycle-consistent attention in the cross-alignment block, denoted by CyCTR(pred) in Table 4. It does benefit few-shot segmentation by aggregating useful information from the support, but is 0.9% worse than the proposed cycle-consistent attention. The reason lies in the dramatically changing support images encountered during few-shot segmentation testing. Cycle-consistency is better than the learnable alternative, as it can globally consider the varying conditional information from both query and support. We can stack more encoders or increase the hidden dimension of the encoders to increase capacity and validate the effectiveness of our CyCTR.
Test classes per split:
PASCAL-5^0: aeroplane, bicycle, bird, boat, bottle
PASCAL-5^1: bus, car, cat, chair, cow
PASCAL-5^2: diningtable, dog, horse, motorbike, person
PASCAL-5^3: potted plant, sheep, sofa, train, tv/monitor
We provide more visualizations in Figure 6. We also provide the visualization of cycle-consistency relationships. In the first row, only a small part of the foreground region is activated, while most foreground regions are valid in the second row. Also in the second row, pixels on the "person" are shown in gray, which indicates that these pixels may have a negative impact on segmenting the "cat".
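As a rough illustration of the cycle-consistency idea discussed here (this is our reading of the mechanism; the exact attention bias of Equation 6 is not reproduced), a support token is kept when walking from it to its most similar query token, and from that query token back to its most similar support token, lands on a support token with the same foreground/background label:

```python
import numpy as np

def cycle_consistent_tokens(q_feat, s_feat, s_label):
    """Toy cycle-consistency check between query and support tokens.

    q_feat: (Nq, d) query tokens; s_feat: (Ns, d) support tokens;
    s_label: (Ns,) binary fg/bg labels of the support tokens.
    Returns a boolean mask over support tokens: True = cycle-consistent.
    """
    A = q_feat @ s_feat.T                 # (Nq, Ns) affinity
    i_star = A.argmax(axis=0)             # most similar query token of each support token
    j_star = A[i_star].argmax(axis=1)     # most similar support token of each i*
    return s_label == s_label[j_star]     # keep tokens whose labels agree
```

Inconsistent tokens (e.g., background clutter that resembles the query foreground) would then be suppressed in the cross-alignment attention rather than aggregated.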
(Figure: qualitative comparison between "Ours without Cycle-consistency" and "Ours (Cycle-consistency)".)

SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
Xiaolin Zhang, Yunchao Wei, Yi Yang, Thomas Huang (2020, arXiv:1810.09091)

Introduction
Object Semantic Segmentation (OSS) aims at predicting the class label of each pixel. Deep neural networks have achieved tremendous success on OSS tasks, such as U-Net [1], FCN [2] and Mask R-CNN [3]. However, these algorithms, trained with full annotations, require substantial investment in expensive labeling. To reduce the budget, a promising alternative is to apply weak annotations for learning a decent segmentation network. For example, previous works have used image-level labels [4]-[6], scribbles [7]-[9], bounding boxes [10], [11] and points [12]-[14] as cheaper supervision. The main disadvantage of these weakly supervised methods, however, is their inability to generalize the learned models to unseen classes. For instance, if a network is trained to segment dogs using thousands of images containing various breeds of dogs, it will not be able to segment bikes without retraining the network using many images containing bikes.
In contrast, humans are very good at recognizing things from little guidance.
Fig. 1. An overview of the proposed SG-One approach for testing a new class. Given a query image of an unseen category, e.g., cow, its semantic object is precisely segmented with the reference to only one annotated example of this category.
For instance, it is very easy for a child to recognize various breeds of dogs with the reference to only one picture of a dog. Inspired by this, one-shot learning is dedicated to imitating this powerful ability of human beings. In other words, one-shot learning aims to recognize new objects according to only one annotated example. This is a great challenge for the standard learning methodology.
Instead of using tremendous numbers of annotated instances to learn the characteristic patterns of a specific category, our target is to learn one-shot networks that generalize to unseen classes with only one densely annotated example.
Concretely, one-shot image segmentation is to discover the object pixels of a query image with the reference to only one support image. The target objects in the support image are densely annotated. Existing methods [15], [16] are all based on the Siamese framework [17]. Briefly, a pair of parallel networks is trained for extracting the features of labeled support images and query images separately. These features are then fused to generate probability maps of the target objects. The purpose of the network is actually to learn the relationship between the annotated support image and the query image within the high-level feature space. An advantage of these methods is that parameters trained on observed classes can be directly utilized for testing unseen classes without finetuning. Nevertheless, these methods have some weaknesses: 1) the parameters of the two parallel networks are redundant, which is prone to overfitting and wastes computational resources; 2) combining the features of support and query images by mere multiplication is inadequate for guiding the query network to learn high-quality segmentation masks.
To overcome the above-mentioned weaknesses, we propose a Similarity Guidance Network for One-Shot Semantic Segmentation (SG-One) in this paper. The fundamental idea of SG-One is to guide the segmentation process by effectively incorporating the pixel-wise similarities between the features of support objects and query images. Particularly, we first extract the high-level feature maps of the input support and query images. High-level features are usually abstract, and the embeddings of pixels belonging to objects of the same category are close.
The embeddings of background pixels are usually suppressed, and these embeddings are distant from the object embeddings. Therefore, we propose to obtain representative vectors from support images by applying the masked average pooling operation, which extracts object-related features while excluding the influence of background noise. Then, we get the guidance maps by calculating cosine similarities between the representative vectors and the features of query images at each pixel. The feature vectors corresponding to object pixels in query images are close to the representative vectors extracted from support images, so the scores in the guidance maps will be high; otherwise, the scores will be low if the pixels belong to the background. The generated guidance maps supply guidance information about the desired regions to the segmentation process. In detail, the position-wise feature vectors of query images are multiplied by the corresponding similarity values. Such a strategy effectively contributes to activating the target object regions of query images following the guidance of support images and their masks. Furthermore, we adopt a unified network for producing similarity guidance and predicting segmentation masks of query images. Such a network is more capable of generalizing to unseen classes than the previous methods [15], [16].
Our approach offers multiple appealing advantages over the previous state-of-the-art methods, e.g., OSLSM [15] and co-FCN [16]. First, OSLSM and co-FCN incorporate the segmentation masks of support images by changing the input structure of the network or the statistical distribution of input images. In contrast, we extract the representative vector from the intermediate feature maps with the masked average pooling operation instead of changing the inputs.
Our approach harms neither the input structure of the network nor the statistics of the input data. Averaging only the object regions avoids influence from the background; otherwise, when background pixels dominate, the learned features will be biased towards the background contents. Second, OSLSM and co-FCN directly multiply the representative vector with the feature maps of query images for predicting the segmentation masks. SG-One calculates the similarities between the representative vector and the features at each pixel of query images, and the similarity maps are employed to guide the segmentation branch in finding the target object regions. Our method is superior in the process of segmenting the query images. Third, both OSLSM and co-FCN adopt a pair of VGGnet-16 networks for processing support and query images separately. We employ a unified network to process them simultaneously. The unified network uses far fewer parameters, reducing the computational burden and increasing the ability to generalize to new classes in testing.
The overview of SG-One is illustrated in Figure 1. We apply two branches, i.e., a similarity guidance branch and a segmentation branch, to produce the guidance maps and segmentation masks. We forward both the support and query images through the guidance branch to calculate the similarity maps. The features of query images also pass through the segmentation branch for predicting segmentation masks. The similarity maps act as guidance attention maps where the target regions have higher scores while the background regions have lower scores. The segmentation process is guided by the similarity maps towards the precise target regions. After the training phase, the SG-One network can predict the segmentation masks of a new class without changing the parameters.
For example, a query image of an unseen class, e.g., cow, is processed to discover the pixels belonging to the cow with only one annotated support image provided.
To sum up, our main contributions are three-fold:
• We propose to produce robust object-related representative vectors using masked average pooling for incorporating contextual information without changing the input structure of networks.
• We produce pixel-wise guidance using cosine similarities between representative vectors and query features for predicting the segmentation masks.
• We propose a unified network for processing support and query images. Our network achieves a cross-validated mIoU of 46.3% on the PASCAL-5^i dataset in the one-shot segmentation setting, surpassing the baseline methods.

Related Work
Object semantic segmentation (OSS) aims at classifying every pixel in a given image to distinguish different objects or contents. OSS with dense annotations as supervision has achieved great success in precisely identifying various kinds of objects [18]-[25]. Recently, most works with impressive performance are based on deep convolutional networks. FCN [2] and U-Net [1] abandon fully connected layers and propose to use only convolutional layers, preserving the relative positions of pixels. Building on the advantages of FCN, DeepLab, proposed by Chen et al. [26], [27], is one of the best algorithms for segmentation. It employs dilated convolution operations to increase the receptive field while saving parameters in comparison with large-kernel methods. He et al. [3] propose that segmentation masks and detection bounding boxes can be predicted simultaneously using a unified network.
Weakly supervised object segmentation seeks an alternative approach to reduce the expense of labeling segmentation masks [14], [28]-[32]. Zhou [33] and Zhang [34], [35] propose to discover precise object regions using a classification network with only image-level labels.
Wei et al. [4], [5] infer object pixels according to the similarity of adjacent pixels and ground-truth scribble lines.
Video object segmentation is also a challenging problem: segmenting a specific object in a video clip given merely the annotation of the first frame [36]. The categories of the training and testing sets are disjoint, which makes the task similar to our one-shot image segmentation. OSVOS [37] adopts a direct approach of learning a segmentation network on the training set and then finetuning the trained network on the augmented first frames of the testing sets. Although OSVOS achieves good segmentation performance, its main drawback is the latency of finetuning when testing on a new video clip. PLM [38] applies a more sophisticated network to learn better feature embeddings by involving intermediate feature maps of both search and query frames. A simple cropping method is also applied, estimating the approximate location of target objects according to the relationship between successive frames. SegFlow [39] leverages the optical flow of moving objects to assist the segmentation process. FlowNet [40] is internally embedded in the SegFlow framework and updated end-to-end. Consequently, the segmentation network and flow network can benefit from each other to learn better segmentation masks as well as optical flows. VideoMatch [41] learns representations of both foreground and background regions, because successive video clips usually maintain similar or static background environments. The learned robust representations can therefore be easily applied to retrieve the target object regions of query images.
Few-shot learning algorithms are dedicated to distinguishing the patterns of classes or objects with only a few labeled samples [42], [43]. Networks should generalize to recognize new objects from few images based on the parameters of base models. The base models are trained using entirely different classes without any overlap with the testing classes. Finn et al.
[44] try to learn internal transferable representations that are broadly applicable to various tasks. Vinyals et al. [45] and Annadani et al. [46] propose to learn embedding vectors: vectors of the same category are close, while vectors of different categories are far apart.

Methodology
Suppose we have three datasets: a training set L_train = {(I_i, Y_i)}_{i=1}^{N_train}, a support set L_support = {(I_i, Y_i)}_{i=1}^{N_support} and a testing set L_test = {I_i}_{i=1}^{N_test}, where I_i is an image, Y_i is the corresponding segmentation mask and N is the number of images in each set. Both the support set and the training set have annotated segmentation masks. The support set and testing set share the same object types, which are disjoint from the training set. We denote l ∈ Y as the semantic class of the mask Y. Therefore, we have {l_train} ∩ {l_support} = ∅, where ∩ denotes the intersection of the two sets. If there are K annotated images for each class, the target few-shot problem is named K-shot. Our purpose is to train a network on the training set L_train which can precisely predict segmentation masks Ŷ_test on the testing set L_test with the reference of the support set L_support. Specifically, the predicted masks contain two classes, i.e., object and background. If the objects in a query image share the same category label as the annotated objects in the support image, the corresponding values in the predicted mask are supposed to be 1, indicating object pixels; otherwise, the corresponding values should be 0, indicating background pixels.
In order to better learn the connection between the support and testing sets, we mimic this mechanism in the training process. For a query image I_i, we construct a pair {(I_i, Y_i), (I_j, Y_j)} by randomly selecting a support image I_j whose mask Y_j has the same semantic class as Y_i.
We estimate the segmentation mask Ŷ_i with a function Ŷ_i = f_θ((I_j, Y_j), I_i), where θ denotes the parameters of the function. In testing, (I_j, Y_j) is picked from the support set L_support and I_i is an image from the testing set L_test. In this section, we first present the masked average pooling operation for extracting the object-related representative vector of annotated support images. Then, the similarity guidance method is introduced for combining the representative vectors with the features of query images. The generated similarity guidance maps supply the information for precisely predicting the segmentation masks.
Masked Average Pooling
The pairs of support images and their masks are usually encoded into representative vectors. OSLSM [15] proposes to erase the background pixels from the support images by multiplying the binary masks with the support images. co-FCN [16] proposes to construct an input block of five channels by concatenating the support images with their positive and negative masks. However, both methods have disadvantages. First, erasing the background pixels to zeros changes the statistical distribution of the support image set; if we apply a unified network to process both the query images and the erased support images, the variance of the input data will greatly increase. Second, concatenating the support images with their masks [16] breaks the input structure of the network, which also prevents the use of a unified network.
We propose to employ masked average pooling for extracting the representative vectors of support objects. Suppose we have a support RGB image I ∈ R^(3×w×h) and its binary segmentation mask Y ∈ {0, 1}^(w×h), where w and h are the width and height of the image. Let the output feature maps of I be F ∈ R^(c×w'×h'), where c is the number of channels and w' and h' are the width and height of the feature maps.
We first resize the feature maps to the same size as the mask Y via bilinear interpolation and denote the resized feature maps as F' ∈ R^(c×w×h). Then, the i-th element v_i of the representative vector v is computed by averaging the pixels within the object regions on the i-th feature map:
v_i = ( Σ_{x=1..w, y=1..h} Y_{x,y} · F'_{i,x,y} ) / ( Σ_{x=1..w, y=1..h} Y_{x,y} ),  (1)
As discussed in FCN [2], fully convolutional networks are able to preserve the relative positions of input pixels. Therefore, through masked average pooling, we expect to extract the features of object regions while disregarding the background contents. Also, we argue that including contextual regions in the input helps learn better object features; this point has been discussed in DeepLab [26], which incorporates contextual information using dilated convolutions. Masked average pooling keeps the input structure of the network unchanged, which enables us to process both the support and query images within a unified network.
Similarity Guidance
One-shot semantic segmentation aims to segment the target object within query images given a support image of the reference object. As discussed, the masked average pooling method is employed to extract the representative vector v = (v_1, v_2, ..., v_c) of the reference object, where c is the number of channels. Suppose the feature maps of a query image I_que are F^que ∈ R^(c×w'×h'). We employ the cosine distance to measure the similarity between the representative vector v and each pixel of F^que following Eq. (2):
s_{x,y} = ( v · F^que_{x,y} ) / ( ||v||_2 · ||F^que_{x,y}||_2 ),  (2)
where s_{x,y} ∈ [−1, 1] is the similarity value at pixel (x, y) and F^que_{x,y} ∈ R^(c×1) is the feature vector of the query image at pixel (x, y). As a result, the similarity map S integrates the features of the support object and the query image. We use the map S = {s_{x,y}} as guidance to teach the segmentation branch to discover the desired object regions.
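Equations (1) and (2) can be sketched directly (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Eq. (1): average the resized support feature maps over the fg mask.

    feat: (c, h, w) resized support features F'; mask: (h, w) binary Y.
    Returns the representative vector v of shape (c,).
    """
    return (feat * mask).sum(axis=(1, 2)) / mask.sum()

def similarity_guidance(v, q_feat, eps=1e-8):
    """Eq. (2): cosine similarity between v and each query pixel feature.

    q_feat: (c, h, w) query features F^que; returns s in [-1, 1], shape (h, w).
    """
    num = np.tensordot(v, q_feat, axes=(0, 0))                     # (h, w)
    den = np.linalg.norm(v) * np.linalg.norm(q_feat, axis=0) + eps
    return num / den
```

Query pixels whose features point in the same direction as the support object's representative vector score near 1; background pixels score low or negative.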
We do not explicitly optimize the cosine similarity. In particular, we multiply the similarity guidance map element-wise with the feature maps of query images from the segmentation branch. Then, we optimize the guided feature maps to fit the corresponding ground-truth masks.
The Similarity Guidance Branch is fed the extracted features of both query and support images. We apply this branch to produce the similarity guidance maps by combining the features of reference objects with the features of query images. For the features of support images, we implement three convolutional blocks to extract highly abstract and semantic features, followed by a masked average pooling layer to obtain representative vectors. The extracted representative vectors of support images are expected to contain the high-level semantic features of a specific object. For the features of query images, we reuse the three blocks and employ the cosine similarity layer to calculate the closeness between the representative vector and the features at each pixel of the query images.
The Segmentation Branch is for discovering the target object regions of query images with the guidance of the generated similarity maps. We employ three convolutional layers with a kernel size of 3×3 to obtain the features for segmentation. The inputs of the last two convolutional layers are concatenated with the parallel feature maps from the Similarity Guidance Branch. Through this concatenation, the Segmentation Branch can borrow features from the parallel branch, and the two branches can communicate information during the forward and backward stages. We fuse the generated features with the similarity guidance maps by multiplication at each pixel. Finally, the fused features are processed by two convolutional layers with kernel sizes of 3 × 3 and 1 × 1, followed by a bilinear interpolation layer.
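The per-pixel fusion step described above (multiplying each position-wise feature vector by its similarity score) amounts to a simple broadcast multiplication (a sketch; the function name is ours):

```python
import numpy as np

def fuse_with_guidance(seg_feat, sim_map):
    """Scale each spatial position of the segmentation-branch features by
    its similarity score, so guided regions dominate the prediction.

    seg_feat: (c, h, w) segmentation-branch features; sim_map: (h, w).
    """
    return seg_feat * sim_map[None, :, :]
```

Positions with high similarity keep their feature magnitude, while low- or negative-similarity positions are damped or flipped, steering the classifier towards the guided regions.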
The network finally classifies each pixel as either the same class as the support images or background.

Methods         one-shot  five-shot
FG-BG [16]      55.1      55.6
OSLSM [15]      55.2      –
co-FCN [16]     60.1      60.8
PL+SEG [51]     61.2      62.3
SG-One (Ours)   63.1      65.9

To summarize, SG-One can effectively predict segmentation masks for new classes without changing the parameters. Our similarity guidance method is better than the baseline methods at incorporating the support objects for segmenting unseen objects.
Figure 3 shows the one-shot segmentation results of SG-One on unseen classes. We observe that SG-One can precisely distinguish the object regions from the background with the guidance of the support images, even if some support images and query images do not share much appearance similarity. We also show some failure cases to benefit future research. We ascribe the failures to: 1) the target object regions are too similar to background pixels, e.g., the side of the bus and the car; 2) the target region has very uncommon features compared with the discovered discriminative regions, e.g., the vest of the dog, which may be far distant from the representative feature of the support objects. Figure 5 illustrates the similarity maps of cosine distance between the support objects and the query images. We try to segment the objects of the query images in the second row corresponding to the annotated objects of the support images in the first row. Note that there exist distracting classes in the given query images; we only expect to segment the objects whose categories are consistent with the support images. With the reference to the extracted features of support objects, the corresponding regions in the query images are highlighted while the distracting regions and the background are suppressed. The masks can be precisely predicted with the guidance of the similarity maps.
Five-shot. Table III presents the five-shot segmentation results on the four divisions.
As we have discussed, we apply two approaches to five-shot semantic segmentation. The approach of averaging the representative vectors from the five support images achieves 47.1%, which significantly outperforms the current state-of-the-art, co-FCN, by 5.7%. This result is also better than the corresponding one-shot mIoU of 46.3%. Therefore, the averaged support vector better expresses the features guiding the segmentation process. The other approach is to fuse only the final segmentation results by combining all of the detected object pixels; we do not observe any improvement from this approach compared to the one-shot result. It is notable that we do not specifically train a new network for five-shot segmentation: the network trained in a one-shot manner is directly applied to predict the five-shot segmentation results. Figure 4 compares the predicted segmentation masks of one-shot and five-shot. The segmentation masks of five-shot are slightly better than those from one-shot prediction. (Footnote 1: For the details of the baseline methods, e.g., 1-NN, LogReg and Siamese, refer to OSLSM [15]. The results for co-FCN [16] are from our re-implemented version. Table IV reports the evaluation results regarding the same metric adopted in [16] for a fairer comparison.) As we have also observed, five-shot testing improves the mIoU by only 0.8, a marginal growth. We think the reason for this phenomenon is that the high-level features of different objects sharing the same class label are very close. Hence, averaging these features from different objects improves feature expressiveness only slightly, which is why the five-shot gain is limited. On the other side, the target of our similarity learning is exactly to produce aligned features for each category.
So, the five-shot results can only improve a little under the current one-shot segmentation settings.
For a fairer comparison, we also evaluate the proposed model with the same metric as co-FCN [16] and PL+SEG [51]. This metric first calculates the IoU of the foreground and the background, and then takes the mean IoU of the foreground and background pixels. We still report the averaged mIoU over the four cross-validation datasets. Table IV compares SG-One with the baseline methods under this metric for one-shot and five-shot semantic segmentation. The proposed approach clearly outperforms all previous baselines: SG-One achieves 63.1% in one-shot and 65.9% in five-shot segmentation, while the most competitive baseline, PL+SEG, obtains only 61.2% and 62.3%. The proposed network is trained end-to-end, and our results do not require any pre-processing or post-processing steps.

Dataset
Test classes per split:
PASCAL-5^0: aeroplane, bicycle, bird, boat, bottle
PASCAL-5^1: bus, car, cat, chair, cow
PASCAL-5^2: diningtable, dog, horse, motorbike, person
PASCAL-5^3: potted plant, sheep, sofa, train, tv/monitor
We employ the cross-entropy loss function to optimize the network in an end-to-end manner.
One-Shot Testing. One annotated support image for each unseen category is provided as guidance to segment the target semantic objects of query images. We do not need to finetune or change any parameters of the entire network; we only need to forward the query and support images through the network to generate the expected segmentation masks.
K-Shot Testing. Suppose there are K (K > 1) support images I^i_sup, i ∈ {1, 2, ..., K}, for each new category. We propose to segment the query image I_que using two approaches. The first is to ensemble the segmentation masks corresponding to the K support images, following OSLSM [15] and co-FCN [16], based on Eq.
(3):
Ŷ_{x,y} = max(Ŷ^1_{x,y}, Ŷ^2_{x,y}, ..., Ŷ^K_{x,y}),  (3)
where Ŷ^i_{x,y}, i ∈ {1, 2, ..., K}, is the predicted semantic label of the pixel at (x, y) corresponding to the support image I^i_sup. The other approach is to average the K representative vectors and then use the averaged vector to guide the segmentation process. Notably, we do not need to retrain the network with K-shot support images: we use the network trained in a one-shot manner to test segmentation performance with K-shot support images. Following the evaluation protocol of the previous methods OSLSM [15] and co-FCN [16], we create PASCAL-5^i from the PASCAL VOC 2012 dataset [47] and the extended SDS dataset [48]. For the 20 object categories in PASCAL VOC, we use cross-validation to evaluate the proposed model, sampling five classes as test categories L_test = {5i+1, ..., 5i+5} as in Table I, where i is the fold number, while the remaining 15 classes form the training label set L_train. We follow the same procedure as the baseline methods, e.g., OSLSM [15], to build the training and testing sets. Particularly, we randomly sample image pairs from the training set; each image pair has one common category label. One image is fed into the network as a support image accompanied by its annotated mask; the other image is treated as a query image, and its mask is used to calculate the loss. For a fair comparison, we use the same test set as OSLSM [15], which has 1,000 support-query tuples for each fold.
Suppose the predicted segmentation masks are {M̂_i}_{i=1}^{N_test} and the corresponding ground-truth annotations are {M_i}_{i=1}^{N_test}, for a specific class l. We define the Intersection over Union of class l as IoU_l = TP_l / (TP_l + FP_l + FN_l), where TP, FP and FN are the numbers of true positives, false positives and false negatives of the predicted masks. The mIoU is the average of the IoUs over the different classes, i.e., (1/n_l) Σ_l IoU_l, where n_l is the number of testing classes.
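Eq. (3) and the mIoU metric defined above can be sketched as follows (function names are ours; the mIoU helper assumes every evaluated class appears at least once, so the denominator is nonzero):

```python
import numpy as np

def kshot_ensemble(masks):
    """Eq. (3): pixel-wise max over the K predicted masks Y-hat^1..Y-hat^K."""
    return np.max(np.stack(masks), axis=0)

def miou(preds, gts, classes):
    """mIoU = mean over classes l of TP_l / (TP_l + FP_l + FN_l),
    with counts accumulated over all predicted/ground-truth mask pairs."""
    ious = []
    for l in classes:
        tp = fp = fn = 0
        for p, g in zip(preds, gts):
            tp += np.sum((p == l) & (g == l))
            fp += np.sum((p == l) & (g != l))
            fn += np.sum((p != l) & (g == l))
        ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))
```

For binary masks, the pixel-wise max marks a pixel as foreground if any of the K support images predicts it as foreground, matching the ensembling described in the text.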
We report the averaged mIoU over the four cross-validation datasets.

Conclusion
We have presented SG-One, which can effectively segment semantic pixels of unseen categories using only one annotated example. We abandon the previous strategy [15], [16] and propose the masked average pooling approach to extract more robust object-related representative features. Extensive experiments show that masked average pooling is more convenient and better able to incorporate contextual information for learning representative vectors. We also reduce the risk of overfitting by avoiding extra parameters through a unified network. We show that a network well trained on images of a single class can be directly applied to segment multi-class images. We present a purely end-to-end network which does not require any pre-processing or post-processing steps. More importantly, SG-One boosts the performance of one-shot semantic segmentation and surpasses the baseline methods. Finally, we analyze the relationship between one-shot video segmentation and our one-shot image semantic segmentation problem. The experiments show the superiority of the proposed SG-One in segmenting video objects under fair comparison conditions. Code has been made available. We hope our simple and effective SG-One can serve as a solid baseline and help ease future research on one/few-shot segmentation. Two problems remain, which we will try to solve in the future. First, due to the challenging settings of the one-shot segmentation problem, the latent distributions of the training classes and testing classes do not align, which prevents us from obtaining better features for input images.
Second, the predicted masks sometimes cover only part of the target regions and may include background noise if the target object is too similar to the background.

Implementation Details

We implement the proposed approach based on the VGG-16 network, following the previous works [15], [16]. The Stem takes RGB images as input to extract middle-level features and downsamples the images by a factor of 8; we use the first three blocks of VGG-16 as the Stem. For the first two convolutional blocks of the Similarity Guidance Branch, we adopt the structure of conv4 and conv5 of VGG-16 and remove the max-pooling layers to maintain the resolution of the feature maps. One conv3×3 layer of 512 channels is added on top, without a ReLU after it. The following module is masked average pooling, which extracts the representative vector of support images. In the Segmentation Branch, all convolutional layers with 3×3 kernels have 128 channels. The last conv1×1 layer has two channels, corresponding to the object and background categories. All convolutional layers except the third and the last one are followed by a ReLU layer; we justify this choice in Section IV-E.
Following the baseline methods [15], [16], we use weights pretrained on ILSVRC [49]. All input images remain at their original sizes without any data augmentation. The support and query images are fed into the network simultaneously; the difference is that the support image only goes through the guidance branch to obtain the representative vector, whereas the query image goes through both the guidance branch, for calculating the guidance maps, and the segmentation branch, for predicting the segmentation masks. We implement the network in PyTorch [50]. We train with a learning rate of 1e-5, a batch size of 1, and a weight decay of 0.0005, using the SGD optimizer with a momentum of 0.9.
All networks are trained and tested on NVIDIA TITAN X GPUs with 12 GB of memory. Our source code is available at https://github.com/xiaomengyc/SG-One.

One-shot. Table II compares the proposed SG-One approach with the baseline methods on one-shot semantic segmentation. Our method outperforms all baseline models. The mIoU of our approach over the four divisions reaches 46.3%, which is significantly better than co-FCN by 5.2% and OSLSM by 5.5%. Compared to the baselines, SG-One earns the largest gain of 7.8% on PASCAL-5^1, where the testing classes are bus, car, cat, chair and cow. co-FCN [16] constructs the input of the support network by concatenating the support images with the positive and negative masks, and it obtains 41.1%. OSLSM [15] feeds only the object pixels as input by masking out the background regions, and obtains 40.8%.
OSVOS [37] adopts a strategy of finetuning the network on the support samples at test time, and it achieves only 32.6%. We also conduct experiments to verify the ability of SG-One to segment images containing multiple classes. We randomly select 1,000 entries of query and support images; query images may contain objects of multiple classes. For each entry, we sample five annotated images from the five testing classes as support images. For every query image, we predict segmentation masks with the images of the different support classes and fuse the masks of the five classes by comparing the classification scores. The mIoU over the four datasets is 29.4%. We run the same experiment with the co-FCN algorithm [16], which obtains an mIoU of only 11.5%. SG-One is therefore much more robust in dealing with multi-class images, although it is trained with images of a single class.

Masked Average Pooling. The masked average pooling method employed in the proposed SG-One network is superior in incorporating the guidance masks of support images. Shaban et al.
[15] proposed to multiply the binary masks with the input support RGB images, so that the network extracts features of the target objects only. co-FCN [16], proposed by Rakelly et al., concatenates the support RGB images with the corresponding positive masks, i.e., object pixels are 1 while background pixels are 0, and negative binary masks, i.e., object pixels are 0 and background pixels are 1, constituting inputs of 5 channels. We follow these two methods and compare them with our masked average pooling approach. Concretely, we first replace the masked average pooling layer with a global average pooling layer. Then, we implement two networks. 1) SG-One-masking adopts the method of OSLSM [15], in which support images are multiplied by the binary masks to keep only the object regions. 2) SG-One-concat adopts the method of co-FCN [16], in which we concatenate the positive and negative masks with the support images, forming an input with 5 channels. We add an extra input block (VGG-16) with 5 input channels to adapt to the concatenated inputs, while the rest of the network is exactly the same as the compared networks. Table V compares the performance of the different methods of processing support images and masks. Our masked average pooling approach achieves the best results on every dataset, with an mIoU over the four datasets of 46.3%. The masking method (SG-One-masking) of OSLSM [15] obtains 45.0% mIoU. The approach of co-FCN (SG-One-concat) obtains only 41.75%, which we ascribe to the modification of the input structure of the network: the modified input block cannot benefit from the pretrained weights for processing low-level information. We also implement a network using a general GAP layer to extract representative vectors instead of using the binary masks of the support images. The network under this setting achieves an mIoU of 42.2%, inferior to the proposed MAP method.
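The three ways of encoding the support mask compared above can be sketched in NumPy. This is an illustrative shape-level sketch, not the authors' code; the variable names are hypothetical, and the point is only that masking changes the input statistics, concatenation changes the input channel count, while MAP leaves the 3-channel input untouched so a pretrained stem remains reusable.

```python
import numpy as np

H, W = 8, 8
img = np.random.rand(3, H, W)                    # support RGB image
mask = np.zeros((H, W)); mask[2:6, 2:6] = 1.0    # binary object mask

# 1) SG-One-masking (OSLSM-style): zero out background pixels of the input.
#    Still 3 channels, but the input statistics are changed.
masked_input = img * mask

# 2) SG-One-concat (co-FCN-style): stack positive and negative masks,
#    forming a 5-channel input that breaks the pretrained input structure.
concat_input = np.concatenate([img, mask[None], (1 - mask)[None]], axis=0)

# 3) Masked average pooling (ours): leave the input intact; the mask is
#    applied later, on the feature maps, not on the image.
plain_input = img
```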
It is thus necessary to mask out the pixels of background regions to obtain better representative vectors. In total, we conclude that 1) a qualified method of using support masks is crucial for extracting high-quality object features; 2) the proposed masked average pooling method provides a superior way to reuse the structure of a well-designed classification network for extracting object features of support pairs; 3) networks with 5-channel inputs cannot benefit from the pretrained weights, and the extra input block cannot be jointly trained with the query images; 4) the masked average pooling layer generalizes better when segmenting unseen classes.

Guidance Similarity Generating Methods. We adopt the cosine similarity to calculate the distance between the object feature vector and the feature maps of query images. The cosine distance measures the angle between two vectors, and its range is [-1, 1]. Correspondingly, we remove the ReLU layers after the third convolutional layers of both the guidance and segmentation branches. By doing so, we increase the variance of the cosine measurement: the cosine similarity is no longer bounded in [0, 1] but spans [-1, 1]. For comparison, we add the ReLU layers back after the third convolutional layers; the mIoU over the four datasets drops to 45.5%, compared to 46.3% for the non-ReLU approach.
We also train a network using the ℓ2 distance as the guidance, and obtain 30.7% over the four datasets. This result is far poorer than that of the proposed cosine similarity method; hence, the ℓ2 distance is not a good choice for guiding the query images to discover target object regions.

The Unified Structure. We adopt the proposed unified structure between the guidance and segmentation branches. This structure allows the two branches to benefit from each other during the forward and backward stages. We implement two networks to illustrate its effectiveness.
First, we remove the first three convolutional layers of the Segmentation Branch and multiply the guidance similarity maps directly with the feature maps from the Similarity Guidance Branch; the final mIoU over the four datasets decreases to 43.1%. Second, we cut off the connections between the two branches by removing the first and second concatenation operations; the final mIoU is 45.7%. Therefore, the Segmentation Branch in our unified network is necessary for obtaining high-quality segmentation masks, and it can borrow information via the concatenation operations between the two branches.
We also examine the proposed unified network in terms of demand for computational resources and generalization ability. In Table VI, we observe that our SG-One model has only 19.0M parameters, while achieving the best segmentation results. Following the methods of OSLSM [15] and co-FCN [16], we use a separate network (SG-One-separate) to process support images. This network has slightly more parameters (36.1M) than co-FCN (34.2M). SG-One-separate obtains an mIoU of 44.8%, far better than the 41.1% of co-FCN. This comparison shows that our approach for incorporating the guidance information from support image pairs is superior to OSLSM and co-FCN in segmenting unseen classes. Notably, the proposed unified network achieves an even higher performance of 46.3%. We attribute the gain of 1.5% to reusing the network for extracting both support and query features: the reuse strategy not only reduces the demand for computational resources and decreases the risk of overfitting, but also gives the network the opportunity to see more training samples. OSLSM requires the most parameters (272.6M), yet it has the lowest score. It is also worth mentioning that the number for OSLSM is from the officially released source code, while the number for co-FCN is based on our reimplemented version.
Neither of the two baseline methods shares parameters when processing query and support images.

One-shot video segmentation is to segment specified objects in video clips with only the first frame densely annotated [36]. Similar to our one-shot image semantic segmentation problem, the testing categories of the video segmentation problem are disjoint from the training categories. For both tasks, the underlying mission is to learn the relationship between the feature embeddings of the support and query images. The Siamese network [17] is designed to learn the relationship of such a pair of support and query images by applying two parallel networks to extract their high-level abstract features separately. Both the proposed method and a wealth of video segmentation methods [38], [41], [52] are derivatives of the Siamese network [17].
However, the key difference between the two problems is the source of information shared between support and query. First, in the video task, the contents of the target objects and background remain consistent within one sequence. For example, given a video clip of a girl dancing on a lawn, the foreground target (the girl) and the background environment (the lawn) do not change drastically between frames. In contrast, the one-shot image semantic segmentation task has no sequential cues for either the target objects or the background environments; the objects and background in query images can be drastically different from those in the support images. For instance, we may be required to segment an old man standing on grass with reference to a little girl lying in bed, as both belong to the same category, namely, person. Second, benefiting from sequential cues in videos, video segmentation methods [41] can calculate frame-to-frame similarities from successive frames and boost performance by online updating.
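The parameter-sharing contrast above, a single unified backbone versus a Siamese-style pair of separate backbones, can be sketched with toy dimensions. This is purely illustrative (the names and sizes are hypothetical, not the real VGG-16 counts from Table VI): the unified design applies the same weights to support and query, so the backbone parameters are counted once instead of twice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unified design: one shared weight matrix applied to both streams.
W_shared = rng.standard_normal((4, 4))
embed = lambda x: W_shared @ x  # identical parameters for support and query

support = rng.standard_normal(4)
query = rng.standard_normal(4)
f_sup, f_que = embed(support), embed(query)

# A separate-network (Siamese) design keeps its own weight copy per stream,
# roughly doubling the backbone parameters without any shared learning signal.
n_params_unified = W_shared.size
n_params_separate = 2 * W_shared.size
```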
Figure 6 illustrates the differences between the two tasks. In the video segmentation task, the target objects and the background environment remain consistent throughout the video clip. In contrast, the objects and environments are totally different between the support images and the query image in our image segmentation task; neither background nor successive-frame information can be exploited. We apply our SG-One network to the one-shot video segmentation task on DAVIS2016 [53]. We sought fair comparison results from video segmentation papers that do not exploit background similarities or successive object cues between frames. In Table VII, the results of the baseline models OSVOS [37], VideoMatch [41] and RGMP [52] are obtained by excluding background features and successive frame-to-frame consistencies. These models are trained only on the training set of DAVIS2016 by randomly selecting image pairs, excluding the finetuning step on the testing set and any sequential cues between frames. In Table VII, we compare the mIoU of these algorithms on the DAVIS2016 testing set; our SG-One achieves the best accuracy of 57.3%, surpassing the baseline methods. The proposed model is more robust in segmenting the given query images with reference to only one annotated image.

This work is supported by ARC DECRA DE190101315 and ARC DP200100938.

SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
Xiaolin Zhang, Yunchao Wei, Yi Yang, Thomas Huang (2020, arXiv:1810.09091)

Introduction

Object Semantic Segmentation (OSS) aims at predicting the class label of each pixel. Deep neural networks have achieved tremendous success on OSS tasks, e.g., U-Net [1], FCN [2] and Mask R-CNN [3]. However, these algorithms, trained with full annotations, require large investments in expensive labeling.
To reduce the budget, a promising alternative is to apply weak annotations to learn a decent segmentation network. For example, previous works have used image-level labels [4]-[6], scribbles [7]-[9], bounding boxes [10], [11] and points [12]-[14] as cheaper supervision. The main disadvantage of these weakly supervised methods, however, is their inability to generalize the learned models to unseen classes. For instance, a network trained to segment dogs using thousands of images of various breeds will not be able to segment bikes without retraining on many images containing bikes.

[Fig. 1. An overview of the proposed SG-One approach for testing a new class. Given a query image of an unseen category, e.g., cow, its semantic object is precisely segmented with reference to only one annotated example of this category.]

In contrast, humans are very good at recognizing things with little guidance; it is very easy for a child to recognize various breeds of dogs with reference to only one picture of a dog. Inspired by this, one-shot learning is dedicated to imitating this powerful ability of human beings. In other words, one-shot learning aims to recognize new objects according to only one annotated example. This is a great challenge for the standard learning methodology: instead of using tremendous numbers of annotated instances to learn the characteristic patterns of a specific category, our target is to learn one-shot networks that generalize to unseen classes with only one densely annotated example.
Concretely, one-shot image segmentation is to discover the object pixels of a query image with reference to only one support image, in which the target objects are densely annotated. Existing methods [15], [16] are all based on the Siamese framework [17].
Briefly, a pair of parallel networks is trained to extract the features of the labeled support images and the query images separately. These features are then fused to generate the probability maps of the target objects. The purpose of the network is to learn the relationship between the annotated support image and the query image in the high-level feature space. These methods provide the advantage that the parameters trained on observed classes can be directly used to test unseen classes without finetuning. Nevertheless, they have some weaknesses: 1) the parameters of the two parallel networks are redundant, which makes them prone to overfitting and wastes computational resources; 2) combining the features of support and query images by mere multiplication is inadequate for guiding the query network to learn high-quality segmentation masks.
To overcome these weaknesses, we propose a Similarity Guidance Network for One-Shot Semantic Segmentation (SG-One) in this paper. The fundamental idea of SG-One is to guide the segmentation process by effectively incorporating the pixel-wise similarities between the features of support objects and query images. In particular, we first extract the high-level feature maps of the input support and query images. High-level features are usually abstract, and the embeddings of pixels belonging to objects of the same category are close, whereas the embeddings of background pixels are usually depressed and distant from the object embeddings. We therefore propose to obtain representative vectors from support images by applying a masked average pooling operation, which extracts object-related features while excluding the influence of background noise. Then, we obtain the guidance maps by calculating cosine similarities between the representative vectors and the features of the query images at each pixel.
The feature vectors of object pixels in query images are close to the representative vectors extracted from support images, so the scores in the guidance maps will be high; conversely, the scores will be low for pixels belonging to the background. The generated guidance maps supply information about the desired regions to the segmentation process: the position-wise feature vectors of query images are multiplied by the corresponding similarity values. This strategy effectively activates the target object regions of query images following the guidance of the support images and their masks. Furthermore, we adopt a unified network for producing similarity guidance and predicting the segmentation masks of query images. Such a network generalizes to unseen classes better than the previous methods [15], [16].
Our approach offers multiple appealing advantages over the previous state of the art, e.g., OSLSM [15] and co-FCN [16]. First, OSLSM and co-FCN incorporate the segmentation masks of support images by changing the input structure of the network or the statistical distribution of the input images. In contrast, we extract the representative vector from the intermediate feature maps with the masked average pooling operation instead of changing the inputs. Our approach harms neither the input structure of the network nor the statistics of the input data. Averaging only over the object regions avoids influence from the background; otherwise, when background pixels dominate, the learned features are biased towards the background contents. Second, OSLSM and co-FCN directly multiply the representative vector with the feature maps of query images to predict the segmentation masks.
SG-One instead calculates the similarities between the representative vector and the features at each pixel of the query images, and the similarity maps are employed to guide the segmentation branch in finding the target object regions. Our method is superior in the process of segmenting the query images. Third, both OSLSM and co-FCN adopt a pair of VGG-16 networks to process support and query images separately, whereas we employ a unified network to process them simultaneously. The unified network uses far fewer parameters, reducing the computational burden and increasing the ability to generalize to new classes at test time.
An overview of SG-One is illustrated in Figure 1. We apply two branches, i.e., the similarity guidance branch and the segmentation branch, to produce the guidance maps and segmentation masks. We forward both the support and query images through the guidance branch to calculate the similarity maps. The features of query images also pass through the segmentation branch for predicting segmentation masks. The similarity maps act as guidance attention maps in which the target regions have higher scores while the background regions have lower scores; the segmentation process is guided by the similarity maps to obtain the precise target regions. After the training phase, the SG-One network can predict the segmentation masks of a new class without changing its parameters. For example, a query image of an unseen class, e.g., cow, is processed to discover the pixels belonging to the cow with only one annotated support image provided.
To sum up, our main contributions are three-fold:
• We propose to produce robust object-related representative vectors using masked average pooling, incorporating contextual information without changing the input structure of networks.
• We produce pixel-wise guidance using cosine similarities between representative vectors and query features for predicting the segmentation masks.
• We propose a unified network for processing support and query images. Our network achieves a cross-validated mIoU of 46.3% on the PASCAL-5^i dataset in the one-shot segmentation setting, surpassing the baseline methods.

Related Work

Object semantic segmentation (OSS) aims at classifying every pixel in a given image to distinguish different objects or contents. OSS with dense annotations as supervision has achieved great success in precisely identifying various kinds of objects [18]-[25]. Recently, most works with impressive performance have been based on deep convolutional networks. FCN [2] and U-Net [1] abandon fully connected layers and use only convolutional layers to preserve the relative positions of pixels. Building on the advantages of FCN, DeepLab, proposed by Chen et al. [26], [27], is one of the best segmentation algorithms; it employs dilated convolutions to increase the receptive field while saving parameters in comparison with large-kernel methods. He et al. [3] propose that segmentation masks and detection bounding boxes can be predicted simultaneously using a unified network.
Weakly supervised object segmentation seeks an alternative approach to reduce the expense of labeling segmentation masks [14], [28]-[32]. Zhou [33] and Zhang [34], [35] propose to discover precise object regions using a classification network with only image-level labels. Wei et al. [4], [5] infer object pixels according to the similarity of adjacent pixels and ground-truth scribble lines.
Video object segmentation is also a challenging problem: segmenting a specific object in a video clip given merely the annotation of the first frame [36]. The categories of the training and testing sets are disjoint, which makes the task similar to our one-shot image segmentation.
OSVOS [37] adopts a direct approach: it learns a segmentation network on the training set and then finetunes the trained network on the augmented first frames of the testing sets. Although OSVOS achieves good segmentation performance, its main drawback is the latency of finetuning when testing a new video clip. PLM [38] applies a more sophisticated network to learn better feature embeddings by involving the intermediate feature maps of both the search and query frames; a simple cropping method is also applied by estimating the approximate location of target objects from the relationship between successive frames. SegFlow [39] leverages the optical flow of moving objects to assist the segmentation process. FlowNet [40] is internally embedded in the SegFlow framework and updated end-to-end; consequently, the segmentation network and the flow network benefit from each other to learn better segmentation masks as well as optical flows. VideoMatch [41] learns representations of both foreground and background regions, because successive video frames usually maintain similar or static background environments; the learned robust representations can then be easily applied to retrieve the target object regions of query images.
Few-shot learning algorithms are dedicated to distinguishing the patterns of classes or objects with only a few labeled samples [42], [43]. Networks should generalize to recognize new objects from few images based on the parameters of base models, which are trained on entirely different classes without any overlap with the testing classes. Finn et al. [44] try to learn internal transferable representations that are broadly applicable to various tasks. Vinyals et al. [45] and Annadani et al. [46] propose to learn embedding vectors.
The vectors of the same categories are close, while the vectors of different categories are far apart.

Methodology

Suppose we have three datasets: a training set L_train = {(I_i, Y_i)}_{i=1}^{N_train}, a support set L_support = {(I_i, Y_i)}_{i=1}^{N_support} and a testing set L_test = {I_i}_{i=1}^{N_test}, where I_i is an image, Y_i is the corresponding segmentation mask and N is the number of images in each set. Both the support set and the training set have annotated segmentation masks. The support set and the testing set share the same object classes, which are disjoint from those of the training set. We denote by l ∈ Y the semantic class of the mask Y. Therefore, we have {l_train} ∩ {l_support} = ∅, where ∩ denotes the intersection of the two sets. If there are K annotated images for each class, the target few-shot problem is named K-shot. Our purpose is to train a network on the training set L_train which can precisely predict segmentation masks Ŷ_test on the testing set L_test with reference to the support set L_support. Specifically, the predicted masks contain two classes, i.e., object and background. If the objects in a query image share the same category label as the annotated objects in the support image, the corresponding values in the predicted mask are supposed to be 1, indicating object pixels; otherwise, the corresponding values should be 0, indicating background pixels.
In order to better learn the connection between the support and testing sets, we mimic this mechanism in the training process. For a query image I_j, we construct a pair {(I_i, Y_i), (I_j, Y_j)} by randomly selecting a support image I_i whose mask Y_i has the same semantic class as Y_j. We estimate the segmentation mask Ŷ_j with a function Ŷ_j = f_θ((I_i, Y_i), I_j), where θ denotes the parameters of the function. In testing, (I_i, Y_i) is picked from the support set L_support and I_j is an image from the testing set L_test.
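The episodic pair construction described above, sampling a support and a query example that share one class label, can be sketched as follows. This is a minimal pure-Python sketch; the function name `sample_episode` and the dictionary layout are illustrative assumptions, not the authors' data pipeline.

```python
import random

def sample_episode(dataset, rng=random):
    """Sample a (support, query) pair sharing one semantic class.

    dataset: dict mapping class label -> list of (image_id, mask_id) tuples.
    Returns the shared label, the support entry, and the query entry.
    """
    label = rng.choice(sorted(dataset))          # pick one class at random
    support, query = rng.sample(dataset[label], 2)  # two distinct examples of it
    return label, support, query

# Toy usage with hypothetical identifiers:
toy = {"cow": [("img1", "m1"), ("img2", "m2")],
       "bus": [("img3", "m3"), ("img4", "m4")]}
label, support, query = sample_episode(toy)
```

At training time the support entry provides both image and mask, while the query mask is used only to compute the loss on the predicted segmentation.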
In this section, we first present the masked average pooling operation for extracting the object-related representative vector of annotated support images. Then, the similarity guidance method is introduced for combining the representative vectors with the features of query images. The generated similarity guidance maps supply the information for precisely predicting the segmentation masks.

Masked Average Pooling. The pairs of support images and their masks are usually encoded into representative vectors. OSLSM [15] proposes to erase the background pixels of the support images by multiplying the binary masks with the support images. co-FCN [16] proposes to construct an input block of five channels by concatenating the support images with their positive and negative masks. However, these two methods have two disadvantages. First, erasing the background pixels to zero changes the statistical distribution of the support image set; if we apply a unified network to process both the query images and the erased support images, the variance of the input data greatly increases. Second, concatenating the support images with their masks [16] breaks the input structure of the network, which also prevents the implementation of a unified network.
We propose to employ masked average pooling to extract the representative vectors of support objects. Suppose we have a support RGB image I ∈ R^{3×w×h} and its binary segmentation mask Y ∈ {0, 1}^{w×h}, where w and h are the width and height of the image. Let the output feature maps of I be F ∈ R^{c×w'×h'}, where c is the number of channels and w' and h' are the width and height of the feature maps. We first resize the feature maps to the same size as the mask Y via bilinear interpolation, and denote the resized feature maps by F' ∈ R^{c×w×h}.
Then, the i-th element v_i of the representative vector v is computed by averaging, on the i-th feature map, the pixels within the object regions:

v_i = ( \sum_{x=1}^{w} \sum_{y=1}^{h} Y_{x,y} * F'_{i,x,y} ) / ( \sum_{x=1}^{w} \sum_{y=1}^{h} Y_{x,y} ),  (1)

As discussed in FCN [2], fully convolutional networks preserve the relative positions of input pixels. Therefore, through masked average pooling, we expect to extract the features of object regions while disregarding the background contents. We also argue that feeding the contextual regions in our method helps learn better object features; this has been discussed in DeepLab [26], which incorporates contextual information using dilated convolutions. Masked average pooling keeps the input structure of the network unchanged, which enables us to process both the support and query images within a unified network.

Similarity Guidance. One-shot semantic segmentation aims to segment the target object within query images given a support image of the reference object. As discussed above, the masked average pooling method is employed to extract the representative vector v = (v_1, v_2, ..., v_c) of the reference object, where c is the number of channels. Suppose the feature maps of a query image I_que are F^que ∈ R^{c×w'×h'}. We employ the cosine distance to measure the similarity between the representative vector v and each pixel within F^que following Eq. (2):

s_{x,y} = ( v * F^que_{x,y} ) / ( ||v||_2 * ||F^que_{x,y}||_2 ),  (2)

where s_{x,y} ∈ [-1, 1] is the similarity value at the pixel (x, y) and F^que_{x,y} ∈ R^{c×1} is the feature vector of the query image at the pixel (x, y). As a result, the similarity map S = {s_{x,y}} integrates the features of the support object and the query image. We use the map S as guidance to teach the segmentation branch to discover the desired object regions. We do not explicitly optimize the cosine similarity.
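Equations (1) and (2) can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical function names, not the authors' implementation; the resizing of F to F' via bilinear interpolation is assumed to have already happened.

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Eq. (1): average the features over object pixels only.

    feat: (c, h, w) resized support feature maps F'; mask: (h, w) binary mask Y.
    """
    return (feat * mask).sum(axis=(1, 2)) / mask.sum()

def similarity_guidance(v, query_feat):
    """Eq. (2): cosine similarity between v (c,) and each query pixel.

    query_feat: (c, h, w) query feature maps F^que; returns an (h, w) map S.
    """
    num = np.tensordot(v, query_feat, axes=(0, 0))
    den = np.linalg.norm(v) * np.linalg.norm(query_feat, axis=0) + 1e-8
    return num / den

# Toy example: a feature map whose right half is the "object" region.
feat = np.ones((3, 4, 4)); feat[:, :, 2:] = 5.0
mask = np.zeros((4, 4)); mask[:, 2:] = 1.0
v = masked_average_pooling(feat, mask)   # averages only the object region
S = similarity_guidance(v, feat)         # per-pixel guidance scores in [-1, 1]
```

Because every pixel vector in this toy example points in the same direction as v, the guidance map is close to 1 everywhere; in a real query image, background pixels would receive much lower scores.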
In particular, we element-wise multiply the similarity guidance map with the feature maps of query images from the segmentation branch, and then optimize the guided feature maps to fit the corresponding ground-truth masks.

The Similarity Guidance Branch is fed the extracted features of both the query and support images. We apply this branch to produce the similarity guidance maps by combining the features of reference objects with the features of query images. For the support features, we implement three convolutional blocks to extract highly abstract semantic features, followed by a masked average pooling layer to obtain representative vectors. The extracted representative vectors of support images are expected to contain the high-level semantic features of a specific object. For the query features, we reuse the three blocks and employ the cosine similarity layer to calculate the closeness between the representative vector and the features at each pixel of the query images.

The Segmentation Branch discovers the target object regions of query images with the guidance of the generated similarity maps. We employ three convolutional layers with 3×3 kernels to obtain the features for segmentation. The inputs of the last two convolutional layers are concatenated with the parallel feature maps from the Similarity Guidance Branch; through this concatenation, the Segmentation Branch can borrow features from the parallel branch, and the two branches can exchange information during the forward and backward stages. We fuse the generated features with the similarity guidance maps by per-pixel multiplication. Finally, the fused features are processed by two convolutional layers with kernel sizes of 3×3 and 1×1, followed by a bilinear interpolation layer.
The network finally classifies each pixel either to the same class as the support images or to be background.\nMethods | one-shot | five-shot\nFG-BG [16] | 55.1 | 55.6\nOSLSM [15] | 55.2 | -\nco-FCN [16] | 60.1 | 60.8\nPL+SEG [51] | 61.2 | 62.3\nSG-One (Ours) | 63.1 | 65.9\nTo summarize, SG-One can effectively predict segmentation masks on new classes without changing the parameters. Our similarity guidance method is better than the baseline methods at incorporating the support objects for segmenting unseen objects.\nFigure 3 shows the one-shot segmentation results of SG-One on unseen classes. We observe that SG-One can precisely distinguish the object regions from the background with the guidance of the support images, even if some support and query images do not share much appearance similarity. We also show some failure cases to benefit future research. We ascribe the failures to two causes: 1) the target object regions are too similar to background pixels, e.g. the side of the bus and the car; 2) the target region has features very different from the discovered discriminative regions, e.g. the vest of the dog, which may be far distant from the representative feature of the support objects. Figure 5 illustrates the similarity maps of cosine distance between the support objects and the query images. We segment the objects of the query images in the second row corresponding to the annotated objects of the support images in the first row. Note that there exist distracting classes in the given query images; we only expect to segment the objects whose categories are consistent with the support images. With reference to the extracted features of the support objects, the corresponding regions in the query images are highlighted while the distracting regions and the background are suppressed. The masks can then be precisely predicted with the guidance of the similarity maps. Five-shot Table III illustrates the five-shot segmentation results on the four divisions.
As discussed, we apply two approaches to five-shot semantic segmentation. The approach of averaging the representative vectors from the five support images achieves 47.1%, which significantly outperforms the current state-of-the-art, co-FCN, by 5.7%. This result is also better than the corresponding one-shot mIoU of 46.3%. Therefore, the averaged support vector has better expressiveness in guiding the segmentation process. The other approach is to fuse only the final segmentation results by combining all of the detected object pixels. We do not observe any improvement from this approach compared to the one-shot result. Notably, we do not specifically train a new network for five-shot segmentation: the network trained in the one-shot manner is directly applied to predict the five-shot segmentation results. Figure 4 compares the predicted segmentation masks of one-shot and five-shot. (For details of the baseline methods, e.g. 1-NN, LogReg and Siamese, refer to OSLSM [15]. The results for co-FCN [16] are from our re-implemented version. Table IV reports the evaluation results using the same metric adopted in [16] for a fairer comparison.) The segmentation masks of five-shot are slightly better than those of one-shot prediction. We also observe that five-shot testing improves the mIoU by only 0.8, which is a marginal gain. We attribute this phenomenon to the fact that the high-level features of different objects sharing the same class label are very close. Hence, averaging these features from different objects improves feature expressiveness only slightly, which limits the five-shot gain. On the other hand, the very target of our similarity learning is to produce aligned features for each category.
So, the five-shot results can only improve a little under the current one-shot segmentation settings.\nFor a fairer comparison, we also evaluate the proposed model using the same metric as co-FCN [16] and PL+SEG [51]. This metric first calculates the IoU of the foreground and background pixels separately, and then averages the two. We still report the averaged mIoU on the four cross-validation datasets. Table IV compares SG-One with the baseline methods under this metric for both one-shot and five-shot semantic segmentation. The proposed approach outperforms all previous baselines: SG-One achieves 63.1% for one-shot and 65.9% for five-shot segmentation, while the most competitive baseline, PL+SEG, obtains only 61.2% and 62.3%. The proposed network is trained end-to-end, and our results do not require any pre-processing or post-processing steps.", "Dataset": "Fold | Test classes\nPASCAL-5^0 | aeroplane, bicycle, bird, boat, bottle\nPASCAL-5^1 | bus, car, cat, chair, cow\nPASCAL-5^2 | diningtable, dog, horse, motorbike, person\nPASCAL-5^3 | potted plant, sheep, sofa, train, tv/monitor\nWe employ the cross-entropy loss function to optimize the network in an end-to-end manner.\nOne-Shot Testing One annotated support image for each unseen category is provided as guidance to segment the target semantic objects of query images. We do not need to finetune or change any parameters of the entire network; we only need to forward the query and support images through the network to generate the expected segmentation masks. K-Shot Testing Suppose there are K (K > 1) support images I^i_sup, i = {1, 2, ..., K}, for each new category. We propose to segment the query image I^que using two approaches. The first is to ensemble the segmentation masks corresponding to the K support images following OSLSM [15] and co-FCN [16] based on Eq.
(3):\n$\hat{Y}_{x,y} = \max\big(\hat{Y}^{1}_{x,y}, \hat{Y}^{2}_{x,y}, \ldots, \hat{Y}^{K}_{x,y}\big), \qquad (3)$\nwhere $\hat{Y}^{i}_{x,y}$, i = {1, 2, ..., K}, is the predicted semantic label of the pixel at (x, y) corresponding to the support image I^i_sup. The other approach is to average the K representative vectors, and then use the averaged vector to guide the segmentation process. Notably, we do not need to retrain the network with K-shot support images: the network trained in the one-shot manner is used directly to test the segmentation performance with K-shot support images. Following the evaluation protocol of the previous methods OSLSM [15] and co-FCN [16], we create PASCAL-5^i using the PASCAL VOC 2012 dataset [47] and the extended SDS dataset [48]. For the 20 object categories in PASCAL VOC, we evaluate the proposed model by cross-validation, sampling five classes as the test categories L_test = {5i+1, ..., 5i+5} in Table I, where i is the fold number, while the remaining 15 classes form the training label set L_train. We follow the same procedure as the baseline methods, e.g. OSLSM [15], to build the training and testing sets. In particular, we randomly sample image pairs from the training set; each image pair has one common category label. One image is fed into the network as a support image accompanied by its annotated mask, while the other is treated as a query image, and its mask is used to calculate the loss. For a fair comparison, we use the same test set as OSLSM [15], which has 1,000 support-query tuples for each fold.\nSuppose the predicted segmentation masks are $\{\hat{M}_i\}_{i=1}^{N_{test}}$ and the corresponding ground-truth annotations are $\{M_i\}_{i=1}^{N_{test}}$, given a specific class l. We define the Intersection over Union of class l as $\mathrm{IoU}_l = \frac{TP_l}{TP_l + FP_l + FN_l}$, where $TP_l$, $FP_l$ and $FN_l$ are the numbers of true positives, false positives and false negatives of the predicted masks. The mIoU is the average of the IoUs over the test classes, i.e. $(1/n_l)\sum_l \mathrm{IoU}_l$, where $n_l$ is the number of test classes.
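The K-shot mask ensembling of Eq. (3) and the mIoU metric defined above can be sketched as follows; the helper names are our own, and this is a minimal reference implementation rather than the evaluation code used in the paper.

```python
import numpy as np

def kshot_fuse(masks):
    """Eq. (3): union the K per-support binary predictions by a
    pixel-wise max over the stacked masks."""
    return np.max(np.stack(masks), axis=0)

def miou(preds, gts, classes):
    """mIoU as defined above: per-class IoU = TP / (TP + FP + FN),
    accumulated over all test images, then averaged over the test
    classes. preds/gts are lists of integer label masks."""
    ious = []
    for l in classes:
        tp = fp = fn = 0
        for p, g in zip(preds, gts):
            tp += np.sum((p == l) & (g == l))
            fp += np.sum((p == l) & (g != l))
            fn += np.sum((p != l) & (g == l))
        ious.append(tp / (tp + fp + fn + 1e-8))
    return float(np.mean(ious))
```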
We report the averaged mIoU on the four cross-validation datasets.", "Conclusion": "We have shown that SG-One can effectively segment semantic pixels of unseen categories using only one annotated example. We abandon the previous strategies [15], [16] and propose the masked average pooling approach to extract more robust object-related representative features. Extensive experiments show that masked average pooling is more convenient and capable of incorporating contextual information to learn better representative vectors. We also reduce the risk of overfitting by avoiding extra parameters through a unified network. We show that a well-trained network on images of a single class can be directly applied to segment multi-class images. We present a purely end-to-end network, which does not require any pre-processing or post-processing steps. More importantly, SG-One boosts the performance of one-shot semantic segmentation and surpasses the baseline methods. Finally, we analyze the relationship between one-shot video segmentation and our one-shot image semantic segmentation problem. The experiments show the superiority of the proposed SG-One in segmenting video objects under fair comparison conditions. Code has been made available. We hope our simple and effective SG-One can serve as a solid baseline and help ease future research on one/few-shot segmentation. Two problems remain, which we will try to solve in the future. First, due to the challenging settings of the one-shot segmentation problem, the latent distributions of the training classes and testing classes do not align, which prevents us from obtaining better features for input images.
Second, the predicted masks sometimes cover only part of the target regions and may include some background noise if the target object is too similar to the background.", "Extra": "We implement the proposed approach based on the VGG-16 network following the previous works [15], [16]. The Stem takes RGB images as input to extract middle-level features, downsampling the images by a factor of 8; we use the first three blocks of the VGG-16 network as the Stem. For the first two convolutional blocks of the Similarity Guidance Branch, we adopt the structure of conv4 and conv5 of VGG-16 and remove the max-pooling layers to maintain the resolution of the feature maps. One conv3×3 layer with 512 channels is added on top, without a ReLU after this layer. The following module is masked average pooling, which extracts the representative vector of support images. In the Segmentation Branch, all of the convolutional layers with 3×3 kernels have 128 channels. The last layer, with a conv1×1 kernel, has two channels corresponding to the two categories, object and background. All of the convolutional layers except the third and the last are followed by a ReLU layer; we justify this choice in Section IV-E.\nFollowing the baseline methods [15], [16], we use weights pretrained on ILSVRC [49]. All input images keep their original size, without any data augmentation. The support and query images are fed into the network simultaneously. The difference is that a support image goes through only the guidance branch to obtain the representative vector, whereas a query image goes through both the guidance branch, for calculating the guidance maps, and the segmentation branch, for predicting the segmentation masks. We implement the network using PyTorch [50]. We train the network with a learning rate of 1e-5. The batch size is 1, and the weight decay is 0.0005. We adopt the SGD optimizer with a momentum of 0.9.
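The hyper-parameters above correspond to a standard SGD-with-momentum update. A minimal sketch of one such update step, using the listed values as defaults (our own illustration, not the authors' training code):

```python
import numpy as np

def sgd_momentum_step(w, grad, state, lr=1e-5, momentum=0.9, weight_decay=5e-4):
    """One SGD update with momentum and L2 weight decay.

    w, grad: parameter and gradient arrays; state holds the velocity
    buffer across steps. Returns the updated parameters and state."""
    g = grad + weight_decay * w                               # L2 regularization
    state["v"] = momentum * state.get("v", np.zeros_like(w)) + g
    return w - lr * state["v"], state
```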
All networks are trained and tested on NVIDIA TITAN X GPUs with 12 GB of memory. Our source code is available at https://github.com/xiaomengyc/SG-One. One-shot Table II compares the proposed SG-One approach with the baseline methods in one-shot semantic segmentation. Our method outperforms all baseline models: the mIoU of our approach on the four divisions reaches 46.3%, which is better than co-FCN by 5.2% and OSLSM by 5.5%. Compared to the baselines, SG-One earns its largest gain of 7.8% on PASCAL-5^1, where the testing classes are bus, car, cat, chair and cow. co-FCN [16] constructs the input of the support network by concatenating the support images with positive and negative masks, and it obtains 41.1%. OSLSM [15] proposed to feed only the object pixels as input by masking out the background regions, and obtains 40.8%.\nOSVOS [37] adopts a strategy of finetuning the network using the support samples at test time, and it achieves only 32.6%. We conduct experiments to verify the ability of SG-One to segment images with multiple classes. We randomly select 1,000 entries of query and support images; query images may contain objects of multiple classes. For each entry, we sample five annotated images from the five testing classes as support images. For every query image, we predict its segmentation masks with the images of the different support classes, and fuse the segmentation masks of the five classes by comparing the classification scores. The mIoU on the four datasets is 29.4%. We conduct the same experiment with the co-FCN algorithm [16], which obtains an mIoU of only 11.5%. Therefore, SG-One is much more robust in dealing with multi-class images, although it is trained with images of a single class. Masked Average Pooling The masked average pooling method employed in the proposed SG-One network is superior in incorporating the guidance masks of support images. Shaban et al.
[15] proposed to multiply the binary masks with the input support RGB images, so that the network extracts features only from the target objects. co-FCN [16], proposed by Rakelly et al., concatenates the support RGB images with the corresponding positive masks (object pixels are 1, background pixels are 0) and negative masks (object pixels are 0, background pixels are 1), constituting inputs of 5 channels. We follow the instructions of these two methods and compare them with our masked average pooling approach. Concretely, we first replace the masked average pooling layer with a global average pooling layer. Then, we implement two networks: 1) SG-One-masking adopts the method of OSLSM [15], in which support images are multiplied by the binary masks to keep only the object regions; 2) SG-One-concatenate adopts the method of co-FCN [16], in which we concatenate the positive and negative masks to the support images, forming an input with 5 channels. We add an extra input block (VGG-16) with 5 input channels to adapt to the concatenated inputs, while the rest of the network is exactly the same as the compared networks. Table V compares the performance of the different methods of processing support images and masks. Our masked average pooling approach achieves the best results on every dataset, with an mIoU of 46.3% over the four datasets. The masking method (SG-One-masking) proposed in OSLSM [15] obtains an mIoU of 45.0%. The approach of co-FCN (SG-One-concat) obtains only 41.75%, which we ascribe to the modification of the input structure of the network: the modified input block cannot benefit from the pretrained weights for processing low-level information. We also implement a network using a general GAP layer to extract representative vectors instead of using the binary masks of the support images. The network under this setting achieves an mIoU of 42.2%, which is inferior to the proposed MAP method.
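The two compared ways of injecting the support mask into the input can be sketched as follows (function names are ours); SG-One instead keeps the RGB input untouched and applies the mask only at the masked-average-pooling stage.

```python
import numpy as np

def masked_input(img, mask):
    """SG-One-masking (OSLSM-style): zero out background pixels of the
    support RGB image before feeding it to the network.

    img: (3, h, w) RGB; mask: (h, w) binary object mask."""
    return img * mask[None]

def concat_input(img, mask):
    """SG-One-concatenate (co-FCN-style): stack RGB with the positive
    mask (object = 1) and the negative mask (background = 1), giving a
    5-channel input that needs a modified first block."""
    return np.concatenate([img, mask[None], (1.0 - mask)[None]], axis=0)
```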
So, it is necessary to mask out the pixels corresponding to the background regions to obtain better representative vectors. In total, we can conclude that: 1) a proper method of using support masks is crucial for extracting high-quality object features; 2) the proposed masked average pooling method provides a superior way to reuse the structure of a well-designed classification network for extracting object features of support pairs; 3) networks with a 5-channel input cannot benefit from the pretrained weights, and the extra input block cannot be jointly trained with the query images; 4) the masked average pooling layer has superior generalization ability in segmenting unseen classes. Guidance Similarity Generating Methods We adopt the cosine similarity to calculate the distance between the object feature vector and the feature maps of query images. The cosine distance measures the angle between two vectors, and its range is [-1, 1]. Correspondingly, we remove the ReLU layers after the third convolutional layers of both the guidance and segmentation branches. By doing so, we increase the variance of the cosine measurement, so that the cosine similarity is not bounded in [0, 1] but spans the full [-1, 1]. For comparison, we add the ReLU layers back after the third convolutional layers: the mIoU on the four datasets drops to 45.5%, compared to 46.3% for the non-ReLU approach.\nWe also train a network using the 2-norm distance as the guidance, and obtain 30.7% on the four datasets. This result is far poorer than that of the proposed cosine similarity method. Hence, the 2-norm distance is not a good choice for guiding the query images to discover target object regions.\nThe Unified Structure We adopt the proposed unified structure between the guidance and segmentation branches, so that the two branches can benefit from each other during the forward and backward stages. We implement two networks to illustrate the effectiveness of this structure.
First, we remove the first three convolutional layers of the Segmentation Branch, and then multiply the guidance similarity maps directly with the feature maps from the Similarity Guidance Branch; the final mIoU over the four datasets decreases to 43.1%. Second, we cut off the connections between the two branches by removing the first and second concatenation operations; the final mIoU is 45.7%. Therefore, the Segmentation Branch in our unified network is necessary for obtaining high-quality segmentation masks, and it borrows useful information via the concatenation operations between the two branches.\nWe also examine the proposed unified network in terms of the demand for computational resources and generalization ability. In Table VI, we observe that our SG-One model has only 19.0M parameters, while it achieves the best segmentation results. Following the methods in OSLSM [15] and co-FCN [16], we use a separate network (SG-One-separate) to process support images. This network has slightly more parameters (36.1M) than co-FCN (34.2M). SG-One-separate obtains an mIoU of 44.8%, which is far better than the 41.1% of co-FCN. This comparison shows that our approach to incorporating the guidance information from support image pairs is superior to OSLSM and co-FCN in segmenting unseen classes. Surprisingly, the proposed unified network achieves an even higher performance of 46.3%. We attribute the gain of 1.5% to the reuse of the network in extracting support and query features. The reuse strategy not only reduces the demand for computational resources and the risk of over-fitting, but also offers the network more opportunities to see more training samples. OSLSM requires the most parameters (272.6M), yet it has the lowest score. It is also worth mentioning that the parameter count of OSLSM is from the officially released source code, while that of co-FCN is based on our re-implemented version.
Neither of the two baseline methods shares parameters in processing query and support images. One-shot video segmentation aims to segment specified objects in video clips with only the first frame densely annotated [36]. Similar to our one-shot image semantic segmentation problem, the testing categories of the video segmentation problem are disjoint from the training categories. So, for both tasks, the underlying mission is to learn the relationship between the feature embeddings of the support and query images. The Siamese network [17] is designed to learn the relationship of such a pair of support and query images by applying two parallel networks to extract their high-level abstract features separately. Both the proposed method and a wealth of video segmentation methods [38], [41], [52] are derivatives of the Siamese network [17].\nHowever, the key difference between the two problems is the source of information shared between support and query. First, in the video task, the contents of the target objects and background remain consistent within one sequence. For example, given a video clip of a girl dancing on a grass field, the foreground target (the girl) and the background environment (the grass) do not change much between frames. In contrast, the one-shot image semantic segmentation task has no sequential cues in either the target objects or the background environments; the objects and background in query images can differ drastically from those in the support images. For instance, in our one-shot image segmentation task, we may be required to segment an old man standing on grass with reference to a little girl lying in bed, as they both belong to the same category, namely, person. Second, benefiting from sequential cues in videos, video segmentation methods [41] can calculate frame-to-frame similarities from successive frames and boost performance by online updating.
Figure 6 illustrates the differences between the two tasks. In the video segmentation task, the target objects and the background environment remain consistent throughout the video clip. In contrast, in our image segmentation task, the objects and environments differ completely between the support images and the query image; neither background consistency nor successive-frame information can be exploited. We apply our SG-One network to the one-shot video segmentation task on DAVIS2016 [53]. We sought fair-comparison results from the video segmentation papers, i.e. results that use neither background similarities nor successive object cues between frames. In Table VII, the results of the baseline models OSVOS [37], VideoMatch [41] and RGMP [52] are obtained by excluding background features and successive frame-to-frame consistencies. These models are trained only on the training set of DAVIS2016 by randomly selecting image pairs, excluding the finetuning step on the testing set and any sequential cues between frames. In Table VII, we compare the mIoU of these algorithms on the DAVIS2016 testing set: our SG-One achieves the best accuracy of 57.3%, surpassing the baseline methods. The proposed model is more robust at segmenting the given query images with reference to only one annotated image. This work is supported by ARC DECRA DE190101315 and ARC DP200100938." }, { "title": "Pyramid scene parsing network", "year": 2017.0, "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "arxiv_di": 1612.01105, "Introduction": "Scene parsing, based on semantic segmentation, is a fundamental topic in computer vision. The goal is to assign each pixel in the image a category label. Scene parsing provides a complete understanding of the scene: it predicts the label, location, and shape of each element.
This topic is of broad interest for potential applications such as autonomous driving and robot sensing.\nThe difficulty of scene parsing is closely related to scene and label variety. The pioneering scene parsing task [23] is to classify 33 scenes for 2,688 images on the LMO dataset [22]. More recent PASCAL VOC semantic segmentation and PASCAL context datasets [8,29] include more labels with similar context, such as chair and sofa, horse and cow, etc. The new ADE20K dataset [43] is the most challenging one, with a large and unrestricted open vocabulary and more scene classes. A few representative images are shown in Fig. 1.\nDeveloping an effective algorithm for these datasets requires conquering a few difficulties.\nState-of-the-art scene parsing frameworks are mostly based on the fully convolutional network (FCN) [26]. Deep convolutional neural network (CNN) based methods boost dynamic object understanding, yet still face challenges with diverse scenes and unrestricted vocabulary. One example is shown in the first row of Fig. 2, where a boat is mistaken for a car. These errors are due to the similar appearance of objects. But when viewing the image with the context prior that the scene is described as a boathouse near a river, the correct prediction should be yielded.\nTowards accurate scene perception, the knowledge graph relies on prior information about scene context. We found that the major issue of current FCN-based models is the lack of a suitable strategy to utilize global scene category clues. For typical complex scene understanding, spatial pyramid pooling [18] was previously widely employed to obtain a global image-level feature, where spatial statistics provide a good descriptor for overall scene interpretation. The spatial pyramid pooling network [12] further enhances this ability.\nDifferent from these methods, to incorporate suitable global features, we propose the pyramid scene parsing network (PSPNet).
In addition to the traditional dilated FCN [3,40] for pixel prediction, we extend the pixel-level feature to the specially designed global pyramid pooling one. The local and global clues together make the final prediction more reliable. We also propose an optimization strategy with deeply supervised loss. We give all implementation details, which are key to our decent performance in this paper, and make the code and trained models publicly available.\nOur approach achieves state-of-the-art performance on all available datasets. It was the champion of the ImageNet scene parsing challenge 2016 [43], and achieved 1st place on the PASCAL VOC 2012 semantic segmentation benchmark [8] and 1st place on the urban-scene Cityscapes data [6]. These results show that PSPNet gives a promising direction for pixel-level prediction tasks, which may even benefit CNN-based stereo matching, optical flow, depth estimation, etc. in follow-up work. Our main contributions are threefold.\n• We propose a pyramid scene parsing network to embed difficult scenery context features in an FCN-based pixel prediction framework.\n• We develop an effective optimization strategy for deep ResNet [13] based on deeply supervised loss.\n• We build a practical system for state-of-the-art scene parsing and semantic segmentation in which all crucial implementation details are included.", "Related_Work": "In the following, we review recent advances in scene parsing and semantic segmentation. Driven by powerful deep neural networks [17,33,34,13], pixel-level prediction tasks like scene parsing and semantic segmentation have achieved great progress by replacing the fully connected layer in classification networks with convolutional layers [26]. To enlarge the receptive field of neural networks, the methods of [3,40] used dilated convolution. Noh et al. [30] proposed a coarse-to-fine structure with a deconvolution network to learn the segmentation mask.
Our baseline network is FCN with dilated convolutions [26,3].\nOther work mainly proceeds in two directions. One line [26,3,5,39,11] is multi-scale feature ensembling: since, in deep networks, higher-layer features contain more semantic meaning and less location information, combining multi-scale features can improve performance.\nThe other direction is based on structure prediction. The pioneering work [3] used a conditional random field (CRF) as post-processing to refine the segmentation result. Following methods [25,41,1] refined networks via end-to-end modeling. Both directions improve the localization ability of scene parsing, so that the predicted semantic boundaries fit objects. Yet there is still much room to exploit necessary information in complex scenes.\nTo make good use of global image-level priors for diverse scene understanding, the methods of [18,27] extracted global context information with traditional features rather than deep neural networks. Similar improvement was made under object detection frameworks [35]. Liu et al. [24] proved that global average pooling with FCN can improve semantic segmentation results. However, our experiments show that these global descriptors are not representative enough for the challenging ADE20K data. Therefore, different from the global pooling in [24], we exploit the capability of global context information by different-region-based context aggregation via our pyramid scene parsing network.", "Experiment_and_Results": "Our proposed method is successful on scene parsing and semantic segmentation challenges. We evaluate it in this section on three different datasets: the ImageNet scene parsing challenge 2016 [43], PASCAL VOC 2012 semantic segmentation [8] and the urban scene understanding dataset Cityscapes [6].", "Extra": "We start with our observation and analysis of representative failure cases when applying FCN methods to scene parsing.
They motivate the proposal of our pyramid pooling module as an effective global context prior. Our pyramid scene parsing network (PSPNet), illustrated in Fig. 3, is then described, which improves performance for open-vocabulary object and stuff identification in complex scene parsing. The new ADE20K dataset [43] contains 150 stuff/object category labels (e.g., wall, sky, and tree) and 1,038 image-level scene descriptors (e.g., airport terminal, bedroom, and street), so a large number of labels and a vast distribution of scenes exist. Inspecting the prediction results of the FCN baseline provided in [43], we summarize several common issues for complex-scene parsing.\nMismatched Relationship Contextual relationships are universal and important, especially for complex scene understanding, and there exist co-occurrent visual patterns. For example, an airplane is likely to be on a runway or flying in the sky, but not over a road. For the first-row example in Fig. 2, FCN predicts the boat in the yellow box as a "car" based on its appearance. But the common knowledge is that a car is seldom over a river. The lack of ability to collect contextual information increases the chance of misclassification.\nConfusion Categories There are many class-label pairs in the ADE20K dataset [43] that are confusing in classification, for example field and earth; mountain and hill; wall, house, building and skyscraper. They have similar appearance. The expert annotator who labeled the entire dataset still makes 17.60% pixel error, as described in [43]. In the second row of Fig. 2, FCN predicts the object in the box as part skyscraper and part building. These results should be excluded so that the whole object is either skyscraper or building, but not both. This problem can be remedied by utilizing the relationship between categories.\nInconspicuous Classes A scene contains objects/stuff of arbitrary sizes.
Several small things, like streetlights and signboards, are hard to find, while they may be of great importance. Conversely, big objects or stuff may exceed the receptive field of FCN and thus cause discontinuous prediction. (Figure 2. Scene parsing issues we observe on the ADE20K [43] dataset. The first row shows the issue of mismatched relationship: cars are seldom over water, unlike boats. The second row shows confusion categories, where the class "building" is easily confused with "skyscraper". The third row illustrates inconspicuous classes: in this example, the pillow is very similar to the bed sheet in terms of color and texture, and these inconspicuous objects are easily misclassified by FCN.) As shown in the third row of Fig. 2, the pillow has a similar appearance to the sheet; overlooking the global scene category may fail to parse the pillow. To improve performance for remarkably small or large objects, one should pay much attention to different sub-regions that contain inconspicuous-category stuff.\nTo summarize these observations, many errors are partially or completely related to contextual relationships and global information for different receptive fields. Thus a deep network with a suitable global-scene-level prior can much improve the performance of scene parsing. With the above analysis, in what follows we introduce the pyramid pooling module, which empirically proves to be an effective global contextual prior.\nIn a deep neural network, the size of the receptive field roughly indicates how much context information we use. Although theoretically the receptive field of ResNet [13] is already larger than the input image, Zhou et al. [42] showed that the empirical receptive field of a CNN is much smaller than the theoretical one, especially in high-level layers. This means many networks do not sufficiently incorporate the momentous global scene prior.
We address this issue by proposing an effective global prior representation.\nGlobal average pooling is a good baseline for a global contextual prior, commonly used in image classification tasks [34,13]; in [24] it was successfully applied to semantic segmentation. But for the complex-scene images in ADE20K [43], this strategy alone does not cover the necessary information. Pixels in these scene images are annotated with many stuff and object classes, and directly fusing them into a single vector may lose the spatial relations and cause ambiguity. Global context information along with sub-region context is helpful for distinguishing among various categories. A more powerful representation would fuse information from sub-regions with different receptive fields; a similar conclusion was drawn in classical scene/image classification work [18,12].\nIn [12], feature maps at different levels generated by pyramid pooling were flattened and concatenated before being fed into a fully connected layer for classification. That global prior was designed to remove the fixed-size constraint of CNNs for image classification. To further reduce the loss of context information between different sub-regions, we propose a hierarchical global prior containing information at different scales that varies among sub-regions. We call it the pyramid pooling module; it constructs a global scene prior on top of the final-layer feature map of the deep network, as illustrated in part (c) of Fig. 3.\nThe pyramid pooling module fuses features under four different pyramid scales. The coarsest level, highlighted in red, is global pooling that generates a single-bin output. The following pyramid levels separate the feature map into different sub-regions and form pooled representations for different locations. The outputs of the different levels thus contain feature maps of varied sizes.
To maintain the weight of the global feature, we apply a 1×1 convolution after each pyramid level to reduce the dimension of the context representation to 1/N of the original, where N is the number of pyramid levels. We then directly upsample the low-dimension feature maps via bilinear interpolation to the same size as the original feature map. Finally, the features from all levels are concatenated as the final pyramid pooling global feature.\nNote that the number of pyramid levels and the size of each level can be modified; they are related to the size of the feature map fed into the pyramid pooling layer. The structure abstracts different sub-regions by adopting pooling kernels of varying sizes in a few strides, so the multi-stage kernels should maintain a reasonable gap in representation. Our pyramid pooling module is a four-level one with bin sizes of 1×1, 2×2, 3×3 and 6×6, respectively. For the choice between max and average pooling, we perform extensive experiments to show the difference in Section 5.2.\nWith the pyramid pooling module, we propose our pyramid scene parsing network (PSPNet), as illustrated in Fig. 3. Given an input image (Fig. 3(a)), we use a pretrained ResNet [13] model with the dilated network strategy [3,40] to extract the feature map; the final feature map is 1/8 the size of the input image, as shown in Fig. 3(b). On top of this map, we use the pyramid pooling module shown in (c) to gather context information. With our 4-level pyramid, the pooling kernels cover the whole, half of, and small portions of the image, and they are fused as the global prior. We then concatenate the prior with the original feature map in the final part of (c), followed by a convolution layer that generates the final prediction map in (d).\nIn summary, PSPNet provides an effective global contextual prior for pixel-level scene parsing. The pyramid pooling module collects levels of information that are more representative than global pooling alone [24].
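The pool-reduce-upsample-concatenate pipeline described above can be sketched in plain NumPy (an illustrative sketch only, not the authors' Caffe implementation: the per-level 1×1 convolution is omitted, and nearest-neighbor upsampling stands in for bilinear interpolation):

```python
import numpy as np

def pyramid_pooling(feature, bin_sizes=(1, 2, 3, 6)):
    """Sketch of the pyramid pooling module on a (C, H, W) feature map.

    For each bin size n, average-pool the map into an n x n grid,
    upsample the pooled grid back to (H, W), and concatenate every
    level with the original feature along the channel axis.
    """
    C, H, W = feature.shape
    levels = [feature]
    for n in bin_sizes:
        pooled = np.zeros((C, n, n))
        ys = np.linspace(0, H, n + 1).astype(int)
        xs = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                pooled[:, i, j] = feature[:, ys[i]:ys[i + 1],
                                          xs[j]:xs[j + 1]].mean(axis=(1, 2))
        # nearest-neighbor upsample of the n x n grid back to (H, W)
        up = pooled[:, np.minimum((np.arange(H) * n) // H, n - 1), :]
        up = up[:, :, np.minimum((np.arange(W) * n) // W, n - 1)]
        levels.append(up)
    return np.concatenate(levels, axis=0)
```

With bin sizes (1, 2, 3, 6) and a C-channel input, this toy version outputs 5C channels; in the paper each level is first reduced to C/4 channels by a 1×1 convolution, so the concatenation stays compact.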
In terms of computational cost, our PSPNet adds little to the original dilated FCN network, and with end-to-end learning the global pyramid pooling module and the local FCN features can be optimized simultaneously.\nDeep pretrained networks lead to good performance [17,33,13]. However, increasing the depth of the network may introduce additional optimization difficulty, as shown in [32,19] for image classification. ResNet solves this problem with a skip connection in each block, so later layers of a deep ResNet mainly learn residues based on previous ones.\nWe contrarily propose generating initial results by supervision with an additional loss, and learning the residue afterwards with the final loss. Optimization of the deep network is thus decomposed into two problems, each of which is simpler to solve.\nAn example of our deeply supervised ResNet101 [13] model is illustrated in Fig. 4. Apart from the main branch, which uses a softmax loss to train the final classifier, another classifier is applied after the fourth stage, i.e., the res4b22 residue block. Unlike relay backpropagation [32], which blocks the backward auxiliary loss at several shallow layers, we let both loss functions pass through all previous layers. The auxiliary loss helps optimize the learning process, while the master-branch loss takes the main responsibility; we add a weight to balance the auxiliary loss.\nIn the testing phase, we abandon this auxiliary branch and only use the well-optimized master branch for final prediction. This deeply supervised training strategy for ResNet-based FCNs is broadly useful under different experimental settings and works with pretrained ResNet models, which manifests the generality of the learning strategy. More details are provided in Section 5.2.\nFor a practical deep learning system, the devil is always in the details. Our implementation is based on the public platform Caffe [15].
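The two-loss training scheme just described, together with the "poly" learning-rate policy used in our implementation, can be sketched as follows (a minimal NumPy illustration with hypothetical names, not the authors' Caffe code):

```python
import numpy as np

def softmax_xent(logits, labels):
    """Mean softmax cross-entropy; logits (N, K), labels (N,) class ids."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def deeply_supervised_loss(main_logits, aux_logits, labels, aux_weight=0.4):
    """Master-branch loss plus the down-weighted auxiliary loss
    (weight 0.4 in the experiments); at test time only the master
    branch is kept."""
    return (softmax_xent(main_logits, labels)
            + aux_weight * softmax_xent(aux_logits, labels))

def poly_lr(base_lr, it, max_iter, power=0.9):
    """'Poly' policy: lr = base_lr * (1 - iter / max_iter) ** power."""
    return base_lr * (1.0 - it / max_iter) ** power
```

Both gradients flow through all shared layers, matching the paper's choice of letting the two losses pass through all previous layers rather than blocking the auxiliary signal.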
Inspired by [4], we use the \"poly\" learning rate policy, where the current learning rate equals the base one multiplied by (1 - iter/max_iter)^power. We set the base learning rate to 0.01 and the power to 0.9. The performance can be improved by increasing the iteration number, which is set to 150K for ADE20K, 30K for PASCAL VOC and 90K for Cityscapes. Momentum and weight decay are set to 0.9 and 0.0001, respectively. For data augmentation, we adopt random mirroring and random resizing between 0.5 and 2 for all datasets, and additionally add random rotation between -10 and 10 degrees and random Gaussian blur for ADE20K and PASCAL VOC. This comprehensive data augmentation scheme makes the network resist overfitting. Our network contains dilated convolutions following [4].\nDuring the course of experiments, we notice that an appropriately large \"cropsize\" yields good performance and that the \"batchsize\" in the batch normalization [14] layer is of great importance. Due to limited physical memory on GPU cards, we set the \"batchsize\" to 16 during training. To achieve this, we modify Caffe from the branch [37] of [4] and make it support batch normalization on data gathered from multiple GPUs based on OpenMPI. For the auxiliary loss, we set the weight to 0.4 in experiments.\nDataset and Evaluation Metrics We conduct experiments on the ADE20K dataset [43].\nAblation Study for Auxiliary Loss The introduced auxiliary loss helps optimize the learning process while not influencing learning in the master branch. We experiment with setting the auxiliary loss weight α between 0 and 1 and report the results. Table 5 shows a few results in this competition. Our ensemble submission achieves a score of 57.21% on the testing set, and our single model yields 55.38%, which is even higher than a few other multi-model ensemble submissions. This score is lower than that on the validation set, possibly due to the difference in data distribution between the validation and testing sets. As shown in column (d) of Fig.
2, PSPNet solves the common problems of FCN. Fig. 6 shows a few more parsing results on the validation set of ADE20K; our results contain more accurate and detailed structures than the baseline.\nOur PSPNet also works well on semantic segmentation. We carry out experiments on the PASCAL VOC 2012 segmentation dataset [8], which contains 20 object categories and one background class. Following the procedure of [26,7,31,3], we use augmented data with the annotations of [10], resulting in 10,582, 1,449 and 1,456 images for training, validation and testing. Results are shown in Table 6: we compare PSPNet with previous best-performing methods on the testing set under two settings, i.e., with or without pre-training on the MS-COCO dataset [21]; methods pre-trained with MS-COCO are marked by '†'. For fair comparison with current ResNet-based frameworks [38,9,4] in the scene parsing/semantic segmentation task, we build our architecture on ResNet101 without post-processing such as CRF. We evaluate PSPNet with several-scale inputs and use the averaged results following [3,24].\nAs shown in Table 6, PSPNet outperforms prior methods under both settings. Trained with only VOC 2012 data, we achieve 82.6% accuracy, the highest on all 20 classes. When PSPNet is pre-trained with the MS-COCO dataset, it reaches 85.4% accuracy, with 19 of the 20 classes receiving the highest accuracy. Intriguingly, our PSPNet trained with only VOC 2012 data outperforms existing methods trained with the MS-COCO pre-trained model.\nOne may argue that our base classification model is more powerful than those of several prior methods, since ResNet was recently proposed. To exhibit our unique contribution, we show that our method also outperforms state-of-the-art frameworks that use the same model, including FCRNs [38], LRR [9], and DeepLab [4].
In this process we do not even employ time-consuming but effective post-processing, such as the CRF used in [4,9].\nSeveral examples are shown in Fig. 7. For \"cows\" in row one, our baseline model treats them as \"horse\" and \"dog\" while PSPNet corrects these errors. For \"aeroplane\" and \"table\" in the second and third rows, PSPNet finds the missing parts. For \"person\", \"bottle\" and \"plant\" in the following rows, PSPNet performs well on these small-size object classes compared to the baseline model. More visual comparisons between PSPNet and other methods are included in Fig. 9.\nCityscapes [6] is a recently released dataset for semantic urban scene understanding. It contains 5,000 high-quality, finely annotated images with pixel-level labels, collected from 50 cities.\nWe have proposed an effective pyramid scene parsing network for complex scene understanding. The global pyramid pooling feature provides additional contextual information. We have also provided a deeply supervised optimization strategy for ResNet-based FCN networks. We hope that the publicly available implementation details can help the community adopt these useful strategies for scene parsing and semantic segmentation and advance related techniques.\nWe would like to thank Gang Sun and Tong Xiao for their help in training the basic classification models, and Qun Luo for technical support. This work is supported by a grant from the Research Grants Council of the Hong Kong SAR (project No. 2150760)." }, { "title": "Learning dense correspondence via 3d-guided cycle consistency", "year": 2016.0, "authors": "Tinghui Zhou; Philipp Krahenbuhl; Mathieu Aubry; Qixing Huang; Alexei Efros", "arxiv_di": 1604.05383, "Introduction": "Consistency is all I ask! TOM STOPPARD\nIn the past couple of years, deep learning has swept through computer vision like wildfire. One needs only to buy a GPU, arm oneself with enough training data, and turn the crank to see head-spinning improvements on most computer vision benchmarks.
So it is all the more curious to consider tasks for which deep learning has not made much inroad, typically due to the lack of easily obtainable training data. One such task is dense visual correspondence: the problem of estimating a pixel-wise correspondence field between images depicting visually similar objects or scenes. Not only is this a key ingredient for optical flow and stereo matching, but many other computer vision tasks, including recognition, segmentation, depth estimation, etc., could be posed as finding correspondences in a large visual database followed by label transfer.\n[Figure 1: training-time cycle linking two synthetic views s_1, s_2 of a 3D model and two real images r_1, r_2 via the flows F_{s1,s2}, F_{s1,r1}, F_{r1,r2}, F_{r2,s2}.]\nIn cases where the images depict the same physical object/scene across varying viewpoints, such as in stereo matching, there is exciting new work that aims to use the commonality of the scene structure as supervision to learn deep features for correspondence [2,12,20,15,39]. But for computing correspondence across different object/scene instances, no learning method to date has managed to seriously challenge SIFT flow [26], the dominant approach for this task.\nHow can we get supervision for dense correspondence between images depicting different object instances, such as images r_1 and r_2 in Figure 1? Our strategy in this paper is to learn the things we don't know by linking them up to the things we do know. In particular, at training time, we use a large dataset of 3D CAD models [1] to find one that can link the two images, as shown in Figure 1.
Here the dense correspondence between the two views s_1 and s_2 of the same 3D model can serve as our ground-truth supervision (as we know precisely where each shape point goes when rendered in a different viewpoint), but the challenge is to use this information to train a network that can produce correspondence between two real images at test time.\nA naive strategy is to train a network to estimate correspondence between the rendered views of the same 3D model, and then hope that the network generalizes to real images as well. Unfortunately, this does not work in practice (see Table 1), likely due to 1) the large visual difference between synthetic and real images and 2) the lack of cross-instance ground-truth correspondence for training. Instead, in this paper we utilize the concept of cycle consistency of correspondence flows [18,40,41]: the notion that the composition of flow fields along any circular path through the image set should have zero combined flow. Here, cycle consistency serves as a way to link the correspondence between real images and the rendered views into a single 4-cycle chain. We can then train our correspondence network using cycle consistency as the supervisory signal. The idea is to take advantage of the known synthetic-to-synthetic correspondence as ground-truth anchors that allow cycle consistency to propagate the correct correspondence information from synthetic to real images, without diverging or falling into a trivial solution. We could interpret cycle consistency as a kind of \"meta-supervision\" that operates not on the data directly, but rather on how the data should behave. As we show later, such 3D-guided consistency supervision allows the network to learn cross-instance correspondence that potentially overcomes some of the major difficulties (e.g. significant viewpoint and appearance variations) of previous pairwise matching methods like SIFT flow [26].
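The zero-combined-flow notion can be made concrete with a small sketch (illustrative NumPy with integer offsets; the paper's flows are real-valued and composed with differentiable bilinear sampling):

```python
import numpy as np

def cycle_residual(flows):
    """Cycle-consistency check: composing flow fields around a closed
    path should yield zero combined flow. `flows` is a list of
    (H, W, 2) integer-offset flow fields along the cycle; returns the
    mean magnitude of the composed flow (0 for a consistent cycle)."""
    H, W, _ = flows[0].shape
    ys, xs = np.mgrid[0:H, 0:W]
    total = np.zeros_like(flows[0], dtype=float)
    for F in flows:
        # follow the flow accumulated so far, then add the next edge
        ty = np.clip(ys + total[..., 1].astype(int), 0, H - 1)
        tx = np.clip(xs + total[..., 0].astype(int), 0, W - 1)
        total = total + F[ty, tx]
    return np.abs(total).mean()
```

A perfectly consistent cycle (e.g. a shift followed by its inverse) gives residual 0, while any net drift around the loop shows up as a positive residual, which is exactly the signal the training objective penalizes.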
Our approach can also be thought of as an extension and reformulation of FlowWeb [40] as a learning problem, where the image collection is stored implicitly in the network representation.\nThe main contributions of this paper are: 1) We propose a general learning framework for tasks without direct labels through cycle consistency, as an example of \"meta-supervision\"; 2) We present the first end-to-end trained deep network for dense cross-instance correspondence; 3) We demonstrate that widely available 3D CAD models can be used for learning correspondence between 2D images of different object instances.", "Related_Work": "Cross-instance pairwise correspondence The classic SIFT Flow approach [26] proposes an energy minimization framework that computes dense correspondence between different scenes by matching SIFT features [28] regularized by smoothness and small-displacement priors. Deformable Spatial Pyramid (DSP) Matching [22], a recent follow-up to SIFT Flow, greatly speeds up the inference while modestly improving the matching accuracy. Barnes et al. [5] extend the original PatchMatch [4] algorithm to allow more general-purpose (including cross-instance) matching. Bristow et al. [6] build an exemplar-LDA classifier around each pixel, and aggregate the matching responses over all classifiers with additional smoothness priors to obtain a dense correspondence estimate. In these same proceedings, Ham et al. [14] take advantage of recent developments in object proposals, and utilize local and geometric consistency constraints among object proposals to establish dense semantic correspondence.\nCollection correspondence Traditionally, correspondence has been defined in a pairwise manner, but recent works have tried to pose correspondence as the problem of joint image-set alignment.
The classic work on Congealing [25,16] uses sequential optimization to gradually lower the entropy of the intensity distribution of the entire image set by continuously warping each image via a parametric transformation (e.g. affine). RASL [31], Collection Flow [21] and Mobahi et al. [29] first estimate a low-rank subspace of the image collection, and then perform joint alignment among images projected onto the subspace. FlowWeb [40] builds a fully-connected graph for the image collection with images as nodes and pairwise flow fields as edges, and establishes globally-consistent dense correspondences by maximizing the cycle consistency among all edges. While achieving state-of-the-art performance, FlowWeb is overly dependent on the initialization quality, and scales poorly with the size of the image collection. Similar to a recent work on joint 3D shape alignment [18], Zhou et al. [41] tackle the problem by jointly optimizing feature matching and cycle consistency, but formulate it as a low-rank matrix recovery problem which they solve with a fast alternating minimization method. Virtual View Networks [7] leverage annotated keypoints to infer dense correspondence between images connected in a viewpoint graph, and use this graph to align a query image to all the reference images in order to perform single-view 3D reconstruction. Cho et al. [9] use correspondence consistency among selective search windows in a diverse image collection to perform unsupervised object discovery.\nDeep learning for correspondence Recently, several works have applied convolutional neural networks to learn same-instance dense correspondence. FlowNet [11] learns an optical flow CNN with a synthetic Flying Chairs dataset that generalizes well to existing benchmark datasets, yet still falls a bit short of state-of-the-art optical flow methods like DeepFlow [36] and EpicFlow [32]. Several recent works have also used supervision from reconstructed 3D scenes and stereo pairs [15,39,2].
However, all these approaches are inherently limited to matching images of the same physical object/scene. Long et al. [27] use deep features learned from large-scale object classification tasks to perform intra-class image alignment, but found this to perform similarly to SIFT flow.\nImage-shape correspondence Our work is partially motivated by recent progress in image-shape alignment, which allows establishing correspondence between images through intermediate 3D shapes. Aubry et al. [3] learn discriminative patches for matching 2D images to their corresponding 3D CAD models, while Peng et al. [30] utilize CAD models to train object detectors with few shots of labeled real images. In cases where depth data is available, deep learning methods have recently been applied to 3D object recognition and alignment between CAD models and RGB-D images [13,33,37]. Other works [17,34] leverage image and shape collections for joint pose estimation and refining image-shape alignment, which are further applied to single-view object reconstruction and depth estimation. Although our approach requires 3D CAD models for constructing the training set, the image-shape alignment is jointly learned with the image-image alignment, and no CAD models are required at test time.", "Methodology": "Our goal is to predict a dense flow (or correspondence) field F_{a,b} : R^2 → R^2 between pairs of images a and b. The flow field F_{a,b}(p) = (q_x - p_x, q_y - p_y) computes the relative offset from each point p in image a to its corresponding point q in image b. Given that pairwise correspondence might not always be well-defined (e.g. a side-view car and a frontal-view car do not have many visible parts in common), we additionally compute a matchability map M_{a,b} : R^2 → [0, 1] predicting whether a correspondence exists (M_{a,b}(p) = 1) or not (M_{a,b}(p) = 0).\nWe learn both the flow field and the matchability prediction with a convolutional neural network.
Both functions are differentiable with respect to the network parameters, which could be learned directly if we had dense annotations for F_{a,b} and M_{a,b} on a large set of real image pairs. In practice, however, it is infeasible to obtain those annotations at scale, as they are either too time-consuming or too ambiguous to annotate.\nWe instead choose a different route, and learn both functions by placing the supervision on the desired properties of the ground truth, i.e. while we do not know what the ground truth is, we know how it should behave. In this paper, we use cycle consistency with 3D CAD models as the desired property that serves as our supervisory signal. Specifically, for each pair of real training images r_1 and r_2, we find a 3D CAD model of the same category, and render two synthetic views s_1 and s_2 in similar viewpoints as r_1 and r_2, respectively (see Section 4.1 for more details). For each training quartet <s_1, s_2, r_1, r_2> we learn to predict flows from s_1 to r_1 (F_{s1,r1}) to r_2 (F_{r1,r2}) to s_2 (F_{r2,s2}) that are cycle-consistent with the ground-truth flow from s_1 to s_2 (\tilde{F}_{s1,s2}) provided by the rendering engine (similarly for the matchability prediction). By constructing consistency supervision through 3D CAD models, we aim to learn 2D image correspondences that potentially capture the 3D semantic appearance of the query objects. Furthermore, making \tilde{F}_{s1,s2} ground truth by construction prevents the cycle-consistency optimization from producing trivial solutions, such as identity flows.\nSections 3.1 and 3.2 formally define our training objectives for learning the correspondence F and matchability M, respectively. Section 3.3 demonstrates how to obtain a continuous approximation of the discrete maps that allows end-to-end training. Section 3.4 describes our network architecture.", "Conclusion": "In this paper, we used cycle consistency as a supervisory signal to learn dense cross-instance correspondences.
Not only did we find this kind of supervision surprisingly effective, but also that the idea of learning with cycle consistency could be fairly general. One could apply the same idea to construct other training scenarios, as long as the ground truth of one or more edges along the cycle is known. We hope that this work will inspire more efforts to tackle tasks with little or no direct labels by exploiting cycle consistency or other types of indirect or \"meta\"-supervision.", "Experiment_and_Results": "In this section, we describe the details of our network training procedure, and evaluate the performance of our network on the correspondence and matchability tasks.", "Extra": "Given a set of training quartets {<s_1, s_2, r_1, r_2>}, we train the CNN to minimize the following objective:\nL_{flow}(\tilde{F}_{s1,s2}, F_{s1,r1} ∘ F_{r1,r2} ∘ F_{r2,s2}),   (1)\nwhere \tilde{F}_{s1,s2} refers to the ground-truth flow between the two synthetic views, and F_{s1,r1}, F_{r1,r2} and F_{r2,s2} are predictions made by the CNN along the transitive path. The transitive flow composition F_{a,c} = F_{a,b} ∘ F_{b,c} is defined as\nF_{a,c}(p) = F_{a,b}(p) + F_{b,c}(p + F_{a,b}(p)),   (2)\nwhich is differentiable as long as F_{a,b} and F_{b,c} are differentiable. L_{flow}(\tilde{F}_{s1,s2}, F_{s1,s2}) denotes the truncated Euclidean loss defined as\nL_{flow}(\tilde{F}_{s1,s2}, F_{s1,s2}) = \sum_{p : \tilde{M}_{s1,s2}(p)=1} min(||\tilde{F}_{s1,s2}(p) - F_{s1,s2}(p)||^2, T^2),\nwhere \tilde{M}_{s1,s2}(p) is the ground-truth matchability map provided by the rendering engine (\tilde{M}_{s1,s2}(p) = 0 when p is either a background pixel or not visible in s_2), and T = 15 (pixels) for all our experiments. In practice, we found the truncated loss to be more robust to spurious outliers during training, especially in the early stage when the network output tends to be highly noisy.
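The flow composition of Eq. (2) and the truncated loss can be sketched as follows (a NumPy illustration with integer-valued flows and hypothetical names; the actual model composes real-valued flows with bilinear sampling so everything stays differentiable):

```python
import numpy as np

def compose_flow(F_ab, F_bc):
    """Transitive composition: F_ac(p) = F_ab(p) + F_bc(p + F_ab(p)).
    Flows are (H, W, 2) arrays of (dx, dy) pixel offsets; out-of-range
    targets are clamped to the image border in this sketch."""
    H, W, _ = F_ab.shape
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.clip(ys + F_ab[..., 1].astype(int), 0, H - 1)
    tx = np.clip(xs + F_ab[..., 0].astype(int), 0, W - 1)
    return F_ab + F_bc[ty, tx]

def truncated_flow_loss(F_gt, F_pred, M_gt, T=15):
    """Truncated Euclidean loss summed over ground-truth-matchable pixels."""
    err = ((F_gt - F_pred) ** 2).sum(axis=-1)
    return np.minimum(err, T ** 2)[M_gt == 1].sum()
```

Composing F_{s1,r1}, F_{r1,r2} and F_{r2,s2} this way and comparing against the ground-truth synthetic flow under the truncated loss mirrors objective (1); truncation at T = 15 pixels caps the penalty from outliers.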
Our training objective for matchability prediction also utilizes the cycle-consistency signal:\nL_{mat}(\tilde{M}_{s1,s2}, M_{s1,r1} ∘ M_{r1,r2} ∘ M_{r2,s2}),   (3)\nwhere \tilde{M}_{s1,s2} refers to the ground-truth matchability map between the two synthetic views, M_{s1,r1}, M_{r1,r2} and M_{r2,s2} are CNN predictions along the transitive path, and L_{mat} denotes the per-pixel cross-entropy loss. The matchability map composition is defined as\nM_{a,c}(p) = M_{a,b}(p) M_{b,c}(p + F_{a,b}(p)),   (4)\nwhere the composition depends on both the matchability and the flow field.\nDue to the multiplicative nature of the matchability composition (as opposed to the additive flow composition), we found that training with objective (3) directly results in the network exploiting the clean background in synthetic images, which helps predict a perfect segmentation of the synthetic object in M_{s1,r1}. Once M_{s1,r1} predicts zero values for background points, the network has no incentive to correctly predict the matchability for background points in M_{r1,r2}, as the multiplicative composition yields zero values regardless of the transitive predictions along M_{r1,r2} and M_{r2,s2}. To address this, we fix M_{s1,r1} = 1 and M_{r2,s2} = 1, and only train the CNN to infer M_{r1,r2}. This assumes that every pixel in s_1 (s_2) is matchable in r_1 (r_2), and allows the matchability learning to happen between real images. Note that this is still different from directly using \tilde{M}_{s1,s2} as supervision for M_{r1,r2}, as the matchability composition depends on the predicted flow field along the transitive path.\nThe matchability objective (3) is jointly optimized with the flow objective (1) during training, and our final objective can be written as L_{flow} + λ L_{mat} with λ = 100. An implicit assumption made in our derivation of the transitive compositions (Eq. 2 and 4) is that F and M are differentiable functions over continuous input, while images inherently consist of discrete pixel grids.
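The multiplicative composition of Eq. (4), and why a zero anywhere along the chain hides later predictions, can be sketched as (illustrative NumPy with integer offsets, hypothetical names):

```python
import numpy as np

def compose_matchability(M_ab, M_bc, F_ab):
    """Matchability composition: M_ac(p) = M_ab(p) * M_bc(p + F_ab(p)).
    M_* are (H, W) maps in [0, 1]; F_ab is an (H, W, 2) flow of (dx, dy)
    offsets. Because the form is multiplicative, a zero in M_ab zeroes the
    composite regardless of M_bc, which is why the paper fixes the
    synthetic-to-real maps to 1 and trains only the real-to-real map."""
    H, W = M_ab.shape
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.clip(ys + F_ab[..., 1].astype(int), 0, H - 1)
    tx = np.clip(xs + F_ab[..., 0].astype(int), 0, W - 1)
    return M_ab * M_bc[ty, tx]
```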
To allow end-to-end training with stochastic gradient descent (SGD), we obtain a continuous approximation of the full flow field and the matchability map with bilinear interpolation over the CNN predictions at discrete pixel locations. Specifically, for each discrete pixel location p ∈ {1, ..., W} × {1, ..., H}, the network predicts a flow vector F_{a,b}(p) as well as a matchability score M_{a,b}(p), and the approximation over all continuous points \hat{p} ∈ [1, W] × [1, H] is obtained by:\nF_{a,b}(\hat{p}) = \sum_{p ∈ N_{\hat{p}}} (1 - |p_x - \hat{p}_x|)(1 - |p_y - \hat{p}_y|) F_{a,b}(p)\nM_{a,b}(\hat{p}) = \sum_{p ∈ N_{\hat{p}}} (1 - |p_x - \hat{p}_x|)(1 - |p_y - \hat{p}_y|) M_{a,b}(p),\nwhere N_{\hat{p}} denotes the four neighboring pixels (top-left, top-right, bottom-left, bottom-right) of point \hat{p}, or just \hat{p} itself if it is one of the discrete pixels. This is equivalent to the differentiable image sampling with a bilinear kernel proposed in [19].\nOur network architecture (see Figure 2) follows the encoder-decoder design principle with three major components: 1) a feature encoder of 8 convolution layers that extracts relevant features from both input images with shared network weights; 2) a flow decoder of 9 fractionally-strided/up-sampling convolution (uconv) layers that assembles features from both input images and outputs a dense flow field; 3) a matchability decoder of 9 uconv layers that assembles features from both input images and outputs a probability map indicating whether each pixel in the source image has a correspondence in the target.\nAll conv/uconv layers are followed by rectified linear units (ReLUs) except for the last uconv layer of either decoder, and the filter size is fixed to 3×3 throughout the whole network. No pooling layer is used, and the stride is 2 when increasing/decreasing the spatial dimension of the feature maps.
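The bilinear interpolation above, equivalent to sampling with a bilinear kernel [19], can be sketched for a single continuous query point (illustrative NumPy, hypothetical names):

```python
import numpy as np

def bilinear_sample(field, py, px):
    """Interpolate a per-pixel field of shape (H, W, C) at a continuous
    location (py, px) using its four neighboring pixels, weighted by
    (1 - |p_x - px|)(1 - |p_y - py|) as in the text."""
    H, W, _ = field.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = py - y0, px - x0
    return ((1 - wy) * (1 - wx) * field[y0, x0]
            + (1 - wy) * wx * field[y0, x1]
            + wy * (1 - wx) * field[y1, x0]
            + wy * wx * field[y1, x1])
```

At integer locations the weights collapse onto a single pixel, so the approximation agrees exactly with the discrete prediction there, while between pixels it varies smoothly, which is what makes the composed objectives differentiable.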
The output of the matchability decoder is further passed through a sigmoid layer for normalization.\nDuring training, we apply the same network to the three input pairs along the cycle (s_1 → r_1, r_1 → r_2, and r_2 → s_2), and composite the outputs to optimize the consistency objectives (1) and (3). The 3D CAD models we use for constructing training quartets come from the ShapeNet database [1], while the real images are from the PASCAL3D+ dataset [38]. For each object instance (cropped from the bounding box and rescaled to 128 × 128) in the train split of PASCAL3D+, we render all 3D models under the same camera viewpoint (provided by PASCAL3D+), and only use the K = 20 nearest models as matches to the object instance based on the HOG [10] Euclidean distance. We then construct training quartets, each consisting of two real images (r_1 and r_2) matched to the same 3D model and their corresponding rendered views (s_1 and s_2). On average, the number of valid training quartets for each category is about 80,000. We train the network in a category-agnostic manner (i.e. a single network for all categories). We first initialize the network (feature encoder + flow decoder pathway) to mimic SIFT flow by randomly sampling image pairs from the training quartets and training the network to minimize the Euclidean loss between the network prediction and the SIFT flow output on the sampled pair. Then we fine-tune the whole network end-to-end to minimize the consistency losses defined in Eq. 1 and 3. We use the ADAM solver [23] with β_1 = 0.9, β_2 = 0.999, an initial learning rate of 0.001, and a step size of 50,000 with decay factor 0.5, for 200,000 iterations.
We train with mini-batches of 40 image pairs during initialization and 10 quartets during fine-tuning.\nWe visualize the effect of our cycle-consistency training in Figure 3, where we sample some random points in the synthetic image s_1 and plot their predicted correspondences along the cycle s_1 → r_1 → r_2 → s_2 to compare with the ground truth in s_2. One can see that the transitive trajectories become more and more cycle-consistent with more iterations of training, while individual correspondences along each edge of the cycle also tend to become more semantically plausible. We visualize the features learned by the network using the t-SNE algorithm [35]. Specifically, we extract conv-9 features (i.e. the output of the last encoder layer) from the entire set of car instances in the PASCAL3D+ dataset and embed them in 2-D with t-SNE. Figure 4 visualizes the embedding. Interestingly, while our network is not explicitly trained to perform viewpoint estimation, the embedding layout appears to be viewpoint-sensitive, which implies that the network might implicitly learn that viewpoint is an important cue for the correspondence/matchability tasks through our consistency training. We evaluate the quality of our correspondence output using the keypoint transfer task on the 12 categories from PASCAL3D+ [38]. For each category, we exhaustively sample all image pairs from the val split (not seen during training), and determine whether a keypoint in the source image is transferred correctly by measuring the Euclidean distance between our correspondence prediction and the annotated ground truth (if it exists) in the target image. A correct transfer means the prediction falls within α · max(H, W) pixels of the ground truth, with H and W being the image height and width, respectively (both are 128 pixels in our case).
We compute the percentage of correct keypoint transfers (PCK) over all image pairs as the metric, and provide a performance comparison for the following methods in Table 1:\n• SIFT flow [26] - a classic method for dense correspondence using SIFT feature descriptors and hand-designed smoothness and large-displacement priors. We also ran a preliminary evaluation of a more recent follow-up based on deformable spatial pyramids [22], and found it to perform similarly to SIFT flow.\n• Long et al. [27] - a similar MRF energy minimization framework as SIFT flow, but with deep features learned from the ImageNet classification task.\n• CNN I2S - our network trained on real image pairs with correspondence inferred by compositing the output of an off-the-shelf image-to-shape alignment algorithm [17] and the ground-truth synthetic correspondence (i.e. obtaining direct supervision for F_{r1,r2} through F_{r1,s1} ∘ \tilde{F}_{s1,s2} ∘ F_{s2,r2}, where F_{r1,s1} and F_{s2,r2} are inferred from [17]).\n• CNN init - our network trained to mimic SIFT flow.\n• CNN init + Synthetic ft. - fine-tuning on synthetic image pairs with ground-truth correspondence after initialization with SIFT flow.\n• CNN init + Consistency ft. - fine-tuning with our objectives (1) and (3) after initialization with SIFT flow.\nOverall, our consistency-supervised network significantly outperforms all other methods (except on \"bicycle\" and \"motorbike\", where SIFT flow has a slight advantage). Notice the significant improvement over the initial network after consistency fine-tuning.
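The transfer-correctness criterion above can be sketched as a small PCK routine (illustrative, hypothetical names; H = W = 128 as in the text, while the threshold factor α = 0.1 here is only an assumed example value):

```python
import numpy as np

def pck(pred_pts, gt_pts, alpha=0.1, H=128, W=128):
    """Percentage of correct keypoints: a transfer counts as correct when
    the predicted point lies within alpha * max(H, W) pixels of the
    annotated ground truth. Points are (N, 2) arrays of (x, y)."""
    d = np.linalg.norm(np.asarray(pred_pts, float) - np.asarray(gt_pts, float), axis=1)
    return float((d <= alpha * max(H, W)).mean())
```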
The performance gap between the last two rows of Table 1 suggests that consistency supervision is much more effective in adapting to the real image domain than direct supervision from synthetic ground-truth. Figure 5 compares sample keypoint transfer results using different methods. In general, our final prediction tends to match the ground truth much better than the other baselines, and can sometimes overcome substantial viewpoint and appearance variation where previous methods, like SIFT flow, are notoriously error-prone.

We evaluate the performance of matchability prediction using the PASCAL-Part dataset [8], which provides human-annotated part segment labeling. For each test image pair, a pixel in the source image is deemed matchable if there exists another pixel in the target image that shares the same part label; all background pixels are unmatchable. We measure performance by computing the percentage of pixels classified correctly. For our method, we classify a pixel as matchable if its probability is > 0.5 according to the network prediction. To obtain matchability predictions for SIFT flow, we compute the L1 norm of the SIFT feature matching error for each source pixel after alignment, and a pixel is predicted to be matchable if the error is below a certain threshold (we did a grid search on the training set to determine the threshold, and found 1,000 to perform best). Table 2 compares the classification accuracy between our method and the SIFT flow prediction (chance performance is 50%). Our method significantly outperforms SIFT flow on all categories except "bicycle" and "motorbike" (67.8% vs. 57.1% mean accuracy).

We visualize some examples of our matchability prediction in Figure 6. Notice how the prediction varies when the target image changes while the source image stays the same.
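The matchability evaluation just described reduces to thresholding a per-pixel score and comparing against the part-label-derived ground truth. A minimal sketch (a hypothetical helper, not the authors' code):

```python
import numpy as np

def matchability_accuracy(score, gt_matchable, threshold=0.5, higher_is_matchable=True):
    """Percentage of pixels classified correctly. For the network, `score` is a
    probability with threshold 0.5; for the SIFT-flow baseline, `score` would be
    the L1 matching error with higher_is_matchable=False and a tuned threshold."""
    score = np.asarray(score, dtype=float)
    pred = score > threshold if higher_is_matchable else score < threshold
    return float(np.mean(pred == np.asarray(gt_matchable, dtype=bool)))
```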
Although in this paper we are mostly interested in finding correspondence between real images, a nice byproduct of our consistency training is that the network also implicitly learns cross-domain, shape-to-image correspondence, which allows us to transfer per-pixel labels (e.g., surface normals, segmentation masks, etc.) from shapes to real images. As a proof of concept, we ran a toy experiment on the task of segmentation transfer. Specifically, we construct a shape database of about 200 shapes per category, with each shape rendered in 8 canonical viewpoints. Given a query real image, we apply our network to predict the correspondence between the query and each rendered view of the same category, and warp the query image according to the predicted flow field. We then compare the HOG Euclidean distance between the warped query and the rendered views, and retrieve the rendered view with minimum error, whose correspondence to the query image on the foreground region is used for segmentation transfer. Figure 7 shows sample segmentations using different methods. We can see that our learned flows tend to produce more accurate segmentation transfer than SIFT flow using the same pipeline. In some cases our output can even segment challenging parts such as the bars and wheels of the chairs.

We thank Leonidas Guibas, Shubham Tulsiani, and Saurabh Gupta for helpful discussions. This work was sponsored in part by NSF/Intel VEC 1539099, ONR MURI N000141010934, and a hardware donation by NVIDIA.

Deformable DETR: Deformable Transformers for End-to-End Object Detection
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai (2020, arXiv:2010.04159)

Introduction

Modern object detectors employ many hand-crafted components (Liu et al., 2020), e.g., anchor generation, rule-based training target assignment, and non-maximum suppression (NMS) post-processing. They are not fully end-to-end. Recently, Carion et al.
(2020) proposed DETR to eliminate the need for such hand-crafted components, and built the first fully end-to-end object detector, achieving very competitive performance. DETR utilizes a simple architecture, combining convolutional neural networks (CNNs) and Transformer (Vaswani et al., 2017) encoder-decoders. It exploits the versatile and powerful relation modeling capability of Transformers to replace the hand-crafted rules, under properly designed training signals.

Despite its interesting design and good performance, DETR has its own issues: (1) It requires much longer training schedules to converge than existing object detectors. For example, on the COCO (Lin et al., 2014) benchmark, DETR needs 500 epochs to converge, which is around 10 to 20 times slower than Faster R-CNN (Ren et al., 2015). (2) DETR delivers relatively low performance at detecting small objects. Modern object detectors usually exploit multi-scale features, where small objects are detected from high-resolution feature maps; meanwhile, high-resolution feature maps lead to unacceptable complexity for DETR. The above-mentioned issues can mainly be attributed to the deficit of Transformer components in processing image feature maps. At initialization, the attention modules cast nearly uniform attention weights over all the pixels in the feature maps. Long training schedules are necessary for the attention weights to learn to focus on sparse, meaningful locations. On the other hand, the attention-weight computation in the Transformer encoder is quadratic in the number of pixels, so processing high-resolution feature maps incurs very high computational and memory complexity.

In the image domain, deformable convolution (Dai et al., 2017) is a powerful and efficient mechanism to attend to sparse spatial locations, and it naturally avoids the above-mentioned issues. However, it lacks the element relation modeling mechanism, which is key to the success of DETR.
In this paper, we propose Deformable DETR, which mitigates the slow convergence and high complexity issues of DETR. It combines the best of the sparse spatial sampling of deformable convolution and the relation modeling capability of Transformers. We propose the deformable attention module, which attends to a small set of sampling locations as a pre-filter for prominent key elements out of all the feature map pixels. The module can be naturally extended to aggregating multi-scale features, without the help of FPN (Lin et al., 2017a). In Deformable DETR, we utilize (multi-scale) deformable attention modules to replace the Transformer attention modules processing feature maps, as shown in Fig. 1.

Deformable DETR opens up possibilities for us to exploit variants of end-to-end object detectors, thanks to its fast convergence and computational and memory efficiency. We explore a simple and effective iterative bounding box refinement mechanism to improve detection performance. We also try a two-stage Deformable DETR, where the region proposals are generated by a variant of Deformable DETR and then fed into the decoder for iterative bounding box refinement.

Extensive experiments on the COCO (Lin et al., 2014) benchmark demonstrate the effectiveness of our approach. Compared with DETR, Deformable DETR achieves better performance (especially on small objects) with 10× fewer training epochs. The proposed two-stage variant of Deformable DETR can further improve the performance. Code is released at https://github.com/fundamentalvision/Deformable-DETR.

Related Work

Efficient Attention Mechanism. Transformers (Vaswani et al., 2017) involve both self-attention and cross-attention mechanisms. One of the most well-known concerns with Transformers is the high time and memory complexity at vast numbers of key elements, which hinders model scalability in many cases.
Recently, many efforts have been made to address this problem (Tay et al., 2020b); they can be roughly divided into three categories.

The first category uses pre-defined sparse attention patterns on keys. The most straightforward paradigm is restricting the attention pattern to fixed local windows. Most works (Liu et al., 2018a; Parmar et al., 2018; Child et al., 2019; Huang et al., 2019; Ho et al., 2019; Wang et al., 2020a; Hu et al., 2019; Ramachandran et al., 2019; Qiu et al., 2019; Beltagy et al., 2020; Ainslie et al., 2020; Zaheer et al., 2020) follow this paradigm. Although restricting the attention pattern to a local neighborhood can decrease the complexity, it loses global information. To compensate, Child et al. (2019), Huang et al. (2019), Ho et al. (2019), and Wang et al. (2020a) attend to key elements at fixed intervals to significantly increase the receptive field on keys. Beltagy et al. (2020), Ainslie et al. (2020), and Zaheer et al. (2020) allow a small number of special tokens to access all key elements. Zaheer et al. (2020) and Qiu et al. (2019) also add some pre-fixed sparse attention patterns to attend to distant key elements directly.

The second category learns data-dependent sparse attention. Kitaev et al. (2020) propose a locality-sensitive hashing (LSH) based attention, which hashes both the query and key elements into different bins. A similar idea is proposed by Roy et al. (2020), where k-means finds the most related keys. Tay et al. (2020a) learn block permutations for block-wise sparse attention.

The third category explores the low-rank property of self-attention. Wang et al. (2020b) reduce the number of key elements through a linear projection on the size dimension instead of the channel dimension. Katharopoulos et al. (2020) and Choromanski et al. (2020) rewrite the calculation of self-attention through kernelization approximation.

In the image domain, the designs of efficient attention mechanisms (e.g., Parmar et al.
(2018); Hu et al. (2019); Ramachandran et al. (2019)) are mostly restricted to the first category. Despite the theoretically reduced complexity, Ramachandran et al. (2019) and Hu et al. (2019) admit that such approaches are much slower in implementation than traditional convolution with the same FLOPs (at least 3× slower), due to the intrinsic limitation in memory access patterns.

On the other hand, as discussed in Zhu et al. (2019a), there are variants of convolution, such as deformable convolution (Dai et al., 2017; Zhu et al., 2019b) and dynamic convolution (Wu et al., 2019), that can also be viewed as self-attention mechanisms. In particular, deformable convolution operates much more effectively and efficiently on image recognition than Transformer self-attention, but it lacks the element relation modeling mechanism.

Our proposed deformable attention module is inspired by deformable convolution, and belongs to the second category. It focuses only on a small fixed set of sampling points predicted from the features of the query elements. Different from Ramachandran et al. (2019) and Hu et al. (2019), deformable attention is only slightly slower than traditional convolution under the same FLOPs.

Multi-scale Feature Representation for Object Detection. One of the main difficulties in object detection is effectively representing objects at vastly different scales. Modern object detectors usually exploit multi-scale features to accommodate this. As one of the pioneering works, FPN (Lin et al., 2017a) proposes a top-down path to combine multi-scale features. PANet (Liu et al., 2018b) further adds a bottom-up path on top of FPN. Kong et al. (2018) combine features from all scales with a global attention operation. Zhao et al. (2019) propose a U-shaped module to fuse multi-scale features. Recently, NAS-FPN (Ghiasi et al., 2019) and Auto-FPN (Xu et al., 2019) automatically design cross-scale connections via neural architecture search. Tan et al. (2020) propose BiFPN, a repeated, simplified version of PANet.
Our proposed multi-scale deformable attention module can naturally aggregate multi-scale feature maps via the attention mechanism, without the help of these feature pyramid networks.

Comparison with State-of-the-art Methods

Table 3 compares the proposed method with other state-of-the-art methods. Iterative bounding box refinement and the two-stage mechanism are both utilized by our models in Table 3. With ResNet-101 and ResNeXt-101 (Xie et al., 2017), our method achieves 48.7 AP and 49.0 AP without bells and whistles, respectively. By using ResNeXt-101 with DCN (Zhu et al., 2019b), the accuracy rises to 50.1 AP. With additional test-time augmentations, the proposed method achieves 52.3 AP.

Table 2: Ablations for deformable attention on the COCO 2017 val set. "MS inputs" indicates using multi-scale inputs; "MS attention" indicates using multi-scale deformable attention; K is the number of sampling points for each attention head on each feature level.

K   FPNs                       AP    AP50  AP75  APS   APM   APL
4   FPN (Lin et al., 2017a)    43.8  62.6  47.8  26.5  47.3  58.1
4   BiFPN (Tan et al., 2020)   43.9  62.5  47.7  25.6  47.4

Conclusion

Deformable DETR is an efficient and fast-converging end-to-end object detector. It enables us to explore more interesting and practical variants of end-to-end object detectors. At the core of Deformable DETR are the (multi-scale) deformable attention modules, an efficient attention mechanism for processing image feature maps. We hope our work opens up new possibilities in exploring end-to-end object detection.

Experiments and Results

Dataset. We conduct experiments on the COCO 2017 dataset (Lin et al., 2014). Our models are trained on the train set and evaluated on the val set and test-dev set.

Implementation Details. ImageNet (Deng et al., 2009) pre-trained ResNet-50 (He et al., 2016) is used as the backbone for ablations. Multi-scale feature maps are extracted without FPN (Lin et al., 2017a).
M = 8 and K = 4 are set for deformable attention by default. Parameters of the deformable Transformer encoder are shared among different feature levels. Other hyper-parameter settings and the training strategy mainly follow DETR (Carion et al., 2020), except that Focal Loss (Lin et al., 2017b) with a loss weight of 2 is used for bounding box classification, and the number of object queries is increased from 100 to 300. We also report the performance of DETR-DC5 with these modifications for a fair comparison, denoted as DETR-DC5+. By default, models are trained for 50 epochs and the learning rate is decayed at the 40th epoch by a factor of 0.1. Following DETR (Carion et al., 2020), we train our models using the Adam optimizer (Kingma & Ba, 2015) with a base learning rate of 2 × 10^-4, β1 = 0.9, β2 = 0.999, and weight decay of 10^-4. Learning rates of the linear projections used for predicting object query reference points and sampling offsets are multiplied by a factor of 0.1. Run time is evaluated on an NVIDIA Tesla V100 GPU.

Multi-Head Attention in Transformers. Transformers (Vaswani et al., 2017) are a network architecture based on attention mechanisms, originally for machine translation. Given a query element (e.g., a target word in the output sentence) and a set of key elements (e.g., source words in the input sentence), the multi-head attention module adaptively aggregates the key contents according to attention weights that measure the compatibility of query-key pairs. To allow the model to focus on contents from different representation subspaces and different positions, the outputs of different attention heads are linearly aggregated with learnable weights. Let q ∈ Ω_q index a query element with representation feature z_q ∈ R^C, and k ∈ Ω_k index a key element with representation feature x_k ∈ R^C, where C is the feature dimension, and Ω_q and Ω_k specify the sets of query and key elements, respectively.
Then the multi-head attention feature is calculated by

MultiHeadAttn(z_q, x) = \sum_{m=1}^{M} W_m \Big[ \sum_{k \in \Omega_k} A_{mqk} \cdot W'_m x_k \Big],   (1)

where m indexes the attention head, and W'_m ∈ R^{C_v×C} and W_m ∈ R^{C×C_v} are learnable weights (C_v = C/M by default). The attention weights A_{mqk} ∝ exp( z_q^T U_m^T V_m x_k / \sqrt{C_v} ) are normalized as \sum_{k ∈ Ω_k} A_{mqk} = 1, in which U_m, V_m ∈ R^{C_v×C} are also learnable weights. To disambiguate different spatial positions, the representation features z_q and x_k are usually the concatenation/summation of element contents and positional embeddings.

There are two known issues with Transformers. One is that Transformers need long training schedules before convergence. Suppose the numbers of query and key elements are N_q and N_k, respectively. Typically, with proper parameter initialization, U_m z_q and V_m x_k follow distributions with mean 0 and variance 1, which makes the attention weights A_{mqk} ≈ 1/N_k when N_k is large. This leads to ambiguous gradients for the input features. Thus, long training schedules are required so that the attention weights can focus on specific keys. In the image domain, where the key elements are usually image pixels, N_k can be very large and convergence is tedious.

On the other hand, the computational and memory complexity of multi-head attention can be very high with numerous query and key elements. The computational complexity of Eq. 1 is O(N_q C^2 + N_k C^2 + N_q N_k C). In the image domain, where the query and key elements are both pixels, N_q = N_k ≫ C, so the complexity is dominated by the third term, O(N_q N_k C). Thus, the multi-head attention module suffers from quadratic complexity growth with the feature map size.

DETR. DETR (Carion et al., 2020) is built upon the Transformer encoder-decoder architecture, combined with a set-based Hungarian loss that forces unique predictions for each ground-truth bounding box via bipartite matching.
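The multi-head attention of Eq. 1 can be sketched directly in NumPy. The weight shapes below (per-head U_m, V_m, W'_m and output projection W_m) follow the definitions above, but the function itself is an illustrative sketch, not reference code:

```python
import numpy as np

def multi_head_attn(z, x, U, V, Wv, Wo):
    """Eq. 1: z is (Nq, C) query features, x is (Nk, C) key features.
    U, V, Wv have shape (M, Cv, C); Wo has shape (M, C, Cv)."""
    M, Cv, _ = U.shape
    q = np.einsum('mvc,qc->mqv', U, z)            # U_m z_q
    k = np.einsum('mvc,kc->mkv', V, x)            # V_m x_k
    logits = np.einsum('mqv,mkv->mqk', q, k) / np.sqrt(Cv)
    logits -= logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)            # sum_k A_mqk = 1
    v = np.einsum('mvc,kc->mkv', Wv, x)           # W'_m x_k
    heads = np.einsum('mqk,mkv->mqv', A, v)       # weighted aggregation per head
    return np.einsum('mcv,mqv->qc', Wo, heads)    # W_m projection, summed over heads
```

The O(N_q N_k C) term discussed above corresponds to the logits einsum; with N_q = N_k = H·W it is quadratic in the spatial size.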
We briefly review the network architecture as follows.

Given input feature maps x ∈ R^{C×H×W} extracted by a CNN backbone (e.g., ResNet (He et al., 2016)), DETR exploits a standard Transformer encoder-decoder architecture to transform the input feature maps into features of a set of object queries. A 3-layer feed-forward network (FFN) and a linear projection are added on top of the object query features (produced by the decoder) as the detection head. The FFN acts as the regression branch to predict the bounding box coordinates b ∈ [0, 1]^4, where b = {b_x, b_y, b_w, b_h} encodes the normalized box center coordinates, box height, and width (relative to the image size). The linear projection acts as the classification branch to produce the classification results.

For the Transformer encoder in DETR, both query and key elements are pixels in the feature maps. The inputs are ResNet feature maps (with encoded positional embeddings). Let H and W denote the feature map height and width, respectively. The computational complexity of self-attention is O(H^2 W^2 C), which grows quadratically with the spatial size.

For the Transformer decoder in DETR, the input includes both the feature maps from the encoder and N object queries represented by learnable positional embeddings (e.g., N = 100). There are two types of attention modules in the decoder, namely cross-attention and self-attention modules. In the cross-attention modules, object queries extract features from the feature maps: the query elements are the object queries, and the key elements are the output feature maps from the encoder. Here, N_q = N and N_k = H × W, so the complexity of the cross-attention is O(HWC^2 + NHWC), which grows linearly with the spatial size of the feature maps. In the self-attention modules, object queries interact with each other so as to capture their relations. The query and key elements are both the object queries.
Here, N_q = N_k = N, and the complexity of the self-attention module is O(2NC^2 + N^2 C), which is acceptable with a moderate number of object queries.

DETR is an attractive design for object detection, removing the need for many hand-designed components. However, it also has its own issues, which can mainly be attributed to the deficits of Transformer attention in handling image feature maps as key elements: (1) DETR has relatively low performance in detecting small objects. Modern object detectors use high-resolution feature maps to better detect small objects; however, high-resolution feature maps would lead to an unacceptable complexity for the self-attention module in the Transformer encoder of DETR, which is quadratic in the spatial size of the input feature maps. (2) Compared with modern object detectors, DETR requires many more training epochs to converge. This is mainly because the attention modules processing image features are difficult to train: at initialization, the cross-attention modules cast almost uniform attention over the whole feature maps, while at the end of training the attention maps have learned to be very sparse, focusing only on the object extremities.

Deformable Attention Module. The core issue of applying Transformer attention to image feature maps is that it looks over all possible spatial locations. To address this, we present the deformable attention module. Inspired by deformable convolution (Dai et al., 2017; Zhu et al., 2019b), the deformable attention module attends only to a small set of key sampling points around a reference point, regardless of the spatial size of the feature maps, as shown in Fig. 2.
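The sampling idea just described can be sketched for a single query: sample K fractional locations per head via bilinear interpolation, weight them with softmax-normalized attention, and project. The weight shapes and names here are our illustrative assumptions, not the released implementation:

```python
import numpy as np

def bilinear(x, px, py):
    """Bilinearly sample feature map x of shape (C, H, W) at fractional (px, py)."""
    C, H, W = x.shape
    x0, y0 = int(np.floor(px)), int(np.floor(py))
    dx, dy = px - x0, py - y0
    def at(yy, xx):  # clamp indices to the border
        return x[:, min(max(yy, 0), H - 1), min(max(xx, 0), W - 1)]
    return ((1 - dx) * (1 - dy) * at(y0, x0) + dx * (1 - dy) * at(y0, x0 + 1)
            + (1 - dx) * dy * at(y0 + 1, x0) + dx * dy * at(y0 + 1, x0 + 1))

def deform_attn(zq, pq, x, W_off, W_attn, Wv, Wo):
    """One query: zq (C,), reference point pq = (px, py), feature map x (C, H, W).
    W_off: (M, K, 2, C) offset projection, W_attn: (M, K, C) attention logits,
    Wv: (M, Cv, C) value projection, Wo: (M, C, Cv) output projection."""
    M, K = W_attn.shape[:2]
    offsets = np.einsum('mkdc,c->mkd', W_off, zq)        # sampling offsets from zq
    logits = np.einsum('mkc,c->mk', W_attn, zq)
    A = np.exp(logits - logits.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                    # sum_k A_mqk = 1 per head
    out = np.zeros(Wo.shape[1])
    for m in range(M):
        head = np.zeros(Wv.shape[1])
        for k in range(K):
            sample = bilinear(x, pq[0] + offsets[m, k, 0], pq[1] + offsets[m, k, 1])
            head += A[m, k] * (Wv[m] @ sample)
        out += Wo[m] @ head
    return out
```

Each query touches only M·K sampled locations instead of all H·W keys, which is the source of the efficiency gain.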
By assigning only a small fixed number of keys to each query, the issues of convergence and feature spatial resolution can be mitigated.

Given an input feature map x ∈ R^{C×H×W}, let q index a query element with content feature z_q and a 2-d reference point p_q. The deformable attention feature is calculated by

DeformAttn(z_q, p_q, x) = \sum_{m=1}^{M} W_m \Big[ \sum_{k=1}^{K} A_{mqk} \cdot W'_m \, x(p_q + \Delta p_{mqk}) \Big],   (2)

where m indexes the attention head, k indexes the sampled keys, and K is the total number of sampled keys (K ≪ HW). Δp_{mqk} and A_{mqk} denote the sampling offset and attention weight of the k-th sampling point in the m-th attention head, respectively. The scalar attention weight A_{mqk} lies in the range [0, 1], normalized so that \sum_{k=1}^{K} A_{mqk} = 1. The Δp_{mqk} ∈ R^2 are 2-d real numbers with unconstrained range. As p_q + Δp_{mqk} is fractional, bilinear interpolation is applied as in Dai et al. (2017) when computing x(p_q + Δp_{mqk}). Both Δp_{mqk} and A_{mqk} are obtained via linear projection over the query feature z_q. In implementation, the query feature z_q is fed to a linear projection operator of 3MK channels, where the first 2MK channels encode the sampling offsets Δp_{mqk}, and the remaining MK channels are fed to a softmax operator to obtain the attention weights A_{mqk}.

The deformable attention module is designed for processing convolutional feature maps as key elements. Let N_q be the number of query elements. When MK is relatively small, the complexity of the deformable attention module is O(2N_q C^2 + min(HWC^2, N_q KC^2)) (see Appendix A.1 for details). When it is applied in the DETR encoder, where N_q = HW, the complexity becomes O(HWC^2), which is linear in the spatial size. When it is applied as the cross-attention module in the DETR decoder, where N_q = N (N is the number of object queries), the complexity becomes O(NKC^2), which is irrelevant to the spatial size HW.

Multi-scale Deformable Attention Module.
Most modern object detection frameworks benefit from multi-scale feature maps (Liu et al., 2020). Our proposed deformable attention module can be naturally extended to multi-scale feature maps.

Let {x^l}_{l=1}^{L} be the input multi-scale feature maps, where x^l ∈ R^{C×H_l×W_l}, and let p̂_q ∈ [0, 1]^2 be the normalized coordinates of the reference point for each query element q. The multi-scale deformable attention module is applied as

MSDeformAttn(z_q, p̂_q, {x^l}_{l=1}^{L}) = \sum_{m=1}^{M} W_m \Big[ \sum_{l=1}^{L} \sum_{k=1}^{K} A_{mlqk} \cdot W'_m \, x^l(\phi_l(p̂_q) + \Delta p_{mlqk}) \Big],   (3)

where m indexes the attention head, l indexes the input feature level, and k indexes the sampling point. Δp_{mlqk} and A_{mlqk} denote the sampling offset and attention weight of the k-th sampling point in the l-th feature level and the m-th attention head, respectively. The scalar attention weight A_{mlqk} is normalized so that \sum_{l=1}^{L} \sum_{k=1}^{K} A_{mlqk} = 1.

Here, we use normalized coordinates p̂_q ∈ [0, 1]^2 for clarity of the scale formulation, in which the normalized coordinates (0, 0) and (1, 1) indicate the top-left and bottom-right image corners, respectively. The function φ_l(p̂_q) in Equation 3 re-scales the normalized coordinates p̂_q to the input feature map of the l-th level. The multi-scale deformable attention is very similar to the single-scale version, except that it samples LK points from multi-scale feature maps instead of K points from a single-scale feature map.

The proposed attention module degenerates to deformable convolution (Dai et al., 2017) when L = 1, K = 1, and W'_m ∈ R^{C_v×C} is fixed as an identity matrix. Deformable convolution is designed for single-scale inputs, focusing on only one sampling point per attention head, whereas our multi-scale deformable attention looks over multiple sampling points from multi-scale inputs.
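The role of φ_l in Eq. 3 is simply to map a normalized reference point to level-l pixel coordinates. A tiny sketch under one common convention ((0, 0) maps to the first pixel, (1, 1) to the last; the exact pixel-center convention is an implementation detail we assume here):

```python
def phi(p_norm, H_l, W_l):
    """Re-scale normalized coordinates in [0, 1]^2 to the l-th feature map."""
    px, py = p_norm
    return (px * (W_l - 1), py * (H_l - 1))
```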
The proposed (multi-scale) deformable attention module can also be perceived as an efficient variant of Transformer attention, where a pre-filtering mechanism is introduced by the deformable sampling locations. When the sampling points traverse all possible locations, the proposed attention module becomes equivalent to Transformer attention.

Deformable Transformer Encoder. We replace the Transformer attention modules processing feature maps in DETR with the proposed multi-scale deformable attention module. Both the input and output of the encoder are multi-scale feature maps with the same resolutions. In the encoder, we extract the multi-scale feature maps {x^l}_{l=1}^{L-1} (L = 4) from the output feature maps of stages C3 through C5 in ResNet (He et al., 2016) (transformed by a 1 × 1 convolution), where C_l has resolution 2^l lower than the input image. The lowest-resolution feature map x^L is obtained via a 3 × 3 stride-2 convolution on the final C5 stage, denoted as C6. All the multi-scale feature maps have C = 256 channels. Note that the top-down structure in FPN (Lin et al., 2017a) is not used, because our proposed multi-scale deformable attention can in itself exchange information among multi-scale feature maps. The construction of the multi-scale feature maps is also illustrated in Appendix A.2. Experiments in Section 5.2 show that adding FPN does not improve the performance.

When the multi-scale deformable attention module is applied in the encoder, the outputs are multi-scale feature maps with the same resolutions as the input. Both the key and query elements are pixels from the multi-scale feature maps. For each query pixel, the reference point is itself. To identify which feature level each query pixel lies in, we add a scale-level embedding, denoted as e_l, to the feature representation, in addition to the positional embedding.
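Given the strides just stated (C3–C5 at 1/8, 1/16, 1/32 of the input, plus the extra stride-2 map at 1/64, all projected to C = 256 channels), the shapes of the four encoder input maps follow directly; a small sketch:

```python
def multiscale_shapes(H, W, C=256, strides=(8, 16, 32, 64)):
    """Shapes (C, H_l, W_l) of the four input maps {x^l}: x^1..x^3 come from
    ResNet stages C3..C5 via 1x1 convs; x^4 from a 3x3 stride-2 conv on C5."""
    return [(C, H // s, W // s) for s in strides]
```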
Different from the positional embedding with fixed encodings, the scale-level embeddings {e_l}_{l=1}^{L} are randomly initialized and jointly trained with the network.

Deformable Transformer Decoder. There are cross-attention and self-attention modules in the decoder. The query elements for both types of attention modules are the object queries. In the cross-attention modules, object queries extract features from the feature maps, where the key elements are the output feature maps from the encoder. In the self-attention modules, object queries interact with each other, where the key elements are the object queries. Since our proposed deformable attention module is designed for processing convolutional feature maps as key elements, we replace only each cross-attention module with the multi-scale deformable attention module, leaving the self-attention modules unchanged. For each object query, the 2-d normalized coordinates of the reference point p̂_q are predicted from its object query embedding via a learnable linear projection followed by a sigmoid function.

Because the multi-scale deformable attention module extracts image features around the reference point, we let the detection head predict the bounding box as relative offsets w.r.t. the reference point to further reduce the optimization difficulty. The reference point is used as the initial guess of the box center, and the detection head predicts relative offsets w.r.t. it; see Appendix A.3 for details. In this way, the learned decoder attention has a strong correlation with the predicted bounding boxes, which also accelerates training convergence.

By replacing the Transformer attention modules in DETR with deformable attention modules, we establish an efficient and fast-converging detection system, dubbed Deformable DETR (see Fig. 1).
Deformable DETR opens up possibilities for exploiting various variants of end-to-end object detectors, thanks to its fast convergence and computational and memory efficiency. Due to limited space, we only introduce the core ideas of these improvements and variants here; implementation details are given in Appendix A.4.

Iterative Bounding Box Refinement. Inspired by the iterative refinement developed in optical flow estimation (Teed & Deng, 2020), we establish a simple and effective iterative bounding box refinement mechanism to improve detection performance. Each decoder layer refines the bounding boxes based on the predictions from the previous layer.

Two-Stage Deformable DETR. In the original DETR, the object queries in the decoder are irrelevant to the current image. Inspired by two-stage object detectors, we explore a variant of Deformable DETR that generates region proposals as a first stage. The generated region proposals are fed into the decoder as object queries for further refinement, forming a two-stage Deformable DETR.

In the first stage, to achieve high-recall proposals, each pixel in the multi-scale feature maps would serve as an object query. However, directly setting the object queries to be pixels would bring unacceptable computational and memory cost for the self-attention modules in the decoder, whose complexity grows quadratically with the number of queries. To avoid this problem, we remove the decoder and form an encoder-only Deformable DETR for region proposal generation. In it, each pixel is assigned as an object query, which directly predicts a bounding box. Top-scoring bounding boxes are picked as region proposals. No NMS is applied before feeding the region proposals to the second stage.

As shown in Table 1, compared with Faster R-CNN + FPN, DETR requires many more training epochs to converge and delivers lower performance at detecting small objects.
Compared with DETR, Deformable DETR achieves better performance (especially on small objects) with 10× fewer training epochs. Detailed convergence curves are shown in Fig. 3. With the aid of iterative bounding box refinement and the two-stage paradigm, our method can further improve detection accuracy.

Our proposed Deformable DETR has FLOPs on par with Faster R-CNN + FPN and DETR-DC5, but its runtime is much faster (1.6×) than DETR-DC5 and only 25% slower than Faster R-CNN + FPN. The speed issue of DETR-DC5 is mainly due to the large amount of memory access in Transformer attention. Our proposed deformable attention mitigates this issue, at the cost of unordered memory access; thus, it is still slightly slower than traditional convolution.

Table 2 presents ablations for various design choices of the proposed deformable attention module. Using multi-scale inputs instead of single-scale inputs effectively improves detection accuracy by 1.7% AP, especially on small objects (2.9% APS). Increasing the number of sampling points K further improves AP by 0.9%. Using multi-scale deformable attention, which allows information exchange among different scale levels, brings an additional 1.5% AP improvement. Because cross-level feature exchange is already adopted, adding FPNs does not improve the performance. When multi-scale attention is not applied and K = 1, our (multi-scale) deformable attention module degenerates to deformable convolution, delivering noticeably lower accuracy.

A.1 Complexity for Deformable Attention

Suppose the number of query elements is N_q. In the deformable attention module (see Equation 2), the complexity for calculating the sampling coordinate offsets Δp_{mqk} and attention weights A_{mqk} is O(3N_q CMK).
Given the sampling coordinate offsets and attention weights, the complexity of computing Equation 2 is O(N_q C^2 + N_q K C^2 + 5 N_q K C), where the factor of 5 in 5 N_q K C is due to the bilinear interpolation and the weighted sum in attention. Alternatively, we can calculate W'_m x before sampling, as it is independent of the query, and the complexity of computing Equation 2 becomes O(N_q C^2 + H W C^2 + 5 N_q K C). So the overall complexity of deformable attention is O(N_q C^2 + min(H W C^2, N_q K C^2) + 5 N_q K C + 3 N_q C M K). In our experiments, M = 8, K ≤ 4 and C = 256 by default, thus 5K + 3MK < C and the complexity is O(2 N_q C^2 + min(H W C^2, N_q K C^2)).

As discussed in Section 4.1 and illustrated in Figure 4, the input multi-scale feature maps of the encoder {x^l}_{l=1}^{L-1} (L = 4) are extracted from the output feature maps of stages C_3 through C_5 in ResNet (He et al., 2016) (transformed by a 1×1 convolution). The lowest-resolution feature map x^L is obtained via a 3×3 stride-2 convolution on the final C_5 stage. Note that FPN (Lin et al., 2017a) is not used, because our proposed multi-scale deformable attention can in itself exchange information among multi-scale feature maps.

[Figure 4: Constructing the input multi-scale feature maps {x^l}_{l=1}^{4} from the ResNet feature maps C_3, C_4, C_5 via 1×1 stride-1 convolutions, with x^4 obtained from C_5 via a 3×3 stride-2 convolution.]

Iterative Bounding Box Refinement. Let D denote the number of decoder layers. Given the normalized bounding box b̂^{d-1}_q predicted by the (d-1)-th decoder layer, the d-th decoder layer refines it as

b^d_q = { σ(Δb^d_{qx} + σ^{-1}(b̂^{d-1}_{qx})), σ(Δb^d_{qy} + σ^{-1}(b̂^{d-1}_{qy})), σ(Δb^d_{qw} + σ^{-1}(b̂^{d-1}_{qw})), σ(Δb^d_{qh} + σ^{-1}(b̂^{d-1}_{qh})) },

where d ∈ {1, 2, ..., D} and Δb^d_{q{x,y,w,h}} ∈ R are predicted at the d-th decoder layer. Prediction heads for different decoder layers do not share parameters. The initial box is set as b̂^0_{qx} = p̂_{qx}, b̂^0_{qy} = p̂_{qy}, b̂^0_{qw} = 0.1, and b̂^0_{qh} = 0.1. The system is robust to the choice of b̂^0_{qw} and b̂^0_{qh}: we tried setting them as 0.05, 0.1, 0.2, and 0.5, and achieved similar performance.
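The per-coordinate refinement update operates in inverse-sigmoid (logit) space, which keeps the refined box inside [0, 1]. A minimal sketch of one layer's update (the function names `refine_box`, `inverse_sigmoid` are illustrative, not from the paper's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def inverse_sigmoid(p, eps=1e-5):
    # Clamp to avoid infinities at exactly 0 or 1.
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def refine_box(prev_box, deltas):
    """One decoder layer's refinement: b^d = sigmoid(delta + logit(b^{d-1})),
    applied independently to each of (x, y, w, h), all normalized to [0, 1]."""
    return tuple(sigmoid(d + inverse_sigmoid(b)) for b, d in zip(prev_box, deltas))

# Zero deltas leave the box unchanged (up to floating-point precision),
# so an untrained residual head starts from the previous layer's box.
box = refine_box((0.5, 0.5, 0.1, 0.1), (0.0, 0.0, 0.0, 0.0))
```

The residual parameterization also makes the gradient-blocking trick described next easy to apply: the previous box enters only through the (detached) inverse-sigmoid term.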
To stabilize training, similar to Teed & Deng (2020), the gradients only back-propagate through Δb^d_{q{x,y,w,h}} and are blocked at σ^{-1}(b̂^{d-1}_{q{x,y,w,h}}).

In iterative bounding box refinement, for the d-th decoder layer, we sample key elements with respect to the box b̂^{d-1}_q predicted by the (d-1)-th decoder layer. For Equation 3 in the cross-attention module of the d-th decoder layer, (b̂^{d-1}_{qx}, b̂^{d-1}_{qy}) serves as the new reference point. The sampling offset Δp_{mlqk} is also modulated by the box size, as (Δp_{mlqkx} b̂^{d-1}_{qw}, Δp_{mlqky} b̂^{d-1}_{qh}). Such modifications make the sampling locations related to the center and size of the previously predicted boxes.

Two-Stage Deformable DETR. In the first stage, given the output feature maps of the encoder, a detection head is applied to each pixel. The detection head consists of a 3-layer FFN for bounding box regression and a linear projection for bounding box binary classification (i.e., foreground vs. background). Let i index a pixel from feature level l_i ∈ {1, 2, ..., L} with 2-d normalized coordinates p̂_i = (p̂_{ix}, p̂_{iy}) ∈ [0, 1]^2. Its corresponding bounding box is predicted by

b̂_i = { σ(Δb_{ix} + σ^{-1}(p̂_{ix})), σ(Δb_{iy} + σ^{-1}(p̂_{iy})), σ(Δb_{iw} + σ^{-1}(2^{l_i - 1} s)), σ(Δb_{ih} + σ^{-1}(2^{l_i - 1} s)) },

where the base object scale s is set as 0.05 and Δb_{i{x,y,w,h}} ∈ R are predicted by the bounding box regression branch. The Hungarian loss in DETR is used for training the detection head.

Given the predicted bounding boxes from the first stage, the top-scoring bounding boxes are picked as region proposals. In the second stage, these region proposals are fed into the decoder as initial boxes for iterative bounding box refinement, where the positional embeddings of the object queries are set as the positional embeddings of the region proposal coordinates.

Initialization for Multi-Scale Deformable Attention. In our experiments, the number of attention heads is set as M = 8.
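The first-stage box prediction above uses the same inverse-sigmoid parameterization, with the pixel coordinate as the prior for the center and a level-dependent base scale 2^{l_i-1} s as the prior for width and height. A minimal sketch under those definitions (the function name `first_stage_box` is illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def inverse_sigmoid(p, eps=1e-5):
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def first_stage_box(px, py, level, deltas, s=0.05):
    """Predict a proposal box for a pixel at normalized (px, py) on
    feature level `level` (1-indexed); deltas = (dx, dy, dw, dh) come
    from the bounding box regression branch."""
    base = (2 ** (level - 1)) * s  # coarser levels get a larger prior scale
    dx, dy, dw, dh = deltas
    return (sigmoid(dx + inverse_sigmoid(px)),
            sigmoid(dy + inverse_sigmoid(py)),
            sigmoid(dw + inverse_sigmoid(base)),
            sigmoid(dh + inverse_sigmoid(base)))

# With zero deltas, the box sits at the pixel with the level's base size,
# so every pixel starts with a sensible scale-aware default proposal.
box = first_stage_box(0.5, 0.5, 1, (0.0, 0.0, 0.0, 0.0))
```

Tying the width/height prior to the feature level means coarse-level pixels default to larger boxes, matching the scales at which those levels detect objects.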
In multi-scale deformable attention modules, W'_m ∈ R^{C_v × C} and W_m ∈ R^{C × C_v} are randomly initialized. The weight parameters of the linear projections for predicting A_{mlqk} and Δp_{mlqk} are initialized to zero. The bias parameters of the linear projections are initialized so that, at initialization, A_{mlqk} = 1/(LK) and

{ Δp_{1lqk} = (-k, -k), Δp_{2lqk} = (-k, 0), Δp_{3lqk} = (-k, k), Δp_{4lqk} = (0, -k), Δp_{5lqk} = (0, k), Δp_{6lqk} = (k, -k), Δp_{7lqk} = (k, 0), Δp_{8lqk} = (k, k) } for k ∈ {1, 2, ..., K}.

For iterative bounding box refinement, the initialized bias parameters for the Δp_{mlqk} prediction in the decoder are further multiplied by 1/(2K), so that all the sampling points at initialization lie within the corresponding bounding boxes predicted by the previous decoder layer.

To study what Deformable DETR looks at to give the final detection result, we draw the gradient norm of each item in the final prediction (i.e., the x/y coordinate of the object center, the width/height of the object bounding box, and the category score of the object) with respect to each pixel in the image, as shown in Fig. 5. According to Taylor's theorem, the gradient norm reflects how much the output would change relative to a perturbation of the pixel, so it shows us which pixels the model mainly relies on for predicting each item.

The visualization indicates that Deformable DETR looks at the extreme points of the object to determine its bounding box, which is similar to the observation in DETR (Carion et al., 2020). More concretely, Deformable DETR attends to the left/right boundary of the object for the x coordinate and width, and to the top/bottom boundary for the y coordinate and height. Meanwhile, different from DETR (Carion et al., 2020), our Deformable DETR also looks at pixels inside the object for predicting its category. To better understand the learned multi-scale deformable attention modules, we visualize the sampling points and attention weights of the last layer in the encoder and decoder, as shown in Fig. 6.
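The bias initialization above arranges each head's K sampling points along one of the 8 compass directions, stepped outward by k. A small sketch that enumerates that pattern (the dictionary layout is illustrative; a real implementation would fill a bias tensor):

```python
# Hypothetical sketch of the initial sampling-offset pattern: head m
# points in one of 8 compass directions, and its k-th sampling point
# is pushed k steps outward along that direction.
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]  # heads m = 1..8

def init_sampling_offsets(M=8, K=4):
    """Return {(m, k): (dx, dy)} matching the paper's initialization,
    with m in 1..M and k in 1..K."""
    offsets = {}
    for m in range(M):
        dx, dy = DIRECTIONS[m % len(DIRECTIONS)]
        for k in range(1, K + 1):
            offsets[(m + 1, k)] = (dx * k, dy * k)
    return offsets

offs = init_sampling_offsets()
```

Starting from this symmetric star pattern (rather than all-zero offsets) gives each head a distinct spatial role from the first training step, while the zero-initialized weights keep the offsets query-independent at initialization.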
For readability, we combine the sampling points and attention weights from feature maps of different resolutions into one picture.

Similar to DETR (Carion et al., 2020), the instances are already separated in the encoder of Deformable DETR. In the decoder, however, our model focuses on the whole foreground instance instead of only the extreme points observed in DETR (Carion et al., 2020). Combined with the visualization of ∂c/∂I in Fig. 5, we conjecture that this is because our Deformable DETR needs not only the extreme points but also the interior points to determine the object category. The visualization also demonstrates that the proposed multi-scale deformable attention module can adapt its sampling points and attention weights according to the different scales and shapes of the foreground object.

Notation:
- W_l: width of the input feature map of the l-th feature level
- A_mqk: attention weight of the q-th query to the k-th key at the m-th head
- A_mlqk: attention weight of the q-th query to the k-th key in the l-th feature level at the m-th head
- z_q: input feature of the q-th query
- p_q: 2-d coordinate of the reference point for the q-th query
- p̂_q: normalized 2-d coordinate of the reference point for the q-th query
- x: input feature map (input feature of key elements)
- x_k: input feature of the k-th key
- x^l: input feature map of the l-th feature level
- Δp_mqk: sampling offset of the q-th query to the k-th key at the m-th head
- Δp_mlqk: sampling offset of the q-th query to the k-th key in the l-th feature level at the m-th head
- W_m: output projection matrix at the m-th head
- U_m: input query projection matrix at the m-th head
- V_m: input key projection matrix at the m-th head
- W'_m: input value projection matrix at the m-th head
- φ_l(p̂): unnormalized 2-d coordinate of p̂ in the l-th feature level
- exp: exponential function
- σ: sigmoid function
- σ^{-1}: inverse sigmoid function

The work is supported by the National Key R&D Program of China (2020AAA0105200), Beijing Academy of Artificial Intelligence, and the National Natural Science Foundation of China under grants No.U19B2044 and No.61836011.