Datasets:
Delete ACL_23_with_limitation/ACL23_1000.json
ACL_23_with_limitation/ACL23_1000.json
DELETED
@@ -1,32 +0,0 @@
{
"File Number": "1000",
"Title": "A Weakly Supervised Classifier and Dataset of White Supremacist Language",
"3 A1. Did you describe the limitations of your work?": "Unnumbered \"Limitations\" section at the end.",
"abstractText": "We present a dataset and classifier for detecting the language of white supremacist extremism, a growing issue in online hate speech. Our weakly supervised classifier is trained on large datasets of text from explicitly white supremacist domains paired with neutral and anti-racist data from similar domains. We demonstrate that this approach improves generalization performance to new domains. Incorporating anti-racist texts as counterexamples to white supremacist language mitigates bias.",
"1 Introduction": "The spread of white supremacist extremism online has motivated offline violence, including recent mass shootings in Christchurch, El Paso, Pittsburgh, and Buffalo. Though some research in natural language processing has focused on types of hate speech, such as anti-Black racism (Kwok and Wang, 2013) and misogyny (Fersini et al., 2018), little work has focused on detecting specific hateful ideologies. Practitioners have called for such systems, particularly for white supremacism (ADL, 2022; Yoder and Habib, 2022).\nTo detect white supremacist language, we build text classifiers trained on data from a large, diverse set of explicitly white supremacist online spaces, filtered to ideological topics.1 In a weakly supervised set-up, we train discriminative classifiers to distinguish texts in white supremacist domains from texts in similar online spaces that are not known for white supremacism. These classifiers outperform prior work in white supremacist classification on three annotated datasets, and we find that the best-performing models use a combination of weakly and manually annotated data.\nHate speech classifiers often have difficulty generalizing beyond data they were trained on (Swamy\n1See https://osf.io/274z3/ to access public parts of this dataset and others used in this paper.\net al., 2019; Yoder et al., 2022). We evaluate our classifiers on unseen datasets annotated for white supremacism from a variety of domains and find strong generalization performance for models that incorporate weakly annotated data.\nHate speech classifiers often learn to associate any mention of marginalized identities with hate, regardless of context (Dixon et al., 2017). To address this potential issue with white supremacist classification, we incorporate anti-racist texts, which often mention marginalized identities in positive contexts, as counter-examples to white supremacist texts. Evaluating on a synthetic test set with mentions of marginalized identities in a variety of contexts (Röttger et al., 2021), we find that including anti-racist texts helps mitigate this bias.",
"2 The Language of White Supremacist Extremism": "This work focuses on white supremacist extremism, social movements advocating for the superiority of white people and domination or separation from other races (Daniels, 2009). This fringe movement both exploits the bigotries widely held in societies with structural white supremacism and makes them explicit (Ferber, 2004; Berlet and Vysotsky, 2006; Pruden et al., 2022). Key beliefs of white supremacist extremism are that race and gender hierarchies are fixed, that white people’s “natural” power is threatened, and that action is needed to protect the white race (Ferber and Kimmel, 2000; Brown, 2009; Perry and Scrivens, 2016; Ansah, 2021).\nMany qualitative studies have examined the language of white supremacism (Thompson, 2001; Duffy, 2003; Perry and Scrivens, 2016; Bhat and Klein, 2020). Computational models have been developed to identify affect (Figea et al., 2016), hate speech (de Gibert et al., 2019), and violent intent (Simons and Skillicorn, 2020) within white supremacist forums.\n172\nTwo other studies have built models to detect white supremacist ideology in text. Alatawi et al. (2021) test Word2vec/BiLSTM models, pre-trained on a corpus of unlabeled white supremacist forum data, as well as BERT models. To estimate the prevalence of white supremacism on Twitter after the 2016 US election, Siegel et al. (2021) build a dictionary-based classifier and validate their findings with unlabeled alt-right Reddit data. In contrast, we use a large, domain-general white supremacist corpus with carefully selected negative training examples to build a weakly supervised discriminative classifier for white supremacism.",
"2.1 Hate speech and white supremacism": "The relationship between hate speech and white supremacism has been theorized and annotated in different ways. Some have annotated the glorification of ideologies and groups such as Nazism and the Ku Klux Klan separately from hate speech (Siegel et al., 2021; Rieger et al., 2021), which is often defined as verbal attacks on groups based on their identity (Sanguinetti et al., 2018; Poletto et al., 2021; de Gibert et al., 2019). A user of Stormfront, a white supremacist forum, notes this distinction to evade moderation on other platforms: “Nationalist means defending the white race; racist means degrading non-white races. You should be fine posting about preserving the white race as long as you don’t degrade other races.”2\nWe aim to capture the expression of white supremacist ideology beyond just hate speech against marginalized identities (see Figure 1). In contrast, de Gibert et al. (2019) ask annotators to identify hate speech within a white supremacist forum. They note that some content that did not fit strict definitions of hate speech still exhibited white supremacist ideology. Examples of this from data used in the current paper include “diversity means chasing down whites” (white people being threatened) and “god will punish as he did w/ hitler” (action needed to protect white people).",
"3 Weakly Annotated Data": "It is difficult for annotators to determine whether the short texts commonly used in NLP and computational social science, such as tweets, express white supremacism or other far-right ideologies. Alatawi et al. (2021) struggle to reach adequate\n2Quotes in this paper are paraphrased for privacy (Williams et al., 2017)\ninter-annotator agreement on white supremacism in tweets. Hartung et al. (2017) note that individual tweets are difficult to link to extreme right-wing ideologies and instead choose to annotate user tweet histories.\nInstead of focusing on individual posts, we turn to weak supervision, approaches to quickly and cheaply label large amounts of training data based on rules, knowledge bases or other domain knowledge (Ratner et al., 2017). Weakly supervised learning has been used in NLP for tasks such as cyberbullying detection (Raisi and Huang, 2017), sentiment analysis (Kamila et al., 2022), dialogue systems (Hudeček et al., 2021) and others (Karamanolakis et al., 2021). For training the discriminative white supremacist classifier, we draw on three sources of text data with “natural” (weak) labels: white supremacist domains and organizations, neutral data with similar topics, and anti-racist blogs and organizations.",
"3.1 White supremacist data": "We sample existing text datasets and data archives from white supremacist domains and organizations to build a dataset of texts that likely express white supremacist extremism. Table 1 details information on source datasets.\nSources include sites dedicated to white supremacism, such as Stormfront, Iron March, and the Daily Stormer. When possible, we filter out non-ideological content on these forums using existing topic structures, for example, excluding\n“Computer Talk” and “Opposing Views” forums on Stormfront. We also include tweets from organizations that the Southern Poverty Law Center labels as white supremacist hate groups (Qian et al., 2018; ElSherief et al., 2021). In Papasavva et al.’s (2020) dataset from the 4chan /pol/ “politically incorrect” imageboard, we select posts from users choosing Nazi, Confederate, fascist, and white supremacist flags. We also include 4chan /pol/ posts in “general” threads with fascist and white supremacist topics (Jokubauskaitė and Peeters, 2020). From Pruden et al. (2022), we include white supremacist books and manifestos. We also include leaked chats from Patriot Front, a white supremacist group. Details on these datasets can be found in Appendix A.\nWith over 230 million words in 4.3 million posts across many domains, this is the largest collection of white supremacist text we are aware of. Contents are from 1968 through 2019, though 76% of posts are from 2017-2019 (see distributions of posts over time in Appendix A).\nOutlier filtering and sampling This large dataset from white supremacist domains inevitably contains many posts that are off-topic and nonideological. To build a weakly supervised classifier, we wish to further filter to highly ideological posts from a variety of domains.\nWe first remove posts with 10 or fewer words, as these are often non-ideological or require context to be understood (such as “reddit and twitter are cracking down today” or “poor alex, i feel bad”).\nWe then select posts whose highest probability topic from an LDA model (Blei et al., 2003) are ones that are more likely to express white supremacist ideology. LDA with 30 topics separated themes well based on manual inspection.\nOne of the authors annotated 20 posts from each topic for expressing a tenet of white supremacism, described in Section 2. We selected 6 topics with the highest annotation score for white supremacy, as this gave the best performance on evaluation datasets. These topics related to antisemitism, antiBlack racism, and discussions of European politics and Nazism (details in Appendix B). To balance forum posts with other domains and approximate domain distributions in neutral and anti-racist datasets, we randomly sample 100,000 forum posts. This white supremacist corpus used in experiments contains 118,842 posts and 10.7 million words.",
"3.2 Neutral data": "We also construct a corpus of “neutral” (not white supremacist) data that matches the topics and domains of the white supremacist corpus. To match forum posts, we sample r/politics and r/Europe subreddits. To match tweets, we query the Twitter API by sampling the word distribution in white supremacist tweets after removing derogatory language. For articles, we sample random US news from the News on the Web (NOW) Corpus3, and use a random Discord dataset to match chat (Fan, 2021). For each of these domains, we sample the same number of posts per year as is present in the white supremacist corpus. If there is not significant time overlap, we sample enough posts to reach a similar word count. This corpus contains 159,019 posts and 8.6 million words.",
"3.3 Anti-racist data": "Hate speech classifiers often overpredict mentions of marginalized identities as hate (Dixon et al.,\n3https://www.corpusdata.org/now_corpus.asp\n2017). Assuming our data is biased until proven innocent (Hutchinson et al., 2021), we design for this issue. We hypothesize that texts from anti-racist perspectives may help. Oxford Languages defines anti-racism as movements “opposing racism and promoting racial equality”. Anti-racist communications often mention marginalized identities (as do white supremacist texts), but cast them in positive contexts, such as a tweet in our anti-racist dataset that reads, “stand up for #immigrants”.\nWe construct a corpus of anti-racist texts to match the domain and year distribution of the white supremacist corpus. For forum data, we sample comments in subreddits known for anti-racism: r/racism, r/BlackLivesMatter, and r/StopAntiAsianRacism. We include tweets from anti-racist organizations listed by the University of North Carolina Diversity and Inclusion office4. To match articles, we scrape Medium blog posts tagged with “anti-racism”, “white supremacy”, “racism”, and “BlackLivesMatter”. As with other corpora, data from each of these sources was inspected for its perspective. This anti-racist corpus contains 87,807 posts and 5.6 million words.\n4 Classification\nDue to the success of BERT-based hate speech models (Mozafari et al., 2019; Samghabadi et al., 2020), we select the parameter-efficient DistilBERT model (Sanh et al., 2019) to compare data configurations5. We use a learning rate of 2×10−5, batch size of 16, and select the epoch with the highest ROC AUC on a 10% development set, up to 5 epochs. Training each model took approximately 8 hours on an NVIDIA RTX A6000 GPU.\nWe train models on binary white supremacist classification. All posts in the white supremacist corpus, after sampling and filtering, are labeled ‘white supremacist’. Posts in neutral and anti-racist corpora are labeled ‘not white supremacist’. We also test combining weakly labeled data with manually annotated data from existing datasets (see below) and our own annotation of white supremacist posts in LDA topics. Since there is relatively little manually annotated data, we duplicate it 5 times in these cases, to a size of 57,645 posts.\n4https://diversity.unc.edu/anti-racism-resou rces/\n5Code for experiments and dataset processing is available at https://github.com/michaelmilleryoder/white_su premacist_lang.",
"4.1 Evaluation": "Evaluating weakly supervised classifiers on a heldout weakly supervised set may overestimate performance. Classifiers may learn the idiosyncrasies of domains known for white supremacy in contrast to neutral domains (4chan vs. Reddit, e.g.) instead of learning distinctive features of white supremacy. We thus evaluate classifiers on their ability to distinguish posts manually annotated for white supremacy within the same domains, in the following 3 datasets:\nAlatawi et al. (2021): 1100 out of 1999 tweets (55.0%) annotated as white supremacist. Like our work, they conceptualize white supremacy as including hate speech against marginalized groups.\nRieger et al. (2021): 366 out of 5141 posts (7.1%) from 4chan, 8chan, and r/the_Donald annotated as white supremacist. This work uses a more restricted definition of white supremacy largely distinct from hate speech. We sample examples labeled as white supremacist or neither white supremacist nor hate speech. Examples only annotated as hate speech are excluded since they may or may not fit our broader conception of white supremacism.\nSiegel et al. (2021): 171 out of 9743 tweets (1.8%) annotated as white supremacist. Since they use a more restrictive definition of white supremacy, we sample posts annotated as white supremacist or neither white supremacist nor hate speech.\nThe proportions of white supremacist posts in these annotated evaluation datasets vary widely, so we report ROC AUC instead of precision, recall, or F1-score, which assume similar class proportions between training and test data (Ma and He, 2013). Precision and recall curves are also available in Figure 5 in Appendix C.\nGeneralization evaluation To test the ability of classifiers to generalize, we perform a leave-oneout test among annotated datasets. During three runs for each model that uses manually annotated data, we train on two of the annotated datasets and test performance on the third. To test generalization to a completely unseen domain, we use a dataset of quotes from offline white supremacist propaganda, extracted from data collected by the Anti-Defamation League (ADL)6. 1655 out of 1798 quotes (92.0%) were annotated by two of the authors as exhibiting white supremacist ideology.\n6https://www.adl.org/resources/tools-to-track -hate/heat-map\nBaselines We evaluate our approaches against the best-performing model from Alatawi et al. (2021), BERT trained on their annotated Twitter dataset for 3 epochs with a learning rate of 2×10−5 and batch size of 16. We also compare against Siegel et al. (2021), who first match posts with a dictionary and then filter out false positives with a Naive Bayes classifier. Though Rieger et al. (2021) also present data annotated for white supremacy, they focus on analysis and do not propose a classifier.\nHateCheck evaluation for lexical bias To evaluate bias against mentions of marginalized identities, we use the synthetic HateCheck dataset (Röttger et al., 2021). We filter to marginalized racial, ethnic, gender and sexual identities, since white supremacy is a white male perspective interlinked with misogyny and homophobia (Ferber, 2004; Brindle, 2016). We select sentences that include these identity terms in non-hateful contexts: neutral and positive uses; homonyms and reclaimed slurs; and counterspeech of quoted, referenced, and negated hate speech. This sample totals 762 sentences.",
"5 Results": "Table 2 presents performance of single runs on randomly sampled 30% test sets from Alatawi et al. (2021), Rieger et al. (2021), and Siegel et al. (2021). Classifiers trained with both weakly annotated data and a combination of all manually annotated data average the best performance across evaluation datasets. On the Alatawi et al. (2021) dataset, their own classifier performs the best. All models have lower scores on this challenging dataset, which human annotators also struggled to agree on (0.11 Cohen’s κ). In generalization performance (Table 3), we find that using weakly annotated data outperforms using only manually annotated data in almost all cases, and that combining weakly and manually annotated data enables classifiers to generalize most effectively.",
"5.1 Anti-racist corpus": "Training with both neutral and anti-racist negative examples improves accuracy on the HateCheck dataset to 69.2 from 60.5 when using a similar number of only neutral negative examples. This supports our hypothesis that incorporating antiracist texts can mitigate bias against marginalized identity mentions. Adding anti-racist texts slightly decreases performance on the other 4 evaluation datasets, to 82.8 from 84.3 mean ROC AUC.",
"6 Conclusion": "Ideologies such as white supremacy are difficult to annotate and detect from short texts. We use weakly supervised data from domains known for white supremacist ideology to develop classifiers that outperform and generalize more effectively than prior work. Incorporating texts from an antiracist perspective mitigates lexical bias.\nTo apply a white supremacist language classifier to varied domains, our results show the benefit of using such weakly supervised data, especially in combination with a small amount of annotated data. Other methods for combining these data could be explored in future work, such as approaches that use reinforcement learning to select unlabeled data for training (Ye et al., 2020; Pujari et al., 2022). Incorporating social science insights and looking for specific tenets of white supremacist extremism could also lead to better classification. This classifier could be applied to measure the prevalence or spread of white supremacist ideology through online social networks.\nLimitations\nThe presented classifier and dataset are only from English-speaking sources, a major disadvantage in detecting white supremacist content globally. The dataset also is predominantly sourced from data between 2015-2019 and reflects white supremacist extremist responses to current events from that period, including the Black Lives Matter movement. This limits its effectiveness in detecting white supremacist content from other time periods.\nThough including anti-racist data helps mitigate bias tested by our sample of the HateCheck dataset, an accuracy of 69.2% shows room for improvement. There is still a risk of overclassifying posts with marginalized identity mentions as white supremacist.\nEthics Statement\nThere are significant ethical issues to consider in developing text classifiers for ideologies. Since this research has clear social implications, we wish to be explicit about values and author positionality beyond a sense of “objectivity” in selecting research questions (Schlesinger et al., 2017; D’Ignazio and Klein, 2020; Waseem et al., 2021). The authors come from European- and American-dominated university contexts and consider working against racism and white supremacy a priority. Most identify as white and some identify as people of color. This research proceeded with values of racial justice and places those values at the center of assessing knowledge claims (Collins, 1990; Daniels, 2009). Our choice of focusing on white supremacy among other ideologies stems from those values. White supremacist extremism, as well as structural white supremacism, is responsible for substantial harms against those with marginalized identities. This research responds to a need from practitioners for more nuanced classifiers than for broad categories of hate speech or abusive language. We thus choose to pursue this research, though caution that developing classifiers for other ideologies should be done with careful consideration and a clear statement of motivating values.\nThere are significant risks which we consider, and attempt to mitigate, in such a dataset and classifier. First, there is the risk of misuse of a large corpus of white supremacist data, as has been seen in building and releasing a hate speech “troll bot” from 4chan data7. 
For this reason we build a dis-\n7https://www.vice.com/en/article/7k8zwx/ai-t\ncriminative, not generative, classifier, and only plan on releasing our dataset through a vetting process instead of publicly.\nThere are also privacy risks in how such a classifier could be used. Our classifier only identifies language that is likely similar to white supremacist content. The intended use of this classifier is to measure the prevalence of such an ideology on particular platforms or within networks for research purposes, not to label individuals as holding or not holding white supremacist ideologies. Using the classifier for this purpose poses significant risks of misclassification and could increase harmful surveillance tactics. We strongly discourage such a use. Our hope is that our proposed classifier and dataset can increase knowledge about the nature and extent of white supremacist extremist movement online and can inform structural interventions, such as platform policies, not interventions against individuals.\nHate speech classifiers, developed by researchers with similar equity-based values, have been found to contain biases against marginalized groups (Sap et al., 2019; Davidson et al., 2019). We measure and mitigate this bias from the start by incorporating anti-racist data, though caution that this risk still exists.",
"Acknowledgements": "This work was supported in part by the Collaboratory Against Hate: Research and Action Center at Carnegie Mellon University and the University of Pittsburgh. The Center for Informed Democracy and Social Cybersecurity at Carnegie Mellon University also provided support. We thank the researchers who provided source datasets, including Diana Rieger, Alexandra Siegel and others at the Center for Social Media and Politics at New York University, Jherez Taylor, Jing Qian, and Meredith Pruden. We also thank the Internet Archive and investigations teams at Bellingcat and Unicorn Riot for archiving source datasets online, and Maarten Sap for feedback.",
"A White supremacist corpus details": "We sample 9 datasets and data dumps to construct our white supremacist corpus (see Section 3.1). Here we provide details on how each of these data sources was processed and sampled, as well as other details of the corpus.\nPapasavva et al. (2020): 4chan /pol/ allows users to select “troll” flags to use instead of the default country flag detected from their IP address. We filter this dataset8 to posts from users that chose to post with Nazi, White Supremacist, Confederate, or Fascist troll flags. From a qualitative check, samples of posts from users with these flags often expressed white supremacist ideology. We remove posts with duplicate texts, as well as posts that are also found in the 4chan /pol/ dump from Jokubauskaitė and Peeters (2020). Our sample of this dataset contains posts from 2017 through 2019.\nStormfront data archive: Stormfront, a popular white supremacist forum, is no longer active. We sample from an Internet Archive dump of its content taken in 20179. We extract forum text from the HTML files and exclude threads that are not in English and are non-ideological. Specifically, we exclude the following threads: Nederland & Vlaanderen, Srbija, Español y Portugués, Italia, Croatia, South Africa, en Français, Russia, Baltic / Scandinavia, Hungary, Opposing Views Forum, Computer Talk. Our sample of this dataset contains posts from 2001 through 2017.\nJokubauskaitė and Peeters (2020): We select posts in this dataset of “general” 4chan /pol/ threads10 that we find to be related to white supremacy and fascism: kraut/pol/, afd, national socialism, fascism, dixie, kraut/pol/, ethnostate,\n8Available at https://zenodo.org/record/360681 0#.Y8lkkBXMKF6, accessed 19 January 2023. This dataset is under a Creative Commons Attribution 4.0 International license.\n9Available at https://archive.org/details/stormf ront.org_201708, accessed 11 January 2023\n10Available at https://zenodo.org/record/360329 2#.Y8lmTxXMKF5, accessed 19 January 2023. This dataset is under a Creative Commons Attribution 4.0 International license.\nwhite, chimpout, feminist apocalypse, (((krautgate))). This dataset contains posts from 2001 through 2017.\nIron March data archive: Data from Iron March, a now defunct neo-Nazi and white supremacist message board, was obtained through an Internet Archive data dump11 referenced in Simons and Skillicorn (2020). This dataset contains posts from 2011 through 2017.\nQian et al. (2018): We rehydrate tweet IDs from this dataset, graciously provided by the authors, by the ideology of the tweet author according to the Southern Poverty Law Center. After qualitatively checking sample tweets from each ideology to see how closely they match tenets of white supremacism, we select tweets from the following ideologies: neo-Confederate, neo-Nazi, Ku Klux Klan, racist skinhead, anti-immigration, white nationalist, anti-Semitism, hate music, holocaust identity, Christian Identity. 44.9% of tweets were able to be rehydrated from the original set in September 2022. Our rehydrated tweets ran from 2009 through 2017.\nPatriot Front data archive: We select Discord chat posts from servers operated by the white supremacist group, Patriot Front. These chats were leaked by Unicorn Riot12. 
After manual inspection for which threads are most ideological, we select the ‘general’ channels from 3 servers: Vanguard America-Patriot Front (2017), Front and Center (2018), MI Goy Scouts Official (2018).\nSince chat data may contain names, we remove the top 300 US first names from a 1990 list13.\nCalderón et al. (2021): We include articles from two white supremacist news websites, the Daily Stormer and American Renaissance, graciously provided by Calderón et al. (2021). This data contains posts from 2005 through 2017.\nPruden et al. (2022): We include white supremacist books and manifestos collected and provided by Pruden et al. (2022). These are: Enoch\n11Available through links at https://www.bellingcat.c om/resources/how-tos/2019/11/06/massive-white-s upremacist-message-board-leak-how-to-access-and -interpret-the-data/, accessed 11 January 2023\n12https://unicornriot.ninja/2022/patriot-front -fascist-leak-exposes-nationwide-racist-campaig ns/, accessed 11 January 2023\n13https://namecensus.com/first-names/, accessed 11 January 2023\nPowell’s “Rivers of Blood” speech (1968), Jean Raspail’s Camp of the Saints (1973, English translation), William Pierce’s The Turner Diaries (1978), David Lane’s “White Genocide” manifesto (2012), Anders Breivik manifesto (2011), Renaud Camus’ The Great Replacement (2012, English translation). These books and manifestos are split into paragraphs (split at newlines) for experiments.\nElSherief et al. (2021): From this dataset of implicit hate speech tweets14, we select two portions: 1) tweets labeled for “white grievance” by annotators, and 2) when rehydrated, tweets by users identified as holding selected white supremacist ideologies by Qian et al. (2018) (these papers draw on similar datasets). When we rehydrated these tweets in August 2022, we were only able to access 36.8%. Rehydrated tweets spanned from 2009 through 2017.\nWe lowercase and tokenize all data sources with spaCy 3.1.1 for forum posts and articles, and NLTK’s TweetTokenizer (Bird et al., 2009) for tweets and chat data.\nFigure 3 shows the time spans of data from different sources in the full corpus, and Figure 4 shows the distribution of posts over time in the dataset. These figures exclude historical data from Pruden et al. (2022) for readability.",
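A small sketch of the preprocessing in the final paragraph above: lowercase all text, tokenize forum posts and articles with spaCy, and tweets and chat with NLTK's TweetTokenizer. The wrapper function and domain tags are illustrative assumptions:

```python
import spacy
from nltk.tokenize import TweetTokenizer

nlp = spacy.blank("en")      # tokenizer-only English pipeline (the paper used spaCy 3.1.1)
tweet_tokenizer = TweetTokenizer()

def tokenize(text: str, domain: str) -> list[str]:
    text = text.lower()
    if domain in {"tweet", "chat"}:
        return tweet_tokenizer.tokenize(text)
    return [token.text for token in nlp(text)]

print(tokenize("Reddit and Twitter are cracking down today", "forum"))
```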
"B Outlier topic removal": "This appendix describes details of removing nonideological content from our white supremacist corpus. We run LDA over the full white supremacist corpus and decide on 30 topics after manually inspecting topics for coherence. We also tried BERTopic (Grootendorst, 2022), but LDA gave a less skewed distribution of documents per topic.\nAfter a brief initial annotation period, one of the authors annotated 20 instances per topic as white supremacist (coded 1), neutral/undecided (0), or not white supremacist (-1). The criteria was the presence of at least one tenet of white supremacism, described in Section 2. Mean distribution of these annotations over topics are presented in Figure 2.\nAs can be seen, most topics have mean scores less than 0, i.e., that they contain more posts annotated as neutral or not white supremacist than white supremacist. This matches results from Rieger et al. (2021), who find 24% of posts in a sample from fringe far-right platforms to be hate speech, high\n14Available at https://github.com/SALT-NLP/implici t-hate, accessed 19 January 2023\ncompared to other online spaces but certainly not the majority of posts. This motivates outlier removal, and we found that removing outlier topics provided an advantage in classification on the evaluation datasets. Assigning posts to the highestlikelihood topic, we find that filtering to posts within the 6 topics with the highest mean annotations for white supremacy provides the best performance. As seen in Figure 2, beyond 6 topics the mean drops to close to a 0 (neutral) rating. These topics related to antisemitism, anti-Black racism, and discussions of European politics and Nazism. Top words for these 6 topics are listed in Table 4.",
"C Evaluation datasets": "This appendix describes the details of sampling and processing datasets manually annotated for white supremacy used to evaluate classifiers.\nWe also present precision and recall curves for our best-performing Weak + Annotated model on evaluation datasets in Figure 5 for decision thresholds every 0.01 between [0, 1). Class probabilities were calculated from a softmax over the output class logits. There is particular room for improvement on precision for Rieger et al. (2021) and Siegel et al. (2021) datasets.\nAlatawi et al. (2021): From the full annotated dataset of tweets from Alatawi et al. (2021)15, we choose the combined annotator labels for white supremacy as the label of white supremacy or not.\nRieger et al. (2021): This dataset, provided by the authors, contains posts on fringe platforms (4chan /pol/, 8chan /pol/, and r/the_Donald) annotated for many aspects of hate speech, including white supremacist ideology. We sample examples labeled for ‘white supremacy/white ethnostate’ or ‘National Socialist’ ideology as examples of white supremacy. For negative examples, we sample posts that are not labeled as white supremacist or as hate speech for negative examples, since their definition of white supremacy is more restrictive Specifically, we sample posts not labeled for ‘white supremacy/white ethnostate’, ‘National Socialist’, ‘general insult’, ‘personal insult’ or ‘violence’. Direct requests for this dataset to the authors.\nSiegel et al. (2021): We use training data from Siegel et al. (2021), provided by the authors. From lists of tweets annotated for white nationalism and hate speech, we select those marked as positive for white nationalism and as negative examples, those annotated as neither white nationalism nor hate speech. Requests for this dataset should be directed to the authors.\n15Accessed from https://github.com/Hind-Saleh-A latawi/WhiteSupremacistDataset on 11 January 2023.\nAlatawi et al. 2021 test set\nACL 2023 Responsible NLP Checklist",
"3 A2. Did you discuss any potential risks of your work?": "Unnumbered \"Ethics Statement\" section at the end.",
"3 A3. Do the abstract and introduction summarize the paper’s main claims?": "Abstract and section 1, Introduction\n7 A4. Have you used AI writing assistants when working on this paper? Left blank.\nB 3 Did you use or create scientific artifacts? Section 3 \"Weakly Annotated Data\" Section 4 \"Classification\" (a model)",
"3 B1. Did you cite the creators of artifacts you used?": "Sections 3.1 and 4.1, more details in Appendices A and C",
"3 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?": "Section 3.1 Appendix A",
"3 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided": "that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics Statement",
"3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any": "information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix A (the Patriot Front Discord chat dataset)",
"3 B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and": "linguistic phenomena, demographic groups represented, etc.? Section 3.1 Table 1\nB6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response.\nC 3 Did you run computational experiments? Section 4",
"3 C1. Did you report the number of parameters in the models used, the total computational budget": "(e.g., GPU hours), and computing infrastructure used? Section 4\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
"3 C2. Did you discuss the experimental setup, including hyperparameter search and best-found": "hyperparameter values? Section 4",
"3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary": "statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5",
"3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did": "you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A\nD 7 Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.\nD1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.\nD2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? No response.\nD3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.\nD4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.\nD5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response."
}