title: Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text
authors: Gehrmann, Sebastian; Clark, Elizabeth; Sellam, Thibault
date: 2022-02-14

Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where they can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze how well 66 NLG papers from recent NLP conferences already follow these suggestions and identify which areas require more drastic changes to the status quo.

There are many issues with the evaluation of models that generate natural language. For example, datasets are often constructed in a way that prevents measuring tail effects of robustness, and they almost exclusively cover English. Most automated metrics measure only similarity between model output and references instead of fine-grained quality aspects (and even that poorly). Human evaluations have a high variance and, due to insufficient documentation, rarely produce replicable results.

These issues have become more urgent as the nature of models that generate language has changed without significant changes to how they are being evaluated. While evaluation methods can capture surface-level improvements in text generated by state-of-the-art models (such as increased fluency) to some extent, they are ill-suited to detect issues with the content of model outputs, for example if they are not attributable to input information. These ineffective evaluations lead to overestimates of model capabilities. Deeper analyses uncover that popular models fail even at simple tasks by taking shortcuts, overfitting, hallucinating, and not being in accordance with their communicative goals.

Identifying these shortcomings, many recent papers critique evaluation techniques or propose new ones. But almost none of the suggestions are followed, and the new techniques are rarely used. There is an incentive mismatch between conducting high-quality evaluations and publishing new models or modeling techniques. While general-purpose evaluation techniques could lower the barrier of entry for incorporating evaluation advances into model development, their development requires resources that are hard to come by, including model outputs on validation and test sets or large quantities of human assessments of such outputs. Moreover, some issues, like the refinement of datasets, require iterative processes where many researchers collaborate. All this leads to a circular dependency where evaluations of generation models can be improved only if generation models use better evaluations.

We find that there is a systemic difference between selecting the best model and characterizing how good this model really is.
Current evaluation techniques focus on the first, while the second is required to detect crucial issues. More emphasis needs to be put on measuring and reporting model limitations, rather than focusing on producing the highest performance numbers. To that end, this paper surveys analyses and critiques of evaluation approaches (sections 3 and 4) and of commonly used NLG datasets (section 5). Drawing on their insights, we describe how researchers developing modeling techniques can help to improve and subsequently benefit from better evaluations with methods available today (section 6). Expanding on existing work on model documentation and formal evaluation processes (Mitchell et al., 2019; Ribeiro et al., 2020), we propose releasing evaluation reports which focus on demonstrating NLG model shortcomings using evaluation suites. These reports should apply a complementary set of automatic metrics, include rigorous human evaluations, and be accompanied by data releases that allow for re-analysis with improved metrics.

Figure 1: Even though the evaluation pipeline of a model is complex, with many steps and potential missteps that get "funneled" into the final results, it is often seen as a black box with the purpose of generating numbers that demonstrate superiority over competing approaches. We argue that more attention should be paid to the evaluation process and that the reporting of evaluation results should focus on the characteristics and limitations of a model.

In an analysis of 66 recent EMNLP, INLG, and ACL papers along 29 dimensions related to our suggestions (section 7), we find that the first steps toward an improved evaluation are already frequently taken at an average rate of 27%. The analysis uncovers the dimensions that require more drastic changes in the NLG community. For example, 84% of papers already report results on multiple datasets and more than 28% point out issues in them, but we found only a single paper that contributed to the dataset documentation, leaving future researchers to re-identify those issues. We further highlight typical unsupported claims and a need for more consistent data release practices. Following the suggestions and results, we discuss how incorporating the suggestions can improve evaluation research, how the suggestions differ from similar ones made for NLU, and how better metrics can benefit model development itself (section 8).

While "natural language generation" used to have a very narrow scope, 1 today it is used broadly to refer to the production of natural language in any context, and NLG tasks include summarization, machine translation, paraphrasing, and story generation. For the purpose of this survey, we follow this broader definition, but focus on conditional generation tasks. We define conditional NLG tasks as those in which a machine learning model can be trained to maximize a conditional probability p(y|x), where y is natural language and x is an input that can be structured data or natural language and which provides information about what should be generated. 2 The evaluation of conditionally generated text typically involves a comparison to the input and/or a reference text, neither of which is available in an unconditional generation setting. The scope of this survey thus includes tasks such as machine translation, summarization, and data-to-text generation, but excludes language modeling.
In addition, we require in-scope NLG tasks to have an explicit communicative goal, which needs to be expressed while also planning the content and structure of the text and actualizing it in fluent and error-free language (Gehrmann, 2020). 3 All these aspects need to be captured in the NLG evaluation, making it much more challenging than evaluating other NLP tasks. For an introduction to NLG beyond this survey, we point readers to the overview by Gatt and Krahmer (2018) for a deeper discussion of NLG tasks, and to the survey by Celikyilmaz et al. (2020) of the evaluation approaches and statistical methods that are discussed in Sections 3-4.

Evaluation approaches for generated text have traditionally been categorized as intrinsic or extrinsic (Jones and Galliers, 1995). Intrinsic approaches evaluate a text by itself, whereas extrinsic approaches measure how it affects people performing a given task. Intrinsic evaluations include assessments by human ratings and by automatic metrics, which have gained popularity with the advent of statistical NLG (Langkilde and Knight, 1998), which led to the standardization of tasks. While some work exists that aims to standardize extrinsic evaluations (e.g., Mani et al., 1999; Gehrmann et al., 2019a), the design space is much larger. As a result, intrinsic approaches dominate academic publications; Gkatzia and Mahamood (2015) found that about 75% of published NLG systems rely on intrinsic evaluations, with the fraction increasing. 4 Since we survey widely used approaches, we mostly cover intrinsic evaluations, but stress the importance of task-specific extrinsic evaluations. As pointed out by Reiter and Belz (2009a), the evaluation meta-evaluations we draw on are most commonly conducted on summarization and machine translation (MT), but there is an implicit assumption that findings translate to other tasks. To avoid this issue, we note the task for each study, but, due to a lack of prior findings, are not able to cover every NLG task. Taking a cautious approach, we make the worst-case assumption that modes of failure likely transfer across tasks.

In this section, we give an overview of commonly used automatic metrics, how these metrics are typically evaluated, what issues are being found, and how newly introduced metrics may overcome these issues in the future. Since not all evaluation strategies are being applied to all metrics and not all metrics are applied to all possible generation tasks, we can only provide an incomplete insight into the metric×task×evaluation method space. Since there currently exists no "perfect" metric, we will not conclude with explicit metric recommendations but rather try to extract successful metric design principles alongside a family of evaluations that together may provide a more complete characterization of a model's performance.

Almost all commonly used generation metrics are reference-based: a system output o is compared to one or multiple human-produced references, {r_1, ..., r_n}. System outputs that are more similar to the references are deemed better. However, many strategies have been proposed to measure this similarity. The most popular evaluation metrics, BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), along with many others, measure the lexical overlap between o and r in terms of precision and recall of n-grams. Variants and parameters control tokenization, stemming, or balancing of precision and recall.
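To make this family of metrics concrete, the following is a minimal sketch of an n-gram overlap score in the spirit of ROUGE-n, not a replacement for the official implementations, whose tokenization, stemming, and aggregation options matter (as discussed later in this section):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_f1(output: str, reference: str, n: int = 2) -> float:
    """Precision and recall of n-gram overlap between a system output and a
    single reference, combined into an F1 score (a simplified ROUGE-n flavor)."""
    out_ngrams = ngrams(output.lower().split(), n)
    ref_ngrams = ngrams(reference.lower().split(), n)
    matches = sum((out_ngrams & ref_ngrams).values())  # clipped n-gram matches
    if matches == 0:
        return 0.0
    precision = matches / sum(out_ngrams.values())
    recall = matches / sum(ref_ngrams.values())
    return 2 * precision * recall / (precision + recall)

print(overlap_f1("the cat sat on the mat", "a cat sat on a mat", n=2))  # 0.4
```

The sketch already illustrates the core limitation discussed below: any output that reuses the reference's surface n-grams scores well, regardless of whether its meaning is preserved.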
With the advent of deep learning, metrics were introduced that instead measure distributional similarity, relying on various ways to measure the distance between distributed token and sequence representations. Notable examples from this class of metrics are Word Mover's Distance (Kusner et al., 2015), which relies on non-contextual word embeddings, and BERT-SCORE (Zhang et al., 2020), which aggregates cosine similarities between contextualized token representations in a sequence, among others (Zhao et al., 2019; Clark et al., 2019; Kane et al., 2020; Colombo et al., 2021, inter alia).

A related class of automatic evaluations comprises statistical approaches, which focus on the distributions, rather than the representations, produced by a model. Saggion et al. (2010) first demonstrated that distributional differences between references and model outputs can be used as a scoring mechanism. Gehrmann et al. (2019b) showed that these differences exist even for large pretrained models, a fact that was used by Zellers et al. (2019) to train a classifier that detects generated text. Hashimoto et al. (2019) used the same foundation to combine human and automatic evaluation in capturing the trade-off between sampling diverse outputs and achieving the highest possible quality. Pillutla et al. (2021) expand on these insights and a framework by Djolonga et al. (2020) to compare the human- and model-distributions by measuring the extent to which they diverge. An alternative approach by Thompson and Post (2020) uses the probabilities of each model-generated token under a paraphrasing model that uses the human reference as input.

Utilizing existing corpora of human quality judgments of generated text, learned metrics are classifiers that emulate these judgments. Some metrics move beyond reference-based evaluation and instead provide quality estimation scores between an input i and output o. The first metric of this kind was CLASSY, a logistic regression model for summarization evaluation. Newer metrics rely on pretrained models, are trained on more human ratings, and introduce new initialization and pretraining schemes (Sellam et al., 2020; Rei et al., 2020; Pu et al., 2021; Wegmann and Nguyen, 2021, inter alia), or focus on specific aspects like the faithfulness of generated text (e.g., Kryscinski et al., 2020; Aralikatte et al., 2021). Many of these metrics rely on artificially introduced errors, but Cao et al. (2020) find that moving from artificial to real error detection is challenging, an issue that Zeng et al. (2021) aim to address by using adversarial examples instead.

The metrics mentioned so far operate on text directly, but there has also been a long history of metrics that generate and use intermediate structures. These include accuracy of parse trees (Bangalore et al., 2000), overlap between "basic elements" (Hovy et al., 2005), 5 automatically constructed content units (Tauchmann and Mieskes, 2020) using the Pyramid framework by Nenkova and Passonneau (2004), dependency parses (Pratapa et al., 2021), or sequence alignment (Deng et al., 2021). A special case of intermediate structures that has recently gained popularity is question-answering metrics that assess information equivalence. Similar to the faithfulness classifiers above, these aim to measure whether generated text contains the same information as a source or reference. Instantiations of these metrics may blank out entities (Eyal et al., 2019; Xie et al., 2021; Scialom et al., 2019), or fully generate questions (Chen et al., 2018; Rebuffel et al., 2021; Honovich et al., 2021; Deutsch et al., 2021a, inter alia).
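Returning to the distributional-similarity family above, the following is a minimal sketch of BERTScore-style greedy matching over token embeddings. It is an illustration of the aggregation step only: the vectors here are random placeholders, whereas the actual metric uses contextual embeddings from a pretrained encoder and optional idf weighting.

```python
import numpy as np

def greedy_match_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Cosine similarity between every candidate and reference token embedding,
    greedy max-matching in both directions, combined into an F1 score."""
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = cand @ ref.T                  # [num_cand_tokens, num_ref_tokens]
    precision = sim.max(axis=1).mean()  # best reference match per candidate token
    recall = sim.max(axis=0).mean()     # best candidate match per reference token
    return 2 * precision * recall / (precision + recall)

# Toy example with random vectors standing in for contextual token embeddings.
rng = np.random.default_rng(0)
candidate_tokens = rng.normal(size=(7, 16))  # 7 tokens, 16-dim embeddings
reference_tokens = rng.normal(size=(9, 16))
print(round(greedy_match_f1(candidate_tokens, reference_tokens), 3))
```

The soft matching makes the score less dependent on exact string overlap than the n-gram sketch above, but, as the studies discussed in this section show, it does not remove the reliance on a reference.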
This overview already points to the first issue with the state of metrics research: the metrics listed above, except those targeting machine translation, are designed to work only on English. A notable exception is a study by Briakou et al. (2021) which assesses different learned metrics for formality transfer and uses multilingual pre-trained models such as XLM-R (Conneau et al., 2020). While automatic metrics are well-studied for English, the barrier of entry to developing non-English models is growing.

Many automatic metrics rely on the assumption that NLG system outputs that are more similar to the reference(s) are better, a property commonly referred to as "human-likeness" in the NLG literature (see, e.g., Belz and Gatt (2008)). While the ability to reproduce a reference text sounds like natural evidence of success, relying entirely on it for evaluation is misleading, a caveat pointed out by many evaluation researchers. For instance, Belz and Gatt (2008) investigate the correlation between lexical overlap metrics (such as BLEU and ROUGE) and various measures of success in a Referring Expression Generation context. They find that "a system's ability to produce human-like outputs may be completely unrelated to its effect on human task-performance." One reason for this discrepancy is that similarity-based evaluations reward surface similarity at the expense of meaning and may be "fooled" by similar-looking, yet semantically different, outputs. NLG tasks have an extensive output space which cannot be captured through a limited number of references, and a comparison to references becomes less reliable the more "open-ended" a task is. For that reason, ROUGE underperforms on non-extractive summaries (Dorr et al., 2005). The problem is especially pronounced when the references themselves are flawed. As Dhingra et al. (2019) show, using BLEU and ROUGE is problematic with many table-to-text datasets, because there is a mismatch between the information conveyed by the reference texts and that of the input table. As a result, model outputs that contain similar unsupported information are rewarded by the metric. Similarly, other work shows that BLEU, METEOR, and BERTSCORE may fail to reward good translations when the reference text contains artifacts such as "translationese". One may wonder whether the problem still exists with learned or embedding-based metrics, since a more flexible notion of similarity should enable metrics to be less reliant on surface-level features or text artifacts in references. However, this argument assumes that the set of references appropriately covers the target domain, and that the metric is flexible enough to "generalize" from an incomplete set of examples. The current empirical evidence for this is negative: in section 3.4 we will present several studies that show that even current metrics break down with simple adversarial examples (Sai et al., 2021; Kaster et al., 2021).

If similarity to the reference is a flawed proxy for quality, what do automatic metrics tell us? This question can be investigated empirically by measuring the correlation between metric scores and human annotations.
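As a minimal sketch of how such meta-evaluation is typically carried out (with made-up numbers rather than a real annotation study), one computes the correlation between per-output metric scores and human ratings; Kendall's tau only uses the relative ordering, which is why it is popular for segment-level comparisons:

```python
from scipy.stats import pearsonr, kendalltau

# Hypothetical human quality ratings and metric scores for the same ten outputs.
human =  [4.5, 3.0, 2.0, 4.0, 3.5, 1.5, 5.0, 2.5, 3.0, 4.5]
metric = [0.62, 0.48, 0.45, 0.50, 0.55, 0.30, 0.70, 0.52, 0.41, 0.66]

# Segment-level agreement: does the metric track quality differences between
# individual outputs? (System-level correlations first average over each
# system's outputs and then correlate the per-system means.)
print("Pearson r:  ", round(pearsonr(human, metric)[0], 3))
print("Kendall tau:", round(kendalltau(human, metric)[0], 3))
```

The studies surveyed next differ mainly in which human annotations are treated as ground truth, at which granularity the correlation is computed, and which statistical tests accompany it.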
In a survey of such studies focused on BLEU, Reiter (2018) concludes that it is useful as a diagnostic tool during the development of MT systems, but not for other tasks, and that it should not be used at the segment level. More recently, Kocmi et al. (2021) assess how well automatic metrics compute pairwise rankings for MT systems, and recommend using a combination of overlap-based and pretraining-based metrics, confirming the previous findings that metrics may be used to rank MT models at the system level.

Several authors have tried to introduce finer-grained quality criteria, and attempted to understand which quality dimensions are captured by automatic metrics that measure the similarity to references. In most cases, there is inconclusive evidence. For instance, Reiter and Belz (2009b) find that these metrics may approximate language quality, although with only weak evidence, and that they do not measure content quality at all. In contrast, Stent et al. (2005) evaluate metrics on restructured sentences, showing that lexical-overlap based metrics do measure similarity in meaning, but fail at measuring syntactic correctness. The inconsistency between studies and use cases suggests that overlap-based metrics likely measure neither, which is confirmed by later studies. In a more recent study, Kryscinski et al. (2019) collected five-way annotations of system outputs on 100 samples from the test set of the CNN/DailyMail summarization corpus (CNN/DM, Hermann et al., 2015; Nallapati et al., 2016) along two measures of content quality (relevance of the content, and faithfulness) and two of linguistic quality (on the sentence- and summary-level) using raters from Mechanical Turk. Consistent with previous findings, they find that ROUGE does not significantly correlate with either of them. A follow-up study extends the annotations with three expert judgments per data point and the analysis to more metrics, finding similarly low correlations without significant performance improvements of distributional over lexical similarity metrics. Comparing correlations of these metrics across shared tasks from the Text Analysis Conferences (TAC) and CNN/DM and using a different annotation scheme, Bhandari et al. (2020b) corroborate the very low segment-level correlations and also find that no distributional metric outperforms ROUGE. Reanalyzing the data and addressing issues in the statistical tests, Deutsch et al. (2021b) come to the same conclusion about ROUGE, but note the insights should be carefully assessed since the data selection strategy for annotations, coupled with large confidence intervals, can lead to false results. Beyond summarization, Novikova et al. (2017a) note similarly poor segment-level correlations for data-to-text datasets. All this shows that it is unclear what the results of embedding-based and lexical metrics represent, and it is questionable whether the numbers they produce can be trusted outside a few cases such as MT system ranking. To better understand their limitations and opportunities, we need large-scale corpora of high-quality human annotations, which do not yet exist for most NLG tasks.

The Myth of the Single Reliable Number If human-likeness should not be used as a proxy measure for the quality of generated text, what should be used instead? Analyzing DUC 2004 data (Over and Yen, 2004), where human raters annotated the language quality and the coverage of a summary, i.e., how well it covered the meaning of the source, Graham (2015) found that there was almost no correlation between the two measures.
However, language quality was a precondition for achieving high coverage, leading to a complex relationship between the two. The lack of correlation between language and content quality was also noted by Pitler et al. (2010), who find correlations between some other evaluation categories. These insights, combined with the lack of strong correlations, suggest that a single number, as produced by almost all automatic metrics, cannot fully characterize an NLG system. Similar points are made by Deutsch and Roth (2021), who show that many similarity metrics capture the overlap in topics between two summaries much better than the overlap in their information.

An aspect of quality mentioned above and which permeates all of NLG is faithfulness, and much recent work has focused on this aspect for abstractive summarization. Maynez et al. (2020) state that a model is not faithful if it hallucinates, that is, it adds information that is not present in the source document. They define multiple categories of hallucinations: Intrinsic hallucinations misrepresent facts in the input, for example turning a "former London mayoral candidate" into a "former London mayor". Extrinsic hallucinations ignore the input altogether, for example generating a "President Sara" who does not appear in the input at all. Not all hallucinations are problematic: an extrinsic hallucination can be factual, and may, in fact, be desirable depending on the use case. For system evaluation, it is therefore important to be able to discern between hallucinations of different types, which cannot be done by producing a single number. Maynez et al. demonstrate that similarity metrics fail to measure faithfulness. The same failure is observed by Pagnoni et al. (2021), who introduce and collect annotations for an alternative typology of factual errors which involves fine-grained categories such as Coreference Error and Out of Article Error. An alternative approach to measuring correlations with human judgments injects factual errors into reference summaries and checks whether system rankings produced by metrics correlate with the "level of factuality" of the transformed sentences, among other properties like a metric's value range and generalization. This work also finds that standard evaluation metrics (e.g., ROUGE-L and ROUGE-1) oftentimes fail at capturing factuality, but identifies question-answering metrics as promising, somewhat contradicting Maynez et al. Similarly, Chen et al. (2021) analyze mispredictions on a set of previously annotated summarization corpora (Kryscinski et al., 2020; Falke et al., 2019; Maynez et al., 2020). The study identifies common error types (e.g., "Numerical inference") and constructs an adversarial test set with rule-based transformations. The diversity of approaches in the literature shows that evaluating factual truth is (perhaps unsurprisingly) a complex, ill-defined, and unsolved task. Additionally complicating this problem is that artificially introduced errors rarely match errors of real summarization models, which means that metrics trained on synthetic errors may not generalize to real systems (Goyal and Durrett, 2021).

Researchers have studied the validity of faithfulness metrics for other NLG tasks as well. For table-to-text, studies report the performance of an information extraction-based metric (Wiseman et al., 2017) given different types of errors, and highlight typically problematic cases such as errors with names and numbers which are not detected by the metric.
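To illustrate the intuition behind entity- and number-based faithfulness checks (a toy sketch, not any of the cited metrics), one can flag numbers and capitalized, name-like spans in a generated text that never appear in the source; the example sentences below are hypothetical:

```python
import re

def unsupported_spans(source: str, generated: str):
    """Flag numbers and capitalized name-like spans in the generated text that
    have no exact match in the source document. Only an illustration of the
    idea; real faithfulness metrics use NER, QA, or trained classifiers."""
    def candidates(text):
        numbers = re.findall(r"\b\d+(?:[.,]\d+)?\b", text)
        names = re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b", text)
        return set(numbers) | set(names)

    source_set = candidates(source)
    return sorted(s for s in candidates(generated) if s not in source_set)

source = "The former London mayoral candidate announced a new housing plan on Tuesday."
summary = "Former London mayor President Sara announced a housing plan for 2025."
print(unsupported_spans(source, summary))
# ['2025', 'Former London', 'President Sara']
```

Note that "Former London" is flagged only because of a casing difference, a false positive that illustrates why such naive matching is insufficient and why the literature keeps moving toward learned and QA-based faithfulness checks.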
Taking all these points into consideration, we conclude that there is no consensus on how best to decompose and measure faithfulness and that even the best current approaches are typically flawed. However, we can also see a clear benefit to measuring specific aspects of output quality and thus encourage metric designers to stop treating output quality, and in particular faithfulness, as a one-dimensional problem.

Parameter Choices and Reproducibility Despite these findings, most publications still use only a single metric to demonstrate improvements over prior systems. For example, 100% of papers introducing new summarization models at *CL conferences in 2021 use ROUGE and 69% use only ROUGE. This warrants a deeper look into how ROUGE and other metrics are used. The most commonly reported ROUGE configurations are the F1 scores of ROUGE-1, -2, and -L. This choice was initially popularized by Rush et al. (2015), who picked a subset of the options used in DUC 2004, which also included 3, 4, and LW (Over and Yen, 2004). However, this choice was not empirically motivated, and from DUC 2005 onwards, the recall scores of ROUGE-2 and ROUGE-SU4 were even used instead (Dang, 2006). 6 On top of the disconnect between the past and present choices, both of them are actually suboptimal. Rankel et al. (2013) find that rarely used configurations of ROUGE outperform commonly used ones, and in an investigation of all 192 ROUGE configurations, Graham (2015) finds that none of them outperformed BLEU and that the best performance was achieved with the precision variant of ROUGE-2. The studies by Kryscinski et al. (2019) and others evaluate the F1 variants of multiple ROUGE versions and confirm the suboptimal setting. They find that ROUGE-1, -2, and -L perform strictly worse than ROUGE-3, -4, and -WE-1 across multiple rating dimensions.

Beyond using a suboptimal setup, additional parameters are often unclear; the most popular Python implementation, for example, uses a different list of stopwords compared to the original Perl script [Footnote 7: The package can be found here. Anecdotally, wrappers around the original implementation can lead to changes of more than 0.5 points.], but implementation details are rarely specified. That means that not only do we rely on a metric that consistently underperforms others, we are not even using it correctly or in a replicable manner. Beyond versioning issues, ROUGE was initially designed to evaluate English text, and it thus uses whitespace tokenization and an English stemmer and stoplist. Yet, it is commonly applied to other languages without mentions of the exact changes to get it to run. Similar issues exist in modern frameworks as well, especially those that utilize pretrained models (Liao et al., 2021). For example, BERT-SCORE (Zhang et al., 2020) is reported in many recent summarization publications, but the term BERT-SCORE refers to the methodology instead of the underlying model. To combat the confusion between model versions, the library produces a unique hash, inspired by the SACREBLEU framework (Post, 2018). Yet, these hashes are often not reported, or scores are aggregated in incomparable ways [Footnote 8: For example, Papers With Code for WMT 2014 en-de compares models on SACREBLEU score without hashes.]. Another example of an often unreported design choice is how to use single-reference metrics in multi-reference setups. While ROUGE explicitly describes how to use it in multi-reference tasks [Footnote 9: The multi-reference version of ROUGE represents a very generous upper bound in which results can only improve by adding a reference, never decrease, which can have other negative implications. Moreover, not all implementations may use the originally recommended method.], most neural metrics do not. For example, BLEURT (Sellam et al., 2020) only suggests taking the max of multiple scores without discussing tradeoffs compared to computing the mean [Footnote 10: The alternative approach can be seen on the leaderboard of the ToTTo dataset where the mean of multiple BLEURT scores is reported.]. All these evaluation parameters can have a drastic influence over the validity of scores and can lead to incorrect comparisons or inflated scores.
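One low-effort way to make such parameter choices explicit is to pin them in the evaluation code and report them alongside the scores. The sketch below assumes the rouge_score package (one of several Python ROUGE implementations; others may tokenize, stem, and aggregate differently) and makes the multi-reference aggregation an explicit, reportable argument:

```python
# Sketch assuming the rouge_score package (pip install rouge-score); other
# ROUGE implementations behave differently, which is why the exact
# configuration needs to be reported with the results.
import statistics
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def multi_ref_rouge2_f1(prediction: str, references: list, aggregate: str = "max") -> float:
    """Score against multiple references with an explicit aggregation choice
    (max vs. mean); this parameter is often left unstated in papers."""
    per_ref = [scorer.score(ref, prediction)["rouge2"].fmeasure for ref in references]
    return max(per_ref) if aggregate == "max" else statistics.mean(per_ref)

prediction = "the proposal was approved by the council"
references = ["the council approved the proposal",
              "the city council has approved the new proposal"]
print(multi_ref_rouge2_f1(prediction, references, aggregate="max"))
# Report with the score: package and version, ROUGE variants, stemming and
# tokenization settings, and the multi-reference aggregation that was used.
```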
To develop reliable metrics, it may be helpful to develop benchmarks to collect large-scale annotated evaluation data, which may then be used to train better metrics. This has been the approach in MT for over 15 years (Koehn and Monz, 2006), with metrics shared tasks organized as part of the yearly WMT workshop/conference. They have led to improved human annotation processes and metrics evaluation approaches, as well as almost all the learned metrics listed in section 3.1. As part of these shared tasks, Macháček and Bojar (2014) and Stanojević et al. (2015) used non-expert crowdworkers to perform 5-way comparisons between systems. However, they point out that 5-way comparisons are challenging to interpret as pairwise comparisons, which is required to compute segment-level Kendall-Tau correlations. Addressing this issue, Bojar et al. (2016) experimented with three measuring techniques: the original 5-way ranking, direct assessments (DA) where outputs are evaluated by themselves, and HUME, a method which aggregates scores for semantic units. After promising results, Bojar et al. (2017) only used DA on a 0-100 scale and HUME. To compute correlations, DA annotations were converted into relative rankings, called DARR. The following year also abandoned HUME and fully relied on DA (Ma et al., 2018), and embedding-based metrics started strongly outperforming other metrics. The 2019 shared task introduced a quality estimation task in accordance with the DA data collection technique, illustrating how the human evaluation techniques can influence the design of metrics (Ma et al., 2019). However, as metrics and systems improved further, the DA annotations proved insufficient to identify a "best" metric (Mathur et al., 2020), which led to another major change to the methodology (Freitag et al., 2021b). The latest evaluations thus followed the suggestion by Freitag et al. (2021a) to use Multidimensional Quality Metrics (MQM, Lommel et al., 2014), a fine-grained expert-based annotation approach. The results demonstrate that DA is unreliable for high-quality translations, often mistakenly ranking human translations lower than system outputs, whereas human translations are correctly identified as better than system outputs in MQM. Surprisingly, metrics correlate much better with MQM, even those trained on the DA annotations. Does this mean that focusing on DA was wrong? No, without many years of (suboptimal) data collection, we would not have learned metrics, and we would not know whether DA worked for MT. However, the progression also teaches the lesson that benchmarks may lead the field down the wrong path. A similar argument by Hirschman (1998) critiques that benchmark evaluations only take a narrow approach and states that evaluation is intrinsically a cost-benefit trade-off.
They further argue that we should weigh the divergent needs of stakeholders when designing evaluations, similar to Ethayarajh and Jurafsky (2020), who argue that not everyone may derive the same utility from an improvement on a leaderboard. Scott and Moore (2007) warn that NLG evaluation shared tasks could harm the field, since they may amplify issues with the data, may lead people to ignore external evaluations, and put too much emphasis on metrics that do not measure what we think they measure, both of which also happened. We thus can conclude that benchmarks are necessary, but that they need to be self-critical and explore different evaluation approaches. [Footnote 11: We also note that, in addition to DUC/TAC, there has been a long history of shared tasks in the NLG community addressing a much more diverse set of tasks, starting with referring expression generation (Gatt et al., 2008), but which have also covered tasks such as summarization (Syed et al., 2019) and data-to-text generation (Dusek et al., 2020).]

As seen through the WMT metrics shared tasks, machine learning-based metrics are promising, but a common criticism is that they are not transparent; it is often unclear how they operate internally and whether they can deliver high performance consistently across domains, tasks, and systems. Metric developers typically report agreement with human ratings on specific test subsets filtered on the property of interest, or they measure the change in a metric's value when perturbing a reference (e.g., by shuffling words). The idea to write tests for metrics, rather than reporting corpus-wide correlations, may partly be traced back to Lin and Och (2004), who pose that metrics should always rank a human-produced reference first when compared to multiple system outputs, and thus measure how far the reference deviates from the first spot. [Footnote 12: As we discuss later, this strong assumption is rarely met for NLG datasets.]

This section gives an overview of various research efforts that seek to evaluate automatic metrics experimentally, with each focusing on a specific aspect of the metric, such as its sensitivity to sequence length or to lexical overlap between the candidate and the reference.

Perturbation Analysis and Surrogate Models One common methodology is to apply methods from the interpretability literature to understand what metrics focus on. In one such study, Kaster et al. (2021) measure to what extent several BERT-based metrics correlate with a simple linear model based on hand-crafted features. They find that these metrics are sensitive to lexical overlap despite the fact that the initial motivation for distributional similarity metrics was the over-reliance on lexical overlap of BLEU and ROUGE. The authors craft adversarial examples, and show that metrics can be fooled by lexically similar, non-paraphrase sentences. To the same end, Sai et al. (2021) conduct a correlation analysis after applying 34 perturbations that test the metrics' sensitivity to task-specific criteria (e.g., jumbling word order, introducing spelling errors for fluency, or changing numbers for correctness) using the Checklist method (Ribeiro et al., 2020). The results of this analysis, which covers 18 criteria across six tasks, indicate that trained metrics tend to do better, but tuning towards overall quality across tasks is a poor practice, leading to metrics that evaluate no individual aspect correctly.
Sai et al. further report that even metrics that score highly are not entirely robust to simple perturbations, calling for a more widespread use of this type of analysis. Aside from lexical overlap, another aspect of text that has been shown to confound metrics is length. During the DUC summarization tasks, systems were restricted to a strict number of output bytes and thus were compared at a given length. This is no longer the case in modern datasets, but Sun et al. (2019) show that this can have dire consequences. Specifically, up to a certain length, one can "cheat" ROUGE scores by simply generating longer outputs. Even when the longer outputs are qualitatively worse, scores increase.

Impact of the Systems' Quality As models improve, so should metrics. Yet, many metrics are tuned or benchmarked using previously published system outputs, which cannot be representative of the current and future state-of-the-art. As a result of this, Peyrard (2019) finds that summarization metrics with previously reported high correlations with humans disagree with one another when tasked to compare high-quality summaries, revealing their fragility. Bhandari et al. (2020a) revisit this conclusion, demonstrating that metrics disagree whenever the quality range is narrow, regardless of whether the summaries are good or bad. Bhandari et al. (2020b) also highlight that previously published studies of metrics would yield different conclusions with more recent datasets and top scoring systems, and that the relative performance of metrics varies considerably across datasets. These studies show that it is still unclear how metrics generalize across time, systems, and datasets, and the evaluation of such qualities is complicated due to the cost of collecting human annotations, the low diversity of existing datasets, and the impossibility of accessing future systems.

Since BLEU was introduced, dozens of papers have shown that automatic metrics have poor correlations with human judgments of quality (in addition to those cited above, see, e.g., Callison-Burch et al. (2006)). We challenge the premise that such a correlation would be desirable, because quality is a vastly under-defined property. Instead, we make the case for multi-dimensional evaluation. This is already common in human evaluations; researchers often collect evaluations for several aspects of a generated text's quality (e.g., in MT, rating both the fluency and adequacy of a translated text). Since a single number cannot give an accurate depiction of a system's performance, we call for the development of metrics with smaller, but better defined, scopes. Another aspect that does require more attention is robustness. Meta-evaluation studies have shown that metrics can behave vastly differently on different datasets and when tasked to evaluate different NLG systems. Furthermore, multiple studies demonstrate that automatic metrics easily break when the input is subject to simple perturbations. This shows that there is major headroom for improvement: the metrics should be narrower in the phenomenon they try to capture, but broader in the input domain on which they perform well. Given the results reported on existing benchmarks, we support the view that human evaluation remains an essential component of performance analysis, complementary to automatic metrics. In addition, collected annotations, especially non-English ones, may be used to train future metrics, feeding the positive feedback loop that ties metrics, models, and human evaluation.
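As a concrete illustration of the perturbation analyses described above (a toy sketch in the spirit of the cited Checklist-style studies, not their actual code), one can apply simple transformations to a candidate and check whether a metric's score reacts as a human judge would expect:

```python
import random
import re

def token_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1, standing in for any surface-level metric."""
    cand = set(re.findall(r"\w+", candidate.lower()))
    ref = set(re.findall(r"\w+", reference.lower()))
    overlap = len(cand & ref)
    if not overlap:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def shuffle_words(text: str, seed: int = 0) -> str:
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def change_numbers(text: str) -> str:
    return re.sub(r"\d+", lambda m: str(int(m.group()) + 7), text)

reference = "The company reported revenue of 12 billion dollars in 2021."
candidate = "The company reported revenue of 12 billion dollars in 2021."

for name, perturbed in [("identical", candidate),
                        ("shuffled word order", shuffle_words(candidate)),
                        ("changed numbers", change_numbers(candidate))]:
    # A metric sensitive to fluency or factuality should penalize the last two;
    # this bag-of-words score is blind to reordering and only mildly penalizes
    # the factual change.
    print(f"{name:>20}: {token_f1(perturbed, reference):.2f}")
```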
The work presented in the previous section concludes that human evaluation is a necessary component of model evaluations since we cannot trust automatic metrics. This conclusion is reached by treating human evaluation annotations as the ground truth to which automatic metrics are compared, and human annotations are also used as training corpora for automatic metrics. We thus rely on human evaluations and often treat them as a panacea that reveals the ultimate truth about NLG system performance. Yet there are deep-running issues with how human evaluations are conducted, which affect these system analyses, metric evaluations, and newly developed metrics.

While some work asks evaluators to rate the overall quality of generated text, it is more common to collect evaluations for specific dimensions of text quality. However, there is little consensus on which dimensions to evaluate. In the human evaluations analyzed in Howcroft et al. (2020)'s study of 165 NLG papers, generated text was evaluated along 204 dimensions of quality, which they mapped to 71 distinct criteria. Some of these criteria are hierarchical, e.g., grammaticality and spelling fall under the more general correctness of surface form criterion. There are also cases where researchers apply the same text quality dimension differently. For example, Howcroft et al. (2020) found that what researchers called fluency could actually be divided into 15 different criteria, depending on how the term was defined and used in the context of the task. The disparities in how text quality dimensions are applied and defined in human evaluations complicate comparisons across efforts and benchmarking improvements over previous work. This problem is exacerbated by the lack of human evaluation details in NLG papers. Of the 478 quality evaluation questions studied by Howcroft et al. (2020), over 50% did not define the criterion they were evaluating for (279 out of 478), 65% did not report the exact question they gave the evaluators (311/478), and 20% did not even name the criterion being evaluated (98/478).

To promote more standardized human evaluations, some researchers have proposed detailed definitions and methodologies for human evaluation for a specific task and/or dimension of text quality. For example, one line of work proposes a methodology for evaluating the accuracy of data-to-text generation, and Rashkin et al. (2021) define a framework for evaluating whether generated text is attributable to identified sources. While general or vague evaluation criteria can lower reproducibility and lead to low agreement between evaluators, well-specified human evaluation comes at a cost. For example, the human evaluation protocol used in the accuracy shared task at INLG 2021 produced high inter-annotator agreement, but Thomson and Reiter (2021) reported that each 300-word text took an annotator 20-30 minutes to evaluate and the annotation cost for a single generated text was about US$30. However, this detailed human evaluation protocol captured error categories that the automatic metrics were unable to detect.

Previous work indicates that the way questions are framed, the types of text that are being evaluated, and the measurement instruments can affect the results of human evaluations. Schoch et al. (2020) discuss the role cognitive biases can play in the way researchers elicit human evaluations, such as using positive or negative framing (e.g., How much more fluent is sentence A vs. sentence B?), including text artifacts or study design details that reveal the researchers' hypothesis, and framing instructions and questions around a model's known strengths and weaknesses.
Choi and Pak (2005) provide a longer catalogue covering 48 of these biases. However, if researchers do not report the details of their studies, no one can judge whether any of these biases would apply; surveys of NLG papers find as few as 35% (Howcroft et al., 2020) and 16% (Schoch et al., 2020) of papers share the questions used in their human evaluations. Aspects of the texts themselves may also unduly affect the evaluators' judgments. For example, Sun et al. (2019) find that several dimensions of summary quality (e.g., informativeness) are correlated with the summary's length and thus suggest normalizing for summary length when evaluating these criteria. Bhandari et al. (2020b) find that the relative quality of the generation models also makes a difference, showing significant differences between older annotations and newly collected human judgments for better models. 13 They show that automatic metrics trained on annotations of text generated from older models do not always perform as well when evaluating state-of-the-art generated text. Another confounder, which we point out in section 3, is the correlation between dimensions that should not be correlated. Dusek et al. (2020) demonstrate that the correlation can be avoided by running different annotation tasks in parallel, but this leads to a much higher cost to the evaluators.

Lee et al. (2021) find that Likert scales were the most popular method for rating generated text, used in 56% of studies (82/147). However, Belz and Kow (2010) argue that rating scales like those used in direct assessments (i.e., evaluating a generated text alone, without referencing other candidates) have many issues: they are unintuitive, agreement numbers are low, and most statistical measures are inappropriate for ordinal data. They find that these issues can be addressed to some extent by switching to preferential judgments. Kiritchenko and Mohammad (2017) demonstrated that best-worst scaling (asking evaluators to choose the best and the worst items in a set) is an efficient and reliable method for collecting annotations, and this approach has been used to collect comparative evaluations of generated text (e.g., Liu and Lapata, 2019; Amplayo et al., 2021). Belz and Kow (2011) further compare continuous and discrete rating scales and find that both lead to similar results, but raters prefer continuous scales, consistent with prior findings (Svensson, 2000). 14 Contrary to these findings, Bojar et al. (2016) and Novikova et al. compare direct assessments and relative rankings and find that the rankings produced were very similar, but Novikova et al. conclude that relative rankings are best when combined with magnitude estimates. They also find that collecting judgments in separate tasks decorrelates different evaluation criteria, albeit at a higher cost since multiple tasks have to be run.

Human evaluations present yet another issue: how to measure the significance of human evaluation results? van der Lee et al. (2021)'s survey finds that only 23% of NLG papers report statistical analyses to determine the significance of their results, and only 13% explicitly state their hypotheses.
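As a minimal sketch of what such a statistical analysis can look like (with simulated ratings on a 0-100 direct-assessment-style scale rather than data from a real study), a paired permutation test compares two systems rated on the same items:

```python
import random

# Hypothetical per-item ratings for two systems on the same 100 evaluation items.
random.seed(0)
ratings_a = [random.gauss(72, 10) for _ in range(100)]
ratings_b = [random.gauss(70, 10) for _ in range(100)]

def paired_permutation_test(a, b, n_resamples=10000, seed=1):
    """Two-sided paired permutation test on the mean rating difference.
    Each resample randomly swaps the two systems' ratings within an item and
    counts how often the permuted difference is at least as extreme as the
    observed one."""
    rng = random.Random(seed)
    observed = sum(x - y for x, y in zip(a, b)) / len(a)
    extreme = 0
    for _ in range(n_resamples):
        diff = 0.0
        for x, y in zip(a, b):
            if rng.random() < 0.5:
                x, y = y, x
            diff += x - y
        if abs(diff / len(a)) >= abs(observed):
            extreme += 1
    return observed, extreme / n_resamples

diff, p_value = paired_permutation_test(ratings_a, ratings_b)
print(f"mean difference: {diff:.2f}, p-value: {p_value:.3f}")
# With only 100 items and a small true difference, such tests are frequently
# underpowered, which is exactly the sample-size problem discussed next.
```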
One challenge when testing for significance in human evaluation results is small sample sizes; given that the median number of generated texts in a human evaluation is 100 items (van der Lee et al., 2021), most typical experimental designs for human rating studies will be underpowered to detect small model differences. This problem is not specific to NLG. Card et al. (2020) analyze popular NLP datasets and find that they are not adequately powered (e.g., a typical MT test set of 2000 sentences would have approximately 75% power to detect differences of 1 BLEU point). Howcroft and Rieser (2021) demonstrate that treating ordinal data as interval data, which is what most papers do when analyzing rating and Likert scales (68 out of 85 recent papers, according to Amidei et al. (2019b)), makes tests even more underpowered. Significance thresholds are not always adjusted when running multiple significance tests (e.g., Bonferroni correction), increasing the likelihood of false positives (van der Lee et al., 2019). Improvements in NLG models also make detecting statistically significant differences more challenging. Text generated by high-quality models may differ less often or in more subtle ways, which requires more human judgments to detect. Wei and Jia (2021) show that the requirement for more judgments can quickly become prohibitive: to detect a difference of 1 point on a 1-100 scale in WMT, we need 10,000 perfect annotator judgments. As a result, they suggest that automatic metrics may actually be more reliable than human annotations if the annotations are insufficiently powered. The number of required annotations can potentially be decreased by not uniformly sampling examples to annotate and instead biasing the sampling toward those where models differ. However, this process can lead to artificially high correlation of the results with automatic metrics, which could overstate their effectiveness and the quality of human annotations (Deutsch et al., 2021b). Moreover, since NLG models may only differ in very few examples, statistical analyses should also handle ties, as discussed by Dras (2015) for pairwise rankings.

Aside from the parameters of the study, there are also confounding factors in the evaluation of the annotation quality itself. To demonstrate that the annotations are of sufficient quality, reporting inter-annotator agreement is the most common method. However, Amidei et al. (2019a) survey 10 years of annotation agreement measures and show that almost all studies fail reliability tests. They argue that a substantial amount of the variability cannot and should not be eliminated, since evaluation of generated text is intrinsically subjective and relies on many different factors including rater experience, motivation, knowledge, or education. As a remedy, they suggest using additional correlation measures alongside kappa statistics.

In many human evaluations, a small number of evaluators judge the generated text. 39% of papers in van der Lee et al. (2021)'s survey use between one and five evaluators. However, it is becoming increasingly common to collect judgments from a large number of evaluators using crowdsourcing platforms like Amazon Mechanical Turk (MTurk), Appen, Prolific Academic, and Upwork. In particular, MTurk has a long history in NLP, with early claims stating that a small number of crowdworkers can replace a single expert rater (Snow et al., 2008).
Similar claims were made in other communities, stating that, while individual crowdsourced annotations are not as high-quality, overall data quality can actually be improved by collecting redundant annotations (Sheng et al., 2008). However, later studies find that this point is actually a lot more nuanced. Some dimensions of text quality may be easier than others to rate with crowdsourced evaluators instead of experts. Gillick and Liu (2010) find that MTurk judges were better at measuring generated summaries' linguistic quality than their content or overall quality and had a much higher correlation between linguistic and overall quality than experts. Clark et al. (2021) find MTurk evaluators are more likely to base judgments of generated text on the text's form rather than its content. In their work on German summarization evaluation, Iskender et al. (2020) find that non-redundancy and usefulness are very hard to assess using crowdworkers and suggest that experts should be used for them, while crowdworkers are suitable for other dimensions of text quality as long as results are carefully interpreted. Analyzing several years of DUC annotations, Harman and Over (2004) find that averaged human ratings can yield meaningful insights, but also note that there is very high variance both within and between human raters and that it is unclear whether the source of the variance is intrinsic to the humans or the models. This variance may be even higher in crowdsourcing scenarios compared to expert raters. Karpinska et al. (2021) report that running the same MTurk evaluation on different days of the week can produce results that differ enough to change conclusions. When analyzing evaluations of MT systems, Freitag et al. (2021a) find that agreement between ratings produced by linguists and those from crowdworkers can be extremely low. In fact, they find that automatic metrics can have higher agreement with high-quality annotations than human crowdworkers. Some tasks like multi-document summarization are especially challenging and time-consuming for people to evaluate. Observations like these have led to work proposing evaluation methods that combine the advantages of human and automatic evaluation (e.g., Hashimoto et al., 2019; Zhang and Bansal, 2021).

The increasing quality of generated text has led some researchers to move away from crowdsourcing platforms. For example, expert evaluators like English teachers (Karpinska et al., 2021) or trained, in-person evaluators (Ippolito et al., 2020) were needed to distinguish between human-authored text and text generated by today's generation models (an evaluation most commonly found in dialogue generation). Similarly, Freitag et al. (2021a) demonstrate that non-expert annotations often lead to mistaken claims of super-human model performance, when expert annotators correctly identify issues in the generated texts. It is unclear whether these issues are specific to the fact that non-expert annotators are being used, or if these issues may be overcome by improving the quality of the study and the working conditions of raters. Investigating the use of MTurk for NLP, Huynh et al. (2021) find that about 25% of studies have technical issues, 28% have flawed, vague, or insufficient instructions, and 26% of study creators were rated as having poor communication. Notably, they also find that 35% of requesters pay poorly or very badly according to MTurk raters.
To that end, many have questioned whether the treatment evaluators receive and the structure of crowdsourcing platforms provide ethical working conditions for evaluators. The most basic of these considerations is payment; does the low-pay, small-batch format of crowdsourcing actually provide evaluators with a fair wage? Fort et al. (2011) discuss the low wages MTurk workers receive, along with concerns about data quality issues that the platform incentivizes. These concerns are not unique to MTurk; Schmidt (2013) argues that there are ethical concerns across crowdsourcing platforms, regardless of how they incentivize workers. Shmueli et al. (2021) cover a broader set of ethical considerations for crowdsourcing work, including potential psychological harms, exposing sensitive information about workers, and breaching workers' anonymity. Despite these concerns, Shmueli et al. report that only 14 out of 703 NLP papers that used crowdsourcing mention IRB review. Most of the human evaluations in this section are intrinsic evaluations, asking evaluators to rate the quality of the generated text. However, the more valuable question is answered with extrinsic evaluation: how well does the generated text serve its intended purpose? These evaluations measure how useful a text generation model is and indicate whether real world users would be satisfied with the generated texts. Evaluations focused on intrinsic qualities of the text fail to capture dimensions of NLG systems that practitioners care about, e.g., how trustworthy a generated text is or how well it performs in human-in-the-loop settings. 15 Another related aspect that is rarely considered in human evaluations is the subjectivity of text evaluation. People may value certain text qualities more highly than others or be working from a different point of reference. Even the more "objective" aspects of text quality, like grammatical correctness, may depend on the evaluators' dialect, the perceived formality of the text, the context or style of the generated text, etc. Disagreement in evaluators' ratings does not always indicate evaluator error; rather it may be a signal that there is more complexity to the text or dimension of quality. While it has been shown that increasing the number of annotations per example can decrease the overall bias (Artstein and Poesio, 2009), this finding assumes that the population of annotators is somehow representative of the whole world. Prabhakaran et al. (2021) find that aggregating annota-tor responses results in under-representation of groups of annotators' opinions, and they recommend releasing annotator-level annotations and collecting annotators' socio-demographic information to prevent the exclusion of minority perspectives. We thus should be careful of results such as those that suggest excluding data with low agreement scores with other annotators (Owczarzak et al., 2012) , unless we know the source of the disagreement is not subjectivity. Even well-established NLG tasks have aspects of subjectivity that are usually ignored. For example, the goal of a summarization task is to generate the important points from a document, but Kryscinski et al. (2019) find that when annotators select which sentences in a document are the most important to include in a summary, the majority of evaluators only agree on an average of 0.6 sentences per document. 
While the majority of evaluation criteria are by definition subjective, there is an opportunity for hybrid approaches with the help of standardized measures (van der Lee et al., 2021). One such dimension that could be useful for tasks like simplification is the readability of text, which could be measured using scales such as the ones proposed by Kincaid et al. (1975) or Ambati et al. (2016). van der Lee et al. point out that the relationship between these objective measures and subjective readability assessments is not currently being studied, although a strong objective measure could lead to a higher degree of standardization. Similarly, one can imagine human-in-the-loop approaches for measuring faithfulness that focus on claims that are challenging to verify using only automatic approaches, enabling the collection of a much larger quantity of judgments.

A component mostly kept apart from evaluation analyses is the data, even though NLG tasks are embodied through datasets; for example, claims about performance on CNN/DM may be used as a proxy for performance on all summarization tasks. Issues with datasets are widely studied in the general machine learning literature, which we heavily draw on in this section, with anecdotal evidence for NLG tasks when available. In a recent survey of datasets and benchmarks in machine learning, Liao et al. (2021) point out that the lack of differentiation between tasks and datasets that aim to capture them can lead to harmful over-generalization. They argue that choosing to evaluate on a dataset reinforces design decisions taken during its construction and focuses the evaluation on the specific distributions represented in the data. Collectively, the research community could select for a more diverse language representation and decide to replace older, flawed datasets with newly developed ones. Unfortunately, the collective choices also reinforce suboptimal design decisions. Analyzing a sample of 20 papers that proposed summarization approaches in 2021, we find 27 datasets that models were being evaluated on. The most popular ones, CNN/DM and XSum (Narayan et al., 2018), were used five and four times respectively, despite their issues, which we explore in section 5.2. Additionally, only two of the 27 datasets were non-English, despite much recent work that introduces multilingual summarization corpora (Giannakopoulos et al., 2015; Scialom et al., 2020; Ladhak et al., 2020; Hasan et al., 2021; Perez-Beltrachini and Lapata, 2021). These findings lead to three questions. First, how can we as a research field measure summarization improvements on disjoint datasets? How can we claim that we are making progress if we only focus on a single language? And, given the significant issues with popular benchmark datasets, what do improvements even mean? Throughout this section, we analyze typical design choices during NLG data construction and how they influence insights derived from evaluations. 16

Dataset creation is a value-laden process, yet those values are rarely made explicit (Hutchinson et al., 2021). The choices of dataset creators have significant impact, for example on who is represented in the data and on the language(s) of a dataset. Joshi et al. (2020) show that only a small fraction of the world's languages is represented in NLP research at all, and even for well-resourced languages, dialects are frequently excluded from datasets, which leads to performance drops for their speakers (Jørgensen et al., 2015, 2016; Blodgett et al., 2016, among others). Beyond performance drops, excluding dialects from datasets can often be seen as akin to de-legitimizing the language and its speakers (Rosa and Flores, 2017).
This problem is even worse in NLG, where no popular corpora exist to measure the discrepancy in performance between dialects, and, as seen above, the most popular corpora only cover versions of English present on popular British or US news websites. When making claims about model performance, we should thus acknowledge that we report it for only a tiny sliver of possible phenomena and work toward reporting performance for different subpopulations (Mitchell et al., 2019). Design Choices Beyond actively reporting more fine-grained numbers, Hutchinson et al. (2021) propose that the assumptions underlying a dataset should be specified before and during the collection to enable an early peer review of the choices. Instead of releasing datasets as monolithic artifacts and treating them as number-producing black boxes, they should be accompanied by sensitivity studies for dataset parameters and rigorous discussions of their limitations. Unfortunately, none of these suggestions are typically followed: Scheuerman et al. (2021) analyzed 114 computer vision datasets and find that their creation process values efficiency at the expense of care and that they typically aim to be as universal as possible without a nuanced understanding of the contexts from which datapoints arise. All this typically benefits the model work at the expense of data work, leading to easier-to-digest but deeply flawed results, similar to what we have discussed so far for NLG evaluations. Similarly, through interviews with 53 AI practitioners, Sambasivan et al. (2021) highlight how data collection choices cascade and amplify through all parts of the development pipeline, from training and evaluation to deployment. They warn of the lack of incentives to produce high-quality datasets and encourage more work on data improvement processes that should be part of the life cycle of a dataset. To address some of the representational issues, it seems natural that we should aim to produce "impartial" data, but this may also be either undesired or even impossible. Rogers (2021) summarizes discussions around data curation, the act of manufacturing distributions that differ from naturally occurring ones, pointing out that dataset creators should maybe not be the ones deciding what distribution should represent the world, and that studying the world as it is, with all its flaws and biases, is an important aspect of NLP. There is thus no one-size-fits-all curation solution. Regardless of the choices of dataset creators, it is imperative to report the curation decisions alongside the limitations of datasets in a structured format to allow for a better interpretation and contextualization of performance results (Bender and Friedman, 2018; Gebru et al., 2021). To that end, interactive tools like Know Your Data, Data Quality for AI, and the Data Measurements Tool may provide valuable insights. Since the suggested documentation and analysis processes are rarely followed, we will only be able to shed some light onto issues in NLG datasets, and note that uncovering and addressing these issues should be an ongoing process. Memorization When talking about dataset issues, we also need to consider the trend of pretraining corpora that were scraped from the web. Many NLG datasets are similarly built on top of web scrapes (e.g., news websites for summarization datasets or Wikipedia for data-to-text datasets) and often do not contain significant post-editing steps.
As a result of this, pretraining examples can be found in downstream test corpora (Dodge et al., 2021; Lee et al., 2021). Since it is impossible to remove the affected data from the training corpus after the release of a model, multiple mitigation techniques have been explored. For example, BIG-bench introduced a hash identifier that allows web crawlers to ignore its data, but this approach does not work for data scraped from other sites. Another investigated approach uses large models alongside humans to create fully artificial data. However, even careful curation that aims to diminish bias issues can lead to other, potentially more subtle, issues. As it stands, the only approaches to avoid memorization are, therefore, to not rely on web-crawled data at all or to continuously build new datasets that utilize data collected after the cutoff date of large pretraining datasets, neither of which is practical for all NLG tasks. Any performance improvements on NLG tasks based on web data should thus be analyzed carefully for the effect of memorization and test leakage. Another assumption underlying corpus-based NLG is that human references represent a gold standard. While we discussed in section 3.2 that this leads to flawed metrics of "human-likeness", we also use human-written references as the target to optimize during training. As Reiter and Sripada (2002) argue, this is a fallacy since humans disagree with each other and make mistakes, which our models will learn to replicate. It is thus crucial to understand exactly what task we are actually learning from the data, whether it corresponds to the claimed communicative goal, which potential shortcuts a model may be taking, and whether there is noise in the data that distracts from the task. A commonly cited shortcut in summarization is positional bias. Since most common summarization datasets are built on journalist-written news articles, they typically follow best practices to provide salient information early on in the article (Grusky et al., 2018). Summarization datasets thus have strong positional data biases that models pick up on and which lead to inflated results if the test set has the same biases (Gehrmann et al., 2018). If the claims made about such a model are specific to news summarization and the resulting models were only used to summarize news articles written in a similar style, this may not be a significant problem. However, Kedzie et al. (2018) demonstrate that controlling for positional bias drastically decreases model performance to the point where deep learning based models barely outperform much simpler approaches. Therefore, it may be helpful to also evaluate summarization models on a variety of tasks, including in the non-news domain, to prevent inductive model biases from inflating the results. Another side effect of the positional bias is that simply picking the first three sentences of a professionally written article is a strong baseline, as shown by Nenkova (2005), who analyzes DUC-2001 data and notes that "only one system significantly outperforms the baseline of selecting first sentences from the input articles". The same baseline was introduced for neural models on CNN/DM by Nallapati et al. (2017), who similarly find that it is extremely effective. This effect is especially pronounced in CNN/DM, where raters even prefer the first three sentences to the summary provided in the dataset (Stiennon et al., 2020).
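To make the strength of this positional shortcut concrete, the lead-3 baseline can be implemented in a few lines. The sentence splitter and the toy article below are illustrative simplifications; a real evaluation would use a proper sentence segmenter and the full dataset.

```python
import re

def lead_n_summary(article: str, n: int = 3) -> str:
    """Extractive 'lead-n' baseline: return the first n sentences of the article.

    A naive regex split stands in for a proper sentence segmenter (e.g., spaCy or NLTK).
    """
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    return " ".join(sentences[:n])

# Toy example; on news datasets such as CNN/DM this baseline is competitive with
# neural abstractive systems under ROUGE, because salient content appears early
# in journalist-written articles.
article = (
    "The city council approved the new transit plan on Tuesday. "
    "The plan adds three bus lines and extends subway hours. "
    "Funding comes from a regional infrastructure grant. "
    "Critics argue the plan ignores cycling infrastructure. "
    "A public comment period runs through the end of the month."
)
print(lead_n_summary(article, n=3))
```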
This preference can be traced back to a design choice: CNN/DM pairs an article with the bullet points written for it on the homepage of the respective news outlet, which worked well for its intended use as a reading comprehension dataset (Hermann et al., 2015), but does not work for summarization. This re-use of datasets for incompatible tasks, along with the concentration on very few datasets, is a worrying trend that was quantified across multiple other tasks by Koch et al. (2021). Along similar lines, datasets constructed through web scrapes may additionally contain extraneous information, such as hyperlinks or image captions. Here again, CNN/DM is a culprit; since the references were never meant to be a real summary, there is no requirement that a reference is faithful to the source article. An analysis of XSum finds that over 70% of references contain external hallucinations (Maynez et al., 2020). This finding provides an opportunity for dataset developers to improve dataset construction processes. For example, XL-Sum, a recent multilingual news summarization dataset, evaluates the faithfulness of references across 10 languages and finds that in their dataset, only 25-40% of summaries contain unsupported information, a significant decrease compared to XSum. An alternative path toward this goal is to improve datasets over time once these issues are uncovered. For example, Gehrmann et al. (2021) release an improved version of XSum that filters the dataset based on a faithfulness classifier trained on the data by Maynez et al. (2020). Similar noise can also be found in common datasets for other NLG tasks. In WikiBio (Lebret et al., 2016), which has the communicative goal of providing a short biography grounded in key-value attributes about a person, less than half of the attributes are actually realized in the reference, and over half of the references score very low or low in faithfulness on a 5-point scale. It is surprising how far the rest of NLG lags behind MT in this regard, where filtering and cleaning of scraped data is common practice and shared tasks are being held (Koehn et al., 2019). Moreover, crowdsourced NLG datasets, for which one may expect a lower ratio of noise, are not without problems. Dušek et al. (2019) find that cleaning the E2E NLG dataset (Novikova et al., 2017b), for which the communicative goal is to describe a restaurant given a set of key-value attributes, led to a reduction in slot-error rate of up to 97%, which means that failures may have incorrectly been attributed to the model instead of the data. Similar reductions in errors were seen in task-oriented dialog as a result of improving the dataset (Budzianowski et al., 2018). Despite these findings, few datasets follow construction processes with multiple post-editing steps to ensure a low ratio of noise. These examples demonstrate the importance of identifying limitations of existing "standard" datasets and either replacing them with better constructed ones or, if the limitations can be addressed, improving them over time. More attention should be paid to construction processes that aim to minimize noise, and faithfulness evaluations should be the default for new datasets. While much of this work is by nature qualitative, automatic methods can be employed to characterize aspects like the abstractiveness or compression ratio of summarization datasets (Bommasani and Cardie, 2020).
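As a minimal sketch of such corpus-level characterization, the following computes a compression ratio and the fraction of novel reference n-grams, a simple proxy for abstractiveness. The tokenization and the example pair are deliberately simplified placeholders.

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def dataset_stats(source: str, reference: str, n: int = 2):
    """Characterize one (source, reference) pair: compression ratio and the
    fraction of reference n-grams that never appear in the source
    (a higher fraction indicates a more abstractive reference)."""
    src_tok, ref_tok = source.lower().split(), reference.lower().split()
    compression = len(src_tok) / max(len(ref_tok), 1)
    ref_ngrams = ngrams(ref_tok, n)
    novel = [g for g in ref_ngrams if g not in ngrams(src_tok, n)]
    novelty = len(novel) / max(len(ref_ngrams), 1)
    return compression, novelty

source = "the committee met on monday and voted to delay the budget decision until next spring"
reference = "budget decision postponed to spring after monday vote"
compression, novelty = dataset_stats(source, reference)
print(f"compression ratio: {compression:.1f}, novel bigram fraction: {novelty:.2f}")
```

Averaged over a corpus, such statistics make explicit whether a dataset favors extractive or abstractive systems by construction.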
We next take a look at the choices behind test set construction. It is usually considered a best practice to create i.i.d. splits. That is, we assume that a subset of the dataset is representative of the full data distribution, and we randomly split the data into training, validation, and test sets. However, this assumption may not hold and, given the sampling bias pointed out in section 5.1, can lead to similar underrepresentation in the test data. As a way to make i.i.d. schemes more robust, Gorman and Bedrick (2019) propose using multiple random splits, similar to cross-validation, as they find that results on multiple NLU tasks change when the splitting process is changed. Following prior work by Demsar (2006), this enables computing the statistical significance of numbers. However, besides the increased computational complexity in NLG, Søgaard et al. (2021) point out that i.i.d. splits may not be the correct way to characterize system performance, since this setup implicitly assumes that the data distribution matches the distribution a model would encounter during deployment in a real-world scenario. They thus argue for evaluating on samples that measure aspects, for example topics or content from certain years, that are not seen during training. They evaluate multiple non-random, informed data-splitting approaches and find that the results vary significantly depending on how the test set was constructed. Considering data splits during dataset construction can thus lead to much more informative results. For example, the E2E NLG dataset was used for a shared task with a private test set that contained completely unseen attribute combinations, leading to a drop in performance. A similar approach was taken for ToTTo, a dataset to describe a set of highlighted cells in a table, which reports numbers for seen and unseen combinations of table columns. Here, results on unseen combinations are more than 60% lower than on the seen combinations. Transformations Another factor to consider for the construction of test sets is how to handle natural language variation. Dialectal or individual variations can entail different spelling, word order, grammar, or vocabulary. To that end, Moradi and Samwald (2021) show that models are very brittle to character- and word-level perturbations. Even if the dataset creation did not consider informed splits, it is possible to create evaluation suites, a collection of test sets that together yield informative insights. Building on the insights from challenge sets that avoid potential model shortcuts (McCoy et al., 2019), Ribeiro et al. (2020) argue that for NLU tasks, one can enumerate linguistic phenomena and expected outcomes. For example, a negation should flip the result of whether a fact is entailed by its premise, but replacing an entity in both should not. While we cannot enumerate capabilities in a similar way for NLG, Mille et al. (2021) argue for informed transformations coupled with the collection of additional data to enrich existing datasets. Informed transformations measure the causal effect of introducing language variation, for example changing the order of columns or replacing numbers with others, while additional test data can be used to evaluate without overlap with the training set, addressing the memorization issue mentioned above. Dhole et al. (2021) expand their framework to over 100 different transformations that include dialectal variations, OCR errors, and others that can be used to create more realistic scenarios.
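A minimal sketch of such a transformation-based evaluation suite is shown below. The perturbation function is a simplistic stand-in for frameworks such as NL-Augmenter, and the toy test set is hypothetical; the point is only that a single i.i.d. test set can be turned into several contrastive subsets whose score deltas are reported separately.

```python
import random

random.seed(0)

def swap_adjacent_chars(text: str, rate: float = 0.05) -> str:
    """Character-level noise: randomly swap adjacent letters (a crude OCR/typo stand-in)."""
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        else:
            i += 1
    return "".join(chars)

def build_suite(test_set):
    """Turn one i.i.d. test set into a small evaluation suite of contrastive subsets."""
    return {
        "original": test_set,
        "char_noise": [{**ex, "input": swap_adjacent_chars(ex["input"])} for ex in test_set],
        # Further subsets could hold unseen attribute combinations, time-shifted
        # inputs, or dialectal rewrites collected with additional annotation.
    }

# Usage sketch: score a system on every subset and report the deltas,
# rather than a single aggregate number.
test_set = [{"input": "The quick brown fox jumps over the lazy dog.",
             "reference": "A fox jumps over a dog."}]
for name, subset in build_suite(test_set).items():
    print(name, subset[0]["input"])
```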
Time Travel Dataset shifts occur when the joint distribution of inputs and outputs differs between the development of a model and its deployment (or, in our case, its test setup) (Quiñonero-Candela et al., 2009). One of the suggestions by Søgaard et al. (2021) is to simulate dataset shifts in order to approximate a more realistic deployment scenario, which they test on the Gigaword sentence compression dataset (Napoles et al., 2012) by dividing splits by year of publication. This process becomes even more important given what we know about the extent of train-test overlap in pretrained models (Lee et al., 2021). One recently suggested approach by Mille et al. (2021) is to continuously collect new test sets using the original collection approach. As an example, they collect new test sets for XSum and the English and German subsets of MLSum (Scialom et al., 2020) that focus on COVID-19-related news articles, a topic that is not part of large pretraining corpora or the mentioned datasets. We re-analyze their released data for 33 models in Figure 2, focusing on their BLEURT score (Sellam et al., 2020). As can be seen, models consistently do not handle the new concept well. Error Reporting While evaluation suites enable quantification of errors to some extent, not all errors are detectable through general-purpose methodologies, and more in-depth investigations are thus required. One such prior error analysis, while only suited to its specific task, provides a view into the data distribution of the references and uncovers a systematic shortcoming where the system does not report the effect correctly in about 50% of cases. As discussed above, how a test set is constructed can have large implications for the model evaluation. Another contributing aspect is the style in which references are presented. Since we rely on human-likeness metrics in evaluation processes, the style in which the references are constructed matters significantly. An example of this is in machine translation, where "low-quality" references that contain translationese lead to low diversity and favor translation systems that produce similar low-quality outputs. This can partially be counteracted by employing experts to paraphrase existing references, thus creating a wider set of reference points to which a metric can compare, but this direction has not been explored for NLG. Similar problems exist in summarization, where the same references are often used to evaluate both extractive and abstractive approaches. Summaries in many datasets exhibit a high fraction of content overlap with the articles. Consequently, extractive systems are favored by design (Goel et al., 2021), and metrics have a lower correlation with human judgments as references become more abstractive (Bhandari et al., 2020a) due to the mismatch in style. These findings have also been corroborated for XSum (Mille et al., 2021). As we have seen, it is impossible to fully identify whether or how our models fail with the methods available to us today. And even if we detect failure, we cannot attribute it to the data, the evaluation process, or the model itself. Due to the problem's complexity, it will require a significant effort to establish a positive feedback loop in which improvements to data, models, or human and automatic evaluations can benefit the other parts of this circular dependency. To help facilitate work toward this goal, we make the following suggestions for NLG researchers.
Preconditions for any further progress and a better understanding of model limitations are improved documentation standards and the avoidance of the documentation pitfalls discussed throughout this paper. On the data side, this includes documenting the exact data collection processes, their limitations, and a discussion of the social impact of datasets, as proposed for data cards (Bender and Friedman, 2018; Gebru et al., 2021). Beyond identifying issues, standardized documentation following mutually agreed-upon frameworks can lower the barrier of entry to newly developed resources (Lhoest et al., 2021). A drastic, yet necessary, change from the status quo is that datasets and their documentation must not be static entities. Datasets should be cleaned and improved over time (e.g., Dušek et al., 2019), and sending pull requests to update data documentation needs to become as commonplace as sending pull requests to or opening issues in open-source libraries. Additionally, treating datasets as dynamic encourages the development of evaluation suites that everyone can benefit from (Bowman and Dahl, 2021). The benefit of this approach can be seen in natural language inference, where the SNLI dataset (Bowman et al., 2015) was extended to cover more genres (Williams et al., 2018) and languages (Conneau et al., 2018) and has subsequently been used to explore whether adversarial data augmentation techniques are useful for evaluation (Nie et al., 2020; Phang et al., 2021). However, we also should not hesitate to take more drastic measures and deprecate datasets when better ones are released. To that end, we mirror the suggestion by Bommasani and Cardie (2020) that it is time to do so with CNN/DM, which is no longer a useful summarization dataset. The same goes for metrics: it is clear that no single metric can provide all the insights, so no paper should rely on only a single metric. Moreover, while it is too early to fully deprecate ROUGE, we need to normalize not reporting its scores in favor of other lexical metrics like METEOR that have been shown to perform at a similar or higher level. When lexical metrics are used for non-English text, the tokenization approach needs to be documented, and metrics with established tokenization approaches like BLEU should be used in favor of ROUGE. For now, alongside deeper analysis, we recommend using at least one entailment or QA metric and a learned distributional similarity metric like BLEURT, at least until we have reliable direct assessment metrics that do not require references. For translation, we recommend combining a lexical overlap-based metric, e.g., CHRF or BLEU (at the system level), with a learned metric such as COMET or BLEURT, and we encourage the use of MQM for human evaluation if researchers have access to expert raters within their budgetary constraints (Freitag et al., 2021a). Some existing projects focus on continuous improvements for some evaluation aspects, e.g., DynaBench (Kiela et al., 2021) aims to improve data alongside models, and Bidimensional Leaderboards aim to improve metrics alongside models. However, DynaBench focuses on NLU, and Bidimensional Leaderboards use CNN/DM as the only non-MT NLG dataset. It is thus unclear whether a single shared framework that addresses only a subset of the mentioned issues is the solution, or whether the decentralized research community needs to unite behind this shared goal.
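To make the metric recommendations above concrete, the sketch below scores a handful of system outputs with lexical and learned metrics side by side. It assumes the sacrebleu package and a locally downloaded BLEURT checkpoint; the exact APIs and the checkpoint name should be verified against the installed versions of both libraries.

```python
from sacrebleu.metrics import BLEU, CHRF
# BLEURT reference implementation: https://github.com/google-research/bleurt
from bleurt import score as bleurt_score

hypotheses = ["the cat sat on the mat", "he read the report yesterday"]
references = ["the cat sat on the mat", "he reviewed the report yesterday"]

# Lexical overlap metrics at the corpus level; the signature string documents
# tokenization and smoothing settings so that scores can be reproduced.
bleu_metric, chrf_metric = BLEU(), CHRF()
bleu = bleu_metric.corpus_score(hypotheses, [references])
chrf = chrf_metric.corpus_score(hypotheses, [references])

# Learned metric: segment-level scores that can be averaged or inspected per example.
scorer = bleurt_score.BleurtScorer("BLEURT-20")  # path to a downloaded checkpoint
bleurt_scores = scorer.score(references=references, candidates=hypotheses)

print(f"BLEU   {bleu.score:.1f}  ({bleu_metric.get_signature()})")
print(f"chrF   {chrf.score:.1f}")
print(f"BLEURT mean {sum(bleurt_scores) / len(bleurt_scores):.3f}")
```

Releasing the per-example learned-metric scores alongside the corpus-level lexical scores also makes the raw numbers available for later re-analysis, in line with the data-release suggestions above.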
Implementing and popularizing these changes in the community will require several changes to peer review processes. First, we should encourage authors to submit resource papers. As Rogers and Augenstein (2020) point out, resource papers are already underappreciated, and raising the bar for what counts as acceptable documentation may lead to even fewer such papers being written. Second, authors and reviewers need to move from claiming empirical improvements toward a more rigorous documentation of how those improvements were achieved. Modeling papers often include deliberations about why certain architecture choices were made, but the choice of which datasets to evaluate on or which metrics are being used rarely moves beyond "other people use it". By the same logic, reviewers may be hesitant to accept claims when a model is not evaluated on the standard flawed datasets. As discussed in this work, many of the standard practices should be reconsidered, and we thus need more elaboration on these choices.

Figure 3: Our vision of an evaluation report. A model should be evaluated on multiple datasets via multiple metrics and well-documented human evaluation. The outputs, scores, and any human assessments should be easy to download. In addition, each dataset should be the most current version and accompanied by an up-to-date data card. The evaluation report next reports fine-grained error types such as hallucinations (intrinsic, extrinsic, factual) or grammatical errors (subject-verb agreement, spelling, etc.). Finally, evaluation suites are used to produce model audit results on specific input types. In this example, the model handles OCR and spelling mistakes relatively well, but fails on unseen topics or time shift. Model audits can also include breakdowns of performance by input types or dialects. The entire report is part of the model card.

Third, we encourage researchers to focus on specific phenomena, rather than overall quality. Instead of treating NLG models or metrics as "one big problem", we encourage work on more specific aspects, say, logical consistency in dialog, or aggregations in table-to-text generation. We further encourage researchers to use task-specific metrics and be upfront about the tradeoffs, and we encourage reviewers to expect and accept more nuanced claims and contributions while discouraging claims about the overall quality of a system. Finally, to support this research, we should encourage re-training and/or re-implementing prior work for the most appropriate benchmark task(s) and evaluation process when necessary. Beyond this, one of the best ways to contribute to improving evaluation as a modeling researcher is to release model outputs for validation and test sets alongside instructions on how to replicate reported numbers. Many such analyses would not be possible without access to model outputs, and such corpora can be used for metric development and validation and to conduct meta-evaluations. Releasing outputs on non-English datasets, even when no human evaluation can be conducted, will ease the path for evaluation improvements on the covered languages by reducing the burden on evaluation researchers to produce the outputs. While human evaluation can solve many issues of automatic metrics, it often does not. We thus should not blindly accept results that show that one number is larger than another. Again, it is more important that the process to arrive at these numbers is well documented, that the people involved in the process are considered, and that the results are sufficiently statistically powered.
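Since statistical power is easy to overlook, here is a minimal simulation-based sketch of a power analysis for a paired comparison of two systems. The effect size and noise level are placeholder assumptions that would need to come from pilot annotations; the point is only to show how the number of evaluated items changes the probability of detecting a real difference.

```python
import numpy as np
from scipy.stats import ttest_rel

def estimated_power(true_diff, sd, n_items, n_sim=2000, alpha=0.05, seed=0):
    """Simulation-based power estimate for a paired comparison of two systems.

    Each item is assumed to yield a rating difference drawn from N(true_diff, sd);
    in practice both parameters should be estimated from pilot annotations.
    """
    rng = np.random.default_rng(seed)
    detections = 0
    for _ in range(n_sim):
        diffs = rng.normal(true_diff, sd, size=n_items)
        # Paired t-test of the per-item differences against zero.
        result = ttest_rel(diffs, np.zeros(n_items))
        detections += result.pvalue < alpha
    return detections / n_sim

# How many items are needed to reliably detect a small quality difference?
for n in (50, 100, 300, 600):
    print(n, f"power={estimated_power(true_diff=0.15, sd=1.0, n_items=n):.2f}")
```

Under these illustrative assumptions, an evaluation with the commonly used 100 items would detect the difference only a minority of the time.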
To achieve this goal, we need to work toward reusability and replicability in human evaluations, for example by filling in human evaluation datasheets (Shimorina and Belz, 2021; Belz et al., 2021) and by contributing to and using projects that standardize parts of the process (e.g., Khashabi et al., 2021). Expanding on the suggestions by van der Lee et al. (2019), we need a wider adoption of effect-size estimates, power analyses, and statistical significance tests, and we emphasize the importance of analyzing the validity of human evaluation results. Additionally, even though many of the recent advances in NLP have been powered by non-expert crowdraters, the importance of expert raters is becoming increasingly clear, both in dataset construction and in model evaluation. There needs to be a clear understanding of what the required qualifications are to participate in a task and whether the involved raters fulfill them. We further suggest more work on reducing the barrier for the deployment of extrinsic evaluations. Extrinsic evaluations hold generated text to a higher standard, requiring a generated text to not only be "correct," but also to be effective at its intended purpose and to be useful to a potential end user. Grounding the evaluation in a specific task and situational context moves the focus of the evaluation from the appearance of the generated text to its content and purpose. While ranking models according to a single quality number is easy and actionable (we simply pick the model at the top of the list), it is much more important to understand when and why models fail. A model being on top of a well-established benchmark only means that it performs best on the majority of test examples, but as we have seen, the construction of test sets is often not representative of real-world scenarios and can hide issues in less frequent classes. Mitchell et al. (2019) describe the "quantitative analysis" process of testing on subpopulations and reporting disaggregated results according to chosen metrics, falling back on synthetic data when necessary. Our suggestion goes beyond this to create what we call evaluation reports as part of model cards, which document the results of model audits, as outlined in Figure 3. The idea of a model audit is to identify what breaks a model, with the goal of moving away from chasing the highest overall number. The long-term goal of evaluation reports is performance guarantees: we would like to know exactly what to expect of a model for a given input. Since the space of potential model shortfalls is rather extensive, the creation of model audit processes will rely on our collective work to create evaluation suites and on automatic transformations using frameworks like those discussed in section 5.3. To the extent possible, evaluation reports should be framed in causal terms by measuring the response of multiple metrics (or human evaluations) to stimuli, which avoids issues with individual metrics, similar to the CheckList framework by Ribeiro et al. (2020). This has the further advantage of setting more realistic user expectations. Taking the example from Figure 2, we could state that "When the model summarizes news articles from the same source, but with COVID-19-related content, we expect quality drops of 20±5% according to BLEURT. N% of summaries are deemed non-understandable by non-expert raters."
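To attach such uncertainty estimates to a measured drop, a simple bootstrap over per-example metric scores is often sufficient. The sketch below uses simulated scores as placeholders for real per-example metric outputs (e.g., BLEURT) on a control and a shifted test set.

```python
import numpy as np

def relative_drop_ci(control_scores, shifted_scores, n_boot=10_000, seed=0):
    """Bootstrap confidence interval for the relative quality drop between a
    control test set and a shifted one (e.g., COVID-19-era articles), given
    per-example metric scores."""
    rng = np.random.default_rng(seed)
    control = np.asarray(control_scores)
    shifted = np.asarray(shifted_scores)
    drops = []
    for _ in range(n_boot):
        c = rng.choice(control, size=len(control), replace=True).mean()
        s = rng.choice(shifted, size=len(shifted), replace=True).mean()
        drops.append((c - s) / c)
    point = (control.mean() - shifted.mean()) / control.mean()
    low, high = np.percentile(drops, [2.5, 97.5])
    return point, (low, high)

# Hypothetical per-example scores standing in for real metric outputs.
rng = np.random.default_rng(1)
control_scores = rng.normal(0.55, 0.12, size=400)
shifted_scores = rng.normal(0.44, 0.14, size=400)
drop, (low, high) = relative_drop_ci(control_scores, shifted_scores)
print(f"relative drop: {drop:.1%} (95% CI {low:.1%} to {high:.1%})")
```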
Evaluation reports should further include improved error analyses, following suggestions by Bender and Koller (2020) and others who argue for more focus on limitations in addition to aggregated scores. Advocating for evaluation reports does not mean that we should not demonstrate improvements at all, but we need to move away from improvements being the only contribution. Papers should show brittleness and a clear path toward improvements for future work, rather than hiding or being ignorant of existing issues. Another advantage of this framing is that the reliance on large models may dwindle, since work on quantifying shortcomings is equally applicable to smaller models, and methods that improve model robustness often work on all model sizes. The explicit set of evaluations that should be run is subject to investigation in future work and may also depend on the claims that are being made.

Table 1: A condensed view of the recommendations provided in section 6, in a relaxed format for use in our analysis of recent modeling papers (see Appendix A for exact annotation instructions). On the right, we show the number of papers that do (or do not) follow the recommendations. We also present the percentage of applicable papers that follow each practice. While any one paper should not be expected to follow all the recommendations, a higher overall coverage is highly indicative of a better evaluation process.

While a survey of challenges and issues will, by definition, paint a rather gloomy picture, there are many positive examples of model evaluations, a couple of which we highlight in this section. To do so, we analyze 66 papers from ACL, INLG, and EMNLP 2021 and the extent to which they already follow our recommendations. Our analysis focuses on whether the different aspects in Table 1 appear in a paper, rather than measuring their extent or quality. That means, for example, that we only identify the presence of a significance test in a human evaluation without judging whether it is the correct test, and that we identify whether any motivation for using a particular dataset exists, not whether that motivation is sound. Through this, we aim to capture an upper bound on the presence of the different aspects. We provide additional information on the annotation process and the exact instructions in Appendix A. Overall, we find that 36.7% of our 2046 judgments were positive, which means that the field has already taken a significant step toward solving the problems pointed out throughout this survey. Scores for papers ranged from 6.5% to 58.1%, with an average of 27.3% (median 25.8%, standard deviation 0.11), demonstrating that there is no consistent standard that is widely applied. We present a histogram of the average scores per paper in Figure 4 and highlight some positive examples of individual judgments when discussing the results. Starting with the recommendations pertaining to the basic evaluation setup, the vast majority of papers (84%) include evaluation results from multiple datasets, and 73% report human evaluation results. However, the documentation of the choices that went into the evaluation process and those that relate to the specific claims and motivations is often flawed. Only 38% and 30% of papers, respectively, motivate why they chose a particular dataset or metric, and half of the papers made claims in the abstract about their system outputs' overall quality when this was not the aspect that was evaluated.
About 29% of papers reported results on non-English languages, although this number was inflated by the machine translation papers included in the analysis. While not explicitly annotated, almost no paper stated that it was working on only English, a practice that has long been criticized (Bender, 2009). Disappointingly, only 29% discussed the limitations of the proposed method, a finding that corroborates our claim that evaluations are too focused on reporting superior performance rather than fully characterizing system outputs. As a positive example, Kim et al. (2021) report negative results on out-of-distribution performance, encouraging future researchers to work on making their proposed method more robust. On a positive note, a majority of papers report multiple kinds of metrics: 57% of papers report metrics from different categories instead of only relying on lexical overlap. In most such cases, the categories were metrics that measure similarity to a reference and diversity among outputs. However, some also developed metrics to specifically measure what is being claimed. For example, Lyu et al. (2021) work on lexical consistency for document-level MT, which they first analyze and then derive a metric from. This metric is subsequently used alongside other metrics to validate their specific claims. About 20% of papers provide additional breakdowns of the results, report on non-i.i.d. test sets, conduct error analyses, or demonstrate a causal effect of input features. These are especially helpful when the analysis is motivated by problem-specific needs. For example, Krishna et al. (2021) investigate the generation of doctors' notes from conversations and analyze the performance in the presence of speech recognition errors. While 29% of papers point out issues in the datasets they use or introduce, we found only one paper that contributed to the data documentation, leaving future researchers to rediscover the same issue(s). Moreover, only 3 of the 13 papers that point out issues actually work toward solving them and release updates to the dataset. As discussed above, this is an area where normalizing contributions to documentation and releasing updates would have beneficial effects for future work with these datasets. Looking closer at the human evaluations, we did not require the definition of each evaluation criterion to be stated, and, therefore, our results look more positive than those by Howcroft et al. (2020). We find that 82% of papers that report human evaluation results also state what is being measured, although the documentation of who is evaluating is still lacking (58%). However, we did not find a single paper that estimated how many annotations should be collected, and most opted for the "typical" 100 data points which, as pointed out above, may be insufficient (van der Lee et al., 2021). Similarly, only 25% and 39% of papers assess the annotations and/or the annotators, and only 21% discuss what background knowledge was required to participate in an evaluation. An aspect that needs to be consistently improved is the release of data. Even though many papers released datasets or code to reproduce their models, almost none released model outputs or their human evaluation data. This practice can lead to issues when, for example, new papers are not able to compute metrics in a consistent environment for comparison, something that 37% of papers did not do. Moreover, it can significantly slow down evaluation research due to a lack of data to annotate or human annotations to compare to.
Overall, this analysis shows that there is much room for improvement, but it also shows that we are not starting at zero. While none of the papers reached 100%, which may be an overly ambitious goal, many reached 40% or higher, meaning that they already included many of our suggestions. The perfect evaluation is a white whale The myriad of evaluation obstacles will not suddenly vanish as we develop metrics that do not suffer from the same shortcomings as human-likeness measures or as we develop better datasets and evaluation suites. Data and evaluations are by design subjective and only reflect a small subset of the space of potential inputs. In addition, most critiques and suggestions covered in this paper assume that generated text already exists, but as Dodge et al. (2019) demonstrate, compute budget and hyperparameter tuning can also massively confound results. The perfect evaluation process is thus not an achievable goal, and the question this paper originally set out to answer, "How can we fix NLG evaluation?", does not have a simple answer. However, we have pointed out many steps toward a better evaluation process that will hopefully address these issues. This leads to the natural question of whether it is worth spending all this extra effort on model evaluations when anything we come up with will always remain deeply flawed. Aside from the utilitarian argument that reducing the number of issues is most definitely a beneficial outcome, we posit that the creation of evaluation reports should not require significant extra effort. There are many research projects focusing on developing evaluation infrastructure, from human evaluation datasheets (Shimorina and Belz, 2021) to NLG-specific data cards with interactive collection tools, metric frameworks, and APIs that aim to produce replicable human evaluation results (Khashabi et al., 2021). Similar to deep learning modeling frameworks, there is a delayed payoff due to their learning curves, but reusing and improving the existing infrastructure will help strengthen evaluations. In the end, evaluation practices will only be adopted if reviewers hold model developers accountable for using them. One aim of this work is to document failures that frequently happen in evaluation processes, many of which can be directly addressed or pointed out in reviews. In the future, we also suggest creating model evaluation checklists like those by Rogers et al. (2021) for responsible data use or Dodge et al. (2019) for reporting hyperparameters and compute infrastructure. Model Interpretability This paper omits discussions of analysis methods from the interpretability literature as part of a model's holistic evaluation process, since interpretability may be seen as orthogonal to, and is not strictly necessary for, model evaluations. However, interpretability methods are a useful tool for evaluations, since they may be used to uncover model shortcuts (McCoy et al., 2019) or to find systematic errors (Popovic and Ney, 2011) that would be described in an evaluation report. 17 Interpretability tools can also facilitate evaluations. For example, the Language Interpretability Tool (Tenney et al., 2020) can automatically evaluate on subpopulations, and Explainaboard (Liu et al., 2021), although with limited support for NLG, helps identify challenging inputs for a model. Evaluating the interpretability of a model itself as an additional dimension is extrinsic and task-specific (Doshi-Velez and Kim, 2017) and thus out of scope for this work.
NLG is not ML and also not NLU A recent survey by Liao et al. (2021) summarizes evaluation failures across all of machine learning, including computer vision and NLP. While we find overlap with their findings on the data side, where most (but not all) criticisms can be applied to other tasks, most of the issues surveyed here go beyond one-size-fits-all machine learning analyses; since outputs are natural language, no equivalent of accuracy or F1 score exists. This property was the focus of Dale and Mellish (1998), who position the evaluation of NLG systems as complementary to NLU systems, pointing out the symmetry of moving from natural language to representations of meaning or structure (NLU) or the other way around (NLG). However, they also discuss the evaluation of intermediate steps of an NLG system. As a consequence of the move toward end-to-end approaches, many NLG tasks have NLU components, like the selection of appropriate content, that are implicitly evaluated as part of the evaluation of the final generated text. Yet, as Bender and Koller (2020) discuss, these NLU steps require abilities that current models are incapable of acquiring from supervised learning. Evaluating NLG tasks only through the lens of outputs is thus insufficient, and we should strive to deliver a more fine-grained breakdown, but it is unclear how to evaluate intermediate steps in current evaluation setups. While separate reasoning steps are starting to be incorporated into current NLG approaches (e.g., Puduppully et al., 2019; Narayan et al., 2021), there is no consensus on what a planning stage should look like or how to evaluate it, and all current evaluation practices are focused on output forms. Nevertheless, approaches that incorporate explicit steps to attribute generated information to sources will be crucial to making progress in the field, and evaluation processes need to reflect these advances (Rashkin et al., 2021). The use of models and external evaluation A limitation of this work, and of evaluation in general, is the focus on intrinsic evaluations and the lack of extrinsic evaluation and, more generally, of measurement of the external effects of model training and development. NLG model behaviors may oftentimes be acceptable in some contexts and undesirable in others. This is not a problem that can be solved through only intrinsic evaluations, since norms are established through language, and the cultural background of a person may lead to a different perception of language (Nakayama and Halualani, 2011). Take for example a summarization system that is run on a subjective article such as a column in a newspaper. A general-purpose summarizer will likely not generate text that states "Author X states that Y", but instead will present opinions such as "I don't like math" or "Jollof Rice is tasty" as facts. This presentation, alongside the anthropomorphic bias of deep learning models (Watson, 2019), can perpetuate these opinions, including harmful stereotypes. This is a general limitation of NLG models which we are unable to capture using standardized benchmarks alongside intrinsic evaluations. We also note that few, if any, benchmarks currently report the environmental side effects of training and serving NLG models (Strubell et al., 2019). This means that there are still many considerations required to understand NLG systems that fall beyond the scope of this survey.
External evaluation may further be more appropriate for interactive systems like dialog systems, which are out of scope for this work and which require evaluation considerations such as the distinction between turn- and dialog-level metrics (Smith et al., 2022), and which are much more susceptible to anthropomorphism (Dinan et al., 2021). Better metrics will lead to better models A question that has been explored with varying success in the past is whether it is possible to directly optimize metrics using reinforcement learning instead of the typical cross-entropy objective in NLG tasks. For example, Paulus et al. (2018) demonstrated that we may be able to optimize ROUGE directly, and Pasunuru and Bansal (2018) explored alternative optimization targets such as maximizing entailment. This work assumes that metrics are good proxies for task performance which, as seen in this survey, is demonstrably false. In addition, Choshen et al. (2020) show that improvements from reinforcement learning objectives in machine translation are unrelated to the training signals, but rather a side effect of changes in the model distribution. They even find this to be true when the reward signal is semantic similarity instead of BLEU (Wieting et al., 2019). However, more recent work demonstrates that, in machine translation, optimizing toward newer learned metrics like BLEURT does not suffer from this issue, leading to significant model improvements (Shu et al., 2021). Developing better metrics thus provides an exciting opportunity to close the circle between metrics and models, especially if we can optimize toward multiple metrics which measure disjoint quality aspects. However, this advance relies on our recommendation for evolving metrics and the embrace of deprecation. We surveyed challenges in NLG evaluation from the perspective of automatic metrics, human evaluation, and datasets. Our findings reveal that, while much progress is being made, the evaluation process currently applied to most models is not sustainable. Models have improved to the point where differences between them are unlikely to be spotted based on surface-level phenomena, and careful annotation processes are required to characterize their output quality and distinguish between them. Moreover, we discussed issues with popular NLG datasets that further confound evaluation results. In addition to pointing to worthwhile evaluation-related research directions, we suggest a series of actionable improvements that model developers can follow that will have a positive long-term impact while also improving their model evaluations. Notably, we argue for evaluation reports that focus on a causal framing of the limitations of models, with the goal of eventually being able to provide performance guarantees for a wide set of potential deployment scenarios. We show in an analysis of 66 recent NLG papers that many of the suggestions are already partially followed, but that there is no consistent standard for which evaluation aspects are required of researchers. When discussing the limitations of our suggestions, we note that their implementation will require changes to the peer review system to hold model developers accountable for following the suggested best practices. Here we describe the annotation instructions for our analysis of 66 ACL, EMNLP, and INLG papers from 2021.
The instructions were defined such that results are an upper bound to the criteria. We avoid judging the quality of a particular evaluation aspect and instead only annotate its presence. The analyzed papers were selected from the proceedings of the three conferences. A paper was selected if it included in its title any reference to working on a generation problem. To avoid an overemphasis on machine translation in the analysis, we did not specifically select translation papers unless their title was related to generation as a whole. As a result, about 10-15 translation papers were part of the analysis, which we considered an appropriate amount. The selected papers were subsequently filtered if they did not provide modeling results, e.g., because it was an analysis-focused paper or because the modeling task was not a generation task covered by our definition in section 2.

Make informed evaluation choices and document them
• Evaluate on multiple datasets: Select yes if the paper reports results on more than one dataset. Select N/A if the paper explicitly states that there is only one dataset available for the addressed task.
• Motivate dataset choice(s): Select yes if the paper states why each particular dataset was chosen. If the only reasoning is that previous work uses it, select no. If the paper introduces a dataset, select N/A.
• Motivate metric choice(s): Select yes if the paper states why each particular metric was chosen. If the only reasoning is that previous work uses it, select no.
• Evaluate on non-English language: If at least one of the evaluated datasets includes non-English language, select yes.
• Use a combination of metrics from at least two different categories: Select yes if the automatic evaluation results include at least two metrics from different families (e.g., one QA-based one and one lexical one). Reporting ROUGE and BLEU would not count, while ROUGE and BLEURT would.
• Avoid claims about overall "quality": Select no if the abstract of the paper reports improvements generally and not in terms of specific generation aspects (e.g., "we outperform baselines").
• Discuss limitations of using the proposed method: Select yes if there is at least one paragraph dedicated to the limitations of the proposed method in the results or discussion section or as its own section.

Analyze and address issues in the used dataset(s)
• Discuss or identify issues with the data: Select yes if there is at least a mention of problematic artefacts in the data or in what or who it represents.
• Contribute to the data documentation or create it if it does not yet exist: Select yes if the paper is accompanied by a data card or if there is a mention that the original documentation was updated.
• Address these issues and release an updated version: Select yes if the paper is accompanied by a release of updated data or points to a loader that retrieves the updated dataset. If the paper introduces a dataset, select N/A.
• Create targeted evaluation suite(s): Select yes if the paper describes the creation of a fine-grained breakdown of subpopulations or multiple training or test splits.
• Release evaluation suite or analysis script: Select yes if the resources in the previous points were released in the form of data or code.
• Re-train or re-implement most appropriate baselines: Select yes if the paper explicitly mentions that it trains or implements baselines from prior papers.
• Re-compute evaluation metrics in a consistent framework: Select yes if all the reported scores were computed by the authors or by another centralized framework (e.g., through upload to a leaderboard). If only a subset was recomputed, select no. Select N/A for both questions above if a new dataset was introduced and is the only one evaluated in the paper.

Run a well-documented human evaluation
• Run a human evaluation to measure important quality aspects: Select yes if a human evaluation of any kind was conducted.
• Document the study setup (questions, measurement instruments, etc.): Select yes if, at the minimum, the specific questions and the way that participants answer them are reported.
• Document who is participating in the study: Select yes if, at the minimum, the annotation platform used and the number of participants are stated.
• Estimate the effect size and conduct a power analysis: Select yes if any effect-size estimate or power analysis is mentioned (we assume that not mentioning it implies its absence).
• Run significance test(s) on the results: Select yes if the human annotation results are accompanied by a statistical significance test.
• Conduct an analysis of result validity (agreement, comparison to gold ratings): Select yes if there is any kind of analysis of the quality of the human annotations themselves.
• Discuss the required rater qualification and background: Select yes if the required knowledge of raters is discussed and compared to the qualifications selected for in the study.
• Report disaggregated results for subpopulations: Select yes if the paper reports fine-grained results on subsets of the test set(s) (note that the paper does not need to introduce these breakdowns as in the point above).
• Evaluate on non-i.i.d. test set(s): Select yes if there is an evaluation on a non-i.i.d. test set. If the paper does not specifically mention this fact, select no (i.e., if the used dataset has such a test set but this is not mentioned).
• Analyze the causal effect of modeling choices on outputs with specific properties: Select yes if the results include a breakdown that allows for insights of the form "if the input has feature X, the model output has Y". An ablation study counts as a yes if the ablation focuses on feature representations (i.e., what data a model sees), but not if the ablation is on model architecture choices.
• Conduct an error analysis and/or demonstrate failures of a model: Select yes if there is any kind of error analysis or qualitative samples of where the model fails.

In this section, select yes if the paper is accompanied by data releases that include the following:
• Release outputs on the validation set
• Release outputs on the test set
• Release outputs for non-English dataset(s): Select N/A if the paper does not include evaluation on any non-English data.

There are a few limitations of this setup. (1) Due to the phrasing as recall-oriented prompts, nuanced errors pointed out in earlier sections are implicitly ignored. For example, "Document the study setup" is marked as positive even if the exact definition of each measurement category is not provided. The lack of a provided definition was identified as a source of confusion by Howcroft et al. (2020). In other cases, our prompts may not cover all possibilities. For example, a study that releases not an improved version of a corpus, but instead a tailored pretraining set, would not count as "Address dataset issues and release an updated version".
(2) Each paper is only annotated by one co-author of this survey. This means that there could be misunderstandings of the different dimensions. We tried to address this problem by refining definitions when unclear points arose and by discussing the definitions before starting the annotation, which led to the instructions above. Nevertheless, the exact percentage results may differ from the ground truth by a few points, and we thus consider only the overall trends.
We are grateful to Mirella Lapata, Ankur Parikh, Dipanjan Das, and Slav Petrov, who provided comments on earlier versions of this paper. The content of this work was additionally discussed with many others, including Matthew Lamm, Vitaly Nikolaev, and many of the participants in the GEM benchmark. Without those discussions, the subsections and discussion points would look very different, and we thank everyone who took part.