key: cord-0230545-dpmny87r
authors: Wang, Nancy X. R.; Mahajan, Diwakar; Danilevsky, Marina; Rosenthal, Sara
title: SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular Data in Scientific Documents (SEM-TAB-FACTS)
date: 2021-05-28
journal: nan
DOI: nan
sha: 46efbc5108e3176ebb6f4df74d38ba16f6d3ed0c
doc_id: 230545
cord_uid: dpmny87r

Understanding tables is an important and relevant task that involves understanding table structure as well as being able to compare and contrast information within cells. In this paper, we address this challenge by presenting a new dataset and tasks that address this goal in a shared task, SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular Data in Scientific Documents (SEM-TAB-FACTS). Our dataset contains 981 manually-generated tables and an auto-generated dataset of 1,980 tables, providing over 180K statement annotations and over 16M evidence annotations. SEM-TAB-FACTS featured two sub-tasks. In sub-task A, the goal was to determine if a statement is supported, refuted or unknown in relation to a table. In sub-task B, the focus was on identifying the specific cells of a table that provide evidence for the statement. 69 teams signed up to participate in the task, with 19 successful submissions to sub-task A and 12 successful submissions to sub-task B. We present our results and main findings from the competition.

Tables are ubiquitous in documents and presentations, conveying important information in a concise manner. This is true in many domains, stretching from scientific to government documents. In fact, the text surrounding tables in such articles often contains statements summarizing or highlighting information derived from the primary source of data: the tables themselves. A relevant example is shown in Figure 1, from a Business Insider article analyzing the impact of Covid-19 (Aylin Woodward and Gal, 2020). Describing all the information provided in this table in a readable manner would be lengthy and considerably more difficult to understand.

Despite their importance, popular question answering tasks (e.g. SQuAD and Natural Questions (Rajpurkar et al., 2016; Kwiatkowski et al., 2019)) and truth verification tasks (e.g. the SemEval-2019 Fact Checking Task (Mihaylova et al., 2019)) have not focused on tables, being composed solely of written text. This is likely due to how complex tables are to parse and understand, despite the rich amount of information they contain. Further, the structure of tables allows much more information to be presented in an efficient manner, as humans can interpret meaning from the spatial relationships between cells. However, due to their challenging nature, recent algorithms have been less successful at extracting tables (Hoffswell and Liu, 2019) and understanding their header and data structure (Cafarella et al., 2018). In addition, hierarchical and nested headers (common in printed documents) increase the difficulty of interpreting data cells, as shown in Figure 2.

In this paper, we propose to bridge this gap with statement verification and evidence finding using tables from scientific articles. This important task promotes proper interpretation of the surrounding article. In fact, the misunderstanding of tables can lead to the kind of fake-news reporting that is all too prevalent today.

Figure 2: A table (East et al., 2018) with hierarchical column and row structure. Additional difficulty follows from the row hierarchy not being delineated by separate columns.

We present the first SemEval challenge to address table understanding.
We introduce a brand new dataset of 1,980 tables from scientific articles that addresses two challenging tasks important to table understanding: statement verification and evidence finding.

The rest of this paper is organized as follows: We first discuss related work. We then present a new large table understanding dataset containing close to 2,000 tables, which is the first to provide evidence labels at the cell level for statements and the first to focus on scientific articles. We provide a detailed analysis of the dataset, including several baseline results. We then discuss the performance and approaches of the 19 participants in our challenge and end with an aggregated analysis of participating teams. Finally, we discuss future work.

Natural Language Inference (NLI). Our table-based statement verification task can best be understood as a variation of the natural language inference task (Dagan et al., 2005), but on tabular data. NLI asks whether one (or more) sentence entails, refutes, or is unrelated to another sentence; our framing asks whether a given table entails, refutes, or is unrelated to a sentence. Several datasets have been created for studying NLI, such as SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and SciTail (Khot et al., 2018).

Table QA. This task is also closely related to the problem of search and question answering over tables. The closest example is: given a table known to contain the relevant information, return the cell values that answer a natural language question (Pasupat and Liang, 2015). A variation requires analyzing a collection of tables rather than a single one, along with the natural language question (Sun et al., 2016). Two of the most recent works are TAPAS (Herzig et al., 2020) and TaBERT (Yin et al., 2020), which jointly pre-train over textual and tabular data to facilitate table QA. However, such approaches have previously focused on traditional natural language questions ("What is the population of France?") rather than inference statements ("France has the highest population in Europe"), which may be entailed, refuted or unknowable from the given table.

The works closest to our dataset are TabFact (Wenhu Chen and Wang, 2020) and INFOTABS (Gupta et al., 2020). Both datasets were sourced from Wikipedia tables and contain hypothesis and premise pairs. TabFact has entailed and refuted hypothesis types, while INFOTABS has an additional "neutral" hypothesis category, much like our "unknown" statements. Both works show that neural models still lag far behind human performance on the fact checking task with tables. While both datasets have kindled interest in fact verification with tabular data, our dataset differs in two key aspects. First, we source from scientific articles in a variety of domains rather than Wikipedia infoboxes. Scientific tables have very specialized vocabulary and can be more difficult to interpret. Additionally, scientific tables have much more complex structure, such as hierarchical column and row headers, rendering the assumption that the first column/row is the header unhelpful. Finally, tables are often directly referenced in scientific text, unlike Wikipedia tables, which are generally stand-alone. This creates an opportunity to leverage natural statements that depict the original author's style and intent. The second key differentiator of SEM-TAB-FACTS is the accompanying evidence annotations. We believe the future of fact verification, and of AI in general, will lie in cooperation with humans rather than in their replacement.
Thus, it is essential that models are able to present explanations for decisions on the relationship between the statement and the table.

Our dataset consists of two forms of generation: (1) a crowdsourced dataset, and (2) an auto-generated dataset. Table 1 presents the statistics of the dataset. We detail our dataset creation process in the following sections.

We sourced our tables from scientific articles belonging to active journals that are currently published by Elsevier and are available on ScienceDirect. We utilized the Elsevier ScienceDirect APIs to scrape scientific articles which belong to this list and satisfy the following criteria: (1) the article is open-access, (2) the article is available under the "Creative Commons Attribution 4.0 (CC-BY)" user license, and (3) the article has at least one table. We downloaded 1,920 articles belonging to 722 journals, which contained 6,773 tables. We further filtered out complicated tables (e.g. multiple tables in a single table) using hand-written rules to get a set of 2,762 candidate tables from 1,085 articles for annotation. We also extracted sentences mentioning the table within the scientific article as candidate statements, which were corrected and then labeled manually by the annotators.

The manually generated statements were collected using the crowdsourcing platform Appen. We collected five entailed and five refuted statements for each table from the business preferred operators (BPO) on Appen. The BPO crowd is composed of employees hired by Appen on an hourly basis at a constant pay rate determined by Appen. We found that these workers were much more motivated for the task, as they were able to ask questions if needed and we were able to provide them direct feedback. We initially attempted generating statements with workers from the Appen open crowd, which is on-demand, but the quality was very poor, as it was hard to automatically validate naturally generated statements.

Our instructions explicitly lay out seven types of statements and ask that workers attempt to write one of each type. We encourage the use of different sets of cells whenever possible. The types of statements are aggregation, superlative, count, comparative, unique, all, and usage of caption or common-sense knowledge. These are derived from the INFOTABS analysis (Gupta et al., 2020). We asked workers to avoid subjective adjectives like "best", "worst", and "seldom", as well as lookup statements that only require reasoning over one cell. The pay for each statement set was 75 cents. In total, we collected 10,000 statements for 1,000 unique tables. See Figure 3 for an example table with its manually generated and natural statements.

Additionally, for our training data, we conducted a verification task to check for grammatical issues and doubly verify the statement label for both the generated and natural in-text statements. The verification task was paid at 3 cents per statement, which equates to 30 cents per table. We restricted the verification task to workers in the open crowd from English-speaking countries. After verification, we only preserved the statements that were verified to be grammatically correct and whose new label matched the original label. Natural statements were also verified in the same process. Although natural statements were generally factually correct, they sometimes could not be verified from the referenced table.
Additionally, these statements often required rewording to ensure that all parts of the statement could be verified by the table, a step taken only for the development and test sets. This left us with 981 tables and 4,506 statements. The majority of the removals were due to [...].

We initially attempted to collect the development and test sets, as well as the evidence annotations, via the same method as the training set. However, we found that the quality was not gold-level, and thus we (three of the authors) decided to manually correct the statements and annotate the evidence ourselves. All authors first annotated a small set of 102 statements to test inter-annotator agreement for statement relationship and evidence labeling. Out of 102 statements, we found 5 statements where at least one of the three annotators disagreed on the relationship, and a further 5 statements where the relationship was agreed but the evidence annotation differed. The other 92 were in complete agreement, indicating high agreement. Therefore, the rest of the dev set was annotated by a single person. The test set was annotated fully by one author, and the two other authors checked the annotations, with all disagreements resolved. See Figure 4 for a screenshot of the statement annotation correction and evidence annotation interface. See the third and fourth rows of Table 1 for detailed statistics of the dev and test sets.

IBM Watson™ Discovery is an AI-powered search and text analytics engine for extracting answers from complex business documents. One of its available functionalities is a Table Understanding service that produces a detailed enrichment of table data within an HTML document. We use this service to identify the body and header cells, as well as the cell relationships, within our dataset. We then use a set of templates to automatically create statements about each table. We begin by identifying which cells and columns are numeric and non-numeric using a simple regex. Unlike non-numeric cells, numeric cells and columns are appropriate for specific templates that expect numeric values, such as "Value [V] is the maximum of Column [C]", where every value in column [C] has been identified as numeric. We also generate evidence for some of these templates. The template and evidence generation rules, along with their inputs, are detailed in Table 2. This process generated 3,512,978 statements from 1,980 tables, highly skewed in favor of refuted statements. This dataset was then down-sampled to a maximum of 50 statements per table, while ensuring a more even distribution between the two classes, to form our final released auto-generated dataset. The full statistics for the auto-generated training data are shown in the second row of Table 1.

The goal of Task A is to determine if a statement is entailed or refuted by the given table, or whether, as in some cases, this cannot be determined from the table. We show two evaluation results. The first is a standard 3-way Precision/Recall/F1 micro evaluation of a multi-class classification that evaluates whether each statement was classified correctly as Entailed/Refuted/Unknown. This tests whether the classification algorithm understands cases where there is insufficient information to make a determination. The second, simpler evaluation uses the same P/R/F1 metric but is a 2-way classification that removes statements with the "unknown" ground-truth label from the evaluation.
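For concreteness, here is a minimal sketch of how these two micro-averaged scores could be computed. This is an illustration only, not the official evaluation script; the label strings, the dictionary-based input format, and the function name are our own assumptions.

```python
def micro_prf(gold, pred, keep_unknown_gold=True):
    """Micro-averaged precision/recall/F1 for single-label classification.

    gold, pred: dicts mapping a statement id to one of the (assumed) labels
    "entailed", "refuted", "unknown"; every statement is assumed to receive
    exactly one predicted label.
    With keep_unknown_gold=False, statements whose *gold* label is "unknown"
    are dropped (the 2-way setting); predicting "unknown" for a statement
    that is kept still counts as an error.
    """
    ids = [i for i in gold if keep_unknown_gold or gold[i] != "unknown"]
    if not ids:
        return 0.0, 0.0, 0.0
    tp = sum(1 for i in ids if pred.get(i) == gold[i])
    # In single-label classification, every wrong prediction is one false
    # positive (for the predicted class) and one false negative (for the
    # gold class), so the micro-averaged P, R and F1 coincide.
    fp = fn = len(ids) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1


# Toy example: one correct prediction, one refuted statement predicted as
# unknown, and one gold-unknown statement predicted as entailed.
gold = {"s1": "entailed", "s2": "refuted", "s3": "unknown"}
pred = {"s1": "entailed", "s2": "unknown", "s3": "entailed"}
print(micro_prf(gold, pred, keep_unknown_gold=True))   # 3-way scores
print(micro_prf(gold, pred, keep_unknown_gold=False))  # 2-way scores
```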
The 2-way metric still penalizes misclassifying a refuted or entailed statement as unknown.

In Task B, the goal is to determine, for each cell and each statement, whether the cell is within the minimum set of cells needed to provide evidence for the statement ("relevant") or not ("irrelevant"). In other words, if the table were shown with all other cells blurred out, would this be enough for a human to reasonably determine that the table entails or refutes the statement? The evaluation calculates the recall and precision for each cell, with "relevant" cells as the positive category. For some statements, there may be multiple minimal sets of cells that can be used to determine statement entailment or refutation. In such cases, our dataset contains all of these versions. We compare the prediction to each ground-truth version and count the highest score.

We present our baseline experimental setup for each task below.

Task A: We employ the state-of-the-art Table-BERT implementation as proposed by Wenhu Chen and Wang (2020), using Table-BERT's best-performing configuration. We perform the following experiments: (1) apply the Table-BERT model out-of-the-box, (2) re-train the Table-BERT model with unknown statements and apply it to our test data, and (3) fine-tune the model in (2) with our manual and auto-generated data and apply it to our test data. We also compare these experiments with a majority baseline, with entailed as our majority class. The results are presented in Table 3. Applying the Table-BERT model out-of-the-box provides some improvement over the majority baseline. However, when the model is retrained with the previously missing unknown statements, performance improves for the three-way classification. Further fine-tuning the model with our training dataset (both manual and auto-generated) provides the best performance on the two-way F1-score.

Task B: We present the following two baselines for Task B: (1) a random baseline, where each cell is marked relevant or irrelevant at random, and (2) a simple word-match baseline, where a cell is marked relevant if it overlaps with the statement. The baseline results are presented in Table 4.

We present two leaderboards for each task. The official leaderboard is from participants who gave us detailed descriptions of their systems and affirmed that they did not incorporate any information from the test set that changed their final model. This is a more accurate representation of system quality. The unverified leaderboard is composed of participants who either did not give enough detail or affirmed that they incorporated some test data information in their final model. The participants did not have access to labels for the test data, but some teams altered their models upon examining the input data in the test set. Although we discouraged this approach, we present these results in the hope that they give some indication of how much improvement might be possible with access to the test inputs.

19 teams participated in Task A. Of the 14 teams on the official leaderboard, King001 obtained the highest score for Task A on both the 2-way (88.74) and 3-way (84.48) F-scores. However, the top three participants have comparable scores. All teams except the last two beat our best baseline in Table 3. The unverified leaderboard includes 5 teams and contains higher scores than the official leaderboard. However, due to the reasons outlined above, we cannot say with certainty that these results are reproducible. The full leaderboard results for all participants are in Table 5.
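Recalling the Task B evaluation and the simple word-match baseline described above, the following minimal sketch illustrates scoring predicted evidence cells against multiple minimal ground-truth versions and keeping the best match. The function names, the (row, column) cell encoding, and the toy data are our own assumptions, not the official scorer or baseline implementation.

```python
import re

def cell_f1(pred_cells, gold_cells):
    """F1 of predicted 'relevant' cells against one ground-truth version,
    with relevant cells as the positive class."""
    pred_cells, gold_cells = set(pred_cells), set(gold_cells)
    tp = len(pred_cells & gold_cells)
    precision = tp / len(pred_cells) if pred_cells else 0.0
    recall = tp / len(gold_cells) if gold_cells else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

def best_version_f1(pred_cells, gold_versions):
    """Score the prediction against every minimal evidence set provided for
    a statement and keep the highest F1."""
    return max(cell_f1(pred_cells, version) for version in gold_versions)

def word_match_baseline(table, statement):
    """Mark a cell relevant if any of its word tokens appears in the statement.

    `table` is assumed to be a dict mapping (row, column) -> cell text."""
    statement_tokens = set(re.findall(r"\w+", statement.lower()))
    return {coords for coords, text in table.items()
            if set(re.findall(r"\w+", str(text).lower())) & statement_tokens}


# Toy usage with two alternative minimal evidence sets for one statement.
table = {(0, 0): "Country", (0, 1): "Population",
         (1, 0): "France", (1, 1): "67 million"}
predicted = word_match_baseline(table, "France has a population of 67 million")
gold_versions = [{(1, 0), (1, 1)}, {(0, 1), (1, 0), (1, 1)}]
print(best_version_f1(predicted, gold_versions))  # best match over versions
```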
Task B is a much harder task, and fewer teams participated in this challenge. Of the 12 teams that participated, 8 are on the official leaderboard. The best score, 65.17 by BreakingBERT@IITK, is noticeably lower than the F-scores in Task A. Similarly to Task A, the results on the unverified leaderboard are considerably higher. The full leaderboard results for all participants are in Table 6.

We summarize the system details for all participating teams in Tables 7 (Task A) and 8 (Task B). In general, deep learning was the most popular approach used by the participants, e.g. BiLSTMs and transformer-based models such as TAPAS (Herzig et al., 2020), TaBERT (Yin et al., 2020) and Table-BERT (Wenhu Chen and Wang, 2020). One third of the participants employed some form of ensembling in their submission. Most of the participants used the manually generated ground truth in the development of their systems, with only one team not finding it useful. Further, a large percentage of participants used the auto-generated ground truth in their systems, with three teams not finding it helpful in their evaluation. In terms of external resources, a majority of the participants used external table understanding resources in their systems. Further, most of the participants employed pre-processing techniques such as acronym completion and removal of special characters. A substantial percentage of participants used techniques such as incorporating word embeddings and entity resolution. Finally, a large number of participants used TabFact (Wenhu Chen and Wang, 2020) as an external dataset.

We also conducted additional analyses on participant submissions on the official leaderboard. We show, through the average confusion matrix for Task A in Table 9, that the Unknown label was the most difficult. In fact, there were more unknown statements incorrectly labelled as entailed than were correctly categorized. Naturally, the statements with the lowest accuracy (< 25%) consist mainly of unknown statements. The individual team approaches (summarized from Tables 7 and 8) include the following:

- AttesTable (Varma et al., 2021): extended TAPAS to 3 classes by fine-tuning it; employed a novel way of synthesizing "unknown" samples.
- BreakingBERT@IITK (Jindal et al., 2021): ensembled TAPAS and TableBERT transformers in a hierarchical two-step method for 3-way classification (unknown vs. not unknown first).
- One team used the TAPAS model with the TabFact task and added unique features; it employed preprocessing tricks such as k-fold validation and character replacement, and performed hyperparameter tuning.
- BOUN (Köksal et al., 2021): applied text augmentation techniques for fact verification in tabular data.
- Kaushik Acharya (Acharya, 2021): parsed statements into candidate logical forms and mapped the results to handwritten rules, which were then executed over relevant cells (identified using string matching and universal dependency parsing).
- One team trained 20 instances of TaPas, SAT, and [...].
- Sattiy (Ruan et al., 2021): an ensemble of 6 fine-tuned pre-trained models on the augmented data with content-snapshot input; augmented the provided data by expanding the labels; used the Fast Gradient Method and added perturbations to the embedding layer to obtain a more stable word representation and a more general model.
- One team used ensemble models in a hierarchical two-step method: an 8-model ensemble to identify unknown statements and a 9-model ensemble to classify entailed/refuted, with different ensemble weights for various statement types (count, superlative, unique).
- Volta (Gautam et al., 2021): fine-tuned TAPAS that was pre-trained on TabFact; pre-processed tables to standardize multiple header rows into a single header; added numeric and enumerate features to TAPAS, as well as statistical information (such as counts) as a new row/column of the table.
Returning to the statement-level analysis: among the entailed and refuted statements, those that require numerical reasoning, such as range, count or comparisons, seemed to be the most challenging. The statements with the highest accuracy (> 95%) generally had most of their words or numbers exactly overlapping with those in the table. In Task B, of the statements with less than 30% evidence F-score, 86% had a refuted relationship. Conversely, of the statements with greater than 70% F-score, 74% had an entailed relationship. This shows that it is more difficult to find the most direct evidence proving that a statement is refuted by a table than it is to show the positive evidence that a particular statement is supported by it. We believe this is an interesting line of research for future studies.

In this paper, we presented the data and competition results for SEM-TAB-FACTS, Shared Task 9 of SemEval 2021. We created a large dataset via automated and crowdsourced fact verification as well as evidence finding for tables. Our 19 participating teams applied a variety of techniques to tackle this unique but very relevant problem. The evidence finding scores are still quite low and leave a large potential for improvement. Additionally, the test set may be expanded in future versions of this task with a combination of manually generated, natural, and automated statements.

References

Acharya. 2021. KaushikAcharya at SemEval-2021 Task 9: Candidate generation for fact verification over tables.
Aylin Woodward and Gal. 2020. What to know about the coronavirus outbreak in 17 charts and maps. Business Insider.
Bowman et al. 2015. A large annotated corpus for learning natural language inference.
Cafarella et al. 2018. Ten years of WebTables.
Board connections and crisis performance: Family, state, and political networks.
Dagan et al. 2005. The PASCAL recognising textual entailment challenge.
Devlin et al. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
East et al. 2018. The association between smoking and electronic cigarette use in a cohort of young people.
Gautam et al. 2021. Volta at SemEval-2021 Task 9: Statement verification and evidence finding with tables using TAPAS and transfer learning.
Gupta et al. 2020. INFOTABS: Inference on tables as semi-structured data.
Herzig et al. 2020. TaPas: Weakly supervised table parsing via pre-training.
Hoffswell and Liu. 2019. Interactive repair of tables extracted from PDF documents on mobile devices.
Jindal et al. 2021. BreakingBERT@IITK at SemEval-2021 Task 9: Statement verification and evidence finding with tables.
Khot et al. 2018. SciTail: A textual entailment dataset from science question answering.
Köksal et al. 2021. BOUN at SemEval-2021 Task 9: Text augmentation techniques for fact verification in tabular data.
Kwiatkowski et al. 2019. Natural Questions: A benchmark for question answering research.
Mihaylova et al. 2019. SemEval-2019 Task 8: Fact checking in community question answering forums.
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pretraining.
Pasupat and Liang. 2015. Compositional semantic parsing on semi-structured tables.
Rajpurkar et al. 2016. SQuAD: 100,000+ questions for machine comprehension of text.
Ruan et al. 2021. Sattiy at SemEval-2021 Task 9: Method for statement verification and evidence finding with tables based on multi-model ensemble.
Sun et al. 2016. Table cell search for question answering.
Varma et al. 2021. AttesTable at SemEval-2021 Task 9: Extending statement verification with tables for unknown class, and semantic evidence finding.
Wenhu Chen and Wang. 2020. TabFact: A large-scale dataset for table-based fact verification.
Williams et al. 2018. A broad-coverage challenge corpus for sentence understanding through inference.
Yin et al. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data.
Thifly Queen at SemEval-2021 Task 9: Statement verification and evidence finding with tables.