The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation

Adelani, David I.; Ruiter, Dana; Alabi, Jesujoba O.; Adebonojo, Damilola; Ayeni, Adesina; Adeyemi, Mofe; Awokoya, Ayodele; España-Bonet, Cristina

2021-03-15

Massively multilingual machine translation (MT) has shown impressive capabilities, including zero- and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to the lack of standardized evaluation datasets. In this paper, we present MENYO-20k, the first multi-domain parallel corpus with a special focus on clean orthography for Yorùbá-English, with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models, both on the heterogeneous test set and on its subdomains. Since these pre-trained models use huge amounts of data of uncertain quality, we also analyze the effect of diacritics, a major characteristic of Yorùbá, in the training data. We investigate how and when this training condition affects the final quality and intelligibility of a translation. Our models outperform massively multilingual models such as Google (+8.7 BLEU) and Facebook M2M (+9.1 BLEU) when translating to Yorùbá, setting a high quality benchmark for future research.

Neural machine translation (NMT) achieves high quality performance when large amounts of parallel sentences are available (Barrault et al., 2020). Large and freely available parallel corpora do exist for a small number of high-resource pairs and domains. However, for low-resource languages such as Yorùbá (yo), one can only find a few thousand parallel sentences online 1. In the best-case scenario, i.e. when some amount of parallel data exists, one can use the Bible, the most widely available resource for low-resource languages (Resnik et al., 1999), and JW300 (Agić and Vulić, 2019). Notice that both corpora belong to the religious domain and do not generalize well to popular domains such as news and daily conversations.

In this paper, we address this problem for the Yorùbá-English (yo-en) language pair by creating a multi-domain parallel dataset, MENYO-20k, which we make publicly available 2 with a CC BY-NC 4.0 licence. It is a heterogeneous dataset that comprises texts obtained from news articles, TED talks, movie and radio transcripts, science and technology texts, and other short articles curated from the web and translated by professional translators. Based on the resulting train-development-test split, we provide a benchmark for the yo-en translation task for future research on this language pair. This allows us to properly evaluate the generalization of MT models trained on JW300 and the Bible to new domains. We further explore transfer learning approaches that can make use of a few thousand sentence pairs for domain adaptation. Finally, we analyze the effect of Yorùbá diacritics on the translation quality of pre-trained MT models, discussing in detail how this affects the understanding of the translated text, especially in the en-yo direction.
We show the benefit of automatic diacritic restoration in addressing the problem of noisy diacritics.

The Yorùbá language is the third most spoken language in Africa, and it is native to southwestern Nigeria and the Republic of Benin. It is one of the national languages in Nigeria, Benin and Togo, and it is spoken across West Africa. The language belongs to the Niger-Congo family and is spoken by over 40 million native speakers (Eberhard et al., 2019). Yorùbá has 25 letters: its alphabet omits the Latin characters c, q, v, x and z and adds the characters ẹ, gb, ṣ and ọ. Yorùbá is a tonal language with three tones: low, middle and high. These tones are represented by the grave (e.g. "à"), optional macron (e.g. "ā") and acute (e.g. "á") accents respectively. The tones are applied to vowels and syllabic nasals, but the mid tone is usually ignored in writing. The tone information and underdots are important for the correct pronunciation of words. Articles written online, including news articles such as those of the BBC 3, often ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except when they are embedded in context. For example, èdè (language), edé (crayfish), ẹdẹ (a town in Nigeria), ẹ̀dẹ (trap) and ẹ̀dẹ̀ (balcony) are all mapped to ede without diacritics. Machine translation might be able to learn to disambiguate the meaning of words and generate correct English even from undiacritized Yorùbá. However, one cannot generate correct Yorùbá if the training data is undiacritized. One of the purposes of our work is to build a corpus with correct and complete diacritization in several domains.
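The ambiguity introduced by stripping diacritics, as in the ede example above, is easy to reproduce. The following minimal sketch (illustrative only, not part of the paper's pipeline) removes combining marks with Python's standard unicodedata module and shows that the five distinct words collapse to the same string:

```python
import unicodedata

def strip_diacritics(word: str) -> str:
    """Remove combining marks (tone accents and underdots) from a word."""
    decomposed = unicodedata.normalize("NFD", word)
    stripped = "".join(ch for ch in decomposed
                       if unicodedata.category(ch) != "Mn")  # 'Mn' = nonspacing mark
    return unicodedata.normalize("NFC", stripped)

# The five distinct words from the example above...
words = ["èdè", "edé", "ẹdẹ", "ẹ̀dẹ", "ẹ̀dẹ̀"]
print({w: strip_diacritics(w) for w in words})
# ...all map to the same undiacritized form 'ede'.
```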
The dataset collection was motivated by the inability of machine translation models trained on JW300 to generalize to new domains (∀ et al., 2020). Table 1 summarizes the texts collected, their source, the original language of the texts and the number of sentences from each source. We collected both parallel corpora freely available on the web (e.g. JW News) and monolingual corpora we are interested in translating (e.g. the TED talks) to build the MENYO-20k corpus. JW News is different from JW300 since it contains only news reports, and we manually verified that its sentences are not in JW300. A few sentences, such as the "short texts" in Table 1, were donated by professional translators. Our curation followed two steps: (1) translation of monolingual texts crawled from the web by professional translators; (2) verification of translation, orthography and diacritics for the parallel texts obtained online and translated. Texts obtained from the web that were judged by native speakers to be of high quality were verified once; the others were verified twice. The verification of translation and diacritics was done by professional translators and volunteers who are native speakers.

In addition to MENYO-20k, we use the Bible parallel corpus, which contains 30,760 parallel sentences. Also, we download the JW300 parallel corpus, which is available for a large variety of low-resource language pairs. It has parallel corpora from English to 343 languages containing religion-related texts. From the JW300 corpus, we get 459,871 sentence pairs already tokenized with Polyglot 4 (Al-Rfou, 2015).

We make use of additional monolingual data to train the semi-supervised MT model using back-translation. The Yorùbá monolingual texts are from the Yorùbá embedding corpus (Alabi et al., 2020), one additional book ("Ojowu") used with permission from the author, JW300-yo, and Bible-yo. We only use Yorùbá texts that are properly diacritized. In order to keep the topics in the Yorùbá and English monolingual corpora close, we choose two Nigerian news websites (The Punch Newspaper 5 and Voice of Nigeria 6) for the English monolingual corpus. The scraped news covers categories such as politics, business, sports and entertainment. Overall, we gather 475,763 monolingual sentences from these websites.

MENYO-20k is, on purpose, highly heterogeneous. In this section we analyze the differences between its (sub)domains and how they depart from the characteristics of the commonly used Yorùbá-English corpora for MT. Characterizing the domain of a dataset is a difficult task. Some metrics previously used need either large corpora or a characteristic vocabulary of the domain (Beyer et al., 2020; España-Bonet et al., 2020). Here, we do not have these resources, so we report the overlapping vocabulary between training and test sets and the perplexity observed on the test sets when a language model (LM) is trained on the MT training corpora.

In order to estimate the perplexities, we train a language model of order 5 with KenLM (Heafield, 2011) on each of the three training data subsets: JW300 (named C2 for short in tables), JW300+Bible (C3) and JW300+Bible+MENYO-20k (C4). Following standard NMT processing pipelines (see subsection 4.2), we perform byte-pair encoding (BPE) (Sennrich et al., 2016) on the corpora to avoid a large number of out-of-vocabulary tokens which, for small corpora, could alter the LM probabilities. For each of the resulting language models, we compute the average perplexity on the different domains of the test set to assess compositional domain differences (Figure 1, top). As expected, the average perplexity drops when adding more training data. Due to the limited domain of both JW300 and Bible, a literary style close to the Books domain, the decrease in perplexity is small when adding the Bible data to JW300, namely −8% (en) and −11% (yo). Interestingly, both JW300 and Bible also seem to be close to the TED domain (1st and 2nd lowest perplexities for en and yo respectively), which may be due to discourse/monologue content in both training corpora. Adding the domain-diverse MENYO-20k corpus largely decreases the perplexity across all domains, with a major decrease of −66% on IT (yo) and a smallest decrease of −1% on Books (en). The perplexity scores correlate negatively with the resulting BLEU scores in Table 3, with a Pearson's r of −0.367 (en) and −0.461 (yo), underlining that compositional domain differences between training and test subsets are the main factor behind differences in translation quality.

Further, to evaluate lexical domain differences, we calculate the vocabulary coverage (tokenized, not byte-pair encoded 7) of the different domains of the test set by each of the training subsets (Figure 1, bottom). The vocabulary coverage increases to a large extent when MENYO-20k is added. However, while vocabulary coverage and average perplexity have a strong (negative) correlation, r = −0.756 (en) and r = −0.689 (yo), a high perplexity does not necessarily mean low vocabulary coverage. For example, the vocabulary coverage of the IT domain by JW300 is high (91% for en) despite leading to high perplexities (765 for en). In general, vocabulary coverage of the test sets is less indicative of the resulting translation performance than perplexity, showing only a weak correlation with BLEU, with r = 0.150 and r = 0.281 for en and yo respectively.
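This kind of domain analysis can be reproduced with a few lines of Python. The sketch below is illustrative only: it assumes a 5-gram model has already been trained with KenLM's lmplz on the BPE-encoded C4 training data, and the file names, domain labels and BLEU values are placeholders rather than the paper's artifacts.

```python
# Illustrative sketch: per-domain LM perplexity, vocabulary coverage and their
# correlation with BLEU. File names, domain labels and the BLEU values below
# are placeholders, not the paper's actual artifacts.
import kenlm                        # KenLM Python bindings (https://github.com/kpu/kenlm)
from scipy.stats import pearsonr

lm = kenlm.Model("c4.yo.bpe.arpa")  # assumed: 5-gram LM trained with lmplz on C4

def avg_perplexity(path):
    """Average per-sentence perplexity on a BPE-encoded test file."""
    with open(path, encoding="utf-8") as f:
        sents = [line.strip() for line in f if line.strip()]
    return sum(lm.perplexity(s) for s in sents) / len(sents)

def vocab_coverage(train_path, test_path):
    """Share of test token types that also occur in the tokenized training data."""
    def types(path):
        with open(path, encoding="utf-8") as f:
            return {tok for line in f for tok in line.split()}
    train_types, test_types = types(train_path), types(test_path)
    return len(test_types & train_types) / len(test_types)

domains = ["ted", "news", "it", "book", "proverbs"]            # illustrative labels
ppl = [avg_perplexity(f"test.{d}.yo.bpe") for d in domains]
cov = [vocab_coverage("c4.yo.tok", f"test.{d}.yo.tok") for d in domains]
bleu = [16.1, 12.0, 11.0, 8.5, 9.0]                            # placeholder scores
print("r(ppl, BLEU) =", pearsonr(ppl, bleu)[0])                # expected to be negative
print("r(cov, BLEU) =", pearsonr(cov, bleu)[0])                # expected weaker, positive
```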
Supervised NMT We use the transformer-base architecture proposed by Vaswani et al. (2017) as implemented in Fairseq 8. We set the dropout to 0.3 and the batch size to 10,240 tokens. For optimization, we use Adam (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.98 and a learning rate of 0.0005. The learning rate schedule uses 4,000 warmup updates, and we train with a label-smoothed cross-entropy loss with a label-smoothing value of 0.1.

We use the best performing supervised system to translate the monolingual corpora described in Section 3, yielding 476k back-translations. This data is used together with the original parallel corpus to train a new system. The process is repeated until convergence.

Fine-tuning mT5 We examine a transfer learning approach by fine-tuning the massively multilingual model mT5 (Xue et al., 2021). mT5 was pre-trained on 6.3T tokens originating from Common Crawl in 101 languages (including Yorùbá). The approach has already shown competitive results on other languages (Tang et al., 2020). In our experiments, we use mT5-base, a model with 580M parameters. We transfer all the parameters of the model, including the sub-word vocabulary.

Publicly Available NMT Models We further evaluate the performance of three multilingual NMT systems: OPUS-MT (Tiedemann and Thottingal, 2020), Google Multilingual NMT (GMNMT) (Arivazhagan et al., 2019) and Facebook's M2M-100 with 1.2B parameters. All three pre-trained models are trained on over 100 languages. While GMNMT and M2M-100 are single multilingual models, OPUS-MT provides a separate model for each translation direction, e.g. yo-en. We generate the translations of the test set using the Google Translate interface 9 for GMNMT and Easy-NMT 10 for OPUS-MT. For M2M-100, we make use of Fairseq to translate the test set.

Data and Preprocessing For the MT experiments, we use the training part of our MENYO-20k corpus and two other parallel corpora, Bible and JW300 (Section 3). For tuning the hyperparameters, we use the development split of the multi-domain data, which has 3,397 sentence pairs, and for testing the test split with 6,633 parallel sentences. To ensure that all the parallel corpora are in the same format, we convert the Yorùbá texts in the JW300 dataset to Unicode Normalization Form Composition (NFC), the format of the Yorùbá texts in the Bible and the multi-domain dataset. Our preprocessing pipeline includes punctuation normalization, tokenization and truecasing. For punctuation normalization and truecasing, we use the Moses toolkit (Koehn et al., 2007), while for tokenization we use Polyglot, since it is the tokenizer used in JW300. We apply joint BPE with a vocabulary threshold of 20 and 40k merge operations.

Evaluation Metrics To evaluate the models, we use the tokenized BLEU (Papineni et al., 2002) score implemented in multi-bleu.perl and confidence intervals (p = 95%) computed with the scoring package 11. Since diacritics are applied to individual characters, we also use chrF, a character n-gram F1-score (Popović, 2015), for en-yo translations.

Automatic Diacritization In order to automatically diacritize the GMNMT and M2M-100 outputs for comparison, we train an automatic diacritization system using the supervised NMT setup. We use the Yorùbá side of MENYO-20k and JW300, which use consistent diacritization. We split the resulting corpus into train (458k sentences), test (517 sentences) and development (500 sentences) portions and apply a small BPE of 2k merge operations to the data. We apply noise to the diacritics by i) randomly removing a diacritic with probability p = 0.3 and ii) randomly replacing a diacritic with p = 0.3. The corrupted version of the corpus is used as the source data, and the NMT model is trained to reconstruct the original diacritics. On the test set, where the corrupted source has a BLEU (precision) of 19.0 (29.8), reconstructing the diacritics with our system leads to a BLEU (precision) of 87.0 (97.1), a major increase of +68.0 (+67.3) respectively.
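The corruption step just described can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the exact inventory of marks and the replacement policy are not specified in the paper, so they are assumptions here.

```python
import random
import unicodedata

# Combining marks used in Yorùbá orthography (assumed inventory):
# grave, acute, macron (tones) and the dot below (underdot).
MARKS = ["\u0300", "\u0301", "\u0304", "\u0323"]

def corrupt_diacritics(sentence: str, p_drop: float = 0.3, p_swap: float = 0.3) -> str:
    """Randomly remove or replace combining marks to simulate noisy diacritics."""
    out = []
    for ch in unicodedata.normalize("NFD", sentence):
        if unicodedata.category(ch) == "Mn":            # a combining mark
            r = random.random()
            if r < p_drop:
                continue                                 # drop the mark
            if r < p_drop + p_swap:                      # replace it with another mark
                ch = random.choice([m for m in MARKS if m != ch])
        out.append(ch)
    return unicodedata.normalize("NFC", "".join(out))

random.seed(0)
clean = "Mo fẹ́ràn èdè Yorùbá"
noisy = corrupt_diacritics(clean)
print(noisy)   # corrupted source sentence; the clean sentence is the training target
```

Training an NMT model on (noisy, clean) pairs produced in this way yields a diacritic restoration system that can post-process the undiacritized output of pre-trained models.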
Internal Comparison We train four basic NMT engines on different subsets of the training data: Bible (C1), JW300 (C2), JW300+Bible (C3) and JW300+Bible+MENYO-20k (C4). Further, we analyse the effect of fine-tuning for in-domain translation. For this, we fine-tune the converged model trained on JW300+Bible on MENYO-20k (C3+Transfer) and, similarly, we fine-tune the converged model trained on JW300+Bible+MENYO-20k on MENYO-20k (C4+Transfer). This yields six NMT models in total for each of en-yo and yo-en. Their translation performance is evaluated on the complete MENYO-20k test set (Table 2, top); in-domain translation is analyzed later in Table 3.

Table 2: Tokenized BLEU with confidence intervals (p = 95%) and chrF scores over the full test set for NMT models trained on different subsets of the training data Ci (top) and performance of external systems (bottom). For Yorùbá, we analyse the effect of diacritization: en-yo_p applies an in-house diacritizer to the translations obtained from pre-trained models and yo-en_u reports results using undiacritized Yorùbá texts as source sentences for training (see text). Top-scoring results per block are underlined and globally boldfaced.

As expected, the BLEU scores obtained after training on Bible only (C1) are low, with BLEU 2.2 and 1.4 for en-yo and yo-en respectively, which is due to its small amount of training data. Training on the larger JW300 corpus (C2) leads to higher scores of BLEU 7.5 (en-yo) and 9.6 (yo-en), while combining it with Bible (C3) only leads to a small increase of BLEU +0.6 and +1.2 for en-yo and yo-en respectively. When further adding MENYO-20k (C4) to the training data, the translation quality increases by +2.8 (en-yo) and +3.2 (yo-en). When, instead of adding MENYO-20k to the training pool, it is used to fine-tune the converged JW300+Bible model (C3+Transfer), the increase in BLEU over JW300+Bible is even larger for en-yo (BLEU +4.2), which results in an overall top-scoring model with BLEU 12.3. For yo-en, fine-tuning is slightly less effective (BLEU 13.2) than simply adding MENYO-20k to the training data (BLEU 14.0). As seen in subsection 3.3, perplexities and vocabulary coverage in English are not as distant among training/test sets as in Yorùbá, so the fine-tuning step is less effective there. When we use the MENYO-20k dataset to fine-tune the converged JW300+Bible+MENYO-20k model (C4+Transfer), we observe an increase in BLEU over JW300+Bible for both translation directions: +4.3 for en-yo and +3.8 for yo-en. This is the best performing system and the one we use for back-translation.

Table 2 also shows the performance of the semi-supervised system (C4+Transfer+BT). After two iterations of BT, we obtain an improvement of +3.6 BLEU points on yo-en. There is, however, no improvement in the en-yo direction, probably because a significant portion of our monolingual data is based on JW300. Finally, fine-tuning mT5 with MENYO-20k does not improve over fine-tuning only the JW300+Bible system on en-yo, but it does for yo-en. Again, multilingual systems are stronger when used for English, and we need the contribution of back-translation to outperform the generic mT5.
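For orientation, a minimal sketch of how the mT5 fine-tuning described in Section 4 could be set up with the Hugging Face transformers library is given below. The paper does not spell out its fine-tuning framework or hyperparameters, so the file name, column names and training settings here are assumptions, not the authors' configuration.

```python
# Minimal sketch of fine-tuning mT5-base on en->yo data with Hugging Face
# transformers/datasets. Hyperparameters, file names and column names are
# illustrative assumptions, not the paper's exact configuration.
from datasets import load_dataset
from transformers import (AutoTokenizer, MT5ForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

# Expect a TSV with "en" and "yo" columns (hypothetical file name).
data = load_dataset("csv", data_files={"train": "menyo20k.train.tsv"}, delimiter="\t")

def preprocess(batch):
    inputs = tokenizer(batch["en"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["yo"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train = data["train"].map(preprocess, batched=True, remove_columns=["en", "yo"])

args = Seq2SeqTrainingArguments(
    output_dir="mt5-en-yo",
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```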
External Comparison We evaluate the performance of the open source multilingual engines introduced in the previous section on the full test set (Table 2, bottom). OPUS-MT, which has no model available for en-yo, achieves a BLEU of 5.9 for yo-en. Thus, despite being trained on JW300 and the other yo-en corpora available on OPUS, it is largely outperformed by our NMT model trained on JW300 only (BLEU +3.7). This may be caused by some of the noisy corpora included in OPUS (like CCAligned), which can degrade the translation quality. Facebook's M2M-100 is also largely outperformed, even by our simple JW300 baseline, by 5 BLEU points in both translation directions. A manual examination of the en-yo LASER extractions used to train M2M-100 shows that they are very noisy, similar to the findings of Caswell et al. (2021), which explains the poor translation performance. Google, on the other hand, obtains impressive results with GMNMT for the yo-en direction, with BLEU 22.4. The opposite direction, en-yo, however, shows a significantly lower performance (BLEU 3.7), being outperformed even by our simple JW300 baseline (BLEU +3.8). The difference in performance between the two directions can be attributed to the highly multilingual but English-centric nature of the Google MNMT model. As already noticed by Arivazhagan et al. (2019), low-resourced language pairs benefit from multilinguality when translated into English, but improvements are minor when translating into the non-English language. For the en-yo direction, we also notice that many diacritics are lost in the Google translations, damaging the BLEU scores. Whether this drop in BLEU scores really affects understanding or not is analyzed via a human evaluation (Section 4.4).

Diacritization Diacritics are important for Yorùbá embeddings (Alabi et al., 2020). However, they are often ignored in popular multilingual models (e.g. multilingual BERT (Devlin et al., 2019)) and are not consistently available in training corpora or even test sets. In order to investigate whether diacritics in Yorùbá can help MT to disambiguate translation choices, we additionally train equivalent yo-en_u models on undiacritized JW300, JW300+Bible and JW300+Bible+MENYO-20k (Table 2, indicated as yo-en_u in comparison to the diacritized yo-en). Since one cannot generate correct Yorùbá text when training without diacritics, en-yo_u systems are not trained. Alternatively, we restore diacritics with our in-house diacritizer in the output of open source models that produce undiacritized text.

Results for yo-en are not conclusive. Diacritization is useful when only out-of-domain data is used in training (JW300, JW300+Bible 12 for testing on MENYO-20k). In this case, the domain of the training data is very different from the domain of the test set, and disambiguation is needed so as not to bias the whole lexicon towards the religious domain. When we include in-domain data (JW300+Bible+MENYO-20k), both models perform equally well, with BLEU 14.0 for both the diacritized and undiacritized versions. Diacritization is not needed when we fine-tune the model with data that shares the domain with the test set (JW300+Bible+Transfer): BLEU is 13.2 for the diacritized version vs. 13.9 for the undiacritized one. In practice, this means that, when the training data is far from the desired domain, investing work in a cleanly diacritized Yorùbá source input can help improve translation performance. When more data is present, diacritization becomes less important, since context is enough for disambiguation.

When Yorùbá is the target language, diacritization is always needed. An example is the low automatic scores GMNMT (BLEU 3.7, chrF 18.5) and M2M-100 (BLEU 3.3, chrF 15.8) reach for en-yo translation. Table 2, bottom (indicated as en-yo_p), shows the improvements after automatically restoring the diacritics, namely +6.9 BLEU and +15.9 chrF for GMNMT, and +3.5 and +9.9 for M2M-100. Even though the diacritizer helps, restored diacritics alone are not enough to reach state-of-the-art results according to automatic metrics: fine-tuning with high quality data (C4+Transfer+BT, chrF 34.6) is still better than using huge but unadapted systems.
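The paper obtains the M2M-100 translations with Fairseq; as a hedged alternative, the Hugging Face port of M2M-100 can be used to reproduce this baseline (and to produce the undiacritized output fed to the diacritizer above). The sketch below uses the smaller 418M checkpoint only to keep the example light; the paper evaluates the 1.2B model.

```python
# Sketch: translating en->yo with the Hugging Face port of M2M-100.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"   # the paper uses the 1.2B checkpoint via Fairseq
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en"
batch = tokenizer("The language is spoken by over 40 million people.", return_tensors="pt")
generated = model.generate(**batch, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```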
Domain Differences In order to analyze the domain-specific performance of the different NMT models, we evaluate each model on the different domain subsets of the test set (Table 3). The Proverbs subset is especially difficult in both directions, as it shows the lowest translation performance across all domains, i.e. a maximum BLEU of 9.04 (en-yo) and 8.74 (yo-en). This is due to the fact that proverbs often do not have literal counterparts in the target language, which makes them especially difficult to translate. The TED domain is the best performing test domain, with a maximum BLEU of 16.1 (en-yo) and 16.8 (yo-en). This can be attributed to the decent base coverage of the TED domain by JW300 and Bible (monologues), together with the additional TED domain data included in the MENYO-20k training split (507 sentence pairs). Also, most BLEU results are in line with the LM perplexity results and the conclusions drawn in subsection 3.3. Due to the closeness of Bible and JW300 to the Books domain, we see only small improvements of BLEU on this domain, i.e. +0.2 (en-yo) and +0.7 (yo-en), when adding MENYO-20k to the JW300+Bible (C3) training data pool. On the other hand, the IT domain benefits the most from the additional MENYO-20k data, with major gains of BLEU +5.5 (en-yo) and +4.6 (yo-en), owing to the introduction of IT domain content in the MENYO-20k training data (∼1k sentence pairs), which is completely lacking in JW300 and Bible.

To gain a better understanding of the quality of the translation models and the intelligibility of the translations, we compare three top performing models in en-yo and yo-en. For en-yo, we use C4+Transfer, C4+Transfer+BT and GMNMT. Although GMNMT is not the third best system according to BLEU (Table 2), we are interested in studying the effect of diacritics on translation quality and intelligibility. For yo-en, we choose C4+Transfer+BT, mT5+Transfer and GMNMT, the three models with the highest BLEU scores in Table 2. We ask seven native speakers of Yorùbá who are fluent in English to rate the adequacy, fluency and diacritic accuracy of a subset of test sentences. Four of them rated the en-yo translation direction and the others rated the opposite direction, yo-en. We randomly select 100 sentences from the outputs of the six systems and duplicate 5 of them to check the intra-rater consistency of our raters. Each annotator is then asked to rate 105 sentences per system on a 1-5 Likert scale for each of the features (for English, diacritic accuracy cannot be evaluated).

Table 4: Human evaluation for en-yo and yo-en MT models (C4+Transfer (C4+Trf), C4+Trf+BT, mT5+Trf, and GMNMT) in terms of adequacy, fluency and diacritics prediction accuracy. Ratings that are significantly different from GMNMT are indicated by * (t-test, p < 0.05).

We calculate the agreement among raters using Krippendorff's α. The inter-rater agreement per task is 0.44 (adequacy), 0.40 (fluency) and 0.87 (diacritics) for Yorùbá, and 0.71 (adequacy) and 0.55 (fluency) for English. We observe that many raters often assign the same fluency score (e.g. 4 or 5) to many sentences, which results in a lower Krippendorff's α for fluency. The intra-rater agreement values for the four Yorùbá raters are 0.75, 0.91, 0.66 and 0.87, while those for the three English raters across all evaluation tasks are 0.92, 0.71 and 0.81.
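Krippendorff's α itself can be computed with the krippendorff Python package. A minimal sketch, assuming the ratings are arranged as a raters-by-sentences matrix; the toy values below are made up and missing ratings are marked with NaN:

```python
# Illustrative sketch: Krippendorff's alpha for ordinal Likert ratings.
# The ratings below are made-up toy values, not the study's data.
import numpy as np
import krippendorff   # pip install krippendorff

# One row per rater, one column per rated sentence; np.nan marks missing ratings.
ratings = np.array([
    [4, 5, 3, 4, 2, np.nan],
    [4, 4, 3, 5, 2, 3],
    [5, 5, 3, 4, 1, 3],
    [4, 5, 2, 4, 2, 3],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```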
For yo-en, our evaluators rated GMNMT as the best on average in terms of adequacy (4.02 out of 5) and fluency (4.71), followed by mT5+Transfer, which shows that fine-tuning massively multilingual models also benefits low-resource MT, especially in terms of fluency (4.39). This contradicts the automatic evaluation, which prefers C4+Transfer+BT over mT5+Transfer. For en-yo, GMNMT is still the best in terms of adequacy (3.69), followed by C4+Transfer+BT, but it performs the worst in terms of fluency and diacritics prediction accuracy. So, the bad quality of the diacritics affects fluency and drastically penalises automatic metrics such as BLEU, but it does not interfere with the intelligibility of the translations, as shown by the good average adequacy rating. Automatic diacritic restoration for Yorùbá (Orife, 2018; Orife et al., 2020) can therefore be very useful to improve translation quality. C4+Transfer and C4+Transfer+BT perform similarly, with high fluency scores and a near perfect score in diacritics prediction accuracy (4.91 ± 0.1), as a result of being trained on cleaned corpora.

In order to make MT available to a broader range of linguistic communities, recent years have seen an effort to create new parallel corpora for low-resource language pairs. Recently, Guzmán et al. (2019) provided novel supervised, semi-supervised and unsupervised benchmarks for the Indo-Aryan languages {Sinhala, Nepali}-English on an evaluation set of professionally translated sentences sourced from the Sinhala, Nepali and English Wikipedias. Novel parallel corpora focusing on African languages cover South African languages ({Afrikaans, isiZulu, Northern Sotho, Setswana, Xitsonga}-English) (Groenewald and Fourie, 2009), with MT benchmarks evaluated in Martinus and Abbott (2019), as well as multi-domain (News, Wikipedia, Twitter, Conversational) Amharic-English (Hadgu et al., 2020) and multi-domain (Government, Wikipedia, News, etc.) Igbo-English (Ezeani et al., 2020). Further, the LORELEI project (Strassel and Tracey, 2016) has created parallel corpora for a variety of low-resource language pairs, including a number of Niger-Congo languages such as {isiZulu, Twi, Wolof, Yorùbá}-English. However, these are not open-access. On the contrary, Masakhane (∀ et al., 2020) is an ongoing participatory project focused on creating new freely-available parallel corpora and MT benchmark models for a large variety of African languages.
While creating parallel resources for low-resource language pairs is one approach to increase the number of linguistic communities covered by MT, it does not scale to the sheer number of possible language combinations. Another research line focuses on low-resource MT from the modeling side, developing methods that allow an MT system to learn the translation task with smaller amounts of supervisory signal. This is done by exploiting the weaker supervisory signals in larger amounts of available monolingual data, e.g. by identifying additional parallel data in monolingual corpora (Artetxe and Schwenk, 2019; Schwenk et al., 2020, 2021) or comparable corpora (Ruiter et al., 2019, 2021), or by including auto-encoding (Currey et al., 2017) or language modeling tasks (Gulcehre et al., 2015; Ramachandran et al., 2017) during training. Low-resource language pairs can also benefit from high-resource languages through transfer learning (Zoph et al., 2016), e.g. in a zero-shot setting (Johnson et al., 2017), by using pre-trained language models (Lample and Conneau, 2019), or by finding an optimal path of pivoting through related languages (Leng et al., 2019). By adapting the model hyperparameters to the low-resource scenario, Sennrich and Zhang (2019) were able to achieve impressive improvements over a standard NMT system.

We present MENYO-20k, a novel en-yo multi-domain parallel corpus for machine translation and domain adaptation. By defining a standardized train-development-test split of this corpus, we provide several NMT benchmarks for future research on the en-yo MT task. Further, we analyze the domain differences within the MENYO-20k corpus and the translation performance of NMT models trained on religious corpora, such as JW300 and Bible, across the different domains. We show that, despite consisting of only 10k parallel sentences, adding the MENYO-20k train split to JW300 and Bible largely improves the translation performance across all domains. Further, we train a variety of supervised, semi-supervised and fine-tuned MT benchmarks on the available en-yo corpora, creating a high quality baseline that outperforms current massively multilingual models, e.g. Google MNMT by +8.7 BLEU (en-yo). This shows the positive impact of using smaller amounts of high-quality data that take language-specific characteristics, i.e. diacritics, into account (e.g. C4+Transfer, BLEU 12.4) over massive amounts of noisy data (Facebook M2M-100, BLEU 3.3). Apart from obtaining low BLEU scores, models trained on low-quality diacritics (Google MNMT) suffer especially in fluency in our human evaluation, while still being intelligible to the reader. While correctly diacritized data is vital for translating en-yo, it only has an impact on yo-en translation quality when there is a domain mismatch between training and testing data.

References

JW300: A wide-coverage parallel corpus for low-resource languages
Polyglot: A massive multilingual natural language processing pipeline
Massive vs. curated embeddings for low-resourced languages: the case of Yorùbá and Twi
Massively multilingual neural machine translation in the wild: Findings and challenges
Margin-based parallel corpus mining with multilingual sentence embeddings
Proceedings of the Fifth Conference on Machine Translation
Embedding space correlation as a measure of domain similarity
Copied monolingual data improves low-resource neural machine translation
BERT: Pre-training of deep bidirectional transformers for language understanding
Ethnologue: Languages of the world, twenty-second edition
Tailoring and Evaluating the Wikipedia for in-Domain Comparable Corpora Extraction
Igbo-English machine translation: An evaluation benchmark
Beyond English-centric multilingual machine translation
Participatory research for low-resourced machine translation: A case study in African languages
Introducing the Autshumato integrated translation environment
On using monolingual corpora in neural machine translation
The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English
Evaluating Amharic Machine Translation
KenLM: Faster and smaller language model queries
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Adam: A method for stochastic optimization
Moses: Open source toolkit for statistical machine translation
Cross-lingual language model pretraining
Unsupervised pivot translation for distant languages
A focus on neural machine translation for African languages
Sequence-to-Sequence Learning for Automatic Yorùbá Diacritic Restoration
Fairseq: A fast, extensible toolkit for sequence modeling
BLEU: a method for automatic evaluation of machine translation
chrF: character n-gram F-score for automatic MT evaluation
Unsupervised pretraining for sequence to sequence learning
The Bible as a Parallel Corpus: Annotating the 'Book of 2000 Tongues'
Self-supervised neural machine translation
Integrating Unsupervised Data Generation into Self-Supervised Neural Machine Translation for Low-Resource Languages
WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia
CCMatrix: Mining billions of high-quality parallel sentences on the web
Neural machine translation of rare words with subword units
Revisiting low-resource neural machine translation: A case study
LORELEI language packs: Data, tools, and resources for technology development in low resource languages
Multilingual translation with extensible multilingual pretraining and finetuning
OPUS-MT - building open translation services for the world
Attention is all you need
mT5: A massively multilingual pre-trained text-to-text transformer
Transfer learning for low-resource neural machine translation

Acknowledgments

We would like to thank Adebayo O. Adeojo, Babunde O. Popoola, Olumide Awokoya, Modupe Olaniyi, Princess Folasade, Akinade Idris, Tolulope Adelani, Oluyemisi Olaose, and Benjamin Ajibade for their support in translating English sentences to Yorùbá, verifying Yorùbá diacritics, and carrying out the human evaluation. We thank Bayo Adebowale and 'Dele 'Adelani for donating their books ("Out of His Mind" and "Ojowu"). We thank Iroro Orife for providing the Bible corpus and the Yorùbá Proverbs corpus. We thank Marine Carpuat, Mathias Müller, and the entire Masakhane NLP community for their feedback. We are also thankful to Damyana Gateva for evaluations with open-source models. This project was funded by the AI4D language dataset fellowship (Siminyu et al., 2021) 13.
DIA acknowledges the support of the EU-funded H2020 project COMPRISE under grant agreement No. 3081705. CEB is partially funded by the German Federal Ministry of Education and Research under the funding code 01IW20010. The authors are responsible for the content of this publication.

A.1 Dataset Collection for MENYO-20k

Table 5 summarizes the texts collected, their source, the original language of the texts and the number of sentences from each source. We collected both parallel corpora freely available on the web and monolingual corpora we are interested in translating (e.g. the TED talks) to build the MENYO-20k corpus. A few sentences, such as the "short texts" in Table 5, were donated by professional translators. We provide a more specific description of the data sources below.

Jehovah Witness News We collected only parallel "newsroom" (or "Ìròyìn" in Yorùbá) articles from the JW.org website in order to gather texts that are not in the religious domain. As shown in Table 5, we collected 3,508 sentences from the website, and we manually confirmed that the sentences are not in JW300. The content of the news mostly reports persecutions of Jehovah's Witness members around the world, and may sometimes contain Bible verses to encourage believers.

Voice of Nigeria (VON) News We extracted parallel texts from the VON website, a Nigerian government news website that supports seven languages with a wide audience in the country (Arabic, English, Fulfulde, French, Hausa, Igbo, and Yorùbá). Despite the large availability of texts, the quality of the Yorùbá texts is very poor; one can see several issues with orthography and diacritics. We asked translators and other native speakers to verify and correct each sentence.

Global Voices News We obtained parallel sentences from the Global Voices website 14 (https://globalvoices.org), contributed by journalists, writers and volunteers. The website supports over 50 languages, with content mostly translated from English, French, Portuguese or Spanish.

TED Talks Transcripts We selected 28 English TED talk transcripts mostly covering issues around Africa, such as health, gender equality, corruption, wildlife, and social media, e.g. "How young Africans found a voice on Twitter" (see Table 6 for the selected TED talk titles). The articles were translated by a professional translator and verified by another one.

Proverbs Yorùbá has many proverbs and culturally significant words of wisdom that are often referenced by elderly people. We obtained 2,700 sentences of parallel yo-en proverbs from Twitter 15.

Book With permission from the author (Bayo Adebowale) of the book "Out of His Mind", originally published in English, we translated the entire book to Yorùbá and verified the diacritics.

Software Localization Texts (Digital) We obtained translations of some software documentation, such as Kolibri 16, from past projects of professional translators. These texts include highly technical terms.

Movie Transcript We obtained the translation of a Nigerian movie, "Unsane", on YouTube from the past project of a professional translator. The language of the movie is Yorùbá and English, with the transcription also provided in English.

Other Short Texts Other short texts, such as the UDHR, the Creative Commons License, radio transcripts, and similar material, were obtained from professional translators and online sources. Table 1 summarizes the number of sentences obtained from each source.