title: Computing the original eBWT faster, simpler, and with less memory
authors: Boucher, Christina; Cenzato, Davide; Lipták, Zsuzsanna; Rossi, Massimiliano; Sciortino, Marinella
date: 2021-06-21

Mantaci et al. [TCS 2007] defined the eBWT to extend the definition of the BWT to a collection of strings. Since its introduction, however, the term has been used more generally to describe any BWT of a collection of strings, and the fundamental property of the original definition (i.e., the independence from the input order) is frequently disregarded. In this paper, we propose a simple linear-time algorithm for the construction of the original eBWT, which does not require the preprocessing of Bannai et al. [CPM 2021]. As a byproduct, we obtain the first linear-time algorithm for computing the BWT of a single string that uses neither an end-of-string symbol nor Lyndon rotations. We combine our new eBWT construction with a variation of prefix-free parsing to allow for scalable construction of the eBWT. We evaluate our algorithm (pfpebwt) on sets of human chromosome 19, Salmonella, and SARS-CoV2 genomes, and demonstrate that it is the fastest method for all collections, with a maximum speedup of 7.6x over the second-best method. The peak memory is at most 2x larger than that of the second-best method. Compared with methods that, like our algorithm, are able to report suffix array samples, we obtain a 57.1x improvement in peak memory. The source code is publicly available at https://github.com/davidecenzato/PFP-eBWT. In the last several decades, the number of sequenced human genomes has been growing at an unprecedented pace. In 2015 the number of sequenced genomes was doubling every 7 months [40], a pace that has not slowed into the current decade.
The plethora of resulting sequencing data has expanded our knowledge of the biomarkers responsible for human disease and phenotypes [5, 43, 42], the evolutionary history between and among species [38], and will eventually help realize the personalization of healthcare [3]. However, the amount of data for any individual species is large enough that it poses challenges with respect to storage and analysis. One of the most well-known and widely-used methods for compressing and indexing data in bioinformatics is the Burrows-Wheeler Transform (BWT), a text transformation that compresses the input in a manner that also allows for efficient substring queries. Not only can it be constructed in linear time in the length of the input, but it is also reversible, meaning the original input can be recovered from its compressed form. The BWT is formally defined over a single input string; thus, in order to define and construct it for one or more strings, the input strings need to be concatenated or modified in some way. In 2007, Mantaci et al. [30] presented a formal definition of the BWT for a multiset of strings, which they call the extended Burrows-Wheeler Transform (eBWT). It is a bijective transformation that sorts the cyclic rotations of the strings of the multiset according to the ω-order relation, an order defined by considering infinite iterations of each string, which is different from the lexicographic order. Since its introduction, several algorithms have been developed that construct the BWT of a collection of strings for various types of biological data, including short sequence reads [6, 4, 11, 14, 27, 13, 1, 19, 36, 37], protein sequences [44], metagenomic data [18], and longer DNA sequences such as long sequence reads and whole chromosomes [25]. However, we note that in the development of some of these methods the underlying definition of the eBWT was loosened.
For example, ropebwt2 [25] tackles the similar problem of building what its authors describe as the FM-index for a multiset of long sequence reads; however, it does not construct the suffix array (SA) or SA samples, and it also requires that the sequences be delimited by separator symbols. Similarly, gsufsort [27] and egap [14] construct the BWT for a collection of strings but do not construct the eBWT according to its original definition. gsufsort [27] requires the collection of strings to be concatenated in such a manner that the strings are delimited by separator symbols that have an augmented relative order among them. egap [14], which was developed to construct the BWT and LCP for a collection of strings in external memory, uses the gSACA-K algorithm to construct the suffix array of the concatenated input using an additional O((α + 1) log n) bits, and then constructs the BWT for the collection from the resulting suffix array. Lastly, we note that there exist a number of methods for constructing the BWT for a collection of short sequence reads, including ble [6], BCR [4], G2BWT [13], and egsa [28]; however, these methods make implicit or explicit use of end-of-string symbols appended to the strings in the collection. For an example of the effects of these manipulations, see Section 2, and [10] for a more detailed study. We present an efficient algorithm for constructing the eBWT that preserves the original definition of Mantaci et al. [30]; thus, it does not impose any ordering of the input strings or delimiter symbols. It is an adaptation of the well-known Suffix Array Induced Sorting (SAIS) algorithm of Nong et al. [33], which computes the suffix array of a single string T ending with an end-of-string character $. Our adaptation is similar to the algorithm proposed by Bannai et al. [2] for computing the BBWT, which can also be used for computing the eBWT after linear-time preprocessing of the input strings.
The key change in our approach is based on the insight that the properties necessary for applying Induced Sorting are valid also for the ω-order between different strings. As a result, it is not necessary that the input strings be Lyndon words, or that their relative order be known at the beginning. Furthermore, our algorithmic strategy, when applied to a single string, provides the first linear-time algorithm for computing the BWT of the string that uses neither an end-of-string symbol nor Lyndon rotations. We then combine our new eBWT construction with a variation of a preprocessing technique called prefix-free parsing (PFP). PFP was introduced by Boucher et al. [8] for building the (run-length encoded) BWT of large and highly repetitive input text. Since its original introduction, it has been extended to construct the r-index [24], been applied as a preprocessing step for building grammars [15], and been used as a data structure itself [7]. Briefly, PFP is a one-pass algorithm that divides the input into overlapping variable-length phrases with delimiting prefixes and suffixes, which, in effect, leads to the construction of what is referred to as the dictionary and the parse of the input. It follows that the BWT can be constructed in space proportional to the size of the dictionary and the parse, which is expected to be significantly smaller than linear for repetitive text. In our approach, prefix-free parsing is applied to obtain a parse that is a multiset of cyclic strings (cyclic prefix-free parse), on which our eBWT construction is applied. We implement our approach (called pfpebwt), measure the time and memory required to build the eBWT for sets of increasing size of chromosome 19, Salmonella, and SARS-CoV2 genomes, and compare this to that required by gsufsort, ropebwt2, and egap.
We show that pfpebwt is consistently faster and uses less memory than gsufsort and egap on reasonably large input (≥ 4 copies of chromosome 19, ≥ 50 Salmonella genomes, and ≥ 25,000 SARS-CoV2 genomes). Although ropebwt2 uses less memory than pfpebwt on large input, pfpebwt is 7x more efficient in terms of wall clock time, and 2.8x in terms of CPU time. Moreover, pfpebwt is capable of reporting SA samples in addition to the eBWT with a negligible increase in time and memory [24], whereas ropebwt2 does not have that ability. If we compare pfpebwt only with methods that are able to report SA samples in addition to the eBWT (e.g., egap and gsufsort), we obtain a 57.1x improvement in peak memory. A string T = T[1..n] is a sequence of characters T[1]···T[n] drawn from an ordered alphabet Σ of size σ. We denote by |T| the length n of T, and by ε the empty string, the only string of length 0. Given two integers 1 ≤ i ≤ j ≤ n, we denote by T[i..j] the substring T[i]···T[j] of T. Given a string T = T[1..n] and an integer k, we denote by T^k the kn-length string TT···T (k-fold concatenation of T), and by T^ω the infinite string TT··· obtained by concatenating an infinite number of copies of T. A string T is called primitive if T = S^k implies T = S and k = 1. For any string T, there exist a unique primitive word S and a unique integer k such that T = S^k. We refer to S = S[1..n/k] as root(T) and to k as exp(T). Thus, T = root(T)^exp(T). We denote by ≺ω the ω-order [16, 30], defined as follows: for two strings S and T, S ≺ω T if root(S) = root(T) and exp(S) < exp(T), or S^ω <lex T^ω (this implies root(S) ≠ root(T)). One can verify that the ω-order relation is different from the lexicographic one. For instance, for S = ab and T = aba we have S <lex T, yet T ≺ω S, since T^ω = abaabaaba··· <lex ababab··· = S^ω. The string S is a conjugate of the string T if S = T[i..n]T[1..i − 1] for some i ∈ {1, . . . , n} (S is also called the i-th rotation of T). The conjugate S is also denoted conj_i(T). It is easy to see that T is primitive if and only if it has n distinct conjugates.
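The ω-order comparison above can be sketched in a few lines. The following is a minimal illustration (the function name and the comparison length |S| + |T|, which suffices by the Fine and Wilf periodicity theorem, are our own choices, not part of the paper):

```python
def omega_less(s, t):
    """True iff s precedes t in the omega-order: either root(s) = root(t)
    and exp(s) < exp(t), or s^omega <lex t^omega."""
    n, m = len(s), len(t)
    L = n + m  # by Fine and Wilf, any difference shows up within n + m chars
    sx = (s * (L // n + 1))[:L]
    tx = (t * (L // m + 1))[:L]
    if sx != tx:
        return sx < tx  # infinite powers differ, so roots differ
    # equal infinite powers: same root, so the smaller exponent (shorter
    # string) comes first
    return n < m
```

For instance, `omega_less("aba", "ab")` holds even though ab <lex aba, illustrating that the two orders differ.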
A Lyndon word is a primitive string which is lexicographically smaller than all of its conjugates. For a string T, the conjugate array CA_T of T is the array with CA_T[i] = j if conj_j(T) is the i-th conjugate of T in lexicographic order. The Burrows-Wheeler Transform [9] of T, denoted BWT, is a reversible transformation extensively used in data compression. Given a string T, BWT(T) is a permutation of the letters of T which equals the last column of the matrix of the lexicographically sorted conjugates of T. The mapping T → BWT(T) is reversible, up to rotation. It can be made uniquely reversible by adding to BWT(T) an index indicating the rank of T in the lexicographic order of all of its conjugates. Given BWT(T) and an index i, the original string T can be computed in linear time [9]. The BWT itself can be computed from the conjugate array, since for all i = 1, . . . , n, BWT(T)[i] = T[CA_T[i] − 1] (where T[0] is taken to be T[n]). It should be noted that in many applications, it is assumed that an end-of-string character (usually denoted $), which is not an element of Σ, is appended to the string; this character is assumed to be smaller than all characters from Σ. Since T$ has exactly one occurrence of $, BWT(T$) is now uniquely reversible, without the need for the additional index i, since T$ is the unique conjugate ending in $. Moreover, adding a final $ makes the string primitive, and $T is a Lyndon word. Therefore, computing the conjugate array becomes equivalent to computing the suffix array, since CA_{T$}[i] = SA_{T$}[i]. Thus, applying one of the linear-time suffix-array construction algorithms [32] leads to linear-time computation of the BWT. When no $-character is appended to the string, the situation is slightly more complex. For primitive strings T, first the Lyndon conjugate of T has to be computed (in linear time, [39]) and then a linear-time suffix array algorithm can be employed [17]. For strings T which are not primitive, one can take advantage of the following well-known property of the BWT: let T = S^k and BWT(S) = U[1..m]; then BWT(T) = U[1]^k U[2]^k ··· U[m]^k (see [31]).
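The power property above can be checked against a brute-force BWT computed directly from the sorted conjugates. The sketch below (quadratic, for illustration only; function names are ours, not the paper's) also returns the index of T among its sorted conjugates:

```python
def bwt_conjugates(t):
    """BWT of t as the last column of the lexicographically sorted conjugate
    matrix, together with the rank of t itself among its sorted conjugates
    (the index i that makes the transform uniquely reversible)."""
    n = len(t)
    # stable sort of rotation start positions by the rotation they induce
    conjs = sorted(range(n), key=lambda i: t[i:] + t[:i])
    # last character of each sorted conjugate = character preceding its start
    bwt = "".join(t[(i - 1) % n] for i in conjs)
    return bwt, conjs.index(0)
```

For T = S^k with S = ab and k = 2, this yields BWT(abab) = bbaa, i.e., each character of BWT(ab) = ba repeated k = 2 times, as the property predicts.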
Thus, it suffices to compute the BWT of root(T). The root of T can be found by computing the border array b of T: T is a power if and only if n/(n − b[n]) is an integer, which is then also the length of root(T). The border array can be computed, for example, by the preprocessing phase of the KMP algorithm for pattern matching [21], in linear time in the length of T. From the GCA of the example multiset M we can compute eBWT(M) = CTCCACAGAACTAAGCCGCGG, with index set {11, 12, 18}. Note that, e.g., the conjugate conj_8(T_2) comes before conj_1; the full list of conjugates is in Appendix A. Remark 2. Note that if end-of-string symbols are appended to the strings of the collection, the output of the eBWT could be quite different. Note that while in the original definition of the eBWT [30] the multiset M was assumed to contain only primitive strings, our definition is more general and allows also for non-primitive strings. The following lemma shows how to construct the generalized conjugate array GCA_M of a multiset M of strings (not necessarily primitive), once we know the generalized conjugate array GCA_R of the multiset R of the roots of the strings in M. It follows straightforwardly from the fact that equal conjugates will end up consecutively in the GCA: the generalized conjugate array GCA_M is obtained by replacing each entry of GCA_R with the block of entries for the corresponding equal conjugates of M. From now on we will assume that the multiset M = {T_1, . . . , T_m} consists of m primitive strings. In this section, we describe our algorithm to compute the eBWT of a multiset of strings M. We will assume that all strings in M are primitive, since we can use Lemma 3 to compute the eBWT of M otherwise. Our algorithm is an adaptation of the well-known SAIS algorithm of Nong et al. [33], which computes the suffix array of a single string T ending with an end-of-string character $. Our adaptation is similar to that of Bannai et al. [2] for computing the BBWT, which can also be used for computing the eBWT.
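The root computation described above can be sketched with the KMP failure function (a minimal illustration; function names are ours, not those of pfpebwt):

```python
def border_array(t):
    """KMP preprocessing: b[i] = length of the longest proper border
    (prefix that is also a suffix) of t[:i], for 0 <= i <= len(t)."""
    n = len(t)
    b = [0] * (n + 1)
    k = 0
    for i in range(1, n):
        while k > 0 and t[i] != t[k]:
            k = b[k]
        if t[i] == t[k]:
            k += 1
        b[i + 1] = k
    return b

def root_and_exp(t):
    """Return (root(t), exp(t)): t is a power iff its smallest period
    p = n - b[n] divides n, and then root(t) = t[:p]."""
    n = len(t)
    p = n - border_array(t)[n]
    if n % p == 0:
        return t[:p], n // p
    return t, 1  # t is primitive
```

For example, abaaba has border array value b[6] = 3, so p = 3 divides 6 and root(abaaba) = aba with exponent 2.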
Even though our algorithm does not improve on the latter asymptotically (both are linear time), it is significantly simpler, since it does not require first computing and sorting the Lyndon rotations of the input strings. In the following, we assume some familiarity with the SAIS algorithm, focusing on the differences between our algorithm and the original SAIS. Detailed explanations of SAIS can be found in the original paper [33], or in the books [34, 26]. The main differences between our algorithm and the original SAIS algorithm are: (1) we are comparing conjugates rather than suffixes, (2) we have a multiset of strings rather than just one string, (3) the comparison is done w.r.t. the ω-order rather than the lexicographic order, and (4) the strings are not terminated by an end-of-string symbol. We need the following definition, which is the cyclic version of the definition in [33] (where S stands for smaller, L for larger, and LMS for leftmost-S). Since T is primitive, no two conjugates are equal, and in particular, no two adjacent conjugates are equal. Therefore, the type of every position of T is defined. Definition. Let T be a primitive string of length at least 2, and 1 ≤ i ≤ |T|. Position i of T is called (cyclic) S-type if conj_i(T) <lex conj_{i+1}(T), and (cyclic) L-type if conj_i(T) >lex conj_{i+1}(T). An S-type position i is called (cyclic) LMS if i − 1 is L-type (where we view T cyclically). In the running example, where LMS-positions are marked with a *, the LMS-substrings are ACA, AACGGTA, CGGCA, and ACGTC. The LMS-prefix of the conjugate conj_7(T_1) = CGGTACAA is CGGTA. Lemma 6 (Cyclic type properties). Let T be a primitive string of length at least 2. Let a_1 be the smallest and a_σ the largest character of the alphabet. Then the following hold, where T is viewed cyclically: if T[i] < T[i+1], then i is S-type; if T[i] > T[i+1], then i is L-type; if T[i] = T[i+1], then i has the same type as i+1; moreover, every position of character a_1 is S-type, and every position of character a_σ is L-type. Proof (of type assignment). Once the type of one position is known, the assignment can be done in one cyclic pass over T from right to left, by Lemma 6. Therefore, it suffices to find the type of one single position.
Any position of character a_1 or of character a_σ will do; alternatively, any position i such that T[i + 1] ≠ T[i], again by Lemma 6. Since T is primitive and has length at least 2, the latter must exist and can be found in at most one pass over T. Let N be the total length of the strings in M. The algorithm constructs an initially empty array A of size N, which, at termination, will contain the GCA of M. The algorithm also returns the set I of indices in A representing the positions of the strings of M. The overall procedure consists of the following steps:

Algorithm SAIS-for-eBWT
Step 1: remove strings of length 1 from M (these will be added back at the end).
Step 2: assign cyclic types to all positions of strings from M.
Step 3: use procedure Induced Sorting to sort cyclic LMS-substrings.
Step 4: assign names to cyclic LMS-substrings; if all distinct, go to Step 6.
Step 5: recurse on the new string multiset M', returning array A'; map A' back to A.
Step 6: use procedure Induced Sorting to sort all positions in M, add length-1 strings in their respective positions, return (A, I).

At the heart of the algorithm is the procedure Induced Sorting of [33] (Algorithms 3.3 and 3.4), which is used once to sort the LMS-substrings (Step 3), and once to induce the order of all conjugates from the correct order of the LMS-positions (Step 6), as in the original SAIS. Before sketching this procedure, we need to define the order according to which the LMS-substrings are sorted in Step 3. Note that our definition of LMS-order is an extension of the LMS-order defined in [33] to LMS-prefixes. It can be proved that these definitions coincide for LMS-substrings. Given two strings S and T, let U and V be their LMS-prefixes, respectively. We define U <LMS V if either V is a proper prefix of U, or neither is a proper prefix of the other and U <lex V.
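Step 2 can be illustrated with a short sketch that finds one position whose type is forced by a character inequality and then propagates types right to left in a single cyclic pass, as in the proof above (0-indexed; names are ours):

```python
def cyclic_types(t):
    """Cyclic S/L type assignment for a primitive string t of length >= 2,
    plus the list of (cyclic) LMS positions, per Lemma 6 of the paper."""
    n = len(t)
    # anchor: any position whose next character differs (exists since t is
    # primitive and has length >= 2)
    j = next(i for i in range(n) if t[i] != t[(i + 1) % n])
    typ = [None] * n
    typ[j] = 'S' if t[j] < t[(j + 1) % n] else 'L'
    # propagate right to left, cyclically: equal characters copy the type
    # of the position to their right
    for step in range(1, n):
        i = (j - step) % n
        nxt = (i + 1) % n
        if t[i] < t[nxt]:
            typ[i] = 'S'
        elif t[i] > t[nxt]:
            typ[i] = 'L'
        else:
            typ[i] = typ[nxt]
    # LMS = S-type position whose (cyclic) predecessor is L-type
    lms = [i for i in range(n) if typ[i] == 'S' and typ[(i - 1) % n] == 'L']
    return typ, lms
```

On banana this yields types LSLSLS and LMS positions 1, 3, 5 (0-indexed), matching positions 2, 4, 6 in the paper's 1-indexed notation.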
The procedure Induced Sorting for the conjugates of the multiset is analogous to the original one, except that strings are viewed cyclically. First, the array A is subdivided into so-called buckets, one for each character. At the end of this procedure, the LMS-substrings are listed in correct relative LMS-order (see Lemma 10), and they can be named according to their rank. For the recursive step, we define, for i = 1, . . . , m, a new string T'_i, where each LMS-substring of T_i is replaced by its rank. The algorithm is called recursively on M' = {T'_1, . . . , T'_m} (Step 5). Finally (Step 6), the array A' = GCA(M') from the recursive step is mapped back into the original array, resulting in the placement of the LMS-substrings in their correct relative order. This is then used to induce the full array A. All length-1 strings T_i which were removed in Step 1 can now be inserted between the L- and S-type positions in their bucket (Lemma 9). See Figure 1 for a full example. The following lemma shows that the individual steps of Induced Sorting are applicable for the ω-order on conjugates of a multiset (part 1), that L-type conjugates (of all strings) come before the S-type conjugates within the same bucket (part 2), and that length-1 strings are placed between the L-type and the S-type conjugates (part 3). The second property was originally proved for the lexicographic order between suffixes in [22]. Lemma 9 (Induced sorting for multisets). Let U, V ∈ Σ*. If U[i] = V[j], i is an L-type position, and j an S-type position, then conj_i(U) ≺ω conj_j(V). Proof (of part 3). Let c ∈ Σ, viewed also as a string of length 1, and let i be an L-type position of U with U[i] = c. If i' is the next position in U with U[i'] ≠ c, then by Lemma 6, U[i'] < c; hence conj_i(U) <lex c^|U|, and therefore, conj_i(U) ≺ω c.
Analogously, if j' is the next position in V with V[j'] ≠ c, then by Lemma 6, V[j'] > c, and therefore, c ≺ω conj_j(V). Next, we show that after applying procedure Induced Sorting, the conjugates will appear in A such that they are correctly sorted w.r.t. the LMS-order of their LMS-prefixes, while the order in which conjugates with identical LMS-prefixes appear in A is determined by the input order of the LMS-positions. Lemma 10 (Extension of Thm. 3.12 of [33]). Let T_1, T_2 ∈ M, let U be the LMS-prefix of conj_i(T_1), with i the last position of U; let V be the LMS-prefix of conj_j(T_2), and j the last position of V. Let k_1 be the position of conj_i(T_1) in array A after the procedure Induced Sorting, and k_2 that of conj_j(T_2). If U <LMS V, then k_1 < k_2. If U = V, then k_1 < k_2 if and only if conj_i(T_1) was placed before conj_j(T_2) at the start of the procedure. Proof. Both claims follow from Lemma 9, and the fact that from one LMS-position to the previous one, there is exactly one run of L-type positions, preceded by one run of S-type positions. The next lemma (Lemma 11) shows that the LMS-order of the LMS-prefixes respects the ω-order. Proof (of correctness of the algorithm). By Lemma 6, Step 2 correctly assigns the types. Step 3 correctly sorts the LMS-substrings by Lemma 10. It follows from Lemma 11 that the order of the conjugates of the new strings T'_i coincides with the relative order of the LMS-conjugates. In Step 6, the LMS-conjugates are placed in A in correct relative order from the recursion; by Lemmas 10 and 11, this results in the correct placement of all conjugates of strings of length > 1, while the positioning of the length-1 strings is given by Lemma 9. For the running time, note that Step 1 takes time at most 2N. The Induced Sorting procedure also runs in linear time O(N). Finally, since no two LMS-positions are consecutive, and we remove strings of length 1, the problem size in the recursion step is reduced to at most N/2.
Figure 2: Example for computing the BWT of one string (T = banana), start index marked in bold. The special case where M consists of one single string leads to a new algorithm for computing the BWT, since for a singleton set, the eBWT coincides with the BWT. To the best of our knowledge, this is the first linear-time algorithm for computing the BWT of a string that uses neither Lyndon rotations nor end-of-string characters. We demonstrate the algorithm on a well-known example, T = banana. We get the following types, from left to right: LSLSLS, and all three S-type positions are LMS. We insert 2, 4, 6 into the array A; after the left-to-right pass, the indices are in the order 2, 4, 6, 1, 3, 5, and after the right-to-left pass, in the order 6, 2, 4, 1, 3, 5. The LMS-substring aba (pos. 6) gets the name A, and the LMS-substring ana (pos. 2, 4) gets the name B. In the recursive step, the new string is T' = ABB, with types SLL and only one LMS-position, 1; the GCA gets induced in just one pass: 1, 3, 2. This maps back to the original string: 6, 2, 4, and one more pass over the array A results in 6, 4, 2, 1, 5, 3 and the BWT nnbaaa. See Figure 2. In this section, we show how to extend prefix-free parsing to build the eBWT. We define the cyclic prefix-free parse for a multiset of strings M = {T_1, T_2, . . . , T_m} (with |T_i| = n_i, 1 ≤ i ≤ m) as the multiset of parses P = {P_1, P_2, . . . , P_m} with dictionary D, where we consider T_i as circular, and P_i is the parse of T_i. We denote by p_i the length of the parse P_i. Next, given a positive integer w, let E be a set of strings of length w called trigger strings. We assume that each string T_h ∈ M has length at least w and at least one cyclic factor in E.
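The worked example can be checked against a brute-force computation that sorts the conjugates directly (for a primitive string, the lexicographic order of the conjugates coincides with the ω-order; names are ours):

```python
def gca_and_bwt(t):
    """Brute-force GCA (1-based, as in the paper) and BWT of a primitive
    string, by sorting its conjugates and reading off the character that
    cyclically precedes each sorted conjugate."""
    n = len(t)
    order = sorted(range(n), key=lambda i: t[i:] + t[:i])
    gca = [i + 1 for i in order]
    bwt = "".join(t[(i - 1) % n] for i in order)
    return gca, bwt
```

For banana this reproduces the GCA 6 4 2 1 5 3 and the BWT nnbaaa computed by the induced-sorting run above.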
We divide each string T_h ∈ M into overlapping phrases as follows: a phrase is a circular factor of T_h of length > w that starts and ends with a trigger string and has no internal occurrences of a trigger string. The set of phrases obtained from the strings in M is the dictionary D. The parse P_h can be computed from the string T_h by replacing each occurrence of a phrase in T_h with its lexicographic rank in D. In Example 13, P_2 = 2 5 means that the parse of T_2 is given by the second and fifth phrases of the dictionary. Note that the string T_2 has a trigger string AC that spans the first position of T_2. We denote by S the set of suffixes of D having length greater than w. The first important property of the dictionary D is that the set S is prefix-free, i.e., no string in S is a prefix of another string of S. This follows directly from [8]. The computation of the eBWT from the prefix-free parse consists of three steps: computing the cyclic prefix-free parse of M (denoted as P); computing the eBWT of P by using the algorithm described in Section 3; and lastly, computing the eBWT of M from the eBWT of P using the lexicographically sorted dictionary D = {D_1, D_2, . . . , D_|D|} and its prefix-free suffix set S. We now describe the last step. We define δ as the function that uniquely maps each character T_h[j] to the pair (i, k), with 1 ≤ i ≤ p_h and k > w, such that T_h[j] corresponds to the k-th character of the P_h[i]-th phrase of D. We call i and k the position and the offset of T_h[j], respectively. Furthermore, we define α as the function that uniquely associates to each conjugate conj_j(T_h) the element s ∈ S such that s is the k-th suffix of the P_h[i]-th element of D, where (i, k) = δ(T_h[j]). By extension, i and k are also called the position and the offset of the suffix α(conj_j(T_h)). In Example 13, δ(T_2[4]) = (1, 2), since T_2[4] is the second character (offset 2) of the phrase ACTTGC, which is the first phrase (position 1) of P_2.
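A minimal sketch of the cyclic prefix-free parse of a single circular string follows; here the trigger set is given explicitly rather than chosen by hashing, and all names and the example input are illustrative, not taken from the paper:

```python
def cyclic_pfp(t, triggers, w):
    """Cyclic prefix-free parse of one circular string t: each phrase runs
    from a trigger-string occurrence to the end of the next one (cyclically),
    so consecutive phrases overlap by exactly w characters."""
    n = len(t)
    cyc = lambda i, l: "".join(t[(i + k) % n] for k in range(l))
    # cyclic occurrences of trigger strings
    starts = [i for i in range(n) if cyc(i, w) in triggers]
    assert starts, "each string must contain at least one trigger string"
    phrases = []
    for a, b in zip(starts, starts[1:] + [starts[0]]):
        # phrase from start a up to and including the trigger at start b;
        # "or n" handles the single-trigger case (whole circle plus overlap)
        phrases.append(cyc(a, ((b - a) % n or n) + w))
    dictionary = sorted(set(phrases))
    parse = [dictionary.index(ph) + 1 for ph in phrases]  # 1-based ranks
    return dictionary, parse
```

For the circular string ACGGACTT with trigger set {AC} and w = 2, the phrases are ACGGAC and ACTTAC, and the parse is 1 2.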
Moreover, α(conj_4(T_2)) = CTTGC, since CTTGC is the suffix of D_3 which is a prefix of conj_4(T_2) = CTTGCTAGACCA. Proof. It follows from the definition of α that α(conj_i(T_g)) and α(conj_j(T_h)) are prefixes of conj_i(T_g) and conj_j(T_h), respectively. Proposition 17. Given two strings T_g, T_h ∈ M, let conj_i(T_g) and conj_j(T_h) be the i-th and j-th conjugates of T_g and T_h, respectively, and let i' and j' be the positions of T_g[i] and T_h[j], i.e., the first components of δ(T_g[i]) and δ(T_h[j]). Then conj_i(T_g) ≺ω conj_j(T_h) if and only if either α(conj_i(T_g)) <lex α(conj_j(T_h)), or α(conj_i(T_g)) = α(conj_j(T_h)) and conj_{i'+1}(P_g) ≺ω conj_{j'+1}(P_h), i.e., P_g[i'] precedes P_h[j'] in eBWT(P). Proof. Let ℓ_g = |α(conj_i(T_g))| and ℓ_h = |α(conj_j(T_h))|; recall that α(conj_i(T_g)) and α(conj_j(T_h)) are prefixes of conj_i(T_g) and conj_j(T_h), respectively. Then conj_i(T_g) ≺ω conj_j(T_h) if and only if either α(conj_i(T_g)) <lex α(conj_j(T_h)), or conj_{i+ℓ_g−w}(T_g) ≺ω conj_{j+ℓ_h−w}(T_h), where w is the length of the trigger strings. It is easy to verify that the positions of T_g[i + ℓ_g − w] and T_h[j + ℓ_h − w] are i' + 1 and j' + 1, respectively. Moreover, since T_g[i + ℓ_g − w] and T_h[j + ℓ_h − w] are the first characters of a phrase, we have that conj_{i+ℓ_g−w}(T_g) ≺ω conj_{j+ℓ_h−w}(T_h) if and only if conj_{i'+1}(P_g) ≺ω conj_{j'+1}(P_h). Next, using Proposition 17, we define how to build the eBWT of the multiset of strings M from P and D. First, we note that we will iterate through all the suffixes in S in lexicographic order, and build the eBWT of M in blocks corresponding to the suffixes in S. Hence, it follows that we only need to describe how to build an eBWT block corresponding to a suffix s ∈ S. Given s ∈ S, we let S_s be the set of the lexicographic ranks of the phrases of D that have s as a suffix, i.e., S_s = {i | 1 ≤ i ≤ |D|, s is a suffix of D_i ∈ D}. Moreover, given the string T_h ∈ M, we let conj_i(T_h) be the i-th conjugate of T_h, let j and k be the position and offset of T_h[i], and lastly, let p be the position of P_h[j] in eBWT(P).
We define O_s as the occurrences of the suffix s in the strings of M, each occurrence being represented by its preceding character and by the position in eBWT(P) of the phrase it belongs to, where we view P_h as a cyclic string. So far, we have shown how to compute the first component of the eBWT. Now we show how to compute the second component of the eBWT, i.e., the set of indices marking the first rotation of each string. The idea is to keep track of the starting positions of each text in the parse, by marking the offset of the first position of each string in the last phrase of the corresponding parse. We propagate this information during the computation of the eBWT of the parse. When scanning the suffixes of S, we check if one of the phrases sharing the same suffix s ∈ S is marked as a phrase containing a starting position, and if the offset of the starting position coincides with the offset of the suffix. If so, when generating the elements of O_s, we mark the element corresponding to the occurrence of the first rotation of a string, and we output the index of the eBWT when that element is processed. In practice, as in [8], we implicitly select the set of trigger strings E by rolling a Karp-Rabin hash over consecutive windows of size w and taking as trigger strings all windows of length w whose hash value is congruent to 0 modulo a parameter p. In our version of PFP, we also need to ensure that there is at least one trigger string in each sequence of the collection. Hence, we change the way we select the trigger strings as follows. We define a set D of remainders, and we select a window of length w as a trigger string if its hash value is congruent to some d modulo p with d ∈ D. Note that if we set D = {0} we obtain the same set of trigger strings as in the original definition. We choose the set D in a greedy way. We start with D = {0}, scanning the set of sequences and checking if the current sequence has a trigger string according to the current D. As soon as we find one, we move to the next sequence. If we don't find any trigger string, we take the remainder of the last window we checked, and we include it in the set D.
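The trigger-string selection just described can be sketched as follows; the hash base and modulus are illustrative choices, not the ones used by pfpebwt, and cyclic windows are used to match the cyclic parse:

```python
def kr_windows(t, w, base=256, mod=(1 << 61) - 1):
    """Karp-Rabin hash of every cyclic window of length w of t, computed
    with a rolling update: add the incoming character, drop the outgoing one."""
    n = len(t)
    ext = t + t[:w - 1]          # unroll the circle by w - 1 characters
    bw = pow(base, w, mod)
    h, out = 0, []
    for i, c in enumerate(ext):
        h = (h * base + ord(c)) % mod
        if i >= w:
            h = (h - ord(ext[i - w]) * bw) % mod
        if i >= w - 1:
            out.append(h)       # out[i] = hash of window starting at i
    return out

def choose_remainders(seqs, w, p=100):
    """Greedy choice of the remainder set D: start from {0} and, whenever a
    sequence has no trigger string yet, add the remainder of its last window."""
    D = {0}
    for s in seqs:
        rems = [h % p for h in kr_windows(s, w)]
        if not any(r in D for r in rems):
            D.add(rems[-1])
    return D
```

By construction, after the greedy pass every sequence contains at least one window whose hash remainder lies in D.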
In the implementation, we consider S to be the set of suffixes of the phrases of D such that s ∈ S is neither a phrase in D nor has length smaller than w. This allows us to compute f more efficiently, since we can compute the preceding character of all the occurrences of a suffix in S from its corresponding phrase in D. Moreover, as in [8], for each phrase in D, we keep an ordered list of its occurrences in the eBWT of the parse. For a given suffix s ∈ S, we do not generate O_s all at once and sort it; rather, we visit the elements of O_s in order using a min-heap, as we merge the ordered lists of the occurrences in the eBWT of the parse of the phrases that share the same suffix s. We implemented the algorithm for building the eBWT and measured its performance on real biological data. We performed the experiments on a server with an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with 16 cores and 62 gigabytes of RAM running Ubuntu 16.04 (64-bit, kernel 4.4.0). The compiler was g++ version 9.4.0 with the -O3 -DNDEBUG -funroll-loops -msse4.2 options. We recorded the runtime and memory usage using the wall clock time, CPU time, and maximum resident set size from /usr/bin/time. The source code is available online at: https://github.com/davidecenzato/PFP-eBWT. We compared our method (pfpebwt) with the BCR algorithm implementation of [25] (ropebwt2), gsufsort [27], and egap [14]. We did not compare against G2BWT [13], lba [6], and BCR [4], since they are currently implemented only for short reads. We did not compare against egsa [28], since it is the predecessor of egap, or against methods that construct the BWT of a multiset of strings using one of the methods we evaluated against, i.e., LiME [18], BEETL [11], metaBEETL [1], and ebwt2snp [36, 37].
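The min-heap merge of the ordered occurrence lists can be sketched with heapq.merge; the (position in eBWT(P), preceding character) layout of an occurrence is a hypothetical simplification for illustration, not pfpebwt's actual data layout:

```python
import heapq

def ebwt_block(occ_lists):
    """Characters of one eBWT block, obtained by streaming the ordered
    occurrence lists of all phrases sharing the suffix s through a k-way
    merge, instead of materializing O_s and sorting it."""
    return "".join(ch for _, ch in heapq.merge(*occ_lists))
```

For example, merging the per-phrase lists [(1, 'A'), (5, 'C')] and [(2, 'G'), (3, 'T')] emits the block characters in eBWT(P)-position order.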
We evaluated our method using 2,048 copies of human chromosome 19 from the 1000 Genomes Project [42]; 10,000 Salmonella genomes taken from the GenomeTrakr project [41]; and 400,000 SARS-CoV2 genomes from EBI's COVID-19 data portal [12]. The sequence data for the Salmonella genomes were assembled, and the assembled sequences that had length less than 500 bp were removed. In addition, we note that we replaced all degenerate bases in the SARS-CoV2 genomes with N's and filtered out all sequences with more than 95% N's. A brief description of the datasets is reported in Table 1 (Datasets used in the experiments; we give the alphabet size in column 3, and report the length of the file and the ratio of the length to the number of runs in the eBWT in columns 4 and 5, respectively). We used 12 sets of variants of human chromosome 19 (chr19), containing 2^i variants for i = 0, . . . , 11, respectively. We used 6 collections of Salmonella genomes (salmonella), containing 50, 100, 500, 1,000, 5,000, and 10,000 genomes, respectively. We used 5 sets of SARS-CoV2 genomes (sars-cov2), containing 25,000, 50,000, 100,000, 200,000, and 400,000 genomes, respectively. Each collection is a superset of the previous one. We ran pfpebwt and ropebwt2 with 16 threads, and gsufsort and egap with a single thread, since they do not support multi-threading. Using pfpebwt, we set w = 10 and p = 100. Furthermore, for pfpebwt on the salmonella dataset, we used up to three different remainders to build the eBWT. We used ropebwt2 with the -R flag to exclude the reverse complement of the sequences from the computation of the BWT. All other methods were run with default parameters. We repeated each experiment five times, and report the average CPU time and peak memory for the sets of chromosome 19 up to 64 distinct variants, for Salmonella up to 1,000 sequences, and for all SARS-CoV2 sets.
The experiments that exceeded 48 hours of wall clock time or 62 GB of memory were omitted from further consideration; e.g., egap was omitted for 128 sequences of chr19, 5,000 sequences of salmonella, and 400,000 sequences of sars-cov2. Furthermore, gsufsort failed to build the eBWT for 256 or more sequences of chr19, 5,000 or more sequences of salmonella, and 400,000 sequences of sars-cov2, because it exceeded the 62 GB memory limit. In Figures 3, 4, and 5 we illustrate the construction time and memory usage for building the eBWT and the BWT of collections of strings on the chromosome 19, Salmonella, and SARS-CoV2 datasets, respectively. pfpebwt was the fastest method for building the eBWT of 4 or more sequences of chromosome 19, with a maximum speedup of 7.6x in wall-clock time and 2.9x in CPU time over ropebwt2 on 256 sequences of chromosome 19, 2.7x in CPU time over egap on 64 sequences, and 3.8x in CPU time over gsufsort on 128 sequences. On the Salmonella sequences, pfpebwt was always the fastest method, except on 10,000 sequences, where ropebwt2 was faster in wall-clock time; pfpebwt had a maximum speedup of 3.0x in wall-clock time over ropebwt2 on 100 sequences of salmonella. Considering CPU time, pfpebwt was the fastest for ≥ 500 sequences, with a maximum speedup of 1.7x over ropebwt2 on 100 sequences and 1.2x over gsufsort and egap on 1,000 sequences. On the SARS-CoV2 sequences, pfpebwt was always the fastest method, with a maximum speedup of 2.4x in wall-clock time and 1.3x in CPU time over ropebwt2 on 400,000 sequences, and 2.9x over gsufsort and 2.7x over egap in CPU time on 200,000 sequences. Considering the peak memory on the chromosome 19 dataset, ropebwt2 used the smallest amount of memory for 1, 2, 4, 8, and 2,048 sequences, while pfpebwt used the smallest amount of memory in all other cases.
pfpebwt used a maximum of 5.6x less memory than ropebwt2 on 256 sequences of chromosome 19, 28.0x less than egap on 64 sequences, and 45.3x less than gsufsort on 128 sequences. On the Salmonella sequences, pfpebwt used more memory than ropebwt2 for 50, 100, and 10,000 sequences, while pfpebwt used the smallest amount of memory in all other cases. The largest gap in peak memory between ropebwt2 and pfpebwt is 1.7x, on 50 sequences. On the other hand, pfpebwt used a maximum of 17.0x less memory than egap and gsufsort on 1,000 sequences. On the SARS-CoV2 sequences, pfpebwt always used the smallest amount of memory, with a maximum of 6.4x less memory than ropebwt2 on 25,000 sequences, and 57.1x less than gsufsort and egap on 200,000 sequences.

We described the first linear-time algorithm for building the eBWT of a collection of strings that does not require manipulation of the input sequences, i.e., neither the addition of an end-of-string character nor the computation and sorting of the Lyndon rotations of the input strings. We also combined our algorithm with an extension of prefix-free parsing to enable scalable construction of the eBWT. We demonstrated that pfpebwt is efficient with respect to both memory and time when the input is highly repetitive. Lastly, we curated a novel dataset of 400,000 SARS-CoV2 genomes from EBI's COVID-19 data portal, which we believe will be important for future benchmarking of data structures that have potential use in bioinformatics.

References
[1] metaBEETL: high-throughput analysis of heterogeneous microbial populations from shotgun DNA sequences
[2] Constructing the bijective and the extended Burrows-Wheeler-Transform in linear time
[3] Challenges in implementing genomic medicine: the 100,000 Genomes Project
[4] Lightweight algorithms for constructing and inverting the BWT of string collections
[5] Clinical analysis of whole genome sequencing in cancer patients
[6] Computing the multi-string BWT and LCP array in external memory
[7] PFP compressed suffix trees
[8] Computing the original eBWT faster, simpler, and with less memory
[9] A block sorting lossless data compression algorithm
[10] On different variants of the extended Burrows-Wheeler-Transform. Unpublished manuscript
[11] Large-scale compression of genomic sequence databases with the Burrows-Wheeler transform
[12] The COVID-19 Data Portal
[13] Efficient construction of the extended BWT from grammar-compressed DNA sequencing reads
[14] External memory BWT and LCP computation for sequence collections with applications
[15] Rpair: Rescaling repair with rsync
[16] Counting permutations with given cycle structure and descent set
[17] From first principles to the Burrows and Wheeler transform and beyond, via combinatorial optimization
[18] Metagenomic analysis through the extended Burrows-Wheeler transform
[19] Lightweight Metagenomic Classification via eBWT
[20] Efficient Algorithm for Circular Burrows-Wheeler Transform
[21] Fast pattern matching in strings
[22] Space efficient linear time construction of suffix arrays
[23] On the combinatorics of suffix arrays
[24] Efficient construction of a complete index for pan-genomics read alignment
[25] Fast construction of FM-index for long sequence reads
[26] Construction of Fundamental Data Structures for Strings
[27] gsufsort: constructing suffix arrays, LCP arrays and BWTs for string collections
[28] Generalized enhanced suffix array construction in external memory
[29] Suffix arrays: a new method for on-line string searches
[30] An extension of the Burrows-Wheeler Transform
[31] Burrows-Wheeler transform and Sturmian words
[32] Compact Data Structures: A Practical Approach
[33] Two efficient algorithms for linear time suffix array construction
[34] Bioinformatics Algorithms: Sequence Analysis, Genome Rearrangements, and Phylogenetic Reconstruction
[35] Enumerative combinatorics on words
[36] SNPs detection by eBWT positional clustering
[37] Variable-order reference-free variant discovery with the Burrows-Wheeler Transform
[38] Towards complete and error-free genome assemblies of all vertebrate species
[39] Fast canonization of circular strings
[40] Big Data: Astronomical or Genomical?
[41] The public health impact of a publically available, environmental database of microbial genomes
[42] The 1000 Genomes Project Consortium. A global reference for human genetic variation
[43] The 100,000 genomes project: bringing whole genome sequencing to the NHS
[44] The Burrows-Wheeler similarity distribution between biological sequences based on Burrows-Wheeler transform