key: cord-0642911-3n8m1hmr authors: Yamakami, Tomoyuki title: Between SC and LOGDCFL: Families of Languages Accepted by Logarithmic-Space Deterministic Auxiliary Depth-k Storage Automata date: 2022-03-18 journal: nan DOI: nan sha: d7cdb30b39d233f77cb9ead2b10cd1fa8e301685 doc_id: 642911 cord_uid: 3n8m1hmr The closure of deterministic context-free languages under logarithmic-space many-one reductions ($\mathrm{L}$-m-reductions), known as $\mathrm{LOGDCFL}$, has been studied in depth from the aspect of parallel computability because it is nicely situated between $\mathrm{L}$ and $\mathrm{AC}^1 \cap \mathrm{SC}^2$. By changing the memory device from pushdown stacks to access-controlled storage tapes, we introduce a computational model of one-way deterministic depth-$k$ storage automata ($k$-sda's) whose tape cells may be freely modified during the first $k$ accesses and then become blank forever. These $k$-sda's naturally induce the language family $k\mathrm{SDA}$. Similarly to $\mathrm{LOGDCFL}$, we study the closure $\mathrm{LOG}k\mathrm{SDA}$ of all languages in $k\mathrm{SDA}$ under $\mathrm{L}$-m-reductions. We first demonstrate that $\mathrm{DCFL} \subseteq k\mathrm{SDA} \subseteq \mathrm{SC}^k$ by significantly extending Cook's early result (1979) of $\mathrm{DCFL} \subseteq \mathrm{SC}^2$. The entire hierarchy of $\mathrm{LOG}k\mathrm{SDA}$ for all $k \geq 1$ therefore lies between $\mathrm{LOGDCFL}$ and $\mathrm{SC}$. As an immediate consequence, we obtain the same simulation bounds for Hibbard's limited automata. We further characterize the closure $\mathrm{LOG}k\mathrm{SDA}$ in terms of a new machine model, called logarithmic-space deterministic auxiliary depth-$k$ storage automata that run in polynomial time. These machines are as powerful as polynomial-time two-way multi-head deterministic depth-$k$ storage automata. We also provide "generic" $\mathrm{LOG}k\mathrm{SDA}$-complete languages under $\mathrm{L}$-m-reductions by constructing a universal simulator working for all two-way $k$-sda's.
In the literature, numerous computational models and associated language families have been proposed to capture various aspects of parallel computation. Among those language families, we wish to pay special attention to the family known as LOGDCFL, which is obtained from DCFL, the family of all deterministic context-free (dcf) languages, by taking the closure under logarithmic-space many-one reductions (or L-m-reductions, for short) [3, 23]. These dcf languages were first defined in 1966 by Ginsburg and Greibach [8], and their fundamental properties have been studied extensively since then. It is well known that DCFL is a proper subfamily of CFL, the family of all context-free languages, because the context-free language {ww^R | w ∈ {0, 1}*} (where w^R means the reverse of w), for instance, does not belong to DCFL. The dcf languages in general behave quite differently from the context-free languages. As an example of such differences, DCFL is closed under complementation, while CFL is not. This fact structurally distinguishes DCFL from CFL. Moreover, dcf languages require computational resources of polynomial time and O(log^2 n) space simultaneously [4]; however, we do not know whether the same statement holds for context-free languages. Although dcf languages are accepted by one-way deterministic pushdown automata (or 1dpda's), these languages have a close connection to highly parallelized computation, and thus LOGDCFL has played a key role in discussing parallel complexity issues within P because of the nice inclusions NC^1 ⊆ L ⊆ NL ⊆ LOGCFL ⊆ AC^1 and L ⊆ LOGDCFL ⊆ SC^2.
It is known that LOGDCFL can be characterized without using L-m-reductions by several other intriguing machine models, which include: Cook's polynomial-time logarithmic-space deterministic auxiliary pushdown automata [3], two-way multi-head deterministic pushdown automata running in polynomial time, logarithmic-time CROW-PRAMs with polynomially many processors [5], and circuits made up of polynomially many multiplex select gates having logarithmic depth [6] or having polynomial proof-tree size [18]. Such a variety of characterizations proves LOGDCFL to be a robust and widely applicable notion in computer science. Another important feature of LOGDCFL (as well as its underlying DCFL) is the existence of "complete" languages, which are practically the most difficult languages in DCFL to recognize. Notice that a language L is said to be L-m-complete for a family F of languages (or L is F-complete, for short) if L belongs to F and every language in F is L-m-reducible to L. Sudborough [23] first constructed such a language, called L_0^{(m)}, which possesses the highest complexity (which he quoted as "tape hardest") among all dcf languages under L-m-reductions; therefore, L_0^{(m)} is L-m-complete for DCFL and also for LOGDCFL. Using Sudborough's hardest language, Lohrey [17] later presented another LOGDCFL-complete problem based on semi-Thue systems. Nonetheless, only a few languages are known today to be complete for DCFL as well as LOGDCFL. A large void seems to lie between DCFL and CFL (as well as co-CFL). This void has been filled with, for example, the union hierarchy {DCFL[k] | k ≥ 1} and the intersection hierarchy {DCFL(k) | k ≥ 1} over DCFL, where DCFL[k] (resp., DCFL(k)) is composed of all unions (resp., intersections) of k dcf languages. They truly form distinctive infinite hierarchies [14, 26]. Taking a quite different approach, Hibbard [12] devised specific rewriting systems, known as deterministic scan limited automata.
Those rewriting systems were later remodeled in [20, 21] as single input/storage-tape two-way deterministic linear-bounded automata that can modify the contents of tape cells whenever the associated tape heads access them (when a tape head makes a turn, however, we count it twice); such modifications are limited to only the first k accesses. We call those machines deterministic k-limited automata (or k-lda's, for short). Numerous follow-up studies, including Pighizzini and Prigioniero [22], Kutrib and Wendlandt [16], and Yamakami [25], have recently revitalized the study of k-lda's. It is possible to make k-lda's satisfy the so-called blank-skipping property [25], by which each tape cell becomes blank after the k accesses and inner states cannot be changed while reading any blank symbol. A drawback of Hibbard's model is that the use of a single tape prohibits us from accessing memory and input simultaneously. It seems quite natural to seek out a reasonable extension of LOGDCFL by generalizing its underlying machines in a simple way. A basis of LOGDCFL is of course 1dpda's, each of which is equipped with a read-once input tape together with a storage device called a stack. Each stack allows two major operations. A pop operation is a deletion of a symbol and a push operation is a rewriting of a symbol on the topmost stack cell. The stack usage of pushdown storage seems too restrictive in practice, and various extensions of such pushdown automata have been sought in the past literature. For instance, a stack automaton of Ginsburg, Greibach, and Harrison [9, 10] is capable of freely traversing the inside of the stack to access each stored item, but it is not allowed to modify them unless the scanning stack head eventually comes to the top of the stack. Thus, each cell of the stack can be accessed a number of times.
Meduna's deep pushdown automata [19] also allow stack heads to move deeper into the content of the stacks and to replace some symbols by appropriate strings. Other extensions of pushdown automata include [2, 13]. To argue parallel computations, we intend to seek a reasonable restriction of stack automata by introducing an access-controlled storage device. Each cell content of such a device is modified by its own tape head, which moves sequentially back and forth along the storage tape. This special tape and its tape head naturally allow underlying machines to make more flexible memory manipulations. In real-life circumstances, it seems reasonable to limit the number of times that data sets stored in the storage device can be accessed. For instance, rewriting data items in blocks of a memory device, such as external hard drives or rewritable DVDs, is usually costly, and it may be restricted during each execution of a computer program. We thus demand that every memory cell on this device can be modified only during the first few accesses and, in case of exceeding the intended access limit, the storage cell turns unusable and no more rewriting is possible. We refer to the number of times that the content of a storage cell is modified as its "depth". The aforementioned blank-skipping property of k-lda's, for instance, embodies exactly this kind of access restriction. While scanning such inaccessible data sets, reading more input symbols may or may not be restricted. We leave a further discussion on this restriction to Section 2.2. To understand the role of the depth limit for an underlying machine, let us consider how to recognize the non-context-free language L_abc = {a^n b^n c^{2n} | n ≥ 0} under an additional requirement that new input symbols are only read while storage cells are non-blank.
Given an input of the form a^l b^m c^n, we first write a^l into the first l cells of the storage device, check whether l = m by simultaneously reading b^m and traversing the storage device backward while changing a to b, and then check whether l + m = n by simultaneously reading c^n and moving the device's scanning head back and forth, changing b to c and then c to B (blank). This procedure requires depth 4. A storage device whose cells have depth at most k is called a depth-k storage tape in this exposition, and its scanning head is hereafter cited as a depth-k storage-tape head for convenience. The machines equipped with those devices are succinctly called one-way deterministic depth-k storage automata (or k-sda's, for short). Our k-sda's naturally expand Hibbard's k-lda's. Each storage cell is initially empty and is turned blank after exceeding its depth limit. This latter requirement is imperative because, without it, the machines become as powerful as polynomial-time Turing machines. This follows from the fact that non-erasing stack automata can recognize the circuit value problem, which is a P-complete problem. When input and storage-tape cells are accessed simultaneously, the behaviors of k-sda's are influenced by whether an access to new input symbols is "immune" or "susceptible" to the depth of storage-tape cells. For convenience, we introduce the notation kSDA for each index k ≥ 2 to express the family of all languages recognized by those k-sda's (for a more precise definition, see Section 2.2). As the aforementioned example shows, 4SDA contains even non-context-free languages. With the use of L-m-reductions analogously to LOGDCFL, for any index k ≥ 2, we consider the closure of kSDA under L-m-reductions, denoted by LOGkSDA. It follows from the definitions that LOGDCFL ⊆ LOGkSDA ⊆ LOG(k + 1)SDA ⊆ P.
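As a concrete illustration, the depth-4 procedure for L_abc sketched above can be simulated directly. The following Python sketch is our own illustration; the list-based tape and the explicit per-cell access counters are simplifications, not the paper's formal k-sda definition.

```python
import re

def accepts_Labc(s: str) -> bool:
    """Recognize L_abc = { a^n b^n c^{2n} | n >= 0 } with a depth-4 storage tape."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)   # input must have the shape a^l b^m c^n
    if m is None:
        return False
    l, b_count, c_left = (len(g) for g in m.groups())
    tape, depth = [], []
    # Access 1: write one 'a' per input a, moving right.
    for _ in range(l):
        tape.append("a")
        depth.append(1)
    # Access 2: traverse backward, rewriting a -> b while reading b's (checks l = m).
    pos = l - 1
    for _ in range(b_count):
        if pos < 0:
            return False
        tape[pos] = "b"
        depth[pos] += 1
        pos -= 1
    if pos >= 0:                            # some 'a' cells unmatched: l != m
        return False
    # Accesses 3 and 4: sweep right rewriting b -> c, then left rewriting c -> B,
    # consuming one input c per cell visit (2l visits in total, i.e. l + m).
    for i in list(range(l)) + list(reversed(range(l))):
        if c_left == 0:
            return False
        tape[i] = "B" if tape[i] == "c" else "c"
        depth[i] += 1
        c_left -= 1
    return c_left == 0 and all(d <= 4 for d in depth)
```

Each cell is touched exactly four times (the initial write, the backward pass, and the two final sweeps), matching the claimed depth bound.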
Among many intriguing questions, we wish to raise the following three simple questions regarding our new language families kSDA as well as their L-m-closures. (1) What is the computational complexity of the language families kSDA as well as LOGkSDA? (2) Is there any natural machine model that can precisely characterize LOGkSDA so as to avoid the use of L-m-reductions? (3) Is there any language that is L-m-complete for LOGkSDA? The sole purpose of this exposition is to partially answer these questions through Sections 3-5 after a formal introduction of k-sda's in Section 2. We formally define a new computational model, dubbed deterministic storage automata, and show their basic properties. We begin with fundamental notions and notation necessary to introduce this new computation model of storage automata. The two notations Z and N represent the set of all integers and that of all natural numbers (i.e., nonnegative integers), respectively. Given two numbers m, n ∈ Z with m ≤ n, [m, n]_Z denotes the integer interval {m, m + 1, m + 2, . . . , n}. In particular, when n ≥ 1, we abbreviate [1, n]_Z as [n]. Given a set S, P(S) denotes the power set of S, namely, the set of all subsets of S. An alphabet is a finite nonempty set of "symbols" or "letters." Let Σ denote any alphabet. A string over Σ is a finite sequence of symbols in Σ. The length of a string x is the total number of symbols in x and is denoted by |x|. The special notation ε is used to express the empty string of length 0. Given a string x = x_1 x_2 · · · x_n, the reverse of x is x_n x_{n−1} · · · x_1 and is denoted x^R. For two strings x and y over the same alphabet, x is said to be a prefix of y if there exists a string z for which y = xz. In this case, z is called a suffix of y. Given a language A over Σ, let Pref(A) denote the set of all prefixes of any string in A, namely, {w | ∃y ∈ Σ* [wy ∈ A]}. The notation Σ* denotes the set of all strings over Σ. A language over Σ is simply a subset of Σ*.
As is customary, we freely identify a decision problem with its corresponding language. We use the binary representations of natural numbers. For such a representation x, the notation (x)_(2) denotes the corresponding natural number of x. For instance, we obtain (0)_(2) = 0, (1)_(2) = 1, (10)_(2) = 2, (11)_(2) = 3, (100)_(2) = 4, etc. We assume the reader's familiarity with multi-tape Turing machines, and we abbreviate deterministic Turing machines as DTMs. To manage a sub-linear space DTM, we assume that its input tape is read-only and an additional rewritable index tape is used to access a tape cell whose address is specified by the content of the index tape. Since the index tape requires log |x| bits to specify each symbol of an input x, the space usage of an underlying machine is thus limited only to its work tapes. For each constant k ≥ 1, Steve's class SC^k is the family of all languages recognized by DTMs in polynomial time using O(log^k n) space [3]. Let SC denote the union ∪_{k∈N^+} SC^k. It follows that L = SC^1 ⊆ SC^k ⊆ SC^{k+1} ⊆ SC ⊆ P. In order to compute a function, we further provide a DTM with an extra write-once output tape so that the machine produces output strings, where a tape is write-once if its tape head never moves to the left and, whenever its tape head writes a nonempty symbol, it must move to the right. All (total) functions computable by such DTMs with output tapes in polynomial time using only logarithmic space form the function class known as FL. Given two languages L_1 over alphabet Σ_1 and L_2 over Σ_2, we say that L_1 is L-m-reducible to L_2 (denoted by L_1 ≤^L_m L_2) if there exists a function f computed by an appropriate polynomial-time DTM using only O(log n) space such that, for any x ∈ Σ_1*, x ∈ L_1 if and only if f(x) ∈ L_2. We use DCFL for the collection of all deterministic context-free (dcf) languages. Those languages are recognized by one-way deterministic pushdown automata (or 1dpda's, for short). The notation LOGDCFL expresses the L-m-closure of DCFL.
In relation to our new machine model, introduced in Section 2.2, we briefly explain Hibbard's then-called "scan-limited automata" [12], which were later reformulated by Pighizzini and Pisoni [20, 21] using a single-tape Turing machine model. This exposition follows their formulation of deterministic k-limited automata (or k-lda's, for short) for any positive integer k. A k-lda is a single-tape linear-bounded automaton with two endmarkers. Initially, an input/work tape holds an input string, and a tape head modifies its tape symbol whenever the tape head reads it during the first k visits (when the tape head makes a turn, we double count the visit). We expand the standard model of pushdown automata by substituting a more flexible storage device, called a storage tape, for its stack. Formally, a storage tape is a semi-infinite rewritable tape whose cells are initially blank (filled with the distinguished initial symbol 2) and are accessed sequentially by a tape head that can move back and forth along the tape, changing tape symbols as it passes through. In what follows, we fix a constant k ∈ N^+. A one-way deterministic depth-k storage automaton (or a k-sda, for short) M is a 2-tape DTM (equipped only with a read-only input tape and a rewritable work tape) of the form (Q, Σ, {Γ^(e)}_{e∈[0,k]_Z}, { , }, δ, q_0, Q_acc, Q_rej) with a finite set Q of inner states, an input alphabet Σ, storage alphabets Γ^(e) for indices e ∈ [0, k]_Z with Γ = ∪_{e∈[0,k]_Z} Γ^(e), a transition function δ from Q × Σ̄ × Γ to Q × Γ × D_1 × D_2 with Σ̄ = Σ ∪ { , }, D_1 = {0, +1}, and D_2 = {−1, 0, +1}, an initial state q_0 in Q, and sets Q_acc and Q_rej of accepting states and rejecting states, respectively, with Q_acc ∪ Q_rej ⊆ Q and Q_acc ∩ Q_rej = ∅, provided that Γ^(0) = {2} (where 2 is a distinguished initial symbol), Γ^(k) = { , B} (where B is a unique blank symbol), and Γ^(e_1) ∩ Γ^(e_2) = ∅ for any distinct pair e_1, e_2 ∈ [0, k]_Z.
The two sets D_1 and D_2 indicate the direction of the input-tape head and that of the storage-tape head, respectively. The choice of D_1 forces the input tape to be read only once. We say that the input tape is read once if its tape head either moves to the right or stays still while scanning no input symbol. A single move (or step) of M is dictated by δ. If M is in inner state q, scanning σ on the input tape and τ on the storage tape, a transition δ(q, σ, τ) = (p, ξ, d_1, d_2) forces M to change q to p, overwrite τ by ξ, and move the input-tape head in direction d_1 and the storage-tape head in direction d_2. Instead of making ε-moves (i.e., a tape head neither moves nor reads any tape symbol), we allow a tape head to make a stationary move, by which the tape head stays still and the scanning symbol is unaltered. The tape head direction "0" indicates such a stationary move. For stationary moves, M must satisfy the following stationary-move requirement: assuming that δ(q, σ, τ) = (p, ξ, d_1, d_2), (i) if d_2 = 0, then τ = ξ, and (ii) either d_1 ≠ 0 or d_2 ≠ 0. Thus, whenever the storage-tape head moves to a neighboring cell, it must change a tape symbol. All tape cells are indexed by natural numbers from left to right. The leftmost tape cell is the start cell indexed 0. An input tape has both endmarkers and a storage tape has only the left endmarker. When an input string x is given to the input tape, it should be surrounded by the two endmarkers so that the left endmarker is located at the start cell and the right endmarker is at the cell indexed |x| + 1. For any index i ∈ N, x_(i) denotes the tape symbol written on the ith input-tape cell, provided that x_(0) is the left endmarker and x_(|x|+1) is the right endmarker. Similarly, when z represents the non-2 portion of the content of a storage tape, the notation z_(i) expresses the symbol in the ith tape cell. Note that z_(0) is the left endmarker. For the storage tape, we request the following rewriting restriction, called the depth-k requirement, to be satisfied.
Whenever the storage-tape head passes through a tape cell containing a symbol in Γ^(e) with e < k, the machine must replace it by another symbol in Γ^(e+1), except for the case of the following "turns". We distinguish two types of turns. A left turn at step t refers to M's step at which, after M's tape head moves to the right at step t − 1, it moves to the left at step t. In other words, M takes two transitions δ(q, σ_1, γ_1) = (p, τ, d, +1) at step t − 1 and δ(p, σ_2, γ_2) = (r, ξ, e, −1) at step t. Similarly, we say that M makes a right turn at step t if M's tape head moves from the left at step t − 1 and changes its direction to the right at step t. Whenever a tape head makes a turn, we treat this case as "double accesses." More formally, at a turn, any symbol in Γ^(e) with e < k must be changed to another symbol in Γ^(min{k,e+2}). No symbol in Γ^(k) can be modified at any time. Although we use various storage alphabets Γ^(e), we can easily discern from which direction the tape head arrives simply by scanning the storage-tape symbol written in the current tape cell. Assuming that M modifies σ ∈ Γ^(e) on a storage tape with e ∈ [0, k−1]_Z to τ and moves its storage-tape head in direction d, if d equals (−1)^e, then τ must belong to Γ^(min{e+1,k}); if d = 0, then σ and τ must be the same; otherwise, τ must be in Γ^(min{e+2,k}). A storage tape that satisfies the depth-k requirement is succinctly called a depth-k storage tape. Let us consider two different models whose input-tape head is either depth-susceptible or depth-immune to the content of a storage-tape cell. A tape head (other than the storage-tape head) is called depth-susceptible if, while the currently scanned symbol γ on a storage-tape cell is in Γ^(k−1) ∪ Γ^(k), the tape head must make a stationary move; namely, if δ(q, σ, γ) = (p, ξ, d_1, d_2) with γ ∈ Γ^(k−1) ∪ Γ^(k), then d_1 = 0 follows. The tape head is called depth-immune if there is no such restriction on the scanned symbol γ.
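The depth bookkeeping in the requirement above reduces to a single update rule: a pass-through advances a cell's depth by one, a turn by two, and depth-k cells are frozen. A minimal sketch (our own helper, not taken from the paper):

```python
def next_depth(e: int, turned: bool, k: int) -> int:
    """Depth of a storage cell after one visit of the storage-tape head.

    A normal pass-through counts as one access; a turn counts as two.
    Cells already at depth k hold the blank symbol and never change.
    """
    if e >= k:
        return k  # symbols in Gamma^(k) can no longer be modified
    return min(e + (2 if turned else 1), k)
```

For example, a turn on a depth-(k−2) or depth-(k−1) cell jumps straight to depth k, which is why the Γ^(min{k,e+2}) clause is needed.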
A surface configuration of M on input x is of the form (q, l_1, l_2, z) with q ∈ Q, l_1 ∈ [0, |x| + 1]_Z, l_2 ∈ N, and z ∈ (Γ − {2})*, which indicates the situation where M is in state q, the storage tape contains z (except for the tape symbol 2), and the two tape heads scan the l_1th cell of the input tape and the l_2th cell of the storage tape. For readability, we drop the word "surface" altogether in the subsequent sections. The initial (surface) configuration has the form (q_0, 0, 0, ε), and δ describes how to reach the next surface configuration in a single step. For convenience, we define the depth value dv(C) of a surface configuration C = (q, l_1, l_2, z) to be the number e satisfying z_(l_2) ∈ Γ^(e). An accepting configuration (resp., a rejecting configuration) is of the form (q, h_1, h_2, w) with q ∈ Q_acc (resp., q ∈ Q_rej). A halting configuration means either an accepting configuration or a rejecting configuration. A computation of M starts with the initial configuration and ends with a halting configuration. The k-sda M accepts (resp., rejects) x if M starts with the initial configuration on the input x and reaches an accepting configuration (resp., a rejecting configuration). For a language L over Σ, we say that M recognizes (accepts or solves) L if, for any input string x ∈ Σ*, (i) if x ∈ L, then M accepts x, and (ii) if x ∉ L, then M rejects x. For two k-sda's M_1 and M_2 over the same input alphabet Σ, we say that M_1 is (computationally) equivalent to M_2 if, for any input x ∈ Σ*, M_1 accepts (resp., rejects) x iff M_2 accepts (resp., rejects) x. For notational convenience, we write kSDA for the collection of all languages recognized by depth-susceptible k-sda's and LOGkSDA for the collection of languages that are L-m-reducible to certain languages in kSDA. Moreover, we set ωSDA to be the union ∪_{k∈N^+} kSDA. With this notation, the non-context-free language L_abc, discussed in Section 1, belongs to 4SDA.
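A surface configuration and its depth value dv(C) can be captured in a few lines. In this hedged sketch we record, for each storage cell, only the tier e of the alphabet Γ^(e) its symbol belongs to; this flattening is our own simplification of the formal definition above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfaceConfig:
    state: str
    input_pos: int            # l1: position of the input-tape head
    storage_pos: int          # l2: position of the storage-tape head
    cell_depths: tuple        # tier e of the symbol written in each storage cell

    def dv(self) -> int:
        """Depth value: tier of the currently scanned storage cell (0 if untouched)."""
        if 0 <= self.storage_pos < len(self.cell_depths):
            return self.cell_depths[self.storage_pos]
        return 0  # cells beyond the written portion still hold the initial symbol
```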
Thus, 4SDA ⊈ CFL follows instantly. For the depth-immune model of k-sda's, we similarly define kSDA_imm, ωSDA_imm, and LOGkSDA_imm. As a special case of k = 2, the following holds: DCFL coincides with 2SDA. This demonstrates the fact that k-sda's truly expand 1dpda's. It was shown that DCFL is precisely characterized by deterministic 2-limited automata (or 2-lda's, for short) [12, 21]. Since any 2-lda can be transformed into another 2-lda with the blank-skipping property [25], depth-susceptible 2-sda's can simulate 2-lda's; thus, we immediately conclude that DCFL ⊆ 2SDA. For the converse, it suffices to simulate depth-susceptible 2-sda's by appropriate 1dpda's. Given a depth-susceptible 2-sda M, we design a one-way deterministic pushdown automaton (or a 1dpda) N that works as follows. We treat the storage tape of M as a stack by ignoring all symbols in Γ^(2) except for the left endmarker. When M modifies the initial symbol 2 on its storage tape to a new symbol σ in Γ^(1), N pushes σ onto its stack. When M modifies a storage symbol σ to B, N pops the same symbol σ. Note that, since M is depth-susceptible, it cannot move the input-tape head during such steps. As for the behavior of N's input-tape head, if M reads an input symbol σ and moves to the right, then N does the same. On a tape cell containing σ, if M's input-tape head makes a series of stationary moves, then N reads σ as the first move, remembers σ, and makes ε-moves afterwards until M ends its stationary moves. Obviously, the resulting machine is a 1dpda and it simulates M. Remark. Remember that tape cells of k-sda's become blank after the first k accesses. If we allow such tape cells to keep the last written symbols instead of erasing them, then the resulting machines gain enough power to recognize languages to which even P-complete problems are L-m-reducible. In this exposition, we do not further delve into this topic.
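The correspondence used in the simulation above, where a fresh write (depth 0 to 1) acts as a push and an erasure to B (depth 1 to 2) acts as a pop, can be checked mechanically. A hedged sketch with our own event encoding, not the paper's formal construction:

```python
def simulate_storage_as_stack(ops):
    """Replay a depth-2 storage-tape trace as stack operations.

    ops: list of ('write', sym) events (2 -> sym in Gamma^(1), i.e. push)
         or ('erase', sym) events (sym -> B in Gamma^(2), i.e. pop).
    Returns the final stack content, bottom first.
    """
    stack = []
    for op, sym in ops:
        if op == "write":
            stack.append(sym)
        elif op == "erase":
            if not stack or stack[-1] != sym:
                raise ValueError("storage trace is not stack-like")
            stack.pop()
        else:
            raise ValueError(f"unknown event {op!r}")
    return stack
```

The one-way movement of a depth-2 head between writes and erasures is exactly what makes the trace stack-like.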
We begin with a structural study of LOGkSDA, which is the closure of kSDA under L-m-reductions, defined in Section 2.2. We intend to seek different characterizations of LOGkSDA with no use of L-m-reductions. The idea of the elimination of such reductions is attributed to Sudborough [23], who characterized LOGDCFL using two machine models: polynomial-time log-space auxiliary deterministic pushdown automata and polynomial-time multi-head deterministic pushdown automata. Our goal here is to expand these machine models to fit into the framework of depth-k storage automata. Let us expand deterministic auxiliary pushdown automata to deterministic auxiliary depth-k storage automata, each of which is equipped with a two-way read-only depth-susceptible input tape, an auxiliary rewritable tape, and a depth-k storage tape. We now formally formulate these machines. For the description of such a machine, we first prepare a two-way read-only input tape and a depth-k storage tape, and secondly we supply a new space-bounded auxiliary rewritable work tape whose cells are freely modified by a two-way tape head. Notice that the storage-tape head is allowed to make stationary moves (that is, the storage-tape head neither reads any symbol nor moves to any adjacent cell). A deterministic auxiliary depth-k storage automaton (or an aux-k-sda, for short) M is formally a 3-tape DTM (Q, Σ, Θ, {Γ^(e)}_{e∈[0,k]_Z}, δ, q_0, Q_acc, Q_rej) with a read-only input tape, an auxiliary rewritable (work) tape with an alphabet Θ, and a depth-k storage tape. Initially, the input tape is filled with the input x surrounded by the two endmarkers, the auxiliary tape is blank, and the depth-k storage tape holds only the designated blank symbols 2 except for the left endmarker.
A transition δ(q, σ, τ, γ) = (p, θ, ξ, d_1, d_2, d_3) indicates that, on reading input symbol σ, auxiliary-tape symbol τ, and storage-tape symbol γ, M changes its inner state q to p, moves the input-tape head in direction d_1, changes the auxiliary-tape symbol τ to θ by moving the auxiliary-tape head in direction d_2, and changes the storage-tape symbol γ to ξ by moving the storage-tape head in direction d_3. A string x is accepted (resp., rejected) if M enters an inner state in Q_acc (resp., Q_rej). When excluding (Θ, D_2) from the definition of M, the resulting automaton must fulfill the depth-k requirement of k-sda's given in Section 2.2. The depth-susceptibility condition is stated as follows: if δ(q, σ, τ, γ) = (p, θ, ξ, d_1, d_2, d_3) with γ ∈ Γ^(k−1) ∪ Γ^(k), then d_1 = 0. To implement any stationary move of the tape heads, M should satisfy a similar requirement to underlying k-sda's; namely, assuming that δ(q, σ, τ, γ) = (p, θ, ξ, d_1, d_2, d_3), (i) if d_3 = 0, then γ = ξ, and (ii) at least one of d_1, d_2, d_3 is not 0. Given an aux-k-sda M, take a positive integer c > 0 and a polynomial p so that M's auxiliary tape uses at most c log n tape cells within p(n) steps on any input of length n ≥ 2. We reduce this space bound down to log n by introducing a larger auxiliary tape alphabet Θ′ as follows. Since we can express each element of Θ′ as a binary string of length ⌈log |Θ′|⌉ (by adding 0s as dummy bits if necessary), we can split the auxiliary tape into ⌈log |Θ′|⌉ tracks to hold the elements of Θ′. Without loss of generality, our machine M can be assumed to have an auxiliary tape composed of c tracks holding binary symbols for an appropriately chosen constant c ≥ 1 and to use at most ⌈log n⌉ tape cells on this auxiliary tape. We also assume that M initially writes all 0s on every track of the auxiliary tape. We further introduce another useful machine model by expanding multi-head pushdown automata to (two-way) multi-head deterministic depth-k storage automata, each of which uses two-way multiple tape heads to read a given input besides a tape head that modifies symbols on a depth-k storage tape.
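The track-compression step above, trading a c·log n-cell binary auxiliary tape for a ⌈log n⌉-cell tape over a larger alphabet, amounts to packing c binary tracks into tuple symbols. A minimal sketch (function names are our own):

```python
def pack_tracks(tracks):
    """Pack c equal-length binary tracks into one list of tuple symbols."""
    c, n = len(tracks), len(tracks[0])
    assert all(len(t) == n for t in tracks)
    # cell i of the packed tape holds the i-th bit of every track
    return [tuple(t[i] for t in tracks) for i in range(n)]

def unpack_tracks(cells, c):
    """Recover the c original binary tracks from the packed tape."""
    return ["".join(cell[j] for cell in cells) for j in range(c)]
```

The packed alphabet has size 2^c, so the space bound drops by the constant factor c without losing any information.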
For each fixed number ℓ ≥ 1, we define an ℓ-head deterministic depth-k storage automaton as a 2-tape DTM with ℓ read-only depth-susceptible tape heads scanning over an input tape and a single read/write tape head over a depth-k storage tape. For convenience, we call such a machine a k-sda_2(ℓ), where the subscript "2" emphasizes that all subordinate tape heads move in both directions (except for the case of stationary moves). Notice that each k-sda_2(ℓ) actually has ℓ + 1 tape heads, including one tape head working along the storage tape. More formally, a transition δ(q, σ_1, . . . , σ_ℓ, γ) = (p, ξ, d_1, . . . , d_{ℓ+1}) of a k-sda_2(ℓ) M indicates that, if M is in inner state q, scanning a tuple (σ_1, . . . , σ_ℓ) of symbols on the input tape by the read-only tape heads as well as symbol γ on the depth-k storage tape by the rewritable tape head, then M enters inner state p and writes ξ on the depth-k storage tape by moving the ith input-tape head in direction d_i for every index i ∈ [ℓ] and the storage-tape head in direction d_{ℓ+1}. Remember that a k-sda_2(1) and a depth-susceptible k-sda are similar in their machine structures but the former can move its input-tape head to the left. The treatment of the acceptance/rejection criteria is the same as for the underlying k-sda's. A stationary move requires that (i) d_{ℓ+1} = 0 implies γ = ξ and (ii) at least one of d_1, d_2, . . . , d_{ℓ+1} is not 0. The depth-susceptibility condition of M says that, for any transition δ(q, σ_1, . . . , σ_ℓ, γ) = (p, ξ, d_1, . . . , d_{ℓ+1}) with γ ∈ Γ^(k−1) ∪ Γ^(k), all of d_1, . . . , d_ℓ equal 0. We intend to demonstrate that the two new machine models introduced in Sections 3.1-3.2 precisely characterize LOGkSDA. This result can be seen as a natural extension of Sudborough's machine characterization of LOGDCFL to LOGkSDA. Theorem 3.1 Let k ≥ 2. Let L be any language. The following three statements are logically equivalent. 1. L is in LOGkSDA. 2. There exists an aux-k-sda that recognizes L in polynomial time using logarithmic space. 3. There exist a number ℓ ≥ 2 and a k-sda_2(ℓ) that recognizes L in polynomial time. In the rest of this section, we intend to prove Theorem 3.1.
Note that Sudborough's proof [23] for LOGDCFL relies on the heavy use of stack operations, which are applied only to the topmost symbol of the stack while the other symbols in the stack remain intact. In our case, however, we need to deal with the operations of a storage-tape head, which can move back and forth along a storage tape, modifying cells' contents up to k times. Sudborough's characterization utilizes a simulation procedure [11, pp. 338-339] of Hartmanis and a proof argument [7, Lemma 4.3] of Galil; however, we cannot directly use them, and thus a new idea is definitely needed to establish Theorem 3.1. The proof of the theorem therefore requires a technically challenging simulation among FL-functions and the other machine models of aux-k-sda's and k-sda_2(ℓ)'s. Given a function f : Σ_1* → Σ_2* in FL for certain alphabets Σ_1 and Σ_2 and a depth-susceptible k-sda M working over Σ_2 in polynomial time, there exists a log-space aux-k-sda N that recognizes the language {x ∈ Σ_1* | f(x) ∈ L(M)} in polynomial time. Proof. Let f : Σ_1* → Σ_2* be any function in FL. We take a DTM M_f, equipped with a read-only input tape, a logarithmic space-bounded rewritable work tape, and a write-once output tape, and assume that M_f computes f in polynomial time. Let M denote the given depth-susceptible k-sda running in polynomial time. We design the desired aux-k-sda N as follows. The input tape of M holds the string f(x) for any given x. In the following construction of N, we treat the input tape of M as an imaginary tape. Given any input x in Σ_1*, we repeat the following process until M enters a certain halting state. Using an auxiliary tape of N, we keep track of the content of M_f's work tape and the head positions of M's input tape and M_f's input tape. Assume that N's input-tape head is scanning the same tape cell as M_f's input-tape head. Assume further that M's head on the imaginary input tape is located at cell h ∈ [0, |f(x)| + 1]_Z.
(1) If M makes an input-stationary move, then we simply simulate one step of the behavior of M's storage-tape head, since we can reuse the last produced output symbol of M_f. (2) Assume that the current step is not an input-stationary move and that M moves to cell h+1 on the imaginary input tape and scans this cell. We remember the position of the input-tape head, return to the last position of M_f's input-tape head, and resume the simulation of M_f, using N's auxiliary tape as a work tape, by a series of stationary moves of the input-tape head and the storage-tape head, using both the input tape and the auxiliary tape of N, until M_f produces the (h+1)th output symbol, say, σ. This is possible because the output tape of M_f is write-once. Once σ is obtained, N remembers the positions of M_f's tape heads and moves its input-tape head to restore its location. We then simulate a single step of M on σ by reading the current content of the storage-tape cell together with a stationary move of both the input-tape head and the auxiliary-tape head. Note that the movement of the storage-tape head requires only the symbol σ but does not need any movement of the other tape heads. We then update the tape-head positions. It is not difficult to show that N eventually reaches the same type of halting states as M does in polynomially many steps. Notice that the storage-tape head and the auxiliary-tape head do not work simultaneously. The depth-susceptibility of N comes from that of M since M_f is simulated only when M's storage-tape head reads a symbol not in Γ^(k-1) ∪ Γ^(k). □

We then transform an aux-k-sda into a k-sda_2(5c+2), which mimics the behavior of the aux-k-sda.

Proof. Let k ≥ 2 and let M denote any aux-k-sda that runs in polynomial time using logarithmic space on all inputs of length n ≥ 2. We use M's depth-susceptible input-tape head as the principal tape head of N. We introduce additional tape heads to simulate the behavior of the auxiliary-tape head of M as follows.
As noted in Section 3.1, we assume that the auxiliary tape of M is split into c tracks for a certain constant c > 0 and that M uses at most ⌈log n⌉ cells on the auxiliary tape, where n indicates the input length. We want to construct a polynomial-time k-sda_2(5c+2) N for which L(N) coincides with L(M). Since each track of the auxiliary tape uses the binary alphabet {0, 1}, we can view the content of each track as the binary representation of a natural number. In what follows, we fix one such track. If the track contains a string w of length ⌈log n⌉, we treat it as the binary number (1w^R)_2. We use two tape heads to remember the positions of the input-tape head and the auxiliary-tape head of M. To remember the number (1w^R)_2, we need 5 additional tape heads (other than the storage-tape head). Since there are c tracks, we need 5c+2 heads in total for our intended simulation. Head 1 keeps the tape-head position: whenever M moves the auxiliary-tape head, N moves head 1 as well. Head 2 moves backward to measure the distance l of the auxiliary-tape head from the left end of the auxiliary tape. Using this information, we move head 3 as follows. If the tape head changes 0 to 1 (resp., 1 to 0) on the target track, then we need to move head 3 by 2^l cells to the right (resp., to the left). How can we move one head to cell 2^l from the left endmarker? Following [11], we use three tape heads to achieve this goal as follows. We move head 4 one cell to the right. As head 4 takes one step on the way back to the left endmarker, we move head 5 two cells to the right. We then switch the roles of heads 4 and 5: as head 5 takes one step back to the left endmarker, we move head 4 two cells to the right. If we repeat this process t times, one of the heads indeed reaches cell 2^t. Hence, for the lth run, we move head 3 to cell 2^l. This process requires 3 tape heads. Thus, a total of 5 tape heads is sufficient to simulate the operations on the track content of the auxiliary tape.
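The ping-pong doubling of heads 4 and 5 just described can be sketched in a few lines. The following Python model is an illustration only, not part of the construction: heads are abstracted as integer cell indices and the left endmarker as cell 0, so that t rounds leave one of the two heads at cell 2^t.

```python
# Sketch (following the Hartmanis-style trick cited above): two heads
# ping-pong between the left endmarker (cell 0) and increasing cells so
# that after t rounds one head sits at cell 2^t. Heads are modeled as
# nonnegative integers; each round the resting head advances two cells
# per step while the leading head walks back to the endmarker.

def reach_power_of_two(t):
    h4, h5 = 1, 0                    # head 4 starts one cell to the right
    for _ in range(t):
        if h5 == 0:
            h4, h5 = 0, 2 * h4       # head 5 ends up at double the distance
        else:
            h4, h5 = 2 * h5, 0       # roles switched
    return max(h4, h5)

assert reach_power_of_two(0) == 1
assert reach_power_of_two(5) == 2 ** 5
```

The model makes plain why three heads suffice in the proof: two heads perform the doubling while a third (head 3 above) is parked at the resulting cell 2^l.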
By the above behaviors of the additional 5c tape heads, they are depth-susceptible. □

We next lessen the number of input-tape heads from ℓ+2 to ℓ+1 for any ℓ ≥ 1 by implementing a "counter head" to record the movement of one input-tape head. A counter head is a two-way depth-susceptible tape head along an input tape such that, once this tape head is activated, it starts moving from the left endmarker to the right and comes back to the left endmarker without stopping or reading any input symbol on its way.

Lemma 3.4 Let ℓ ≥ 1. Any k-sda_2(ℓ+2) with a counter head that runs in polynomial time can be simulated by an appropriate polynomial-time k-sda_2(ℓ+1) with a counter head.

Proof. Let ℓ ≥ 1. Let M denote any k-sda_2(ℓ+2) with a counter head running in polynomial time. Among all read-only tape heads, we call the principal tape head head 1 and choose two subordinate tape heads (other than the counter head) as head 2 and head 3. We want to simulate these three tape heads by two tape heads, eliminating one subordinate tape head. We leave all the remaining tape heads unmodified so that they keep working over x. This simulation can be carried out on an appropriate polynomial-time k-sda_2(ℓ+1), say, N. With the use of a and b, let t = |axb|. The associated input to N is x̃. Initially, heads 1-3 are stationed at cell 0. We move heads 2 and 3 to the leftmost a of x̃. Assume that head 2 is originally located at cell i and head 3 at cell j. Such a location pair is expressed as (i, j). We call each block ax_ib of x̃ block i for any i ∈ [t]. For convenience, the left and right endmarkers are respectively called block 0 and block t+1. We want to express the pair (i, j) by stationing a single tape head at the ith symbol of the jth block of N's input tape. We assume that the associated input-tape head of N, say, head h, is located at the ith symbol of the jth block. We force N to hold the two input symbols, say, (σ_i, σ_j), written in the ith and the jth cells of M's input tape. We assume that, in a single step, heads 2 and 3 of M move to a new location pair (i+d_1, j+d_2) for d_1, d_2 ∈ {−1, 0, +1}.
To simulate this single step of M, N needs to move head h to a new location and read the two symbols (σ_{i+d_1}, σ_{j+d_2}). If d_2 = 0, then N simply moves head h in direction d_1. By contrast, if d_2 ≠ 0, then N moves head h in direction d_1, reads its input symbol σ_{i+d_1}, and remembers it using its inner states. As N moves its tape head leftward to the nearest a, N moves the counter head to the right (from the start cell) to remember the value i+d_1. Next, N moves head h to the symbol a in block j+d_2 without moving the counter head. Finally, N moves head h to the right until the counter head comes back to the start cell. After the counter head arrives at the start cell, head h reaches the (i+d_1)th cell in block j+d_2. Next, N reads the input symbol σ_{j+d_2} and remembers (σ_{i+d_1}, σ_{j+d_2}) using its inner states. Note that the tape head on the storage tape never moves during the above process. □

Notice that a k-sda owns only a one-way input-tape head, whereas a k-sda_2(2) uses a two-way input-tape head. We thus need to restrict the two-way head moves of a k-sda_2(2) to one-way head moves. For this purpose, we utilize a counter again, together with the use of the reverse of an input. Let M be any polynomial-time k-sda_2(1) with a counter head. We simulate the two-way movement of M's input-tape head by an appropriate one-way tape head in the following way. Assume that i represents the position of M's input-tape head, and let σ_i and σ_{i−1} denote the tape symbols at cells i and i−1. For simplicity, the input-tape head is called head 1. If M's input-tape head moves to the right or makes a stationary move, then we simulate M's step exactly. In what follows, we consider the case where M's input-tape head moves to the left; that is, the new position of the input-tape head is i−1. By the depth-susceptibility condition of M, the current storage-tape cell contains no symbol in Γ^(k−1) ∪ Γ^(k).
In this case, we move head 1 and the counter head simultaneously to the right until head 1 first encounters b. We continue moving head 1 to the right while we move the counter head back to the left endmarker. We finally shift head 1 to the right cell. Head 1 then reaches σ_{i−1}. □

Next, we show how to eliminate a counter head using the fact that the counter head is depth-stopping.

Lemma 3.6 Let M denote any k-sda_2(2) with a one-way input-tape head and a counter head running in polynomial time. There exists a depth-susceptible k-sda N such that (1) N's input tape is also one-way and (2) L(N) = L(M).

Sudborough made a similar claim, whose proof relies on Galil's argument [7], which uses a stack to store and remove specific symbols to remember the distance of a tape head from a particular input-tape cell. However, since tape cells on the storage tape cannot be modified more than k times, we need to develop a different strategy to prove Lemma 3.6. For this purpose, we use an additional string of the form 1^n to make enough blank space on the depth-k storage tape for future recording of the movement.

Proof of Lemma 3.6. Take any k-sda_2(2) M with a counter head. Since the counter head is depth-susceptible, it suffices to show how to simulate the behaviors of the counter head using a storage tape while the current storage-tape cell holds no symbol from Γ^(k−1) ∪ Γ^(k). In what follows, we describe the simulation procedure. Let x denote any input and set n = |x̃|. Whenever the counter head is activated, it starts at the left endmarker, moves to the right for a certain number of steps, say, i, and moves back to the start cell to complete a "counting" process. To simulate this process on a k-sda N, there are three cases to consider separately. Note that, even in the case of γ ∉ Γ^(k−1) ∪ Γ^(k), if the counter head is not activated, then we simply move the input-tape head as M does.
Using its inner states, N can remember (a) the direction from which the storage-tape head came and (b) the contents of the currently scanned cell and its left and right adjacent tape cells (if any). Assume that γ_1γ_2γ_3 is the content of three neighboring cells, the middle of which is being scanned by M's storage-tape head. We partition the storage tape into a number of "regions". A region consists of 2k blocks, each of which contains n cells. Each region is used to simulate one run of the counter head and basically holds the information on one storage symbol. Two regions are separated by one separator block of n cells. We call a tape cell a representative if it holds the information on the tape symbol stored in a storage-tape cell of M. A block is called active if it contains a representative, and all other blocks are called passive. In particular, we call a block consumed if it contains B but no representative. In a run of the procedure described below, we maintain the situation that there is at most one active block in a region. Moreover, we maintain the following condition as well. (*) Between γ_1 and γ_2 (resp., between γ_2 and γ_3), all symbols appearing in this tape region of N's storage tape are of the blank form B^(c). We use a new storage alphabet consisting of symbols of the form γ^(0,e_2) and γ^(e_1,0) for e_1, e_2 ∈ [0, k]_Z, where the parameter e_1 (resp., e_2) indicates that there are e_1 (resp., e_2) consumed blocks in the area of the region to the left (resp., right) of the currently scanned cell. Assume that M is scanning γ_2. (1) Assume that M's storage-tape head comes from the left and that the counter head is not activated but M writes γ_new and moves its storage-tape head in direction d (d ∈ {−1, +1}). In this case, N overwrites γ_2^(e_1,0) with γ_new^(e_1,0) and moves its storage-tape head in direction d. Using 1^n as well as the value e_1, N moves to the right and finds a border to the neighboring region. If N finds a representative in this neighboring region, then N stops.
If N reaches the right border of the neighboring region without finding any representative, then this region must represent B, and N returns and stops at the center of the neighboring region. If M's storage-tape head comes from the right instead, we use γ^(0,e_2) symmetrically. (2) Consider the case where M's storage-tape head sits at a cell that has never been visited before. Note that this cell is blank and all cells located in its right area are also blank. (a) If M writes γ_new and moves its storage-tape head to the left, then we need to secure enough space for future executions of (3)-(5) because the storage-tape head must move around to mark certain cells. Assume that N is scanning γ_2^(e_1,0). To simulate the counter-head moves, N writes B̂^(1) as a marker, moves its storage-tape head for i steps, comes back to B̂^(1), moves to the right for n steps, writes γ_new^(e_1+1,0), and then moves to the left to search for a representative in the left neighboring region as in (1). (b) In contrast, if M moves to the right, then we further need to produce a new region. For this purpose, N further moves to the right for (k−e_1−1)n cells to find a new border, continues moving for kn cells to find the center of this new region, and finally stops. (3) Consider the case where M's storage-tape head reads a non-blank tape symbol γ_2 ∈ Γ^(a_2) with 0 < a_2 < k, writes a tape symbol γ_new over it, and moves in direction d ∈ {−1, +1}. (a) Assume that M had moved to γ_2 from the left. See Fig. 1(1). Notice that e_2 < k−1. At present, N is assumed to be scanning γ_2^(e_1,0). In this case, N remembers γ_2^(e_1,0) in its inner states, modifies it to B̂^(a_2+1) (∉ Γ^(k)), and moves its tape head for i steps to the right as the counter head does, changing every encountered symbol of the form B^(c) with c ∈ [0, k]_Z to B^(min{c+1,k}) on its way. When N has made i steps, it makes a left turn, returns to B̂^(a_2+1), and writes B. This mimics the back-and-forth movement of the counter head.
The tape head again starts moving rightwards, changing B^(c) for c ∈ [0, k−1]_Z to B^(c+1) for exactly n steps, and it finally writes γ_new^(e_1+1,0). This is possible because the input-tape head reads 1^n and (*) is satisfied. Finally, if d = +1, then N moves to the right looking for a representative of the right region. On the contrary, when d = −1, N moves to the left looking for a representative of the left region. (b) Assume that M had moved to γ_2 from the right. This case is treated symmetrically to (a). See Fig. 1(2). (4) Consider the case where M's storage-tape head reads the left endmarker γ_2 = ⊢ and moves in direction d = +1. In this case, N behaves similarly to (1) except that its tape head writes a new marker ⊢̂ since γ_new = ⊢. (5) Consider the case where M's storage-tape head reads a symbol γ in Γ^(k−1) ∪ Γ^(k). We then move the storage-tape head in direction d. In this case, N's storage-tape head is at the center of the current region. Since we do not need to simulate the counter head, we write B (whenever γ ∈ Γ^(k−1)) and follow the procedure of moving to the neighboring region as in (1). □

Finally, we combine all the lemmas (Lemmas 3.2-3.6) and verify Theorem 3.1.

Proof of Theorem 3.1. Let k ≥ 2. The implication (1)⇒(2) is shown as follows. Take any language L in LOGkSDA over an alphabet Σ. There exist a function f : Σ^* → Σ_2^* in FL for an appropriate alphabet Σ_2 and a depth-susceptible k-sda M working over Σ_2 such that, for any string x ∈ Σ^*, if x ∈ L, then M accepts f(x); otherwise, M rejects f(x). By Lemma 3.2, we can obtain a log-space aux-k-sda N that recognizes {x ∈ Σ^* | f(x) ∈ L(M)} in polynomial time. Lemma 3.3 obviously leads to the implication (2)⇒(3). Finally, we want to show that (3) implies (1). Given a language L, we assume that there is a polynomial-time k-sda_2(ℓ) M recognizing L for a certain number ℓ ≥ 2. We transform this k-sda_2(ℓ) into another k-sda_2(ℓ+1) M′ by providing a (dummy) counter head.
We repeatedly apply Lemma 3.4 to reduce the number of input-tape heads down to 2. Lemma 3.5 then implies the existence of a k-sda_2(1) N with a one-way input-tape head that correctly recognizes L^rev in polynomial time. By Lemma 3.6, we further obtain a depth-susceptible k-sda K that can recognize L in polynomial time. Since L consists of strings of the form x̃, it suffices to set f(x) = x̃. By the definition of x̃, we can compute this function f using log space. This concludes that L belongs to LOGkSDA. □

As a significant feature, we intend to prove the existence of L-m-complete languages in LOGkSDA for each k ≥ 2. For this purpose, we first construct a universal simulator that can simulate all k-sda_2's by properly encoding k-sda_2's and their inputs. We further force this universal simulator to be a k-sda_2(3). Sudborough [23] earlier proposed, for every number m ≥ 2, the special language L_0^(m), which is L-m-complete for DCFL and thus for LOGDCFL because LOGDCFL is closed under L-m-reductions. Sudborough discovered a "tape-hardest" language, which literally encodes transitions of deterministic pushdown automata. Sudborough's success comes from the fact that the use of one-way and two-way deterministic pushdown automata makes no difference in formulating LOGDCFL. For kSDA, we propose the following decision problem (or a language) MEMB_k. Recall that a decision problem is identified with its associated language.

Membership kSDA Problem (MEMB_k):
• Instance: an encoding ⟨M, x⟩ of a k-sda M with alphabet Σ and an input x.
• Question: does M accept x?

A key to the introduction of MEMB_k is a generic scheme of encoding both a k-sda M and an input x into a single string ⟨M, x⟩ over a fixed alphabet, not depending on the choice of M and x. In Section 4.2, we will explain such an encoding scheme in detail.

Theorem 4.1 Let k ≥ 2. 1. The language MEMB_k belongs to LOGkSDA. 2. MEMB_k is L-m-hard for kSDA and thus for LOGkSDA.
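For intuition about the membership question behind MEMB_k, the following toy Python model simulates a one-way machine with a depth-k storage tape whose cells may be rewritten during their first k accesses and read as the blank B afterwards. This is our own illustration under that access discipline, not the paper's encoding scheme of Section 4.2; the transition table `delta` below is hypothetical.

```python
# Toy sketch of the depth-k access discipline: each storage cell keeps
# an access count; once the count reaches k, the cell reads as 'B' and
# further writes are discarded ("blank forever"). The input head is
# one-way; '$' stands in for the right endmarker.

def accepts(delta, q0, accepting, x, k, max_steps=10_000):
    q, pos, head = q0, 0, 0
    tape = {}                                # cell -> (symbol, accesses)
    for _ in range(max_steps):
        if q in accepting:
            return True
        sym, cnt = tape.get(head, ('B', 0))
        if cnt >= k:
            sym = 'B'                        # cell is blank forever
        inp = x[pos] if pos < len(x) else '$'
        if (q, inp, sym) not in delta:
            return False                     # no applicable move: reject
        q, new_sym, d_in, d_st = delta[(q, inp, sym)]
        if cnt < k:
            tape[head] = (new_sym, cnt + 1)  # write counts as one access
        pos += d_in                          # one-way: d_in in {0, +1}
        head += d_st
    return False

# Hypothetical machine: copy the input onto the storage tape, accept at '$'.
delta = {('q', c, 'B'): ('q', c, +1, +1) for c in 'ab'}
delta[('q', '$', 'B')] = ('acc', 'B', 0, 0)
assert accepts(delta, 'q', {'acc'}, 'abab', k=2)
```

A universal simulator in the sense of Theorem 4.1 does exactly this kind of table lookup, except that the table itself arrives as part of the input string ⟨M, x⟩ and must be consulted under the same depth-k restriction.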
As an immediate consequence, there exists a concrete, generic L-m-complete language for LOGkSDA for each index k ≥ 2. Hereafter, we want to describe the desired encoding ⟨M, x⟩ of a k-sda M and an input x. We need to pay special attention to how to encode a pair of M and x so that we can easily retrieve them from the encoded string ⟨M, x⟩ using only depth-k storage tapes. Since the storage usage of k-sda's is quite different from that of deterministic pushdown automata, our encoding scheme is therefore quite different as well. The transitions of M take the following forms.

(1) δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h3, d_1, +1) (moving to the right) if e < k is even and e′ = min{e+1, k}.
(2) δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h3, d_1, −1) (left turn) if e < k is even and e′ = min{e+2, k}.
(3) δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h3, d_1, −1) (moving to the left) if e < k is odd and e′ = min{e+1, k}.
(4) δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h3, d_1, +1) (right turn) if e < k is odd and e′ = min{e+2, k}.
(5) δ(p_h2, σ_h5, ⊢) = (p_h4, ⊢, d_1, +1).
(6) δ(p_h2, σ_h5, γ_h1) = (p_h4, B, 0, d_2) if d_2 ≠ 0 and either e = k or e = k−1.
(7) δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h1, d_1, 0) (stationary move) with d_2 ∈ {−1, +1}.

Since a k-sda uses arbitrary sets Q, Σ, and Γ, we first need to express them using only fixed alphabets independent of M and x. We encode an input x = x_1x_2···x_n of length n symbol by symbol as follows. Let X_0 ≡ ⊢̃#, X_{n+1} ≡ ⊣̃#, and X_l ≡ x̃_l# for any position l ∈ [n]. These strings will later be combined with an encoding of the transitions of M. Notice that all information stored in a storage tape will be unwillingly modified as a tape head traverses the tape. Since the input tape is read only once from left to right, we need to encode multiple copies of all the transition rules. We simulate all storage-tape operations of M on a storage tape, and we also remember M's current inner state.
We translate Items (1)-(7) into strings ũ_{h,e}, which are defined below, where the superscripts "lt" and "rt" respectively refer to a left turn and a right turn. In what follows, for convenience, the translated forms p̃, σ̃, and γ̃ are assumed to be already available, and d_1 ∈ {0, +1}. Items (1′)-(7′) directly correspond to Items (1)-(7) given above. The direction d_2 of the storage-tape head is expressed by the order of the middle blocks because the storage-tape head moves to the left and to the right to access the symbols that the tape head has already modified earlier. For any j ∈ {h, h′, h″} and any e ∈ {0, 1, 2, 3}, the segment γ_h1#p_h2# (as well as #p_h2#) is called a receptor of ũ_{j,e} and the segment 001#(γ_h3#)^θ#p_h4# is called a residue of ũ_{j,e}. In the end, we set P^(+) to be the concatenation of the relevant strings ũ_{j,e}. The other three strings P^(lt), P^(−), and P^(rt) are defined similarly. Combining these four strings, we define P to be P^(+)#P^(lt)#P^(−)#P^(rt)#. Let S denote the string of the form p̃_1#p̃_2#···p̃_{m_1}#. Finally, we define ⟨M, x⟩ as a string of the form 1^t#S#X_0P^θ2^{kn}X_1P^θ2^{kn}···X_{n+1}P^θ2^{kn}#, where t = (⌈log c⌉+1)θ. Note that the length of ⟨M, x⟩ is bounded by O((|Q_M||Γ_M||x|)^4). It is not difficult to see that the encoding ⟨M, x⟩ is uniquely obtained from M and x. Our goal in this subsection is to provide the proofs of Theorem 4.1 and Lemma 4.3. For this purpose, we first present, given an arbitrarily encoded string ⟨M, x⟩, how to simulate M on x precisely using a depth-k storage tape. We first introduce the necessary terminology. A storage-tape content refers to the sequence of storage-tape symbols written on the storage tape from the start cell to the rightmost non-2 symbol, simply ignoring the tape cells that are not yet visited.
For our convenience, we call a string ξ of the form 0^b σ_0σ_1···σ_{l−1}σ̂_lσ_{l+1}···σ_m a storage configuration of M if M is scanning cell b of the input tape, l is the location of the storage-tape head of M, σ_0 equals ⊢, σ̂_l is of the form (1^h, σ_l), and the string σ_0···σ_{l−1}σ_lσ_{l+1}···σ_m is a storage-tape content of M. Let x = x_1x_2···x_n denote any input of length n. The idea behind the following proof of Lemma 4.3 is that the storage tape keeps the information on the current surface configuration. To simulate the next move, we go through all the transitions given on the input tape to locate the target transition to be taken at the next move, by way of removing any cancelling pair. If we successfully delete the old surface configuration, we write a new surface configuration into an "appropriately chosen" new section of the storage tape. Since our storage tape is of depth k, we cannot place a new symbol at an arbitrary location.

Proof of Lemma 4.3. Let k ≥ 2. We define a universal k-sda(3) simulator working for all k-sda's. Let M = (Q_M, Σ, {Γ_M^(e)}_{e∈[0,k]_Z}, δ, p_1, Q_{M,acc}, Q_{M,rej}) be any k-sda and let x = x_1x_2···x_n be any input of length n given to M. Additionally, we set x_0 = ⊢ and x_{n+1} = ⊣. Let us recall the notations Σ̃, Σ̂, Γ̃^(e), Γ̂, m_1, m_2, and θ defined in Section 4.2, associated with M. Assume that ⟨M, x⟩ has the form 1^t#S#X_0P^θ2^{kn}X_1P^θ2^{kn}···X_{n+1}P^θ2^{kn}#. To represent each cell of the depth-k storage tape of M, we use a "block" of k "sections". Each section is a string of the form (σ#)^θ or B^t, and an entire block has the form (B^t)^{g_1}(σ#)^{θg_2}(B^t)^{g_3}, where t = (⌈log c⌉+1)θ and g_1+g_2+g_3 = k with g_1, g_2, g_3 ∈ N. The non-blank string (σ#)^θ is called the core of this block if it exists. Let Y denote a series Y_0Y_1···Y_{n+1} of such blocks, except that Y_0 encodes ⊢. For simplicity, we say that a block is currently accessed if the storage-tape head is stationed in the block.
Similarly, p̃_h is currently accessed if Head 2 is reading p̃_h. We first decode this sequence Y into a storage configuration y_0y_1···y_n as follows. If Y_l is empty, then we automatically set y_l = 2. If Y_l is blank, then we set y_l = B. If the core of Y_l is (σ#)^θ and we currently access Y_l and p̃_h, then we set y_l = (1^h, σ), which indicates that M is in inner state p_h and σ appears in the cell at M's storage-tape head location l. In contrast, if the core of Y_l equals (σ#)^θ but Y_l is not currently accessed, then we set y_l = σ. In what follows, we want to demonstrate that ⟨M, x⟩ correctly "induces" the computation of M on x by induction on the number r of steps. To express Y at step r, we write Y^(r). Similarly, we write y_l^(r) for y_l at step r. Initially, we define Y^(0) so that it represents the initial storage configuration of M on x. Our goal is to verify that the following statement (*) is true.

(*) For each index r ≥ 1, we read X_rP^θ2^{kn} and produce Y^(r) = Y_0^(r)Y_1^(r)···Y_{n+1}^(r) on the depth-k storage tape, and the string y^(r) ≡ y_0^(r)y_1^(r)···y_n^(r) matches the storage configuration of M on x at step r.

If (*) is true, then (*) is also true at the time just after we process X_{n+1}P^θ2^{kn}#. Since the storage configuration of M then contains a halting state, we check whether the currently scanned block contains p̃_2#, which indicates that M has entered an accepting state. If so, then we accept the input ⟨M, x⟩; otherwise, we reject it. Hereafter, we focus on proving (*) by induction on r ∈ N^+. We call the input-tape heads of our simulator Head 1, Head 2, and Head 3. Head 1 covers the string 1^t, and Head 2 points to the current inner state in the list S. (Step r = 1) Note that X_0 ≡ ⊢̃#, X_{n+1} ≡ ⊣̃#, and X_l ≡ x̃_l# for any index l ∈ [n]. At step 1, since M applies a transition of the form δ(p_1, ⊢, ⊢) = (p_h4, ⊢, d_1, +1) in Item (5), the tape head moves from cell 0 to cell 1, and thus the storage configuration of M on x at step 1 is obtained by applying this transition. We search the inside of P for the corresponding string ũ; since this portion belongs to the input ⟨M, x⟩, we cannot modify it. We then move the storage-tape head to the right to access 2.
We then define Y^(1) accordingly. (2) Consider the case where M's transition is of the form δ(p_h2, σ_h5, γ_h1) = (p_h4, γ_h3, d_1, −1) with σ_h5 ∈ {x_s, ε}, γ_h1 ∈ Γ^(e) with e < k, and γ_h3 ∈ Γ^(min{e+2,k}). (i) Consider the case where min{e+2, k} < k. Assume that σ_h5 = x_s. From P, we look for ũ_{h,0}^(lt) (≡ σ_h5#γ_h1#p_h2#100#(γ_h3#)^θ#p_h4#1^{d_1+1}0^{1−d_1}#) by the receptor search, using the storage-tape head over the first encountered section (γ_h1#)^θ in Y_l^(r−1) and Head 2 over p_h2. We then discover the residue 100#(γ_h3#)^θ#p_h4#1^{d_1+1}0^{1−d_1}#. We then overwrite all the remaining non-blank sections (γ_h1#)^θ with (γ_h3#)^θ by moving to the right. If we reach the rightmost cell of Y_l^(r−1), then we make a left turn and return to the rightmost cell of Y_{l−1}^(r−1). We then obtain y_l^(r) ≡ γ_h3 and y_{l−1}^(r) accordingly. For all the other y_j^(r), we set y_j^(r) = y_j^(r−1). The resulting string y^(r) clearly matches the storage configuration of M on x at step r. The case of σ_h5 = ε is handled similarly except that we do not need to read X_s. (ii) In the case of min{e+2, k} = k, since γ_h3 must be B, we write B^{⌈log c⌉+1} instead of γ_h3#. Hence, every section of Y_l^(r) becomes blank. This completes the proof. □

To close this section, we still need to prove Theorem 4.1.

Proof of Theorem 4.1. (1) This directly comes from the existence of a universal simulator for all k-sda's, guaranteed by Lemma 4.3. (2) Let M = (Q, Σ, {Γ^(e)}_{e∈[0,k]_Z}, δ, q_0, Q_acc, Q_rej) be any k-sda. We want to show that L(M) is L-m-reducible to MEMB_k. Take any input x and consider the encoding ⟨M, x⟩ of M and x. By the definition, ⟨M, x⟩ has the form 1^t#S#X_0P^θ2^{kn}X_1P^θ2^{kn}···X_{n+1}P^θ2^{kn}#. Since Q, Σ, and Γ are all fixed finite sets independent of x, the construction of ⟨M, x⟩ from x requires polynomial time using only logarithmic space. Let f_M(x) = ⟨M, x⟩ for any x ∈ Σ^*. By the definition of MEMB_k, it follows that M accepts x iff f_M(x) is in MEMB_k.
Therefore, we conclude that L(M) is L-m-reducible to MEMB_k via f_M. Since LOGkSDA is closed under L-m-reductions, MEMB_k is also L-m-hard for LOGkSDA. □

We have discussed the model of depth-susceptible k-sda's in Sections 3-4. We now turn our interest to another model, that of depth-immune k-sda's. The computational complexity of kSDA_imm is unclear at this moment. Concerning the complexity of DCFL, Cook [4] earlier demonstrated that DCFL is included in SC^2. In what follows, we intend to present a non-trivial upper bound on the complexity of kSDA_imm: kSDA_imm ⊆ SC^k for any k ≥ 2. Since kSDA_imm includes DCFL, this upper bound on kSDA_imm significantly extends Cook's result.

Theorem 5.1 For any integer k ≥ 2, kSDA_imm ⊆ SC^k. Thus, ωSDA_imm ⊆ SC follows.

Since SC^k is closed under L-m-reductions, Theorem 5.1 instantly yields the following corollary.

Corollary 5.2 For any k ≥ 2, LOGkSDA_imm ⊆ SC^k.

As another immediate consequence of Theorem 5.1, we obtain a complexity upper bound for Hibbard's deterministic k-limited automata (k-lda's, for short) because k-lda's (with the blank-skipping property [25]) can be easily simulated even by depth-immune k-sda's by pretending that all input symbols are written on a storage tape. Notice that, in the past literature, no upper bound except for CFL has been shown for Hibbard's language families [12]. Thus, this is the first time that non-trivial upper bounds have been obtained for those families. To verify Theorem 5.1, we employ a divide-and-conquer argument to simulate the behaviors of each depth-immune k-sda in polynomial time using only O(log^k n) space. Our simulation procedure is based on [1] but expanded significantly to cope with the more complex moves of k-sda's. We first need to lay out a basic framework to describe the desired simulation procedures for k-sda's. Let M = (Q, Σ, {Γ^(e)}_{e∈[0,k]_Z}, δ, q_0, Q_acc, Q_rej) denote any depth-immune k-sda. We fix an arbitrary input x ∈ Σ^* and set n = |x|.
Since M halts in polynomial time, we choose a polynomial p that upper-bounds the running time of M. To simulate the behavior of M on x, we introduce the important notion of a "marker". A computation of a depth-immune k-sda is characterized by a series of markers. A marker C is a sextuple (q, l_1, l_2, σ, r, t) that indicates the following circumstance: at time t with section time r (which will be explained later), M is in inner state q, its input-tape head is located at cell l_1, and the storage-tape head is at the l_2th cell, which contains symbol σ. To express the entries of C, we use the following notations: state(C) = q, in-loc(C) = l_1, st-loc(C) = l_2, symb(C) = σ, sectime(C) = r, and time(C) = t. Let M_x denote the set of all markers of M on input x. To emphasize "l_2", in addition, we occasionally call C an l_2-marker if st-loc(C) = l_2. Similarly, we call C a t-marker if time(C) = t. Assume that C_t has the form (q, l_1, l_2, σ, r, t) and δ has a transition of the form δ(q, x_{(l_1)}, σ) = (p, τ, d_1, d_2) with d_1, d_2 ∈ {−1, 0, +1}. We then set next-state(C_t) = p, next-loc(C_t) = l_2 + d_2, and next-symb(C_t) = τ. To obtain the next marker C_{t+1} (at time t+1) from C_t, we also need the most recent (l_2+d_2)-marker C′ with time(C′) < t. If C′ is obtained together with ξ′ = next-symb(C′), then the sextuple (p, l_1+d_1, l_2+d_2, ξ′, r+|d_2|, t+1) is the desired marker C_{t+1}. We remark that, when ξ′ is known, we do not need to use C′ to compute C_{t+1}. We then introduce additional critical notions. Let C = (q, l_1, l_2, σ, r, t) and C′ = (q′, l_1′, l_2′, σ′, r′, t′) be two arbitrary markers of M on x. We say that C′ is left-visible from C if (i) C′ is a marker with l_2′ < l_2 and t′ < t and (ii) there is no marker C̃ satisfying both st-loc(C̃) ≤ l_2′ and t′+1 ≤ time(C̃) < t. Moreover, C′ is the left-cut of C if C′ is left-visible from C and t′ is the largest such number with t′ < t (i.e., C′ is the most recent left-visible marker).
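The marker-update rule just stated can be written out directly. The following Python sketch is an illustration of that rule only: markers are plain 6-tuples, `delta` is a hypothetical transition table, and the symbol ξ′ found at the destination cell is supplied by the caller, as the remark above permits.

```python
# Sketch of the marker-update rule. A marker is the 6-tuple
# (q, l1, l2, sigma, r, t); delta maps (state, input_symbol, sigma) to
# (p, tau, d1, d2). The destination-cell symbol xi (i.e., next-symb of
# the most recent (l2+d2)-marker) is passed in directly.

def next_marker(C, input_symbol, delta, xi):
    q, l1, l2, sigma, r, t = C
    p, tau, d1, d2 = delta[(q, input_symbol, sigma)]
    # section time advances exactly when the storage-tape head moves
    return (p, l1 + d1, l2 + d2, xi, r + abs(d2), t + 1)

delta = {('q0', 'a', 'B'): ('q1', 'X', +1, +1)}
C = ('q0', 3, 5, 'B', 2, 9)
assert next_marker(C, 'a', delta, 'B') == ('q1', 4, 6, 'B', 3, 10)
```

Note how `r + abs(d2)` reflects that storage-stationary moves (d_2 = 0) leave the section time unchanged, which is exactly the distinction Subroutine A's case analysis below relies on.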
If the storage-tape head moves to the left from cell l_2 at time t, then the left-cut of C_t contains the latest content of cell l_2−1. This makes it possible to determine the renewed content of cell l_2−1 by applying δ directly. Symmetrically, we define the notions of right-visibility and right-cut. Recall that the storage-tape head may make stationary moves. To distinguish these from any other stationary move of the input-tape head, we call the former storage-stationary moves. A 0-section consists of either (a) a right/left turn (i.e., a leftmost/rightmost point) or (b) a non-storage-stationary move to the right/left followed by a (possibly empty) series of consecutive storage-stationary moves. For any d ≥ 0, a (d+1)-section is the union of two consecutive d-sections. (Figure 4 depicts a storage content history; assuming that the underlying k-sda makes no storage-stationary move, runtime matches section time, or "sectime".) The section time of C is the total number of 0-sections before C (not including the 0-section containing C). Given a set of markers, C is said to be the leftmost marker (resp., the rightmost marker) if st-loc(C) is the smallest (resp., the largest) among the markers in the given set. The leftmost (resp., rightmost) marker in each d-section S is called the left-representative (resp., right-representative) of S. Given a marker C, a section S is called current for C if S contains C; S is completed for C if C appears after the end of S; and S is last-left-good for C if S is the latest completed section whose left-representative is left-visible from C. In a similar way, we define the notion of "last-right-good". Let us introduce a subroutine, called A, which simulates one step of M on input x. Given x, we set e_x = ⌈log |x|⌉. Let C_t = (q, l_1, l_2, σ, r, t) denote an arbitrary marker of M on x. To explain the notion of a contingency tree, we first define a contingency list at time t, which consists of the four items described below.
Let d ∈ [0, ex]Z. (a) L(l)last(d, l2, t), which consists of all markers C satisfying the following requirement: there exist a d-section S and a (d + 1)-section S' such that (i) C is the left-representative of S, (ii) S is enclosed in S', and (iii) S' is the last-left-good section for Ct. (b) L(l)cur(d, l2, t), which consists of all markers C satisfying the following requirement: there exist a d-section S and a (d + 1)-section S' such that (i) C is the left-representative of S, (ii) S is enclosed in S', (iii) S' is current for Ct, and (iv) C is left-visible from Ct. (c) L(r)last(d, l2, t) and L(r)cur(d, l2, t), which are defined similarly to (a) and (b) by simply replacing "left" with "right". Notice that, by the definition, the left-cut (resp., the right-cut) of Ct is the latest (i.e., the most recent) marker in L(l)(0, l2, t) (resp., L(r)(0, l2, t)). For every index d ∈ [0, ex]Z and any a ∈ {l, r}, let L(a)(d, l2, t) = L(a)last(d, l2, t) ∪ L(a)cur(d, l2, t) and set L(e1, e2, l2, t) = (∪_{d∈[0,e1]Z} L(l)(d, l2, t)) ∪ (∪_{d∈[0,e2]Z} L(r)(d, l2, t)) for any pair e1, e2 ∈ [0, ex]Z. We always assume that all markers in L(e1, e2, l2, t) are enumerated according to their time. This contingency list L(e1, e2, l2, t) is said to be linked to Ct. A 2-contingency tree at time t is a two-node tree whose root is Ct and whose unique child is the contingency list linked to Ct. A (k + 1)-contingency tree at time t is a leveled, rooted tree whose top 2 levels form a 2-contingency tree at time t and in which each t'-marker (except for Ct) appearing in this 2-contingency tree is the root of another k-contingency tree at time t' with t' < t. See Fig. 3. In particular, C0 links itself to a unique empty contingency list. To describe a k-contingency tree at time t, we use the notation D(k, e1, e2, l2, t). The principal contingency list of such a tree refers to the node attached directly to the root Ct of the tree. Lemma 5.4 The k-contingency tree D(k, e1, e2, l2, t) contains only O(log^{k−2} n) markers, where n = |x|. Proof. Let n = |x|.
Each contingency list inside D(k, e1, e2, l2, t) has only O(log n) markers because 0 ≤ e1, e2 ≤ ex = log n. Each node in D(k, e1, e2, l2, t) except for the root is a contingency list, and there are k − 1 levels of such nodes in the tree. Since the second level of the tree has only one node, D(k, e1, e2, l2, t) must contain O(log^{k−2} n) markers. 2 We will describe in Section 5.3 the desired procedure to simulate a depth-immune k-sda in polynomial time using only O(log^k n) space. The core of this procedure is the following subroutine, which manages k-contingency trees to recover the past behaviors of a k-sda and to determine its next moves. Subroutine A: An input is of the form (Ct, dv, e1, e2, D(dv, e1, e2, l2, t)) with Ct = (q, l1, l2, σ, r, t) ∈ Mx, l2' = next-loc(Ct), dv ∈ [0, k]Z, and e1, e2 ∈ [0, ex]Z. If dv ≥ dv(Ct+1) holds for the (t + 1)-marker Ct+1 = (q', l1', l2', σ', r', t + 1), then A returns (Ct+1, dv, e1, e2, D(dv, e1, e2, l2', t + 1)) by making a series of recursive calls to itself. Otherwise, it may return anything. Hereafter, we describe the behavior of Subroutine A in detail. [Description of Subroutine A] 1. Consider the case where the storage-tape head stays still (i.e., makes a storage-stationary move), that is, l2' = l2. Define r' = r. Since we do not need to alter the contingency tree, we automatically set L(l)(d, l2, t + 1) = L(l)(d, l2, t) and L(r)(d', l2, t + 1) = L(r)(d', l2, t) for any d ∈ [0, e1]Z and d' ∈ [0, e2]Z in the principal contingency list of the contingency tree D(dv, e1, e2, l2, t + 1). We further compute Ct+1 from Ct directly by applying δ. 2. Consider the case where the storage-tape head moves to the right, that is, l2' = l2 + 1. Define r' = r + 1. (a) [list L(l)(d, l2 + 1, t + 1) with d ∈ [0, e1]Z] Consider the principal contingency list L(e1, e2, l2, t) of D(dv, e1, e2, l2, t).
We compute L(e1, e2, l2 + 1, t + 1) as follows. Take the maximum numbers d0 ∈ [0, e1]Z and d2 ∈ [0, e1]Z satisfying r + 1 ≡ 0 (mod 2^d0) and r ≡ 0 (mod 2^d2), respectively. For each number i ∈ [0, max{d0, d2} − 1]Z, we inductively define A0 = L(l). For any i ∈ [d0 + 1, e1]Z and any subscript b ∈ {cur, last}, we set Lb(d, l2, t). We update all the others without changing the content. In the example of t = 8 in Fig. 5, we have r + 1 = 9, d0 = 0, and d2 = 3. (iii) For all the remaining "nodes" in the contingency tree D(dv, e1, e2, l2, t), we also update them according to the above changes. (b) [lists L(r)(d', l2 + 1, t + 1) with d' ∈ [0, e2]Z] This process is obtained from Item 3(a) by simply replacing "l" with "r", "left" with "right", "l2 − 1" with "l2 + 1", "d" with "d'", and "e1" with "e2". (c) [marker Ct+1] The marker Ct+1 is calculated as follows. It suffices to compute the left-cut (if any) to determine Ct+1 from Ct. (i) Assume that dv ≥ 0. If there is a left-cut C' of Ct at a time < t, either stored inside D(dv, e1, e2, l2, t) or obtained by 2(b), then the desired marker Ct+1 is directly calculated from Ct and symb(C') by applying δ. Otherwise, if there is no right-representative at a location ≥ l2 at a time < t inside D(dv, e1, e2, l2, t), then we know that the storage-tape head has never visited cell l2 + 1, and thus cell l2 + 1 still contains 2. By setting σ' = 2, we compute the (l2 + 1)-marker Ct+1 from Ct alone by δ. (ii) If dv < 0, then cell l2 + 1 must be already blank B. Let σ' = B and compute Ct+1 from Ct alone by applying δ. 3. Consider the case where the storage-tape head moves to the left, that is, l2' = l2 − 1. Define r' = r + 1. (a) [list L(l)(d, l2 − 1, t + 1) with d ∈ [0, e1]Z] Consider the principal contingency list L(e1, e2, l2, t) of D(dv, e1, e2, l2, t) and compute L(e1, e2, l2 − 1, t + 1) as follows.
Take the maximum number d0 in [0, e1]Z satisfying r + 1 ≡ 0 (mod 2^d0) and define d1 to be the minimal number in [0, e1]Z satisfying |L(l)(d1, l2, t)| ≥ 2. If there is no such d1, then we automatically set d1 = e1. (i) For each index d ∈ [0, d1]Z, we reset L(l)(d, l2, t) to be L(l)(d, l2, t) − {C'}, where C' is the newest marker in L(l)(d, l2, t) according to time. The contingency tree D(dv, e1, e2, l2, t) is thus updated. If d1 = 0, then we define L(l)(d, l2 − 1, t + 1) = L(l)(d, l2, t) for any d ∈ [0, e1]Z. In the example of t = 18 in Fig. 6, we have r + 1 = 19 and d0 = d1 = 0. Assume otherwise. If the left-cut of Ct+1 still remains in D(dv, e1, e2, l2, t), then we set C' to be this left-cut. This case is depicted as the example of t = 15 in Fig. 6 with r + 1 = 16, d0 = 4, and d1 = 0. We then obtain C' = C13. In the case where there is no left-cut of Ct+1 stored in D(dv, e1, e2, l2, t), we choose the latest marker C'' from L(l)(d1, l2, t). Let l'' = st-loc(C'') and t'' = time(C''). Since d1 > d0, C'' is the left-representative of a completed d1-section. See the example of t = 16 in Fig. 6 with r + 1 = 17, d0 = 0, d1 = 2, and C'' = C8. We then compute the left-cut C' of Ct+1, which appears in certain last-left-good d-sections with d < d1, where st-loc(C') = l2 − 1 and t'' < time(C') < t. This is done by recursively calling A as follows. Starting from C'' with D(k, d1 − 1, e2, l'', t''), we inductively generate a pair of a marker C̃ and a tree D(dv − 1, d1 − 1, e2, l̃, t̃) with t̃ = time(C̃) and l̃ = st-loc(C̃), and run A(C̃, dv − 1, d1 − 1, e2, D(dv − 1, d1 − 1, e2, l̃, t̃)) to obtain the next marker C̃' and its contingency tree D(dv − 1, d1 − 1, e2, l̃', t̃ + 1) with l̃' = st-loc(C̃'), until we reach C'. Note that the final recursive call must be of the form A(C̃, dv − 1, d1 − 1, e2, D(dv − 1, d1 − 1, e2, l2 − 1, t̃)) with l2 − 1 = st-loc(C̃). In the example of t = 16 in Fig. 6, C' becomes C10. (ii) We then set L(l)last(d1, l2 − 1, t + 1) = L(l)last(d1, l2, t) ∪ Bd1.
In the example of t = 16 in Fig. 6, we obtain t'' = 10 because of C' = C10. (iii) In the case of d0 > d1, we further define E0 = ∅ and Ei+1 = L(l)cur(i, l2, t) ∪ {C̃i} for any i ∈ [d1 + 1, d0 − 1]Z, where C̃i is the oldest marker in Ei. We define L(l)last(i, l2 − 1, t + 1) = Ei and L(l)cur(i, l2 − 1, t + 1) = ∅ for any i ∈ [d1 + 1, d0 − 1]Z, and L(l)cur(d0, l2 − 1, t + 1) = Ed0. We update all the others without changing the content of the old contingency lists L(l)(d, l2, t). This process is exemplified by the case of t = 15 in Fig. 6, where r + 1 = 16, d0 = 4, and d1 = 0. In the simplest case, we simply shift all entries from current sections to last-left-good sections. Assume otherwise. In this case, the newest marker is the left-cut and is removed. A series of recursive calls helps us find the left-cut C' of Ct+1. Up to d1 − 1, we move all entries of current sections to last-left-good sections by first adding the left-cut C' and then shifting the oldest markers in d-sections to (d + 1)-sections. In the case of d0 > d1, for all d ≤ d0, any d-section begins at section time r + 1. We move all entries in last-left-good sections to current sections by shifting the oldest markers in d-sections to (d + 1)-sections. Items 2(b) and 3(b) are handled symmetrically. 2 Claim 2 If L(a)(d, l2, t + 1) is correct and Ci is correct for all i ∈ [0, t]Z, then Ct+1 is correct, where a ∈ {l, r}. Proof. We need to check Items 1, 2(c), and 3(c). The case of Item 1 is obvious. Consider Item 2(c)(i). If there is a left-cut, then we can compute Ci+1 from Ci and the left-cut. For Item 2(c)(ii), we require only Ci to compute Ci+1. The case of 3(c) is similar. 2 Next, we argue that the depth of recursion of A is O(log n). To compute D(dv, e1, e2, l', t'), consider Item 3. Concerning Item 3(b), the update of D(dv, e1, e2, l', t') to D(dv, e1, e2, l' + 1, t' + 1) in Item 3(b) is similar to Item 2(a) and thus easy. Next, consider Item 3(a).
If there is no left-cut of Ct+1 stored in the contingency tree, then we need to make a series of recursive calls. Note that each recursive call in the run of A(C̃, k, d1 − 1, e2, D(k, d1 − 1, e2, l̃, t̃)) uses the parameter value d1 − 1, which is smaller than e1. Since e1, e2 ∈ [0, ex]Z and ex = log |x|, the depth of recursion must be O(log n). A similar argument is applicable to Item 2(b). Consider Item 3(a) of A. To compute Ct+1 with l2' = st-loc(Ct+1), we need to compute markers C(1), C(2), ..., C(m) satisfying l2' = st-loc(C(i)) and 0 ≤ time(C(i+1)) < time(C(i)) < t for any i ∈ [m]. If m > dv(Ct+1), then Subroutine A eventually acknowledges that cell l2' has already become blank, and it correctly computes Ct+1. At that point, the recursion ends. 2 Finally, we prove Theorem 5.1, which provides an SC^k upper bound on the computational complexity of kSDA for each k ≥ 2. Proof of Theorem 5.1. Our goal is to prove that kSDA_imm ⊆ SC^k. Since SC^k is closed under L-m-reductions, we immediately obtain LOGkSDA_imm ⊆ SC^k as well. Take an arbitrary language L in kSDA_imm and any depth-immune k-sda M that recognizes L in polynomial time. Consider the following procedure, which simulates M step by step on input x. Initially, we prepare the initial marker C0 = (q0, 0, 0, 2, 0, 0) and the unique k-contingency tree D(k, ex, ex, 0, 0) linked to C0. Simulation Procedure P for M: On input x ∈ Σ*, set n = |x| and ex = log |x|, and prepare C0 and D(k, ex, ex, 0, 0). Set I0 = (C0, k, ex, ex, D(k, ex, ex, 0, 0)). Recursively, we apply A to Ii and obtain Ii+1 until M halts at a certain time, say, t with t ≤ p(n). Let It = (Ct, k, ex, ex, D(k, ex, ex, l2, t)) denote the final outcome of A. If Ct contains an accepting state, then we accept x; otherwise, we reject x. By Lemma 5.5, if Ii correctly represents the configuration of M at time i, then Subroutine A correctly computes Ii+1.
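Procedure P is simply an iterated application of Subroutine A. The following Python sketch shows this outer loop only; Subroutine A is abstracted as a callback, all names are ours, and the stand-in `toy_A` is purely illustrative:

```python
from collections import namedtuple

Marker = namedtuple("Marker", ["state", "in_loc", "st_loc", "symb", "sectime", "time"])
ACCEPT_STATES = {"q_acc"}   # hypothetical accepting states of M

def procedure_P(A, C0, D0, k, ex, p_bound):
    """Iterate Subroutine A on I_i = (C_i, k, e_x, e_x, D_i) until M halts,
    then accept iff the final marker carries an accepting state.
    Here A returns None once the simulated machine has halted."""
    I = (C0, k, ex, ex, D0)
    for _ in range(p_bound + 1):      # M halts within p(n) steps
        nxt = A(*I)
        if nxt is None:
            return I[0].state in ACCEPT_STATES
        I = nxt
    raise RuntimeError("exceeded the polynomial time bound")

# A toy stand-in for Subroutine A: move right three times, then halt.
def toy_A(C, k, e1, e2, D):
    if C.time == 3:
        return None
    q = "q_acc" if C.time == 2 else "q0"
    return (Marker(q, C.in_loc + 1, C.st_loc + 1, "#", C.sectime + 1, C.time + 1),
            k, e1, e2, D)

accepted = procedure_P(toy_A, Marker("q0", 0, 0, "#", 0, 0), None, 2, 4, 10)
```

The space analysis rests entirely on the size of the objects carried in each tuple I_i, not on this loop itself.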
By mathematical induction, It correctly represents the halting configuration of M. This implies that the above simulation procedure correctly determines whether or not x belongs to L. Storing a contingency tree requires O(log^{k−1} n) bits by Lemma 5.4 since each contingency list in the tree needs O(log n) bits to express. Lemma 5.5 then concludes that we need only polynomial runtime and O(log^k n) memory bits to perform the aforementioned simulation procedure. Therefore, L belongs to SC^k. 2 We have introduced two new machine models of one-way deterministic depth-k storage automata (or k-sda's) as access-controlled extensions of deterministic pushdown automata. We have then studied the fundamental properties of languages recognized by such machines and of languages that are logarithmic-space many-one reducible (or L-m-reducible) to those languages. We write kSDA (resp., kSDA_imm) for the language family induced by depth-susceptible k-sda's (resp., depth-immune k-sda's). These machines naturally expand Hibbard's model of scan limited automata [12]. The L-m-closure of kSDA (resp., kSDA_imm) is succinctly denoted by LOGkSDA (resp., LOGkSDA_imm). We wish to list some of the important problems that have been neither discussed nor solved in this exposition. For all the remaining nodes in the contingency tree, we update them according to the above changes. (b) [lists L(r)(d', l2 − 1, t + 1) with d' ∈ [0, e2]Z] This item is handled symmetrically to Item 2(b). (c) [marker Ct+1] If the left-cut C' of Ct is either stored inside D(dv, e1, e2, l2, t) or obtained by 3(a), then the desired marker Ct+1 is directly calculated from Ct and symb(C') by applying δ. Otherwise, if there is a left-representative for Ct at a time t0 < t and no left turn exists at a location ≥ l2 at any time < t0, then we start from the left-representative Crep and compute a series of markers C̃ one by one (by incrementing time) from Crep, using them as inputs with no recursive call, until we obtain the (l2 − 1)-marker C'. If dv ≤ 0, then cell l2 − 1 must be already blank B. In this case, we do not need the left-cut of Ct.
Let σ' = B and compute Ct+1 from Ct alone by applying δ. The depth of recursion with respect to inputs of length n refers to the maximum number of recursive calls made by A. Let Ct and Ct+1 be two markers at times t and t + 1, respectively. Let e1, e2 ∈ [0, ex]Z and let n = |x|. If dv ≥ dv(Ct+1), then A(Ct, dv, e1, e2, D(dv, e1, e2, l2, t)) returns (Ct+1, dv, e1, e2, D(dv, e1, e2, st-loc(Ct+1), t + 1)). In particular, Item 3(a) as well as Item 2(b) requires a series of recursive calls to A itself. Recall that the tape head moves to the right. If r is odd, then no d-section ends at time t. Note that d2 equals the previous value of d0 at time t − 1. Thus, we move all entries of d-sections into (d + 1)-sections for all d ≤ d2. By contrast, if r is even, then no d-section begins at time t; for all d ≤ d0, the d-section ends at time t. Since the current 0-section changes, we set it to be empty at time t + 1 and move all entries in current sections to last-left-good sections by shifting the oldest markers in d-sections to (d + 1)-sections. Next, we consider Item 3(a). However, we do not know whether this is the best upper bound that we can achieve. We can also ask whether the computational complexity of kSDA_imm is much lower than SC^k. For instance, is it true that kSDA_imm ⊆ SC^{k−1}? Other important variants include a nondeterministic analogue of k-sda's, called depth-k nondeterministic storage automata (or k-sna's), and a randomized (or probabilistic) analogue of k-sda's, called depth-k probabilistic storage automata (or k-spa's). If we write kSNA and kSBPA to denote the families of all languages recognized by k-sna's and of all languages recognized by bounded-error k-spa's, respectively, then kSNA is a subclass of NP and kSBPA is a subclass of BPP. We have not discussed any closure properties of kSDA in this work; we expect that a further study of closure properties will reveal different features of kSDA. As noted in Section 1, LOGDCFL is known to be characterized by numerous models. Similarly, we have shown in Section 3 two machine models for LOGkSDA that make no use of L-m-reductions.
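The section arithmetic used in Items 2(a) and 3(a) reduces to divisibility by powers of two: since a d-section spans 2^d consecutive 0-sections, a d-section ends exactly when 2^d divides the section time. A minimal sketch (the function name is ours):

```python
def max_section_level(r, e1):
    """Largest d in [0, e1] with r ≡ 0 (mod 2^d); every d-section with
    d at most this value ends when the section time reaches r."""
    d = 0
    while d < e1 and r % (2 ** (d + 1)) == 0:
        d += 1
    return d
```

For instance, at section time 8 every d-section with d ≤ 3 ends, matching the value d2 = 3 in the example of t = 8, while an odd section time yields 0.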
It is important to discover natural models that also capture the depth-immune counterparts. In Section 4, for each index k ≥ 2, we have constructed a language, MEMB_k, which is complete for LOGkSDA under L-m-reductions. This language MEMB_k is generic but looks quite artificial.
References
[1] The recognition of deterministic CFLs in small time and space
[2] Pushdown automata with reversal-bounded counters
[3] Characterizations of pushdown machines in terms of time-bounded computers
[4] Deterministic CFLs are accepted simultaneously in polynomial time and log squared space
[5] Parallel RAMs with owned global memory and deterministic context-free language recognition
[6] Advocating ownership
[7] Some open problems in the theory of computation as questions about two-way deterministic pushdown automaton languages
[8] Deterministic context free languages
[9] Stack automata and compiling
[10] One-way stack automata
[11] On non-determinacy in simple computing devices
[12] A generalization of context-free determinism
[13] Characterizations of some tapes and time complexity classes of Turing machines in terms of multihead and auxiliary stack automata
[14] An infinite hierarchy of intersections of context-free languages
[15] Descriptional complexity of limited automata
[16] Reversible limited automata
[17] Decidability and complexity in automatic monoids
[18] Circuits and context-free languages
[19] Deep pushdown automata
[20] Limited automata and regular languages
[21] Limited automata and context-free languages
[22] Limited automata and unary languages
[23] On the tape complexity of deterministic context-free languages
[24] Theory of one-tape linear-time Turing machines
[25] Behavioral strengths and weaknesses of various models of limited automata
[26] Intersection and union hierarchies of deterministic context-free languages and pumping lemmas
[27] A pumping lemma for deterministic context-free languages
(Step 1) Y(1) ≡ y(1)1 · · · y(1)n+1 matches the storage configuration of M on x at step 1. (Step r ≥ 2) Next, we consider Step r. Assume by the induction hypothesis that (*) is true for Y(r−1) at step r − 1.
Take a number l ∈ [0, n + 1]Z and assume that Y(r−1)l is currently accessed and that the core of Y(r−1)l is determined by an appropriate triplet (g1, g2, g3). Assume also that Head 2 resides over the string p_h2 in S, indicating that M is in inner state p_h2. Assume that we are scanning Xs containing x̃s. This means that Head 3 of M is at cell s, which contains the input bit xs, where s ∈ [0, n + 1]Z. Since there is a chance that the next move of M is a storage-stationary move, we first look for the receptor of the form ε#p_h2# together with σ_h5. The case of a storage-stationary move corresponds to Item (7). Since M's storage-tape head does not move, γ_h1 ∉ Γ^(k−1) ∪ Γ^(k) follows (otherwise, we would obtain d2 = 0, a contradiction). In this case, we look for ũ(0)_{h,3} by the receptor search. We do not need to move the storage-tape head. We obtain the residue of the form ε#p_h4#1^{d1+1}0^{1−d1}#. We change p_h2 to p_h4. In what follows, we consider six cases (1)-(6) separately. Head 3 reads Xs. (1) Assume that M applies a transition of the form δ(1) with e < k and γ_h3 ∈ Γ^(min{e+1,k}). On the storage tape, assume that the tape head stays at the first symbol of the first non-blank section (γ_h1#)^θ in the current block Y(r−1)l. We then split this case into two subcases. (i) Consider the case of min{e + 1, k} < k. If σ_h5 = xs, then we find the segment Xs P θ 2 kn by moving Heads 2-3. (a) Consider the case of γ_h1 ≠ 2. We need to look for ũ(+)_{h,0} of the form xs#γ_h1#p_h2#001#(γ_h3#)^θ#p_h4#1^{d1+1}0^{1−d1}# in P by conducting the following receptor search for the receptor γ_h1#p_h2# of ũ(+)_{h,0}. We move Head 2 over p̃_h2 back and forth on the input tape and simultaneously move the storage-tape head rightwards over the first encountered section (γ_h1#)^θ in Y(r−1)l. We move Head 3 over γ_h1 in γ_h1#p_h2#.
After reading the strings γ_h1# one by one in the first encountered section (γ_h1#)^θ in the block Y(r−1)l, we change each of them to the blank string B^{log c+1} so as to mark it as read. Since θ is large enough, we can find ũ(+)_{h,0} in P. We then continue modifying tape symbols until we completely change the entire section (γ_h1#)^θ to the blank string (B^{log c+1})^θ = B^t. Note that the other sections are intact at this moment. This receptor search is possible because we have enough copies of γ_h1# in Y(r−1)l. Once we find the receptor, we read its corresponding residue 001#(γ_h3#)^θ#p_h4#1^{d1+1}0^{1−d1}#. We then overwrite all non-blank sections (γ_h1#)^θ by (γ_h3#)^θ one by one. The newly modified block becomes Y(r)l. We further move the storage-tape head rightwards to the beginning of Y(r−1)l+1. The end of the block Y(r−1)l is easily calculated by moving Head 1 over 1^{kt} from left to right and vice versa in a "sweeping" fashion. See Fig. 2. It is important to note that, while we read "B", we need to force all the input-tape heads except for Head 1 to make a stationary move. We obtain y(r)l ≡ γ_h2, and the remaining entries of y(r) are unchanged. Obviously, y(r) matches the storage configuration of M on x at step r. (b) Consider the next case of γ_h1 = 2. We first conduct the aforementioned receptor search to find ũ(+)_{h,0}. Since γ_h1 = 2, we move only Heads 1-3 by making storage-stationary moves. We write (γ_h3#)^{kθ} into block Y(r)l and move the tape head to the right side of the first encountered section (γ_h1#)^θ in Y(r−1)l. We also move Head 2 to stay over p_h4 and Head 3 to the cell right next to Xs. (ii) Consider the case of min{e + 1, k} = k. In this case, we write B^{log c+1} instead of (γ_h3#)^θ.
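The receptor search just described can be abstracted as follows: each inspection of a table entry consumes (blanks) one copy of the current section, and success is guaranteed whenever the section holds enough copies. A toy Python sketch, with an encoding and names that are ours rather than those of the actual construction:

```python
def receptor_search(section, table, receptor):
    """Scan `table` for an entry starting with `receptor`, blanking one
    copy of the section per inspected entry (mimicking the sweep of the
    storage-tape head). Returns the residue of the matching entry, or
    None; it succeeds whenever len(section) >= len(table)."""
    for i, entry in enumerate(table):
        if i < len(section):
            section[i] = "B"                    # mark one copy as read
        if entry.startswith(receptor):
            return entry[len(receptor):]        # the residue
    return None

section = ["g#"] * 8                            # (gamma#)^theta with theta = 8
table = ["a#q1#res0", "g#q2#res2", "g#q1#res1"] # hypothetical table P
residue = receptor_search(section, table, "g#q1#")
```

This is why the parameter θ must be chosen large enough relative to the size of the table P: the search never runs out of copies to consume before the receptor is found.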