title: Clockability for Ordinal Turing Machines
authors: Carl, Merlin
date: 2020-06-24
journal: Beyond the Horizon of Computability
DOI: 10.1007/978-3-030-51466-2_2

We study clockability for Ordinal Turing Machines (OTMs). In particular, we show that, in contrast to the situation for ITTMs, admissible ordinals can be OTM-clockable, that Σ_2-admissible ordinals are never OTM-clockable and that gaps in the OTM-clockable ordinals are always started by admissible limits of admissible ordinals. This partially answers two questions in [3].

In ordinal computability, "clockability" denotes the property of an ordinal of being the halting time of some program. The term was introduced in [9], the paper that triggered the bulk of research in the area of ordinal computability by introducing Infinite Time Turing Machines (ITTMs). By now, a lot is known about clockability for ITTMs. To give a few examples: In [9], it was proved that there are gaps in the ITTM-clockable ordinals, i.e., there are ordinals α < β < γ such that α and γ are ITTM-clockable, but β is not. Moreover, it is known that no admissible ordinal is ITTM-clockable (Hamkins and Lewis, [9]), that the first ordinal in a gap is always admissible (Welch, [14]), that the supremum λ of the ITTM-writable ordinals (i.e., ordinals coded by a real number that is the output of some halting ITTM-computation) equals the supremum of the ITTM-clockable ordinals (Welch, [14]), that an ITTM-clockable γ has a code that is ITTM-writable in γ many steps (Welch, [14]), and that ITTM-writable ordinals have real codes that are ITTM-writable by the time the next clockable ordinal appears. Moreover, it is known that not every admissible below λ starts a gap, that there are admissibles properly inside gaps, and occasionally many of them (Carl, Durand, Lafitte, Ouazzani, [6]). Indeed, clockability turned out to be a central topic in ordinal computability; it was, for example, crucial for Welch's analysis of the computational strength of ITTMs.

Besides ITTMs, clockability was also considered for Infinite Time Register Machines (ITRMs), where the picture turned out to be quite different: In particular, there are no gaps in the ITRM-clockable ordinals (see [5]), and in fact, the ITRM-clockable ordinals are exactly those below ω_ω^CK, which thus includes ω_n^CK for every n ∈ ω, i.e., the first ω many admissible ordinals. For other models, clockability has received comparatively little attention.

This work arose out of a question of T. Kihara during the CTFM conference 2019 in Wuhan who, after hearing that admissible ordinals are never ITTM-clockable, asked whether the same holds for OTMs. After most of the results of this paper had been proved, we found two questions in the report of the 2007 BIWOC (Bonn International Workshop on Ordinal Computability) [3] concerning this topic: the first (p. 42, question 9), due to J. Reitz, was whether ω_1^CK is OTM-clockable; the second, due to J. Hamkins, was whether gap-starting ordinals for OTMs can be characterized as "something stronger" than being admissible. In [3], both are considered to be answered by the claim that no admissible ordinal is OTM-clockable, which is attributed to J. Reitz and S. Warner.
Upon personal inquiry, Reitz told us that they had a sketch of a proof which, however, did not entirely work; what it does show with a few modifications, though, is that Σ_2-admissible ordinals are not OTM-clockable, and the argument that Reitz sketched in personal correspondence to us in fact resembles that of Theorem 6 below. We thus regard Reitz and Warner as the first discoverers of this theorem. Both the argument of Reitz and Warner from 2007 and the one we found during the CTFM in 2019 are adaptations of Welch's argument that admissible ordinals are not ITTM-clockable.

The statement actually made in [3] is, however, false: As we will show below, ω_n^CK is OTM-clockable for every n ∈ ω. Thus, there are plenty of admissible ordinals that are OTM-clockable, and the answer to the first question is positive. The idea is to use the ITRM-clockability of these ordinals, which follows from Lemma 3 in [5], together with a slightly modified version of the obvious procedure for simulating ITRMs on OTMs. This actually shows that ω_n^CK is clockable on an ITTM with tape length α as soon as α > ω. Thus, the strong connection between admissibility and clockability seems to depend rather strongly on the details of the ITTM-architecture. We remark that this is a good example of how the studies of different models of infinitary computability can fruitfully interact: At least for us, it would not have been possible to find this result while focusing only on OTMs. Moreover, we will answer the second question in the positive as well by showing that, if α starts a gap in the OTM-clockable ordinals, then α is an admissible limit of admissible ordinals. Of course, the gap between "admissible limit of admissible ordinals" and "Σ_2-admissible" is quite wide. In particular, we do not know whether every gap-starting ordinal for OTMs is Σ_2-admissible, though we conjecture this to be false.

Ordinal Turing Machines (OTMs) were introduced by Koepke in [10] as a kind of "symmetrization" of ITTMs: Instead of having a tape of length ω and the whole class of ordinals as their working time, OTMs have a tape of proper class length On while retaining On as their "working time" structure. We refer to [10] for details. In contrast to Koepke's definition, but in closer analogy with the setup of ITTMs, we allow finitely many tapes instead of a single one. Each tape has a head, and the heads move independently of each other; the program for such an OTM is simply a program for a (finite) multihead Turing machine. At limit times, the inner state (which is coded by a natural number), the cell contents and the head positions are all determined as the inferior limits of the sequences of the respective earlier values. At successor steps, an OTM-program is carried out as on a finite Turing machine, with the addition that, when a head is moved to the left from a limit position, it is reset to the start of the tape.
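The liminf rule just described can be illustrated concretely. The following Python sketch is ours and purely illustrative (none of the names below appear in the paper): it records finitely many successor-step snapshots of a single-tape machine and applies the inferior-limit rule coordinatewise to the state, the head position and each cell. A finite record can only approximate a genuine transfinite liminf, so the final segment over which we minimize is a stand-in for "from some stage onwards".

```python
from typing import Dict, List, Tuple

Snapshot = Tuple[int, int, Dict[int, int]]  # (state, head position, {cell index: bit})

def liminf(values: List[int], tail: int = 10) -> int:
    # Inferior limit of a recorded sequence, approximated by the minimum
    # over its final segment (the values that keep recurring).
    return min(values[-tail:])

def limit_configuration(snapshots: List[Snapshot]) -> Snapshot:
    # Apply the liminf rule coordinatewise: to the inner state, to the head
    # position and to the content of every cell that was ever touched.
    states = [s for (s, _, _) in snapshots]
    heads = [h for (_, h, _) in snapshots]
    touched = set().union(*(tape.keys() for (_, _, tape) in snapshots))
    tape_at_limit = {
        c: liminf([tape.get(c, 0) for (_, _, tape) in snapshots]) for c in touched
    }
    return (liminf(states), liminf(heads), tape_at_limit)

# Toy run: the head bounces between cells 0 and 1 while flipping cell 0;
# the limit configuration is state 0, head position 0, cell 0 -> 0, cell 1 -> 1.
history = [(t % 3, t % 2, {0: t % 2, 1: 1}) for t in range(40)]
print(limit_configuration(history))
```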
Though models of ordinal computability generally enjoy a good degree of stability under such variations as far as computational strength is concerned, they often make a difference when it comes to clockability. Intuitively, simulating several tapes with separate read-write heads on a single tape requires one to check the various head positions to determine whether the simulated machine has halted, which leads to a delay in halting. For ITTMs, this is demonstrated, e.g., in [13]. For OTMs, insisting on a single tape would lead to a theory that is "morally" the same as the one described here, but it would make the results much less compelling and the proofs more technically involved and harder to follow. Thus, allowing multiple tapes seems to be a good idea.

An important property of OTMs that will be used below is the existence of an OTM-program P that 'enumerates L'; in particular, P will write (a code for) the constructible level L_α on the tape in < ᾱ many steps, where ᾱ is the smallest exponentially closed ordinal > α (this notation will be used throughout the paper).

The following picture of OTM-computations may be useful to some readers: Let us imagine the tape split into ω-blocks. Then an OTM-computation proceeds like this: The head works for a bit in one ω-block, then leaves it to the right, works for a bit in the new ω-portion, again leaves it to the right and so on, until eventually the computation either halts or the head is moved back from a limit position, i.e., goes back to 0 and starts over. Thus, if one imagines each ω-portion as a single point, then the head moves from left to right, jumps back to 0, moves right again, etc. Moreover, in each ω-portion, we have a classical ITTM-computation (up to the limit rules for the head position and the inner state, which make little difference).

We fix some terminology for the rest of this paper. For a machine model M, a gap in the M-clockable ordinals is an interval [α, β) containing no M-clockable ordinal which is maximal in the sense that there are cofinally many M-clockable ordinals below α and β is M-clockable. In this case, we say that α "starts" the gap and call α a "gap-starting ordinal" or "gap starter" for M.

We start with some useful observations that can mostly be obtained by easy adaptations of the corresponding results about ITTM-clockability. We begin by noting that the analogue of the speedup theorem for ITTMs from [9] holds for multitape OTMs. This is proved by an adaptation of the argument for the speedup theorems for ITTMs. The main difference is that, in contrast to ITTMs, OTMs do not have their head at position 0 at every limit time and that the head may make long "jumps" when moved to the left from a limit position. This generates a few extra complications. To simplify the proof, we start by building up a few preliminaries.

For the ITTM-speedup, the following compactness property is used: If P halts in δ + n many steps and the head is located at position k at time δ, then only the contents of the n cells before and after the kth one at time δ are relevant for this. Now, this is a fixed string s of 2n bits. In [9], a construction is described which ensures that the information whether these 2n cells currently contain s at a limit time γ is coded on some extra tapes at time γ. Due to the special limit rules for ITTMs that set the head back to position 0 at every limit time, the Hamkins-Lewis proof has this information stored at the initial tape cells, but the construction is easily modified to store the respective information at any other tape position. We will use it in the following way: Suppose that P is an OTM-program that halts at time δ + n, where δ is a limit ordinal and n ∈ ω. We want to "speed up" P by n steps, i.e., to come up with a program Q that halts in δ many steps. Suppose that P halts with the head on position γ + k, where γ is a limit ordinal and k ∈ ω. Let m be k − n if k − n ≥ 0 and 0 otherwise, and let s be the bit string present on positions γ + m up to γ + k + n at time δ.
Then we use the Hamkins-Lewis construction to ensure that, for each limit ordinal η, the information whether the bit string present on positions η + m up to η + k + n is equal to s is coded on the (η + k)th cells of three extra tapes.

An extra complication arises from the possibility of a "setback": Within the n steps from time δ to time δ + n, it may happen that the head is moved left from position δ, thus ending up at the start of the tape. Clearly, it will then take < n many further steps at the start of the tape and only consider the first n bits during this time. However, we need to know what these bits are, or rather, whether they are the "right ones", i.e., the ones present at time δ, while our head is located at position δ + k. The idea is then to store this information in the inner state of the sped-up program. We thus create extra states: The new state 2i will represent the old state i together with the information that the first n bits were the "right ones" (i.e., the same ones as at time δ), and 2i + 1 will represent the old state i together with the information that some of these bits deviated from those at time δ.

To achieve this, we use an extra tape T_4. At the start of Q, a 1 is written to each of the first n cells of T_4; after that, the head on T_4 is set back to position 0 and then moved along with the head of P. In this way, we will always know whether the head of P is currently located at one of the first n cells. Whenever this is the case, we insert some intermediate steps to read out the first n bits, update the inner state and move the head back to its original position. (This requires some additional states, but we skip the details.) Note that, if η is a limit time and the first n bits have been changed unboundedly often before η, then the head will be located at one of these positions at time η by the liminf rule, and thus a further update will take place, so that the state will correctly represent the configuration afterwards. On the other hand, if the first n bits were only changed boundedly often before time η, then let η̄ be the supremum of these times. We just saw that the state will represent the configuration correctly finitely many steps after time η̄, after which the first n cell contents remain unchanged, so that the state is still correct at time η. In each case, updating this information and returning to the original configuration takes only finitely many extra steps and thus causes no delay at limit times.

In the following construction, we will need to know whether the head is currently located at a cell whose index is of the form δ + k, where δ is a limit ordinal and k is a fixed natural number. To achieve this, we add three tapes T_0, T_1 and T_2 to P. The tape T_0 serves as a flag: By having two cells with alternating contents 01 and 10, we can detect a limit time as a time at which both cells contain 0. On T_1, we move the head along with the head of P and place a 1 on a cell whenever we encounter a cell on which a 0 is written. Thus, the head occupies a certain limit position for the first time if and only if the head on T_1 reads a 0 at a limit time. Finally, on T_2, we move the head along with the heads on T_1 and the main tape. Whenever the head on T_1 reads a 0 at a limit time, we interrupt the computation, move the head on T_2 for k many steps to the right, write a 1, move the head k many places to the left, and continue. In this way, the head on T_2 will read a 1 if and only if the head on the main tape is at a position of the desired form. As this merely inserts finitely many steps occasionally, running this procedure along with an OTM-program P will still carry out δ many steps of P at time δ whenever δ is a limit ordinal. We will say that the head is "at a δ + k-position" if the index of the cell where it is currently located is of this form with δ a limit ordinal; by the construction just described, we can use formulations like "if the head is currently at a δ + k-position" in describing OTM-programs without affecting the running time at limit ordinals.
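As a small, self-contained illustration of the flag on T_0 (again our own, purely schematic code): the two T_0 cells alternate between 01 and 10 at successor steps, so under the liminf rule both contain 0 at any limit time, which is exactly the signal used above to recognize limit times; at successor times, at least one of the two cells contains a 1.

```python
def liminf(values, tail=10):
    # Finite approximation of the inferior limit: minimum over a final segment.
    return min(values[-tail:])

def t0_flag_at_limit(steps: int):
    # Contents of the two T_0 cells at successor times t = 0, 1, 2, ...:
    # they alternate between (0, 1) and (1, 0), so each cell's liminf is 0.
    history = [(t % 2, (t + 1) % 2) for t in range(steps)]
    cell0 = liminf([a for (a, _) in history])
    cell1 = liminf([b for (_, b) in history])
    return cell0, cell1

print(t0_flag_at_limit(100))  # -> (0, 0): "both cells read 0" happens exactly at limit times
```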
If α + n is OTM-clockable and n ∈ ω, then α is OTM-clockable.

Proof. It is clear that finite ordinals are OTM-clockable and that OTM-clockable ordinals are closed under addition (by simply running one program after the other). Thus, it suffices to consider the case that α is a limit ordinal. So let P be an OTM-program that runs for α + n many steps, where α is a limit ordinal; we assume for simplicity that P uses only one tape. We want to construct a program Q that runs for α many steps.

Let the head position at time α be equal to δ + k, where δ is a limit ordinal and k ∈ ω. As above, let m be k − n if k − n ≥ 0, and otherwise let m = 0. Let s be the bit string present on the positions δ + m up to δ + k + n at time α, and let t be the string present on the first n positions. Using the constructions explained above, Q now works as follows: Run P. At each step, determine whether the head is currently at a location of the form η + k with η a limit ordinal and whether one of the two following conditions holds:

1. The head is currently at one of the first n positions, and the bit string currently present on the positions η + m up to η + k + n is equal to s.
2. The head is currently not at one of the first n positions, the bit string currently present on the positions η + m up to η + k + n is equal to s, and the bit string currently present on the first n positions is equal to t.

If not, continue with P. Otherwise, halt. As described above, the necessary information can be read off from the various extra tapes and the inner state simultaneously. Now it is clear that, if Q halts at time β, then P halts at time β + n, so Q cannot halt before time α; since the halting conditions are satisfied at time α, Q halts at time α, as desired.
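The halting test that Q performs on top of P can be summarized as a single Boolean check over the information provided by the auxiliary tapes and the doubled inner state. The following Python sketch is ours and purely schematic; the four Boolean arguments stand for facts that, in the construction above, are read off from the tapes T_0 to T_2, the Hamkins-Lewis tapes and the parity of the inner state.

```python
def q_should_halt(at_eta_plus_k: bool,
                  head_in_first_n: bool,
                  window_equals_s: bool,
                  first_n_equal_t: bool) -> bool:
    """Halting test of the sped-up program Q at the current step.

    at_eta_plus_k:    the head of P is at a position of the form eta + k, eta a limit
    head_in_first_n:  the head of P is among the first n cells
    window_equals_s:  cells eta + m, ..., eta + k + n currently spell the string s
    first_n_equal_t:  the first n cells currently spell the string t
    """
    if not at_eta_plus_k:
        return False
    if head_in_first_n:
        # Condition 1: only the window around eta + k has to match.
        return window_equals_s
    # Condition 2: the window matches and the first n cells match as well.
    return window_equals_s and first_n_equal_t
```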
Let σ be the minimal ordinal such that L_σ ≺_{Σ_1} L, i.e., such that L_σ is a Σ_1-submodel of L. Then σ is the supremum of the OTM-clockable ordinals.

Proof. The statement 'the program P halts' is Σ_1. Moreover, any halting OTM-computation is contained in L. Consequently, if P halts, its computation is contained in L, and hence in L_σ, and thus the halting time of P, if it exists, is < σ. On the other hand, every real number in L_σ is OTM-computable (see, e.g., [12], proof of Corollary 3), including codes for all ordinals < σ; thus we can write such a code for any ordinal α < σ and then run through this code, which takes at least α many steps. Thus, there is an OTM-clockable ordinal above α for every α < σ.

There are gaps in the OTM-clockable ordinals. That is, there are ordinals α < β < γ such that α and γ are OTM-clockable, but β is not.

Proof. This works like the argument in Hamkins and Lewis ([9], Theorem 3.4) for the existence of gaps in the ITTM-clockable ordinals: Take the OTM-program that simultaneously simulates all OTM-programs and halts as soon as it arrives at a level at which no OTM-program halts. If there were no gap, then this program would halt after all OTM-halting times, which is a contradiction.

The following is an OTM-version of Welch's "quick writing theorem" (see [14], Lemma 48) for ITTMs: If α is OTM-clockable, then a real code for α is OTM-writable in < ᾱ many steps.

Proof. If α is clocked by some OTM-program P, then L_{α+ω} believes that P halts. Thus, there is a Σ_1-statement that becomes true between L_α and L_{α+ω} for the first time and hence, by fine structure (see [2], Lemma 1), a real number coding α + 1 is contained in L_{α+ω}. But the OTM-program Q that enumerates L will have (a code for) L_{α+ω} on the tape in < ᾱ many steps. So we can simply run this program until we arrive at a code c for a limit L-level that believes that P halts for the first time. Now, we can easily find the desired real code for α in the code for L_{α+ω} (by searching the coded structure for an element which it believes to be the halting time of P).

If α starts a gap in the OTM-clockable ordinals, then α is admissible.

Proof. This works by the same argument as the "only admissibles start gaps" theorem for ITTMs, see Welch [14]: Suppose for a contradiction that α starts an OTM-gap, but is not admissible. Pick β < α OTM-clockable and f : β → α such that f is Σ_1(L_α) and cofinal in α. Let B be an OTM-program that clocks β. By the last lemma, we can compute a real code for β in < β̄ ≤ α many steps. Run the OTM that enumerates L. If β is exponentially closed, then we will have a code for L_β on the tape at time β. In addition, for each new L-level, check which ordinals receive f-images when evaluating the definition of f in that level. Determine the largest ordinal γ such that f is defined on γ. Whenever γ increases, say from γ_0 to γ_1, let δ be such that γ_0 + δ = γ_1 and run B for δ many steps. When B halts, all elements of β have received images, so we have arrived at time α and can halt; thus α is OTM-clockable, a contradiction.

This suffices for an OTM-analogue of Welch's theorem ([14], Theorem 50). Here is the argument in more detail, taking the parameters in the definition of f into account: As α starts an OTM-gap, it is exponentially closed. If α is not admissible, there is a total cofinal Σ_1(L_α)-function f : β → α with β < α. Pick γ ∈ (β, α) OTM-clockable and large enough so that all parameters used in the definition of f are contained in L_γ. By Lemma 2, we can write a real code for L_γ, and thus for all of its elements, in time < γ̄ ≤ α. We can now clock α as in Proposition 5, a contradiction.

We now show that no Σ_2-admissible ordinal α can be the halting time of a parameter-free OTM-computation. The proof is mostly an adaptation of the argument in Hamkins and Lewis [9] for the non-clockability of admissible ordinals by ITTMs to the extra subtleties of OTMs.

Proof. We will show this for the case of a single-tape OTM for the sake of simplicity. Let α be Σ_2-admissible and assume for a contradiction that α is the halting time of the parameter-free OTM-program P. At time α, suppose that the read-write head is at position ρ, the program is in state s ∈ ω and the head reads the symbol z ∈ {0, 1}. As one cannot move the head more than α many places to the right in α many steps, we have ρ ≤ α. By the limit rules, z must have been the symbol on cell ρ cofinally often before time α and, similarly, s must have been the program state cofinally often before time α. By recursively building an increasing 'interleaving' sequence of ordinals of both kinds, we see that the set S of times at which the program state was s and the symbol on cell ρ was z is closed and unbounded in α. We now distinguish three cases.

Case 1: ρ < α and the head position ρ was assumed cofinally often before time α. Let β be the order type of the set of times at which ρ was the head position in the computation of P. We show that β = α.
If not, then β < α; let f : β → α be the function sending each ι < β to the ιth time at which ρ was the head position. Then f is Σ_1 over L_α and thus, by admissibility of α, f[β] is bounded in α, contradicting the case assumption. Let T be the set of times at which ρ was the head position. Then, by the limit rules and the case assumption, T is closed and unbounded in α. As S and T are both Σ_1 over L_α and α is admissible, it follows that S ∩ T is also closed and unbounded in α. In particular, there is an element γ < α of S ∩ T, i.e., there is a time < α at which the head was on position ρ, the cell ρ contained the symbol z and the inner state was s. But then the situation that prompted P to halt at time α was already given at time γ < α, so P cannot have run up to time α, a contradiction.

Case 2: ρ < α and the head position ρ was assumed boundedly often before time α. By the liminf rule for the determination of the head position at time α, this implies that, for every ι < ρ, there is a time τ_ι < α such that, from time τ_ι on, the head never occupied a position < ι. The function f : ι → τ_ι is Π_1 over L_α (we have f(ι) = τ if and only if, for all β > τ and all partial P-computations of length β, the head position in the final state of the partial computation was ≥ ι) and thus in particular Σ_2 over L_α. By Σ_2-admissibility of α and the case assumption ρ < α, the set f[ρ] must be bounded in α, say by γ < α. But this implies that, after time γ, all head positions were ≥ ρ. As ρ was assumed only boundedly often as the head position, this means that, from some time < α on, all head positions were actually > ρ. But then ρ cannot be the inferior limit of the sequence of earlier head positions at time α, contradicting the case assumption that the head is on position ρ at time α.

Case 3: ρ = α. This implies that the head is on position ρ for the first time at time α, so that we must have z = 0, as there was no chance to write on the ρth cell before time α. Let S be the set of times < α at which some head position was assumed for the first time during the computation of P. For the same reason as above, such a newly reached cell will contain 0 at that time. If we can show that there is such a time < α at which the inner state is also s, we are done, because that would mean that the halting situation at time α was already given at an earlier time, contradicting the assumption that P halts at time α.

As ρ > 0, there must be an ordinal τ < α such that the head was never on position 0 after time τ (otherwise, the liminf rule would force the head to be on position 0 at time α). This means that the head was never moved to the left from a limit position after time τ. This further implies that, after time τ, for any position β that the head occupied, all later positions were at most finitely many positions to the left of β, and hence that, if β is a limit ordinal, the head never occupied a position < β afterwards. In particular, the sequence of limit positions that the head occupied after time τ is increasing. Note that the set of head positions occupied before time τ is bounded in α, say by ξ. Let S' be the set of elements ι > τ of S such that, at time ι, the head occupied a limit position > ξ for the first time. Then S' is a closed and unbounded subset of S. As s is the program state at the limit time α, there must be γ < α such that, after time γ, the program state was never < s and, moreover, the program state s itself must have occurred cofinally often in α after that time.
But now, building an increasing ω-sequence of times starting with γ that alternately belong to S' and have the program state s, we see that its limit δ is < α and is a time at which the head was reading z and the state was s, so we have the desired contradiction. Since each case leads to a contradiction, our assumption on P must be false; as P was arbitrary, α is not a parameter-free OTM-halting time.

To see now that the theorem holds for any finite number of tapes, consider the argument above for each tape separately, note that we showed above that Case 2 cannot occur, and observe that Cases 1 and 3 both imply that, as far as the tape under consideration is concerned, the halting configuration occurred on a closed unbounded set of times before time α. Thus, one can again build an increasing 'interleaving' sequence of times at which each head read the same symbol as in the halting configuration and the inner state was the one in the halting configuration. The supremum of this sequence will be < α, leading again to the contradiction that the program must have halted before α.

We will now show that at least the first ω many admissible ordinals are OTM-clockable, thus answering the first question mentioned in the introduction positively. To this end, we need some preliminaries about Infinite Time Register Machines (ITRMs). ITRMs were introduced by Koepke in [11]; we sketch their architecture and refer to [11] for further information. An ITRM has finitely many registers, each of which stores one natural number. ITRM-programs are just programs for (classical) register machines. At successor times, an ITRM proceeds like a classical register machine. At limit levels, the active program line index and the register contents are defined to be the inferior limits of the sequences of earlier program line indices and respective register contents. When, in the case of a register content, that limit is not finite, the new content is defined to be 0, and one speaks of an 'overflow' of the respective register.

We recall Lemma 3 from [5]: for every n ∈ ω, the ordinal ω_n^CK is ITRM-clockable. Combined with the following observation, this yields the desired clockability result: every ITRM-clockable ordinal is OTM-clockable.

Proof. If α < ω², this is straightforward. Now let α ≥ ω². Let P be an ITRM-program that clocks α. We simulate P by an OTM-program that takes the same running time. The simulation of ITRMs by OTMs works like this: Use a tape for each register, and have i many 1s, followed by 0s, on a tape to represent that the respective register contains i ∈ ω; in addition, after a simulation step is finished, the head position on this tape represents the register content, i.e., it is at the first 0 on the tape. For an ITTM, the simulation takes an extra ω many steps to halt because it takes time to detect an overflow. For an OTM, one can simply use one extra tape for each register, write a 1 to their ωth positions at the start of the computation, move their heads along with the heads on the register-simulating tapes, and know that there is an overflow as soon as one of the heads on the extra tapes reads a 1. Since α ≥ ω², the initial placement of 1s on the ωth tape positions does not affect the running time.

(The fact that more tapes are needed the more registers P uses may be seen as a little defect. Note that, by the results of [11], the halting times of ITRM-programs using n registers are bounded by ω_{n+1}^CK, so that indeed arbitrarily large numbers of registers, and thus of tapes, are required to make the above construction work for all ω_n^CK with n ∈ ω. It would certainly be nicer to have a uniform bound on the number of required tapes. And indeed, by a slightly refined argument, using that only two of the registers used are ultimately relevant for the halting of an ITRM, such a bound can be obtained.)
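The register encoding used in this simulation can be illustrated by a short, purely schematic Python sketch of our own, in which a finite list stands in for a class-length tape and a finite sentinel index stands in for the ωth cell: a register holding i is written as i ones followed by zeros, the simulating head rests on the first zero, and a 1 placed at the sentinel position of a companion tape signals an overflow as soon as the register head reaches it.

```python
def encode_register(value: int, tape_len: int):
    # Register content i is written as i ones followed by zeros; after each
    # simulated step the head rests on the first zero, i.e. on cell `value`.
    return [1] * value + [0] * (tape_len - value)

def make_overflow_tape(sentinel: int, tape_len: int):
    # Companion tape with a single 1 at the sentinel position
    # (a finite stand-in for the omega-th cell of the OTM tape).
    tape = [0] * tape_len
    tape[sentinel] = 1
    return tape

def increment_register(value: int, overflow_tape):
    # Incrementing the register moves the simulating head one cell to the
    # right; reading a 1 on the companion tape means the register has grown
    # past the sentinel, which we treat as an overflow and reset to 0.
    new_value = value + 1
    return 0 if overflow_tape[new_value] == 1 else new_value

SENTINEL, LENGTH = 8, 16  # toy bounds; the real construction uses omega here
overflow = make_overflow_tape(SENTINEL, LENGTH)
r = 0
for _ in range(12):
    r = increment_register(r, overflow)
print(r, encode_register(r, LENGTH))  # the register wrapped to 0 once at the sentinel
```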
For every n ∈ ω, ω_n^CK is OTM-clockable. This answers the first question mentioned above in the positive. By a relativization of the above argument, we can achieve the same for the second (i.e., whether gap starters for OTMs are something "better" than admissible):

Theorem 8. Let α = β⁺ be a successor admissible. Then α does not start an OTM-clockable gap.

Proof. Suppose for a contradiction that α = β⁺ starts an OTM-clockable gap. Then there is an OTM-clockable ordinal γ ∈ (β, α); pick one. By Lemma 2 above, a real code c for γ is OTM-writable in < α many steps. Suppose c has been written. Then ω_1^{CK,c} ≥ α. Thus, α is ITRM-clockable in the oracle c. But now, α is OTM-clockable by first writing c and then ITRM-clocking α relative to c, contradicting the assumption that α starts a gap.

This allows a considerable strengthening of Corollary 2:

Corollary 4. Every admissible ordinal up to the first admissible limit of admissible ordinals is OTM-clockable.

We showed that OTM-gaps are always started by limits of admissible ordinals and that, while admissible ordinals can be OTM-clockable, Σ_2-admissible ordinals cannot. This provokes the following questions:

Question: Is every gap-starting ordinal for OTMs Σ_2-admissible?

Question: What is the minimal gap-starting ordinal for OTMs? Does it coincide with the first Σ_2-admissible ordinal?

Further worthwhile topics include clockability for OTMs with a fixed ordinal parameter α and for other models of computability, like the "hypermachines" of Friedman and Welch (see [8]), α-ITTMs (see [7]) or α-ITRMs (see [4]), where the main question left open in [4] is to determine the supremum of the α-ITRM-clockable ordinals.

References
Admissible Sets and Structures: An Approach to Definability Theory
Degrees of Unsolvability of Constructible Sets of Integers
Bonn International Workshop on Ordinal Computability
Taming Koepke's Zoo II: Register Machines
The Basic Theory of Infinite Time Register Machines
Admissibles in Gaps
Taming Koepke's Zoo
Infinite Time Turing Machines
Turing Computations on Ordinals
Ordinal Computability
Tree Representations via Ordinal Machines
Infinite Time Turing Machines with Only One Tape
Characteristics of Discrete Transfinite Time Turing Machine Models: Halting Times, Stabilization Times, and Normal Form Theorems