Randomness and Solovay degrees

We consider the behaviour of Schnorr randomness, a randomness notion weaker than Martin-Löf randomness, for left-r.e. reals under Solovay reducibility. Contrasting with results on Martin-Löf randomness, we show that Schnorr randomness is not upward closed in the Solovay degrees. Further, some left-r.e. Schnorr random real α is the sum of two left-r.e. reals that are far from random. We also show that the left-r.e. reals of effective dimension > r, for some rational r, form a filter in the Solovay degrees.


Randomness notions
The algorithmic theory of randomness defines randomness notions for reals in [0, 1], or equivalently for infinite bit sequences, and studies their properties and their interactions with computational complexity. The notion of Martin-Löf (ML) randomness was for a long time considered to be the central one. We review the definition. An open set U ⊆ [0, 1] is r.e. if it is a union of a computable sequence of open intervals with rational endpoints; that is, U = ⋃_{i∈N} (p_i, q_i) where ⟨p_i⟩ and ⟨q_i⟩ are computable sequences of rationals. A ML-test is a sequence ⟨U_n⟩ of uniformly r.e. open sets with µ(U_n) ≤ 2^{−n}, where µ is the Lebesgue measure. A real x is ML-random if x ∉ ⋂_n U_n for every ML-test ⟨U_n⟩.
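As a quick illustration of the definition, here is the standard argument (a well-known fact, not taken from the text above) that no computable real x is ML-random: pick rationals q_n with |x − q_n| < 2^{−n−2}; then

```latex
U_n \;=\; \bigl(q_n - 2^{-n-2},\; q_n + 2^{-n-2}\bigr), \qquad
\mu(U_n) \;=\; 2^{-n-1} \;\le\; 2^{-n}, \qquad
x \;\in\; \bigcap_{n} U_n ,
```

so ⟨U_n⟩ is an ML-test capturing x.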
A suite of alternative notions has been introduced by modifying this definition, both stronger and weaker than ML-randomness; see e.g. Nies [13, Chapter 3]. Many of these notions have shown their importance, in particular for the interaction of randomness and analysis; see Nies [14] and Brattka, Miller and Nies [2]. One of them is the following weakening of ML-randomness, which will be important in the present paper. We say that a real x is Schnorr random if it passes all ML-tests ⟨U_n⟩ such that µ(U_n) is computable uniformly in n.
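In display form, the Schnorr tests just described are exactly the ML-tests with uniformly computable measure:

```latex
\langle U_n \rangle \ \text{uniformly r.e.}, \qquad
\mu(U_n) \;\le\; 2^{-n}, \qquad
n \mapsto \mu(U_n)\ \text{computable uniformly in } n ;
```

and x is Schnorr random iff x ∉ ⋂_n U_n for every such test.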

Main results
For left-r.e. reals, ML-randomness interacts closely with Solovay reducibility: a left-r.e. real is ML-random iff it is Solovay complete (a result by Calude, Hertling, Khoussainov and Wang [3] and Kučera and Slaman [9]). As discussed shortly, the complete Solovay degree is join irreducible.
The supremum of two left-r.e. reals in the degree structure induced by Solovay reducibility is given by their arithmetic sum. We are guided by the following two facts that restate some of the results above in terms of the sum. Let α, β be left-r.e. reals.
(1) If α is ML-random, then α + β is ML-random.
(2) If α + β is ML-random, then at least one of α and β is ML-random.
A simple direct proof of the first fact can be found in [13, Theorem 3.2.27]. For the second, see Demuth [4], and as a more recent (and more readable) reference, Downey, Hirschfeldt and Nies [7].
Our first goal is to show that both statements fail for Schnorr randomness (Corollary 3.2 and Theorem 4.1). Thereafter, we prove that, in contrast, left-r.e. weakly s-random reals behave like ML-random reals (Theorem 5.6). The reals that have effective packing dimension at most s behave like non-ML-random reals (Proposition 5.7).

Preliminaries
Our notation in the algorithmic theory of randomness is standard, as in Nies [13] or Downey and Hirschfeldt [5] (except that we write "r.e." instead of "c.e."). Unless otherwise stated, reals in this paper are in [0, 1]. A real α is called left-r.e. if there exists a computable non-decreasing sequence ⟨a_n⟩_{n∈N} of rationals such that lim_n a_n = α. We sometimes identify a real α with its binary expansion X ∈ 2^ω and the corresponding set X = {n ∈ N : X(n) = 1}. By X↾n we denote the initial segment of X of length n.

Solovay reducibility

Definitions and basic facts
The following was first defined by Solovay in unpublished work.

Definition 2.1 (Solovay [15]) For left-r.e. reals α and β, we say that α is Solovay reducible to β, denoted α ≤_S β, if there exist a constant c and a partial computable function f : Q → Q such that, whenever q ∈ Q and q < β, then f(q)↓ < α and α − f(q) < c(β − q).

Solovay used the terminology "α is dominated by β". This definition expresses that α is less complex than β by asking that β is harder to approximate from below. For, let ⟨α_s⟩, ⟨β_s⟩ be computable nondecreasing sequences of rationals converging to α, β respectively. Then α ≤_S β if and only if there are a constant c ∈ N and a computable function g such that α − α_{g(n)} < c(β − β_n) for all n. For a proof, see again [5, Chapter 9], and also [13, Section 3.2].
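In display form, the two equivalent formulations of α ≤_S β read:

```latex
% Solovay's original definition, via a partial computable f : Q -> Q
\exists c\, \exists f\ \forall q \in \mathbb{Q}\ \bigl( q < \beta \;\Rightarrow\;
f(q)\!\downarrow\; < \alpha \ \wedge\ \alpha - f(q) < c\,(\beta - q) \bigr)

% Equivalent form, via monotone approximations alpha_s -> alpha, beta_s -> beta
\exists c\, \exists g \text{ computable}\ \forall n\quad
\alpha - \alpha_{g(n)} \;<\; c\,(\beta - \beta_n)
```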
There is a useful algebraic characterization of Solovay reducibility: α ≤_S β means that β can be obtained from α by literally "adding" information, in the sense of adding a left-r.e. real γ. To make this work we also need to scale β. (For the precise version of the result used here see Nies [13, Theorem 3.2.29].)

Proposition 2.2 (Downey, Hirschfeldt and Nies [7]) For left-r.e. reals α, β we have α ≤_S β if and only if there are d ∈ N and a left-r.e. real γ such that α + γ = 2^d β.
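The direction from the algebraic condition back to Solovay reducibility can be checked by hand. A sketch (not taken verbatim from the text): assume α + γ = 2^d β, with computable nondecreasing rational approximations ⟨α_s⟩ → α and ⟨γ_s⟩ → γ. Given a rational q < β, search for a stage s with 2^d q < α_s + γ_s (one exists, since 2^d q < 2^d β = α + γ), and put f(q) = 2^d q − γ_s. Then:

```latex
f(q) \;=\; 2^{d}q - \gamma_s \;<\; \alpha_s \;\le\; \alpha,
\qquad
\alpha - f(q) \;=\; \alpha - 2^{d}q + \gamma_s
\;\le\; (\alpha + \gamma) - 2^{d}q \;=\; 2^{d}(\beta - q),
```

so f is partial computable on the rationals below β, and the constant c = 2^d + 1 witnesses α ≤_S β.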

Connections to K-reducibility
For any reals α, β one writes α ≤_K β if K(α↾n) ≤ K(β↾n) + O(1) for all n, where K denotes prefix-free Kolmogorov complexity. The Levin-Schnorr theorem says that X is ML-random if and only if K(X↾n) ≥+ n. Thus α ≤_K β says that β is no less random than α.
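With ≤+ and ≥+ denoting inequalities up to an additive constant, the two notions just used can be displayed as:

```latex
\alpha \le_K \beta \;\iff\; \forall n\ \ K(\alpha{\upharpoonright} n) \;\le^{+}\; K(\beta{\upharpoonright} n),
\qquad
X \ \text{ML-random} \;\iff\; \forall n\ \ K(X{\upharpoonright} n) \;\ge^{+}\; n .
```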
Proof First note that β ≤_S α would imply β ≤_K α, contrary to α <_K β. Hence it suffices to show α ≤_S β. If β is a rational, then α <_K β does not hold; so β is not a rational. If α is a rational, then α ≤_S β holds trivially. Hence we can assume that α is not a rational.
Let ⟨α_s⟩_{s∈N}, ⟨β_s⟩_{s∈N} be increasing computable sequences of rationals converging to α, β respectively. Given n ∈ N, let s_n be the first stage s such that α_s↾n = α↾n. There exists a constant c ∈ N such that for each n: … Given α_{s_n}↾n, we can compute the stage s_n from the approximation ⟨α_s⟩, which also computes β_{s_n}↾n; thus the first inequality follows. Since … for sufficiently large n. Thus β_{s_n}↾n = β↾n for all sufficiently large n. Furthermore, β↾n is not equal to the lexicographic successor of β_{s_n}↾n, for otherwise K(β↾n) would be within a constant of K(β_{s_n}↾n). Hence …, where s is the first stage such that q < β_s. If n is sufficiently large and s_n ≤ s ≤ s_{n+1}, then … Then there is a rational q* < β such that α − f(q) ≤ 2(β − q) for every rational q with q* ≤ q < β. Modifying f on the rationals ≤ q* shows that α ≤_S β.

Connections to ML-randomness
We restate the two guiding facts from the introduction. They connect Solovay reducibility and ML-randomness.
Proposition 2.4 Let α, β be left-r.e. reals. If α is ML-random, then α + β is ML-random.

Here + is the real addition. For a direct proof see Nies [13, Theorem 3.2.27].
By Proposition 2.2, for left-r.e. reals α, γ, if α is ML-random and α ≤_S γ, then γ is ML-random. The implication ≤_S ⇒ ≤_K extends this result via the Levin-Schnorr theorem. This shows that Solovay reducibility can be viewed as a measure of relative algorithmic randomness.
The second fact says that the top degree in the Solovay degrees is join irreducible.

Theorem 2.5 (Demuth [4]; Downey, Hirschfeldt and Nies [7]) Let α, β be left-r.e. reals. If α + β is ML-random, then at least one of α and β is ML-random.
For left-r.e. reals α, β, the degree of α + β is the least degree above those of α and β; thus the degree of α + β is the join of the degrees of α and β. The theorem says that the join of two non-ML-random left-r.e. degrees is not ML-random either, so the top degree cannot be the join of two lesser degrees.

Schnorr randomness is not upward closed in the Solovay degrees
The following will be used to show that for Schnorr randomness, the counterpart of Proposition 2.4 fails.
Theorem 3.1 Let α be a left-r.e. real such that K(α↾n) < n − f(n) for all n, for some order function f. There exists a left-r.e. real β such that α + β is disjoint from some infinite computable set.
Proof Suppose g, h are strictly increasing computable functions such that the range of h is the complement of the range of g. We define … Let α be as in the statement. If g is a sufficiently fast growing computable function, … Note that γ is left-r.e. Since α <_K γ, by Proposition 2.3 we have α ≤_S γ. By Proposition 2.2 there exist a natural number d and a left-r.e. real β such that 2^d γ = α + β. Since γ is disjoint from an infinite computable set, so is α + β.

Partial computable randomness is a notion in between Schnorr and ML-randomness. For the definition see [13, Section 7.4].

Corollary 3.2
The left-r.e.Schnorr random reals are not upward closed in the Solovay degrees.
Proof By a result of Merkle [12] (see also [13, Remark 7.4.17]), there exists a left-r.e. Schnorr random real α such that K(α↾n) = O(log n). Now take α + β ≥_S α as above. Since α + β is disjoint from some infinite computable set, it cannot be Schnorr random.

Remark 3.3
The constructed α can even be partial computably random, as in [13, Remark 7.4.17]. Furthermore, α + β is not even Kurtz random, because it is disjoint from an infinite computable set. The definitions of partial computable randomness and Kurtz randomness can be found in [13] and [5].

Left-r.e. Schnorr random reals can be split into non-Schnorr random reals
We show that the counterpart of Theorem 2.5 for Schnorr randomness does not hold either. Theorem 4.1 below gives a sufficient condition on α so that α is the join of two far-from-random reals. The condition requires somewhat slowly growing initial segment complexity of the left-r.e. real α. The condition is weak in the sense that α can still be Schnorr random, using [13, Remark 7.4.17] as in the proof of Corollary 3.2.
We show that α can be written as a sum of left-r.e. reals which are far from random in various senses. One way to express this is to say that their effective Hausdorff dimension is 0.
We recall the definition here. By Mayordomo [10], for a real X ∈ 2^ω, the effective Hausdorff dimension dim(X) can be characterized by:
(1) dim(X) = liminf_n K(X↾n)/n.
By Athreya, Hitchcock, Lutz and Mayordomo [1], the effective packing dimension Dim(X) can be characterized by:
(2) Dim(X) = limsup_n K(X↾n)/n.
Let a be a Solovay degree. Recall that Solovay reducibility implies K-reducibility. For α, β ∈ a we have α ≡_K β and hence dim(α) = dim(β). So we may define dim(a) = dim(α) for any real α ∈ a. A survey of effective Hausdorff dimension is given in Mayordomo [11].
Theorem 4.1 Let α be a left-r.e. real such that C(α↾n) ≤ n − g(n) for all n, where g is a computable function such that ∑_n 2^{−g(n)} is finite and a computable real. There exist left-r.e. reals β, γ such that α = β + γ, dim(β) = dim(γ) = 0, and both β and γ are not Borel normal and are disjoint from infinite computable sets.
Notice that dim(α) can be 1, in which case α is more random than β and γ.
Proof Let ⟨α_s⟩ be an increasing computable sequence of dyadic rationals converging to α. We impose additional properties on ⟨α_s⟩. For each k ∈ N, we can compute s such that C_s(α_s↾n) < n − g(n) for all n ≤ k. Since α_s is a dyadic rational, α_s · 2^j is a natural number for some j. By taking a subsequence of such s and considering an increasing computable function h that maps k to j, we can assume that α_s · 2^{h(s)} is a natural number and C(α_s↾n) < n − g(n) for all s and all n ≤ h(s).
Let a_n be the number of stages s such that the binary representation of α_{s+1} − α_s has a bit 1 at position n. The numbers a_n are uniformly computably approximable from below, and ∑_n a_n·2^{−n} = ∑_s (α_{s+1} − α_s) = α − α_0. Notice that a_n ≤ 2^{n−g(n)} for each n: for s such that h(s) < n, α_s − α_{s−1} does not have a bit 1 at position n; for s such that h(s) ≥ n, we have C(α_s↾n) < n − g(n), and there are at most 2^{n−g(n)} many different strings α_s↾n.
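The bound a_n ≤ 2^{n−g(n)} combines with the hypothesis of Theorem 4.1 to show that the total weight of the bits counted by the a_n is finite:

```latex
\sum_{n} a_n\, 2^{-n} \;\le\; \sum_{n} 2^{\,n-g(n)}\, 2^{-n}
\;=\; \sum_{n} 2^{-g(n)} \;<\; \infty ,
```

where the last sum is finite and a computable real by assumption.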
Proof Let f(n) be the biggest natural number less than n + √n. Note that f : N → N is an increasing computable function. Given a left-r.e. real α, let β = {f(n) : n ∈ α}. Notice that β is a left-r.e. real with the approximation β_s = {f(n) : n ∈ α_s}, where ⟨α_s⟩ is an increasing computable approximation of α. Now the following equalities hold: … To describe β↾f(n), one only needs to describe the bits in the positions f(k) for k < n.
for some k.
Clearly β ≤_S α. Assume by way of contradiction that the converse also holds. Then there are constants c, c′ ∈ N such that … Let k ∈ N. There exists a_0 ∈ N such that f(n) − n ≥ k for all n ≥ a_0. Inductively, we define a_{n+1} = f(a_n). Then a_n ≥ a_0 + kn. In contrast: …

Next we consider partial randomness.

Definition 5.5 (Tadaki [16]; see also [5, Definition 13.5.1]) A test for weak s-ML randomness is a sequence ⟨V_n⟩ of uniformly r.e. sets of strings such that ∑_{σ∈V_n} 2^{−s|σ|} ≤ 2^{−n}. A real X is weakly s-ML-random if X ∉ ⋂_n [V_n] for all tests for weak s-ML randomness, where [V_n] denotes the open set of reals extending some string in V_n.
The strings can be replaced with open intervals with dyadic rational endpoints [5, after Definition 13.5.8].
Tadaki [16] showed that X is weakly s-ML-random if and only if K(X↾n) > sn − O(1). Hence, for every weakly s-ML-random real X, we have dim(X) ≥ s. It is known that the converse fails. This also follows from the construction of Proposition 5.4.
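Since dim(X) = liminf_n K(X↾n)/n by Mayordomo's characterization recalled earlier, the dimension bound is immediate from Tadaki's characterization:

```latex
K(X{\upharpoonright} n) \;>\; sn - O(1)
\quad\Longrightarrow\quad
\dim(X) \;=\; \liminf_{n} \frac{K(X{\upharpoonright} n)}{n}
\;\ge\; \liminf_{n} \frac{sn - O(1)}{n} \;=\; s .
```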
Further, for left-r.e. reals, being weakly s-ML-random is upward closed under ≤_S.
Journal of Logic & Analysis 10:3 (2018)

Theorem 5.6 Let s ∈ (0, 1] ∩ Q. The set of left-r.e. weakly s-random Solovay degrees is a principal filter in the Solovay degrees, with the degree of Ω_s as the least element.
Here we consider only left-r.e. reals, so we restrict s to be a rational.
Proof We follow the argument of the Kučera–Slaman theorem [9]. Let α be a left-r.e. weakly s-random real. The goal is to show Ω_s ≤_S α.
We construct a test for weak s-ML-randomness ⟨U_k⟩_{k∈N}. At stage t act as follows. If α_t ∈ U_k[t], then do nothing. Otherwise, let t′ be the last stage at which we put anything into U_k (or t′ = 0 if there is no such stage), and put the intervals … into U_k. Let t_0 = 0, t_1, … be the stages at which we put intervals into U_k. The total weight is bounded by … Thus ⟨U_k⟩ is a test for weak s-ML-randomness.
Since α is weakly s-random, there exists a constant k such that α ∉ U_k. Then … for each i > 0, so Ω_s ≤_S α.
Now we turn to ideals, the dual notion of filters. A nonempty subset I of a partially ordered set (P, ≤) is an ideal if (1) for every x ∈ I, y ≤ x implies y ∈ I, and (2) for every x, y ∈ I, there exists z ∈ I such that x ≤ z and y ≤ z.
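For comparison, the dual notion used in Theorem 5.6: a nonempty F ⊆ P is a filter if it is upward closed and downward directed, and the principal filter generated by an element a consists of all elements above a. In symbols:

```latex
% Filter: upward closed and downward directed
(1)\ \ x \in F \wedge x \le y \;\Rightarrow\; y \in F, \qquad
(2)\ \ x, y \in F \;\Rightarrow\; \exists z \in F\ (z \le x \wedge z \le y).

% Principal filter generated by a
{\uparrow} a \;=\; \{\, x \in P : a \le x \,\}.
```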
Theorem 2.5 says that the non-ML-random Solovay degrees form an ideal. The following is an easy observation.