Greenberg et al. also determined the distance from a dimension 1 sequence to the nearest dimension t sequence. But they left open the general problem of reducing dimension, which is made difficult by the fact that the information in a dimension s sequence can be coded (at least somewhat) redundantly. Goh, Miller, Soskova, and Westrick recently gave a complete solution.
I will talk about both the results in the 2018 paper and the more recent work. In particular, I will discuss some of the coding theory behind these results. No previous knowledge of coding theory is assumed.
In the 1980's, Slaman and Steel proved that the second part of Martin's Conjecture holds for order-preserving Borel functions. In joint work with Benny Siskind, we prove the complementary result that (assuming analytic determinacy) the first part of the conjecture also holds for order-preserving Borel functions (and under AD, for all order-preserving functions). Our methods also yield several other new results, including an equivalence between the first part of Martin's Conjecture and a statement about the Rudin-Keisler order on ultrafilters on the Turing degrees.
In my talk, I will give an overview of Martin's Conjecture and then describe our new results.
We introduce the "into" and "within" set operations in order to construct sets of arbitrary intrinsic density from any Martin-Löf random. We then show that these operations are useful more generally for working with other notions of density as well, in particular for viewing Church and MWC stochasticity as a form of density.
As we show, (1) the number of relatively random closed sets that can have a non-empty intersection varies depending on the choice of underlying probability measure on the space of closed subsets of Cantor space - this number being the degree of intersectability of a given family of random closed sets - and (2) the effective dimension of a sequence X is inversely proportional to the minimum degree of intersectability of a family of random closed sets, at least one of which contains X as a member. Put more simply, a sequence of lower dimension can only be in random closed sets with more branching, which are thus more intersectable, whereas higher dimension sequences can be in random closed sets with less branching, which are thus less intersectable, and the relationship between these two quantities (that is, effective dimension and degree of intersectability) can be given explicitly.
Complexity profiles are a way of measuring, for a structure A generic Muchnik reducible to a structure B, which subsets of A can be defined using B. The complexity profile of A on itself is the natural analog of considering the relatively intrinsically Σ_k sets in A.
Using complexity profiles, I will compare three generic Muchnik degrees: Cantor space < Baire space < the Borel-complete degree. In particular, I will describe some dichotomy theorems regarding simple expansions of these and describe how to build degrees strictly between them. (Joint work with Joseph S. Miller, Noah Schweber, and Mariya Soskova.)
The usual base theory used in reverse mathematics is RCA0, which is intended to correspond roughly to the idea of "computable mathematics". The two main axioms of RCA0 are comprehension for computable properties of natural numbers and mathematical induction for c.e. properties. A weaker theory, in which induction for c.e. properties is replaced by induction for computable properties, has also been introduced, but it has received much less attention. In the reverse mathematics literature, this weaker theory is known as RCA*0.
In this talk, I will discuss some results concerning the reverse mathematics of combinatorial principles over RCA*0. We will focus mostly on Ramsey's theorem and some of its well-known special cases: the chain-antichain principle CAC, the ascending-descending chain principle ADS, and the cohesiveness principle COH.
The results I will talk about are part of a larger project joint with Marta Fiori Carones, Katarzyna Kowalik, Tin Lok Wong, and Keita Yokoyama.
The monograph "Continuous Model Theory" by Chang and Keisler, Annals of Mathematics Studies (1966), studied structures with truth values in [0,1], with formulas that had continuous functions as connectives, sup and inf as quantifiers, and equality. In 2008, Ben Yaacov, Berenstein, Henson, and Usvyatsov introduced the model theory of metric structures, where equality is replaced by a metric, and all functions and predicates are required to be uniformly continuous. This has led to an explosion of research with results that closely parallel first-order model theory, with many applications to analysis. In my forthcoming paper "Model Theory for Real-valued Structures", the "Expansion Theorem" allows one to extend many model-theoretic results about metric structures to general [0,1]-valued structures - the structures in the 1966 monograph but without equality.
My paper "Ultrapowers Which are Not Saturated", J. Symbolic Logic 32 (1967), 23-46, introduced a pre-ordering M ≤ N on all first-order structures, that holds if every regular ultrafilter that saturates N saturates M, and suggested using it to classify structures. In the last decade, in a remarkable series of papers, Malliaris and Shelah showed that that pre-ordering gives a rich classification of simple first-order structures. Here, we lay the groundwork for using the analogous pre-ordering to classify [0,1]-valued and metric structures.
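The pre-ordering described above (now known as the Keisler order) can be sketched in symbols as follows; the λ⁺-saturation phrasing is the standard formulation and is not spelled out in the abstract itself:

```latex
% Keisler's pre-ordering (sketch): M \trianglelefteq N holds iff every
% regular ultrafilter that saturates N also saturates M.
M \trianglelefteq N
  \iff
\forall \lambda \;\, \forall D \ \text{regular ultrafilter on } \lambda :
  \bigl( N^{\lambda}/D \text{ is } \lambda^{+}\text{-saturated}
  \;\Longrightarrow\;
  M^{\lambda}/D \text{ is } \lambda^{+}\text{-saturated} \bigr)
```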
This work is joint with André Nies.
Joint with Arno Pauly and Liang Yu.
The talk should be generally accessible to university logic students, requiring little beyond familiarity with the incompleteness theorem and some elementary ideas from computability theory.
Discussion and commentary can be made on the speaker's web site at http://jdh.hamkins.org/natural-instances-of-nonlinearity-in-the-hierarchy-of-consistency-strength-uwm-logic-seminar-january-2021/.
Do other branches of mathematics possess the synthetic method, too? For instance, what would “synthetic topology” look like? To build spaces out of sets, as topologists usually do, is the analytic way. The synthetic approach must construe spaces as primitive and axiomatize them directly, without any recourse to sets. It cannot introduce continuity as a desirable property of functions, for that would be like identifying straight lines as the non-bending curves.
It is indeed possible to build the synthetic worlds of topology, smooth analysis, measure theory, and computability. In each of them, the basic structure – topological, smooth, measurable, computable – is implicit by virtue of permeating everything, even logic itself. The synthetic worlds demand an economy of thought that the unaccustomed mind finds frustrating at first, but eventually rewards it with new elegance and conceptual clarity. The synthetic method is still fruitfully related to the analytic method by interpretation of the former in models provided by the latter.
We demonstrate the approach by taking a closer look at synthetic computability, whose central axiom states that there are countably many countable subsets of the natural numbers. The axiom is validated and explained by its interpretation in the effective topos, where it corresponds to the familiar fact that the computably enumerable sets may be computably enumerated. Classic theorems of computability may be proved in a straightforward manner, without reference to any notion of computation. We end by showing that in synthetic computability, Turing reducibility is expressed in terms of sequential continuity of maps between directed-complete partial orders.
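As a rough formal rendering (the notation is mine, not the speaker's), the central axiom asserts the existence of an enumeration of exactly the countable subsets of the natural numbers:

```latex
% Central axiom of synthetic computability (sketch): there is a sequence
% W : \mathbb{N} \to \mathcal{P}(\mathbb{N}) whose terms are exactly
% the countable subsets of \mathbb{N}.
\exists\, W : \mathbb{N} \to \mathcal{P}(\mathbb{N}) \;\;
\forall S \subseteq \mathbb{N} \;
  \bigl( S \ \text{countable} \iff \exists e \in \mathbb{N}.\ S = W_e \bigr)
```

Under the interpretation in the effective topos mentioned above, such a W corresponds to the standard enumeration of the computably enumerable sets.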
In this talk, I will briefly introduce the general theory of dense limits of combinatorial objects (often associated with the name continuous combinatorics) and talk about the notion of natural quasirandomness, a generalization of quasirandomness to the same general setting of universal first-order theories. The main concept explored by our quasirandomness properties is that of unique coupleability, which roughly means that any alignment of two limit objects on the same ground set "looks random".
This talk is based on joint work with Alexander A. Razborov.
In this talk, we will provide a brief overview of the area of big Ramsey degrees on Fraïssé limits. Then we will present recent joint work with Rebecca Coulson and Rehana Patel characterizing the big Ramsey degrees for some seemingly disparate Fraïssé classes. We formulate an amalgamation property, which we call the Substructure Free Amalgamation Property, and show that every Fraïssé relational class with finitely many relations satisfying SFAP has big Ramsey degrees which are characterized in a manner as simply as those of the Rado graph. A more general property for disjoint amalgamation classes, which we call SDAP+, also ensures the same simple characterization of big Ramsey degrees. One of the novelties of our approach is that we build trees of quantifier-free 1-types with special nodes coding the vertices in a given enumerated Fraïssé limit. Then we use the method of forcing to do an unbounded search for a finite object, which produces in ZFC the exact big Ramsey degrees for these structures. SDAP+ holds for unrestricted relational structures, relational structures with forbidden 3-irreducible substructures, and others, producing new lines of results while recovering in a streamlined manner several previous results, including those of Laflamme, Sauer, and Vuksanović.
This situation can be improved by using a "synthetic" language for categories. Instead of using set theory as the implicit foundation for all mathematics, we can regard ordinary category-theoretic proofs as implicitly formalized in a type-theoretic language, which contains an apparently "inaccessible" universe of sets. Then we encapsulate McLarty's encodings (which are essentially a variant of "indexed category theory") in a single theorem interpreting this type-theoretic language into a set theory containing a much weaker universe.
One of the main technical tools we use is Montalbán’s “true stage” machinery, originally developed for iterated priority constructions in computable structure theory, but more recently used by Day and Marks for their resolution of the decomposability conjecture.
Joint work with Adam Day, Matthew Harrison-Trainor, and Dan Turetsky.
The needs of the recursion-theoretic constructions translate into two purely model-theoretic questions. I'll discuss the solution to these questions and try to highlight the connections with recursive model theory.
Joint work with Omer Mermelstein.
Their analogs in computability theory will be defined in terms of collections of computable sets, given as the columns of a single set. We study their complexity using Medvedev reducibility. For instance, we show that the ultrafilter number is Medvedev equivalent to the problem of finding a function that dominates all computable functions, that is, highness. In contrast, each nonlow set uniformly computes a maximal tower.
Joint work with Steffen Lempp, Joseph Miller, and Mariya Soskova.
Draft available here.