# Constructivity

One of the most striking differences between classical foundations and type theory  is the idea of proof relevance, according to which mathematical statements, and even their proofs, become first-class mathematical objects. In type theory, we represent mathematical statements by types, which can be regarded simultaneously as both mathematical constructions and mathematical assertions, a conception also known as propositions as types. Accordingly, we can regard a term $a:A$ as both an element of the type $A$ (or in homotopy type theory, a point of the space $A$), and at the same time, a proof of the proposition  $A$. To take an example, suppose we have sets $A$ and $B$ (discrete spaces), and consider the statement “$A$ is isomorphic to $B$.” In type theory, this can be rendered as:

 $\mathsf{Iso}(A,B):\equiv\sum_{(f:A\to B)}\;\sum_{(g:B\to A)}\Big(\big(\textstyle\prod_{(x:A)}g(f(x))=x\big)\times\big(\textstyle\prod_{(y:B)}f(g(y))=y\big)\Big).$

Reading the type constructors $\Sigma,\Pi,\times$ here as “there exists”, “for all”, and “and” respectively yields the usual formulation of “$A$ and $B$ are isomorphic”; on the other hand, reading them as sums and products yields the type of all isomorphisms between $A$ and $B$! To prove that $A$ and $B$ are isomorphic, one constructs a proof $p:\mathsf{Iso}(A,B)$, which is therefore the same as constructing an isomorphism between $A$ and $B$, i.e., exhibiting a pair of functions $f,g$ together with proofs that their composites are the respective identity maps. The latter proofs, in turn, are nothing but homotopies of the appropriate sorts. In this way, proving a proposition is the same as constructing an element of some particular type.
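As an illustration, the type $\mathsf{Iso}(A,B)$ can be transcribed into a proof assistant. Here is a sketch in Lean 4 (whose underlying logic differs from homotopy type theory in details), packaging the iterated $\Sigma$-type as a structure; the names `Iso` and `isoRefl` are ours, not from the text:

```lean
-- A sketch of Iso(A,B): an iterated Σ-type packaging a map,
-- an inverse, and proofs that both composites are identities.
structure Iso (A B : Type) where
  f  : A → B
  g  : B → A
  gf : ∀ x : A, g (f x) = x
  fg : ∀ y : B, f (g y) = y

-- Proving "A is isomorphic to A" means constructing such a package;
-- here the identity function, with both round-trip proofs by rfl.
def isoRefl (A : Type) : Iso A A :=
  ⟨id, id, fun _ => rfl, fun _ => rfl⟩
```

A term of `Iso A B` thus literally *is* an isomorphism together with the evidence that it is one.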

In particular, to prove a statement of the form “$A$ and $B$” is just to prove $A$ and to prove $B$, i.e., to give an element of the type $A\times B$. And to prove that $A$ implies $B$ is just to find an element of $A\to B$, i.e. a function from $A$ to $B$ (determining a mapping of proofs of $A$ to proofs of $B$). This “constructive” conception is what gives type theory its good computational character. For instance, every proof that something exists carries with it enough information to actually find such an object; and from a proof that “$A$ or $B$” holds, one can extract either a proof that $A$ holds or one that $B$ holds. Thus, from every proof we can automatically extract an algorithm; this can be very useful in applications to computer programming.
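This reading is concrete in a system like Lean, where a proof of a conjunction is literally a pair and a proof of an implication is literally a function (a sketch; the example statements are ours):

```lean
-- A proof of "A and B" is a pair of proofs:
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := ⟨ha, hb⟩

-- A proof of "A implies B" is a function from proofs of A to proofs
-- of B; chaining two implications is just function composition:
example (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
  fun ha => hbc (hab ha)
```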

However, this conception of logic does behave in ways that are unfamiliar to most mathematicians. On one hand, a naive translation of the axiom of choice  yields a statement that we can simply prove. Essentially, this notion of “there exists” is strong enough to ensure that, by showing that for every $x:A$ there exists a $y:B$ such that $R(x,y)$, we automatically construct a function $f:A\to B$ such that, for all $x:A$, we have $R(x,f(x))$.
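In Lean this untruncated statement is provable in one line, provided “there exists” is read as a $\Sigma$-type. Here we render the $\Sigma$-type with Lean's `Subtype` (which carries its witness); the name `naiveAC` is ours:

```lean
-- The "naive" axiom of choice is simply a theorem: from witness-carrying
-- evidence that every x has a y with R x y, we read off the choice
-- function and the proof that it works.
def naiveAC {A B : Type} {R : A → B → Prop}
    (h : ∀ x : A, {y : B // R x y}) :
    {f : A → B // ∀ x : A, R x (f x)} :=
  ⟨fun x => (h x).val, fun x => (h x).property⟩
```

Had the hypothesis instead been the truncated `∃ y, R x y`, no such function could be extracted without a genuine choice principle.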

On the other hand, this notion of “or” is so strong that a naive translation of the law of excluded middle is inconsistent with the univalence axiom. For if we assume “for all $A$, either $A$ or not $A$”, then since proving “$A$” means exhibiting an element of it, we would have a uniform way of selecting an element from every nonempty type — a sort of Hilbertian choice operator. However, univalence implies that the element of $A$ selected by such a choice operator must be invariant under all self-equivalences of $A$, since these are identified with self-identities and every operation must respect identity. But clearly some types have automorphisms with no fixed points, e.g. we can swap the elements of a two-element type.
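The fixed-point-free automorphism is easy to exhibit; here is a Lean sketch for the two-element type `Bool` (only this last step of the argument, not the univalence part, is formalized):

```lean
-- Swapping the two elements of Bool is a self-inverse automorphism …
def swap : Bool → Bool := fun b => !b

example : ∀ b : Bool, swap (swap b) = b := by
  intro b; cases b <;> rfl

-- … and it has no fixed point, so no equivalence-invariant choice
-- of an element of Bool can exist.
example : swap true ≠ true := fun h => Bool.noConfusion h
example : swap false ≠ false := fun h => Bool.noConfusion h
```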

Thus, the logic of “propositions as types” suggested by traditional type theory is not the “classical” logic familiar to most mathematicians. But it is also different from the logic sometimes called “intuitionistic”, which may lack both the law of excluded middle and the axiom of choice. For present purposes, it may be called constructive logic (but one should be aware that the terms “intuitionistic” and “constructive” are often used differently).

The computational advantages of constructive logic imply that we should not discard it lightly; but for some purposes in classical mathematics, its non-classical character can be problematic. Many mathematicians are, of course, accustomed to relying on the law of excluded middle, and the “axiom of choice” that is available in constructive logic, while superficially similar to its classical namesake, does not have all of its strong consequences. Fortunately, homotopy type theory gives a finer analysis of this situation, allowing various different kinds of logic to coexist and intermix.

The new insight that makes this possible is that the system of all types, just like spaces in classical homotopy theory, is “stratified” according to the dimensions in which their higher homotopy structure exists or collapses. In particular, Voevodsky has found a purely type-theoretic definition of homotopy $n$-types, corresponding to spaces with no nontrivial homotopy information above dimension $n$. (The $0$-types are the “sets” mentioned previously as satisfying Lawvere’s axioms.) Moreover, with higher inductive types, we can universally “truncate” a type into an $n$-type; in classical homotopy theory this would be its $n^{\mathrm{th}}$ Postnikov section.
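The bottom of this stratification can be written down directly. A Lean-flavoured sketch (in homotopy type theory proper these definitions live in a universe of types, with $\Sigma$ in place of $\exists$, so this is only an approximation):

```lean
-- h-levels: (-2)-types are contractible, (-1)-types are mere
-- propositions (any two elements are equal), and 0-types are sets
-- (any two parallel equality proofs agree).
def isContr (A : Type) : Prop := ∃ a : A, ∀ x : A, x = a
def isProp  (A : Type) : Prop := ∀ x y : A, x = y
def isSet   (A : Type) : Prop := ∀ (x y : A) (p q : x = y), p = q
```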

With these notions in hand, the homotopy $(-1)$-types, which we call (mere) propositions, support a logic that is much more like traditional “intuitionistic” logic. (Classically, every $(-1)$-type is empty or contractible; we interpret these possibilities as the truth values “false” and “true” respectively.) The “$(-1)$-truncated axiom of choice” is not automatically true, but is a strong assumption with the same sorts of consequences as its counterpart in classical set theory. Similarly, the “$(-1)$-truncated law of excluded middle” may be assumed, with many of the same consequences as in classical mathematics. Thus, the homotopical perspective reveals that classical and constructive logic can coexist, as endpoints of a spectrum of different systems, with an infinite number of possibilities in between (the homotopy $n$-types for $-1<n<\infty$). We may speak of “$\mathsf{LEM}_{n}$” and “$\mathsf{AC}_{n}$”, with $\mathsf{AC}_{\infty}$ being provable and $\mathsf{LEM}_{\infty}$ inconsistent with univalence, while $\mathsf{AC}_{-1}$ and $\mathsf{LEM}_{-1}$ are the versions familiar to classical mathematicians (hence in most cases it is appropriate to assume the subscript $(-1)$ when none is given). Indeed, one can even have useful systems in which only certain types satisfy such further “classical” principles, while types in general remain “constructive.”
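The two endpoints of this spectrum can be stated side by side. A Lean-flavoured sketch with names of our own choosing (`Empty` plays the role of falsehood at the type level):

```lean
-- LEM for mere propositions: consistent, and routinely assumed in
-- classical mathematics.
def lemMinusOne : Prop := ∀ P : Prop, P ∨ ¬P

-- Untruncated LEM over all types: a uniform decision carrying data,
-- which is inconsistent with univalence.
def lemInfty : Type 1 := ∀ A : Type, Sum A (A → Empty)
```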

It is worth emphasizing that univalent foundations does not require the use of constructive or intuitionistic logic. Most of classical mathematics which depends on the law of excluded middle and the axiom of choice can be performed in univalent foundations, simply by assuming that these two principles hold (in their proper, $(-1)$-truncated, form). However, type theory does encourage avoiding these principles when they are unnecessary, for several reasons.

First of all, every mathematician knows that a theorem is more powerful when proven using fewer assumptions, since it applies to more examples. The situation with $\mathsf{AC}$ and $\mathsf{LEM}$ is no different: type theory admits many interesting “nonstandard” models, such as in sheaf toposes, where classicality principles such as $\mathsf{AC}$ and $\mathsf{LEM}$ tend to fail. Homotopy type theory admits similar models in higher toposes. Thus, if we avoid using these principles, the theorems we prove will be valid internally to all such models.

Secondly, one of the additional virtues of type theory is its computable character. In addition to being a foundation for mathematics, type theory is a formal theory of computation, and can be treated as a powerful programming language. From this perspective, the rules of the system cannot be chosen arbitrarily the way set-theoretic axioms can: there must be a harmony between them which allows all proofs to be “executed” as programs. We do not yet fully understand the new principles introduced by homotopy type theory, such as univalence and higher inductive types, from this point of view, but the basic outlines are emerging. It has been known for a long time, however, that principles such as $\mathsf{AC}$ and $\mathsf{LEM}$ are fundamentally antithetical to computability, since they assert baldly that certain things exist without giving any way to compute them. Thus, avoiding them is necessary to maintain the character of type theory as a theory of computation.

Fortunately, constructive reasoning is not as hard as it may seem. In some cases, simply by rephrasing some definitions, a theorem can be made constructive and its proof more elegant. Moreover, in univalent foundations this seems to happen more often. For instance:

1.

In set-theoretic foundations, at various points in homotopy theory and category theory one needs the axiom of choice to perform transfinite constructions. But with higher inductive types, we can encode these constructions directly and constructively. In particular, none of the “synthetic” homotopy theory in [Chapter 8](http://planetmath.org/node/87582) requires $\mathsf{LEM}$ or $\mathsf{AC}$.


Of course, these simplifications could as well be taken as evidence that the new methods will not, ultimately, prove to be really constructive. However, we emphasize again that the reader does not have to care, or worry, about constructivity in order to read this book. The point is that in all of the above examples, the version of the theory we give has independent advantages, whether or not $\mathsf{LEM}$ and $\mathsf{AC}$ are assumed to be available. Constructivity, if attained, will be an added bonus.

Given this discussion of adding new principles such as univalence, higher inductive types, $\mathsf{AC}$, and $\mathsf{LEM}$, one may wonder whether the resulting system remains consistent. (One of the original virtues of type theory, relative to set theory, was that it can be seen to be consistent by proof-theoretic means.) As with any foundational system, consistency is a relative question: “consistent with respect to what?” The short answer is that all of the constructions and axioms considered in this book have a model in the category of Kan complexes, due to Voevodsky. Thus, they are known to be consistent relative to ZFC (with as many inaccessible cardinals as we need nested univalent universes). Giving a more traditionally type-theoretic account of this consistency is work in progress.

We summarize the different points of view of the type-theoretic operations in Table 1.

Table 1: Comparing points of view on type-theoretic operations

| Types | Logic | Sets | Homotopy |
|---|---|---|---|
| $A$ | proposition | set | space |
| $a:A$ | proof | element | point |
| $B(x)$ | predicate | family of sets | fibration |
| $b(x):B(x)$ | conditional proof | family of elements | section |
| $\mathbf{0},\mathbf{1}$ | $\bot,\top$ | $\emptyset,\{\emptyset\}$ | $\emptyset,*$ |
| $A+B$ | $A\vee B$ | disjoint union | coproduct |
| $A\times B$ | $A\wedge B$ | set of pairs | product space |
| $A\to B$ | $A\Rightarrow B$ | set of functions | function space |
| $\sum_{(x:A)}B(x)$ | $\exists_{x:A}B(x)$ | disjoint sum | total space |
| $\prod_{(x:A)}B(x)$ | $\forall_{x:A}B(x)$ | product | space of sections |
| $\mathsf{Id}_A$ | equality $=$ | $\{(x,x)\mid x\in A\}$ | path space $A^I$ |
