# 11.5 Compactness of the interval

We already pointed out that our constructions of reals are entirely compatible with classical logic. Thus, by assuming the law of excluded middle (\autoref{eq:lem}) and the axiom of choice (\autoref{eq:ac}) we could develop classical analysis, which would essentially amount to copying any standard book on analysis.

Nevertheless, anyone interested in computation, for example a numerical analyst, ought to be curious about developing analysis in a computationally meaningful setting. That analysis in a constructive setting is even possible was demonstrated by [Bishop1967]. As a sample of the differences and similarities between classical and constructive analysis we shall briefly discuss just one topic—compactness of the closed interval $[0,1]$ and a couple of theorems surrounding the concept.

Compactness is no exception to the common phenomenon in constructive mathematics that classically equivalent notions bifurcate. The three most frequently used notions of compactness are:

1. metrically compact: “Cauchy complete and totally bounded”,

2. Bolzano–Weierstraß compact: “every sequence has a convergent subsequence”,

3. Heine–Borel compact: “every open cover has a finite subcover”.

These are all equivalent in classical mathematics. Let us see how they fare in homotopy type theory. We can use either the Dedekind or the Cauchy reals, so we shall denote the reals just as $\mathbb{R}$. We first recall several basic definitions.

###### Definition 11.5.1.

A metric space $(M,d)$ is a set $M$ with a map $d:M\times M\to\mathbb{R}$ satisfying, for all $x,y,z:M$,

 $d(x,y)\geq 0,\qquad d(x,y)=d(y,x),\qquad d(x,y)=0\Leftrightarrow x=y,\qquad d(x,z)\leq d(x,y)+d(y,z).$
###### Definition 11.5.2.

A Cauchy approximation in $M$ is a sequence $x:\mathbb{Q}_{+}\to M$ satisfying

 $\forall(\delta,\epsilon).\,d(x_{\delta},x_{\epsilon})<\delta+\epsilon.$

The limit of a Cauchy approximation $x:\mathbb{Q}_{+}\to M$ is a point $\ell:M$ satisfying

 $\forall(\epsilon,\theta:\mathbb{Q}_{+}).\,d(x_{\epsilon},\ell)<\epsilon+\theta.$

A complete metric space is one in which every Cauchy approximation has a limit.
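The computational content of this definition is easy to exhibit numerically. The sketch below (illustrative only; the function name `sqrt2_approx` is ours) produces rational approximations of $\sqrt{2}$ by Newton iteration and checks the Cauchy-approximation condition $d(x_{\delta},x_{\epsilon})<\delta+\epsilon$ on sample indices, using exact rational arithmetic:

```python
from fractions import Fraction

def sqrt2_approx(eps: Fraction) -> Fraction:
    """Return a rational x with |x - sqrt(2)| < eps, via Newton iteration.

    Newton's map t -> (t + 2/t)/2 approaches sqrt(2) from above, and
    x - sqrt(2) < x^2 - 2 (since x + sqrt(2) > 1), so iterating until
    x^2 - 2 < eps suffices.
    """
    x = Fraction(2)
    while x * x - 2 >= eps:
        x = (x + 2 / x) / 2
    return x

# Check the Cauchy-approximation condition d(x_delta, x_eps) < delta + eps
# on a few sample indices.
samples = [Fraction(1, 10**k) for k in range(1, 5)]
for d in samples:
    for e in samples:
        assert abs(sqrt2_approx(d) - sqrt2_approx(e)) < d + e
```

Since each `sqrt2_approx(eps)` lies within `eps` of $\sqrt{2}$, any two values are within the sum of their indices of each other, exactly as the definition demands.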

###### Definition 11.5.3.

For a positive rational $\epsilon$, an $\epsilon$-net in a metric space $(M,d)$ is an element of

 $\sum_{(n:\mathbb{N})}\,\sum_{(x_{1},\ldots,x_{n}:M)}\,\forall(y:M).\,\exists(k\leq n).\,d(x_{k},y)<\epsilon.$

In words, this is a finite sequence of points $x_{1},\ldots,x_{n}$ such that every point in $M$ merely is within $\epsilon$ of some $x_{k}$.

A metric space $(M,d)$ is totally bounded when it has $\epsilon$-nets of all sizes:

 $\prod_{(\epsilon:\mathbb{Q}_{+})}\,\sum_{(n:\mathbb{N})}\,\sum_{(x_{1},\ldots,x_{n}:M)}\,\forall(y:M).\,\exists(k\leq n).\,d(x_{k},y)<\epsilon.$
###### Remark 11.5.4.

In the definition of total boundedness we used sloppy notation $\sum_{(n:\mathbb{N})}\,\sum_{(x_{1},\ldots,x_{n}:M)}$. Formally, we should have written $\sum_{(x:\mathsf{List}(M))}$ instead, where $\mathsf{List}(M)$ is the inductive type of finite lists from \autoref{sec:bool-nat}. However, that would make the rest of the statement a bit more cumbersome to express.

Note that in the definition of total boundedness we require pure existence of an $\epsilon$-net, not mere existence. This way we obtain a function which assigns to each $\epsilon:\mathbb{Q}_{+}$ a specific $\epsilon$-net. Such a function might be called a “modulus of total boundedness”. In general, when porting classical metric notions to homotopy type theory, we should use propositional truncation sparingly, typically so that we avoid asking for a non-constant map from $\mathbb{R}$ to $\mathbb{Q}$ or $\mathbb{N}$. For instance, here is the “correct” definition of uniform continuity.

###### Definition 11.5.5.

A map $f:M\to\mathbb{R}$ on a metric space is uniformly continuous when

 $\prod_{(\epsilon:\mathbb{Q}_{+})}\,\sum_{(\delta:\mathbb{Q}_{+})}\,\forall(x,y:M).\,d(x,y)<\delta\Rightarrow|f(x)-f(y)|<\epsilon.$

In particular, a uniformly continuous map has a modulus of uniform continuity, which is a function that assigns to each $\epsilon$ a corresponding $\delta$.
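For a concrete instance of such a modulus (an illustrative sketch; the map and the name `modulus` are ours, not from the text): the map $f(x)=3x+1$ is $3$-Lipschitz, so $\delta:\!\!\equiv\epsilon/3$ works uniformly:

```python
from fractions import Fraction

def f(x: Fraction) -> Fraction:
    """A 3-Lipschitz map: |f(x) - f(y)| = 3 |x - y|."""
    return 3 * x + 1

def modulus(eps: Fraction) -> Fraction:
    """A modulus of uniform continuity for f: delta = eps / 3."""
    return eps / 3

# Spot-check: pairs at distance < delta have images at distance < eps.
eps = Fraction(1, 5)
delta = modulus(eps)
for j in range(200):
    x = Fraction(j, 199)
    y = x + delta * Fraction(9, 10)   # any pair with |x - y| < delta
    assert abs(f(x) - f(y)) < eps
```

The point, as in the definition, is that `modulus` is a function of $\epsilon$ alone, independent of the points $x,y$.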

Let us show that $[0,1]$ is compact in the first sense.

###### Theorem 11.5.6.

The closed interval $[0,1]$ is complete and totally bounded.

###### Proof.

Given $\epsilon:\mathbb{Q}_{+}$, there is $k:\mathbb{N}$ such that $2/k<\epsilon$, so we may take the $\epsilon$-net $x_{i}=i/k$ for $i=0,\ldots,k-1$. This is an $\epsilon$-net because, for every $y:[0,1]$, there merely exists $i$ such that $0\leq i<k$ and $(i-1)/k<y<(i+2)/k$, and so $|y-x_{i}|<2/k<\epsilon$.

For completeness of $[0,1]$, consider a Cauchy approximation $x:\mathbb{Q}_{+}\to[0,1]$ and let $\ell$ be its limit in $\mathbb{R}$. Since $\max$ and $\min$ are Lipschitz maps, the retraction $r:\mathbb{R}\to[0,1]$ defined by $r(x):\!\!\equiv\max(0,\min(1,x))$ commutes with limits of Cauchy approximations, therefore

 $r(\ell)=r(\lim x)=\lim(r\circ x)=\lim x=\ell,$

which means that $0\leq\ell\leq 1$, as required. ∎
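The $\epsilon$-net from the proof can be computed directly. Below is an illustrative sketch (the helper name `epsilon_net` is ours) that builds the net $x_{i}=i/k$ with $2/k<\epsilon$ and verifies, on a sample of points, that every $y\in[0,1]$ is within $\epsilon$ of some net point:

```python
from fractions import Fraction
from math import ceil

def epsilon_net(eps: Fraction) -> list[Fraction]:
    """An eps-net for [0,1]: the points i/k for i = 0..k-1, with 2/k < eps."""
    k = ceil(2 / eps) + 1          # a convenient k with 2/k < eps
    return [Fraction(i, k) for i in range(k)]

# Every y in [0,1] is within eps of some net point: check on a sample grid.
eps = Fraction(1, 7)
net = epsilon_net(eps)
for j in range(1000):
    y = Fraction(j, 999)           # sample points of [0,1]
    assert min(abs(y - x) for x in net) < eps
```

The net has adjacent points at distance $1/k<\epsilon/2$, so even the right endpoint $1$ is within $1/k<\epsilon$ of the last point $(k-1)/k$.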

We thus have at least one good notion of compactness in homotopy type theory. Unfortunately, it is limited to metric spaces because total boundedness is a metric notion. We shall consider the other two notions shortly, but first we prove that a uniformly continuous map on a totally bounded space has a supremum, i.e. an upper bound which is less than or equal to all other upper bounds.

###### Theorem 11.5.7.

A uniformly continuous map $f:M\to\mathbb{R}$ on a totally bounded metric space $(M,d)$ has a supremum $m:\mathbb{R}$. For every $\epsilon:\mathbb{Q}_{+}$ there exists $u:M$ such that $|m-f(u)|<\epsilon$.

###### Proof.

Let $h:\mathbb{Q}_{+}\to\mathbb{Q}_{+}$ be the modulus of uniform continuity of $f$. We define an approximation $x:\mathbb{Q}_{+}\to\mathbb{R}$ as follows: for any $\epsilon:\mathbb{Q}_{+}$ total boundedness of $M$ gives an $h(\epsilon)$-net $y_{0},\ldots,y_{n}$. Define

 $x_{\epsilon}:\!\!\equiv\max(f(y_{0}),\ldots,f(y_{n})).$

We claim that $x$ is a Cauchy approximation. Consider any $\epsilon,\eta:\mathbb{Q}_{+}$, so that

 $x_{\epsilon}\equiv\max(f(y_{0}),\ldots,f(y_{n}))\quad\text{and}\quad x_{\eta}\equiv\max(f(z_{0}),\ldots,f(z_{m}))$

for some $h(\epsilon)$-net $y_{0},\ldots,y_{n}$ and $h(\eta)$-net $z_{0},\ldots,z_{m}$. Every $z_{i}$ is merely $h(\epsilon)$-close to some $y_{j}$, therefore $|f(z_{i})-f(y_{j})|<\epsilon$, from which we may conclude that

 $f(z_{i})<\epsilon+f(y_{j})\leq\epsilon+x_{\epsilon},$

therefore $x_{\eta}<\epsilon+x_{\epsilon}$. Symmetrically we obtain $x_{\epsilon}<\eta+x_{\eta}$, therefore $|x_{\eta}-x_{\epsilon}|<\eta+\epsilon$.

We claim that $m:\!\!\equiv\lim x$ is the supremum of $f$. To prove that $f(x)\leq m$ for all $x:M$ it suffices to show $\lnot(m<f(x))$. So suppose to the contrary that $m<f(x)$. There is $\epsilon:\mathbb{Q}_{+}$ such that $m+\epsilon<f(x)$. But now merely for some $y_{i}$ participating in the definition of $x_{\epsilon}$ we get $|f(x)-f(y_{i})|<\epsilon$, therefore $m<f(x)-\epsilon<f(y_{i})\leq x_{\epsilon}\leq m$, a contradiction. (The last inequality $x_{\epsilon}\leq m$ holds because $x_{\epsilon}<\eta+x_{\eta}$ for every $\eta$, so $x_{\epsilon}\leq\lim x=m$.)

We finish the proof by showing that $m$ satisfies the second part of the theorem, because it is then automatically a least upper bound. Given any $\epsilon:\mathbb{Q}_{+}$, on one hand $|m-x_{\epsilon/2}|<3\epsilon/4$, and on the other $|x_{\epsilon/2}-f(y_{i})|<\epsilon/4$ merely for some $y_{i}$ participating in the definition of $x_{\epsilon/2}$, therefore by taking $u:\!\!\equiv y_{i}$ we obtain $|m-f(u)|<\epsilon$ by the triangle inequality. ∎
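The approximations $x_{\epsilon}$ from this proof are directly computable. Here is an illustrative sketch (the names `net`, `sup_approx` and the sample map are ours): for $f(x)=x(1-x)$, which is $1$-Lipschitz on $[0,1]$, the identity $h(\epsilon)=\epsilon$ is a modulus of uniform continuity, and maximizing $f$ over an $h(\epsilon)$-net approximates the supremum $1/4$:

```python
from fractions import Fraction
from math import ceil

def net(eps: Fraction) -> list[Fraction]:
    """An eps-net for [0,1] (points i/k with 2/k < eps)."""
    k = ceil(2 / eps) + 1
    return [Fraction(i, k) for i in range(k)]

def f(x: Fraction) -> Fraction:
    """f(x) = x(1-x): 1-Lipschitz on [0,1], so h(eps) = eps is a modulus."""
    return x * (1 - x)

def sup_approx(eps: Fraction) -> Fraction:
    """x_eps from the proof: the max of f over an h(eps)-net."""
    return max(f(y) for y in net(eps))

# x_eps lands within eps of the true supremum 1/4.
for eps in [Fraction(1, 10), Fraction(1, 100)]:
    assert abs(sup_approx(eps) - Fraction(1, 4)) < eps
```

The sequence $\epsilon\mapsto x_{\epsilon}$ is exactly the Cauchy approximation whose limit the proof takes as $m$.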

Now, if in \autoref{ctb-uniformly-continuous-sup} we also knew that $M$ were complete, we could hope to weaken the assumption of uniform continuity to continuity, and strengthen the conclusion to existence of a point at which the supremum is attained. The usual proofs of these improvements rely on the facts that in a complete totally bounded space

1. continuity implies uniform continuity, and

2. every sequence has a convergent subsequence.

The first statement follows easily from Heine–Borel compactness, and the second is just Bolzano–Weierstraß compactness. Unfortunately, these are both somewhat problematic. Let us first show that Bolzano–Weierstraß compactness implies an instance of excluded middle known as the limited principle of omniscience: for every $\alpha:\mathbb{N}\to\mathbf{2}$,

 $\Bigl(\sum_{(n:\mathbb{N})}\alpha(n)={1_{\mathbf{2}}}\Bigr)+\Bigl(\prod_{(n:\mathbb{N})}\alpha(n)={0_{\mathbf{2}}}\Bigr).$ (11.5.8)

Computationally speaking, we would not expect this principle to hold, because it asks us to decide whether infinitely many values of a function are ${0_{\mathbf{2}}}$.

###### Theorem 11.5.9.

Bolzano–Weierstraß compactness of $[0,1]$ implies the limited principle of omniscience.

###### Proof.

Given any $\alpha:\mathbb{N}\to\mathbf{2}$, define the sequence $x:\mathbb{N}\to[0,1]$ by

 $x_{n}:\!\!\equiv\begin{cases}0&\text{if }\alpha(k)={0_{\mathbf{2}}}\text{ for all }k<n,\\ 1&\text{otherwise.}\end{cases}$

If the Bolzano–Weierstraß property holds, there exists a strictly increasing $f:\mathbb{N}\to\mathbb{N}$ such that $x\circ f$ is a Cauchy sequence. For a sufficiently large $n:\mathbb{N}$ the $n$-th term $x_{f(n)}$ is within $1/6$ of its limit. Either $x_{f(n)}<2/3$ or $x_{f(n)}>1/3$. If $x_{f(n)}<2/3$ then $x$ converges to $0$ and so $\prod_{(n:\mathbb{N})}\alpha(n)={0_{\mathbf{2}}}$. If $x_{f(n)}>1/3$ then $x_{f(n)}=1$, therefore $\sum_{(n:\mathbb{N})}\alpha(n)={1_{\mathbf{2}}}$. ∎
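The sequence used in this proof is easy to exhibit concretely. The sketch below (illustrative; the names `x` and `alpha1` are ours) shows how the sequence stays at $0$ until $\alpha$ first takes the value $1$ and then jumps to $1$, so that the limit of any convergent subsequence decides between the two branches of (11.5.8):

```python
def x(alpha, n: int) -> int:
    """The sequence from the proof: 0 while alpha(k) = 0 for all k < n, else 1."""
    return 1 if any(alpha(k) == 1 for k in range(n)) else 0

# For an alpha that hits 1 somewhere, the sequence becomes (and stays) 1:
alpha1 = lambda k: 1 if k == 5 else 0
assert [x(alpha1, n) for n in range(8)] == [0, 0, 0, 0, 0, 0, 1, 1]

# Any convergent subsequence of this x eventually exceeds 1/3, revealing
# that alpha takes the value 1 somewhere.  For a black-box alpha, deciding
# which branch holds would require inspecting infinitely many values.
```

This is exactly why the limited principle of omniscience is computationally suspect: the limit of the subsequence encodes an answer to an infinite search.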

While we might not mourn Bolzano–Weierstraß compactness too much, it seems harder to live without Heine–Borel compactness, as attested by the fact that both classical mathematics and Brouwer’s Intuitionism accepted it. As we do not want to wade too deeply into general topology, we shall work with basic open sets. In the case of $\mathbb{R}$ these are the open intervals with rational endpoints. A family of such intervals, indexed by a type $I$, would be a map

 $\mathcal{F}:I\to\setof{(q,r):\mathbb{Q}\times\mathbb{Q}|q<r}$

with the idea that a pair of rationals $(q,r)$ with $q<r$ determines the type $\setof{x:\mathbb{R}|q<x<r}$. It is slightly more convenient to allow degenerate intervals as well, so we take a family of basic intervals to be a map

 $\mathcal{F}:I\to\mathbb{Q}\times\mathbb{Q}.$

To be quite precise, a family is a dependent pair $(I,\mathcal{F})$, not just $\mathcal{F}$. A finite family of basic intervals is one indexed by $\setof{m:\mathbb{N}|m<n}$ for some $n:\mathbb{N}$. We usually present it by a finite list $[(q_{0},r_{0}),\ldots,(q_{n-1},r_{n-1})]$. Finally, a finite subfamily of $(I,\mathcal{F})$ is given by a list of indices $[i_{1},\ldots,i_{n}]$ which then determine the finite family $[\mathcal{F}(i_{1}),\ldots,\mathcal{F}(i_{n})]$.

As long as we are aware of the distinction between a pair $(q,r)$ and the corresponding interval $\setof{x:\mathbb{R}|q<x<r}$, we may safely use the same notation $(q,r)$ for both. Intersections and inclusions of intervals are expressible in terms of their endpoints:

 $(q,r)\cap(s,t)\ :\!\!\equiv\ (\max(q,s),\min(r,t)),\qquad(q,r)\subseteq(s,t)\ :\!\!\equiv\ (q<r)\Rightarrow(s\leq q)\wedge(r\leq t).$
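These endpoint operations transcribe directly into exact rational arithmetic. The sketch below is illustrative (the names `intersect` and `included` are ours); the inclusion clause is stated so that degenerate intervals count as included in everything:

```python
from fractions import Fraction

def intersect(a, b):
    """(q,r) ∩ (s,t) := (max(q,s), min(r,t)); the result may be degenerate."""
    (q, r), (s, t) = a, b
    return (max(q, s), min(r, t))

def included(a, b):
    """(q,r) ⊆ (s,t): if (q,r) is nonempty then its endpoints are bracketed."""
    (q, r), (s, t) = a, b
    return (not q < r) or (s <= q and r <= t)

half, third = Fraction(1, 2), Fraction(1, 3)
assert intersect((0, half), (third, 1)) == (third, half)
assert included((third, half), (0, 1))
assert included((half, third), (7, 8))   # a degenerate interval sits anywhere
```

Working with endpoint pairs rather than subsets of $\mathbb{R}$ is what makes these operations decidable.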

We say that $(I,{\lambda}i.\,(q_{i},r_{i}))$ (pointwise) covers $[a,b]$ when

 $\forall(x:[a,b]).\,\exists(i:I).\,q_{i}<x<r_{i}.$ (11.5.10)

Heine–Borel compactness of $[0,1]$ states that every covering family of $[0,1]$ merely has a finite subfamily which still covers $[0,1]$.

###### Theorem 11.5.11.

If excluded middle holds then $[0,1]$ is Heine–Borel compact.

###### Proof.

Assume for the purpose of reaching a contradiction that a family $(I,{\lambda}i.\,(a_{i},b_{i}))$ covers $[0,1]$ but no finite subfamily does. We construct a sequence of nested closed intervals $[q_{n},r_{n}]$ whose sizes shrink to $0$ and none of which is covered by a finite subfamily of $(I,{\lambda}i.\,(a_{i},b_{i}))$.

We set $[q_{0},r_{0}]:\!\!\equiv[0,1]$. Assuming $[q_{n},r_{n}]$ has been constructed, let $s:\!\!\equiv(2q_{n}+r_{n})/3$ and $t:\!\!\equiv(q_{n}+2r_{n})/3$. Both $[q_{n},t]$ and $[s,r_{n}]$ are covered by $(I,{\lambda}i.\,(a_{i},b_{i}))$, but they cannot both have a finite subcover, or else so would $[q_{n},r_{n}]$. Either $[q_{n},t]$ has a finite subcover or it does not. If it does we set $[q_{n+1},r_{n+1}]:\!\!\equiv[s,r_{n}]$, otherwise we set $[q_{n+1},r_{n+1}]:\!\!\equiv[q_{n},t]$.

The sequences $q_{0},q_{1},\ldots$ and $r_{0},r_{1},\ldots$ are both Cauchy and they converge to a point $x:[0,1]$ which is contained in every $[q_{n},r_{n}]$. There merely exists $i:I$ such that $a_{i}<x<b_{i}$. Because the sizes of the intervals $[q_{n},r_{n}]$ shrink to zero, there is $n:\mathbb{N}$ such that $a_{i}<q_{n}\leq x\leq r_{n}<b_{i}$, but this means that $[q_{n},r_{n}]$ is covered by the single interval $(a_{i},b_{i})$, while at the same time it has no finite subcover, a contradiction. ∎
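The trisection step of this proof is a simple computation. Below is an illustrative sketch (the name `trisect` is ours) of the two overlapping thirds $[q_{n},t]$ and $[s,r_{n}]$ and of how keeping either piece shrinks the width by a factor of $2/3$:

```python
from fractions import Fraction

def trisect(q: Fraction, r: Fraction):
    """Split [q,r] into the two overlapping thirds used in the proof:
    [q, t] and [s, r] with s = (2q + r)/3 and t = (q + 2r)/3."""
    s, t = (2 * q + r) / 3, (q + 2 * r) / 3
    return (q, t), (s, r)

q, r = Fraction(0), Fraction(1)
for _ in range(4):
    (q_left, t), (s, r_right) = trisect(q, r)
    assert s < t                  # the two pieces overlap, so they cover [q, r]
    q, r = s, r_right             # keep, say, the right piece each time
assert r - q == Fraction(2, 3) ** 4   # widths shrink geometrically
```

The classical content of the proof is precisely the decision, at each step, of which piece lacks a finite subcover; the arithmetic itself is fully constructive.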

Without excluded middle, or a pinch of Brouwerian Intuitionism, we seem to be stuck. Nevertheless, Heine–Borel compactness of $[0,1]$ can be recovered in a constructive setting, in a fashion that is still compatible with classical mathematics! For this to be done, we need to revisit the notion of cover. The trouble with (11.5.10) is that the truncated existential allows a space to be covered in any haphazard way, and so, computationally speaking, we stand no chance of merely extracting a finite subcover. By removing the truncation we get

 $\prod_{(x:[0,1])}\,\sum_{(i:I)}\,q_{i}<x<r_{i},$ (11.5.12)

which might help, were it not too demanding of covers. With this definition we could not even show that $(0,3)$ and $(2,5)$ cover $[1,4]$ because that would amount to exhibiting a non-constant map $[1,4]\to\mathbf{2}$, see \autoref{ex:reals-non-constant-into-Z}. Here we can take a lesson from “pointfree topology” (i.e. locale theory): the notion of cover ought to be expressed in terms of open sets, without reference to points. Such a “holistic” view of space will then allow us to analyze the notion of cover, and we shall be able to recover Heine–Borel compactness. Locale theory uses power sets, which we could obtain by assuming propositional resizing; but instead we can steal ideas from the predicative cousin of locale theory, which is called “formal topology”.

Suppose that we have a family $(I,\mathcal{F})$ and an interval $(a,b)$. How might we express the fact that $(a,b)$ is covered by the family, without referring to points? Here is one way: if $(a,b)$ equals some $\mathcal{F}(i)$ then it is covered by the family. And another: if $(a,b)$ is covered by some other family $(J,\mathcal{G})$, and in turn each $\mathcal{G}(j)$ is covered by $(I,\mathcal{F})$, then $(a,b)$ is covered by $(I,\mathcal{F})$. Notice that we are listing rules which can be used to deduce that $(I,\mathcal{F})$ covers $(a,b)$. We should find sufficiently good rules and turn them into an inductive definition.

###### Definition 11.5.13.

The inductive cover $\triangleleft$ is a mere relation

 ${\triangleleft}:(\mathbb{Q}\times\mathbb{Q})\to\Bigl(\sum_{(I:\mathcal{U})}(I\to\mathbb{Q}\times\mathbb{Q})\Bigr)\to\mathsf{Prop}$

defined inductively by the following rules, where $q,r,s,t$ are rational numbers and $(I,\mathcal{F})$, $(J,\mathcal{G})$ are families of basic intervals:

1. reflexivity: $\mathcal{F}(i)\triangleleft(I,\mathcal{F})$ for all $i:I$,

2. transitivity: if $(q,r)\triangleleft(J,\mathcal{G})$ and $\forall(j:J).\,\mathcal{G}(j)\triangleleft(I,\mathcal{F})$ then $(q,r)\triangleleft(I,\mathcal{F})$,

3. monotonicity: if $(q,r)\subseteq(s,t)$ and $(s,t)\triangleleft(I,\mathcal{F})$ then $(q,r)\triangleleft(I,\mathcal{F})$,

4. localization: if $(q,r)\triangleleft(I,\mathcal{F})$ then $(q,r)\cap(s,t)\triangleleft(I,{\lambda}i.\,(\mathcal{F}(i)\cap(s,t)))$,

5. if $q<s<t<r$ then $(q,r)\triangleleft[(q,t),(s,r)]$,

6. $(q,r)\triangleleft(\setof{(s,t):\mathbb{Q}\times\mathbb{Q}|q<s<t<r},\,{\lambda}u.\,u)$.

The definition should be read as a higher-inductive type in which the listed rules are point constructors, and the type is $(-1)$-truncated. The first four clauses are of a general nature and should be intuitively clear. The last two clauses are specific to the real line: one says that an interval may be covered by two intervals if they overlap, while the other one says that an interval may be covered from within. Incidentally, if $r\leq q$ then $(q,r)$ is covered by the empty family by the last clause.

Inductive covers enjoy the Heine-Borel property, the proof of which requires a lemma.

###### Lemma 11.5.14.

Suppose $q<s<t<r$ and $(q,r)\triangleleft(I,\mathcal{F})$. Then there merely exists a finite subfamily of $(I,\mathcal{F})$ which inductively covers $(s,t)$.

###### Proof.

We prove the statement by induction on $(q,r)\triangleleft(I,\mathcal{F})$. There are six cases:

1. Reflexivity: if $(q,r)=\mathcal{F}(i)$ then by monotonicity $(s,t)$ is covered by the finite subfamily $[\mathcal{F}(i)]$.

2. Transitivity: suppose $(q,r)\triangleleft(J,\mathcal{G})$ and $\forall(j:J).\,\mathcal{G}(j)\triangleleft(I,\mathcal{F})$. By the inductive hypothesis there merely exists $[\mathcal{G}(j_{1}),\ldots,\mathcal{G}(j_{n})]$ which covers $(s,t)$. Again by the inductive hypothesis, each of $\mathcal{G}(j_{k})$ is covered by a finite subfamily of $(I,\mathcal{F})$, and we can collect these into a finite subfamily which covers $(s,t)$.

3. Monotonicity: if $(q,r)\subseteq(u,v)$ and $(u,v)\triangleleft(I,\mathcal{F})$ then we may apply the inductive hypothesis to $(u,v)\triangleleft(I,\mathcal{F})$ because $u\leq q<s$ and $t<r\leq v$.

4. Localization: suppose $(q^{\prime},r^{\prime})\triangleleft(I,\mathcal{F})$ and $(q,r)=(q^{\prime},r^{\prime})\cap(a,b)$. Because $q^{\prime}\leq q<s$ and $t<r\leq r^{\prime}$, by the inductive hypothesis there is a finite subcover $[\mathcal{F}(i_{1}),\ldots,\mathcal{F}(i_{n})]$ of $(s,t)$. We also know that $a\leq q<s$ and $t<r\leq b$, therefore $(s,t)=(s,t)\cap(a,b)$ is covered by $[\mathcal{F}(i_{1})\cap(a,b),\ldots,\mathcal{F}(i_{n})\cap(a,b)]$, which is a finite subfamily of $(I,{\lambda}i.\,(\mathcal{F}(i)\cap(a,b)))$.

5. If $(q,r)\triangleleft[(q,v),(u,r)]$ for some $q<u<v<r$ then by monotonicity $(s,t)\triangleleft[(q,v),(u,r)]$.

6. Finally, $(s,t)\triangleleft(\setof{(u,v):\mathbb{Q}\times\mathbb{Q}|q<u<v<r},\,{\lambda}w.\,w)$ by reflexivity, since $q<s<t<r$. ∎

Say that $(I,\mathcal{F})$ inductively covers $[a,b]$ when there merely exists $\epsilon:\mathbb{Q}_{+}$ such that $(a-\epsilon,b+\epsilon)\triangleleft(I,\mathcal{F})$.

###### Corollary 11.5.15.

A closed interval is Heine–Borel compact for inductive covers.

###### Proof.

Suppose $[a,b]$ is inductively covered by $(I,\mathcal{F})$, so there merely is $\epsilon:\mathbb{Q}_{+}$ such that $(a-\epsilon,b+\epsilon)\triangleleft(I,\mathcal{F})$. By \autoref{reals-formal-topology-locally-compact} there is a finite subcover of $(a-\epsilon/2,b+\epsilon/2)$, which is therefore a finite subcover of $[a,b]$. ∎

Experience from formal topology shows that the rules for inductive covers are sufficient for a constructive development of pointfree topology. But we can also provide our own evidence that they are a reasonable notion.

###### Theorem 11.5.16.

1. An inductive cover is also a pointwise cover.

2. Assuming excluded middle, a pointwise cover is also an inductive cover.

###### Proof.

1. Consider a family of basic intervals $(I,\mathcal{F})$, where we write $(q_{i},r_{i}):\!\!\equiv\mathcal{F}(i)$, an interval $(a,b)$ inductively covered by $(I,\mathcal{F})$, and $x$ such that $a<x<b$. We prove by induction on $(a,b)\triangleleft(I,\mathcal{F})$ that there merely exists $i:I$ such that $q_{i}<x<r_{i}$. Most cases are pretty obvious, so we show just two. If $(a,b)\triangleleft(I,\mathcal{F})$ by reflexivity, then there merely is some $i:I$ such that $(a,b)=(q_{i},r_{i})$ and so $q_{i}<x<r_{i}$. If $(a,b)\triangleleft(I,\mathcal{F})$ by transitivity via $(J,{\lambda}j.\,(s_{j},t_{j}))$ then by the inductive hypothesis there merely is $j:J$ such that $s_{j}<x<t_{j}$, and then since $(s_{j},t_{j})\triangleleft(I,\mathcal{F})$ again by the inductive hypothesis there merely exists $i:I$ such that $q_{i}<x<r_{i}$. Other cases are just as exciting.

2. Suppose $(I,{\lambda}i.\,(q_{i},r_{i}))$ pointwise covers $(a,b)$. By \autoref{defn:inductive-cover-interval-2} of \autoref{defn:inductive-cover} it suffices to show that $(I,{\lambda}i.\,(q_{i},r_{i}))$ inductively covers $(c,d)$ whenever $a<c<d<b$, so consider such $c$ and $d$. By \autoref{classical-Heine-Borel} there is a finite subfamily $[i_{1},\ldots,i_{n}]$ which already pointwise covers $[c,d]$, and hence $(c,d)$. Let $\epsilon:\mathbb{Q}_{+}$ be a Lebesgue number for $(q_{i_{1}},r_{i_{1}}),\ldots,(q_{i_{n}},r_{i_{n}})$ as in \autoref{ex:finite-cover-lebesgue-number}. There is a positive $k:\mathbb{N}$ such that $2(d-c)/k<\min(1,\epsilon)$. For $0\leq i\leq k$ let

 $c_{i}:\!\!\equiv((k-i)c+id)/k.$

The intervals $(c_{0},c_{2})$, $(c_{1},c_{3})$, …, $(c_{k-2},c_{k})$ inductively cover $(c,d)$ by repeated use of transitivity and \autoref{defn:inductive-cover-interval-1} in \autoref{defn:inductive-cover}. Because their widths are below $\epsilon$, each of them is contained in some $(q_{i},r_{i})$, and we may use transitivity and monotonicity to conclude that $(I,{\lambda}i.\,(q_{i},r_{i}))$ inductively covers $(c,d)$. ∎
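The subdivision points $c_{i}$ and the overlapping intervals $(c_{i},c_{i+2})$ used in this proof can be checked by direct computation. An illustrative sketch (the name `subdivision` is ours), verifying that the pieces have width $2(d-c)/k$, that consecutive pieces overlap, and that together they span $(c,d)$:

```python
from fractions import Fraction

def subdivision(c: Fraction, d: Fraction, k: int) -> list[Fraction]:
    """The points c_i = ((k - i) c + i d)/k, for i = 0, ..., k."""
    return [((k - i) * c + i * d) / k for i in range(k + 1)]

c, d, k = Fraction(1), Fraction(4), 10
cs = subdivision(c, d, k)
# The overlapping intervals (c_0,c_2), (c_1,c_3), ..., (c_{k-2},c_k):
pieces = [(cs[i], cs[i + 2]) for i in range(k - 1)]
assert all(hi - lo == 2 * (d - c) / k for lo, hi in pieces)        # widths 2(d-c)/k
assert all(pieces[i + 1][0] < pieces[i][1] for i in range(k - 2))  # consecutive overlap
assert pieces[0][0] == c and pieces[-1][1] == d                    # ends match (c,d)
```

The overlap of consecutive pieces is what lets the proof chain them together with rule (5) and transitivity.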

The upshot of the previous theorem is that, as far as classical mathematics is concerned, there is no difference between a pointwise and an inductive cover. In particular, since it is consistent to assume excluded middle in homotopy type theory, we cannot exhibit an inductive cover which fails to be a pointwise cover. To put it differently, the difference between pointwise and inductive covers lies not in what they cover but in the proofs that they cover it.

We could write another book by going on like this, but let us stop here and hope that we have provided ample justification for the claim that analysis can be developed in homotopy type theory. The curious reader should consult \autoref{ex:mean-value-theorem} for constructive versions of the mean value theorem.
