# 1.11 Propositions as types

As mentioned in the introduction, to show that a proposition is true in type theory corresponds to exhibiting an element of the type corresponding to that proposition. We regard the elements of this type as evidence or witnesses that the proposition is true. (They are sometimes even called proofs, but this terminology can be misleading, so we generally avoid it.) In general, however, we will not construct witnesses explicitly; instead we present the proofs in ordinary mathematical prose, in such a way that they could be translated into an element of a type. This is no different from reasoning in classical set theory, where we don’t expect to see an explicit derivation using the rules of predicate logic and the axioms of set theory.

However, the type-theoretic perspective on proofs is nevertheless different in important ways. The basic principle of the logic of type theory is that a proposition is not merely true or false, but rather can be seen as the collection of all possible witnesses of its truth. Under this conception, proofs are not just the means by which mathematics is communicated, but rather are mathematical objects in their own right, on a par with more familiar objects such as numbers, mappings, groups, and so on. Thus, since types classify the available mathematical objects and govern how they interact, propositions are nothing but special types — namely, types whose elements are proofs.

The basic observation which makes this identification feasible is that we have the following natural correspondence between logical operations on propositions, expressed in English, and type-theoretic operations on their corresponding types of witnesses.

| English | Type theory |
| --- | --- |
| True | $\mathbf{1}$ |
| False | $\mathbf{0}$ |
| $A$ and $B$ | $A\times B$ |
| $A$ or $B$ | $A+B$ |
| If $A$ then $B$ | $A\to B$ |
| $A$ if and only if $B$ | $(A\to B)\times(B\to A)$ |
| Not $A$ | $A\to\mathbf{0}$ |

The point of the correspondence is that in each case, the rules for constructing and using elements of the type on the right correspond to the rules for reasoning about the proposition on the left. For instance, the basic way to prove a statement of the form “$A$ and $B$” is to prove $A$ and also prove $B$, while the basic way to construct an element of $A\times B$ is as a pair $(a,b)$, where $a$ is an element (or witness) of $A$ and $b$ is an element (or witness) of $B$. And if we want to use “$A$ and $B$” to prove something else, we are free to use both $A$ and $B$ in doing so, analogously to how the induction principle for $A\times B$ allows us to construct a function out of it by using elements of $A$ and of $B$.

Similarly, the basic way to prove an implication “if $A$ then $B$” is to assume $A$ and prove $B$, while the basic way to construct an element of $A\to B$ is to give an expression which denotes an element (witness) of $B$ and which may involve an unspecified variable element (witness) of type $A$. And the basic way to use an implication “if $A$ then $B$” is to deduce $B$ if we know $A$, analogously to how we can apply a function $f:A\to B$ to an element of $A$ to produce an element of $B$. We strongly encourage the reader to do the exercise of verifying that the rules governing the other type constructors translate sensibly into logic.
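As a sketch of this correspondence in a proof assistant such as Lean (with `Empty`, `×`, `Sum`, and `→` playing the roles of $\mathbf{0}$, $A\times B$, $A+B$, and $A\to B$; the names `andIntro`, `modusPonens`, and `orIntroLeft` are ours, chosen for illustration):

```lean
-- A sketch of the correspondence: pairing proves "A and B",
-- function application uses "if A then B".
variable (A B : Type)

-- The basic way to prove "A and B": give a witness of each conjunct.
def andIntro (a : A) (b : B) : A × B := (a, b)

-- The basic way to use "if A then B": apply it to a witness of A.
def modusPonens (f : A → B) (a : A) : B := f a

-- The basic way to prove "A or B": inject a witness of either side.
def orIntroLeft (a : A) : Sum A B := Sum.inl a
```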

Of special note is that the empty type $\mathbf{0}$ corresponds to falsity. When speaking logically, we refer to an inhabitant of $\mathbf{0}$ as a contradiction: thus there is no way to prove a contradiction (more precisely, there is no basic way to prove a contradiction, i.e. $\mathbf{0}$ has no constructors; if our type theory were inconsistent, then there would be some more complicated way to construct an element of $\mathbf{0}$), while from a contradiction anything can be derived. We also define the negation of a type $A$ as

 $\neg A\ :\!\!\equiv\ A\to\mathbf{0}.$

Thus, a witness of $\neg A$ is a function $A\to\mathbf{0}$, which we may construct by assuming $x:A$ and deriving an element of $\mathbf{0}$. Note that although the logic we obtain is “constructive”, as discussed in the introduction, this sort of “proof by contradiction” (assume $A$ and derive a contradiction, concluding $\neg A$) is perfectly valid constructively: it is simply invoking the meaning of “negation”. The sort of “proof by contradiction” which is disallowed is to assume $\neg A$ and derive a contradiction as a way of proving $A$. Constructively, such an argument would only allow us to conclude $\neg\neg A$, and the reader can verify that there is no obvious way to get from $\neg\neg A$ (that is, from $(A\to\mathbf{0})\to\mathbf{0}$) to $A$.
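This constructively valid direction can be sketched in Lean (with `Empty` for $\mathbf{0}$; the name `Neg` is ours, to avoid clashing with Lean's built-in negation):

```lean
-- Negation as a function into the empty type.
def Neg (A : Type) : Type := A → Empty

-- Constructively valid: from a witness of A we get ¬¬A, by applying
-- the hypothetical refutation of A to the given witness.
def doubleNegIntro {A : Type} (a : A) : Neg (Neg A) :=
  fun na => na a
```

By contrast, there is no analogous general definition of type `Neg (Neg A) → A`.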

The above translation of logical connectives into type-forming operations is referred to as propositions as types: it gives us a way to translate propositions and their proofs, written in English, into types and their elements. For example, suppose we want to prove the following tautology (one of “de Morgan’s laws”):

 “If not $A$ and not $B$, then not ($A$ or $B$)”. (1.11.1)

An ordinary English proof of this fact might go as follows.

Suppose not $A$ and not $B$, and also suppose $A$ or $B$; we will derive a contradiction. There are two cases. If $A$ holds, then since not $A$, we have a contradiction. Similarly, if $B$ holds, then since not $B$, we also have a contradiction. Thus we have a contradiction in either case, so not ($A$ or $B$).

Now, the type corresponding to our tautology (1.11.1), according to the rules given above, is

 $(A\to\mathbf{0})\times(B\to\mathbf{0})\to(A+B\to\mathbf{0})$ (1.11.2)

so we should be able to translate the above proof into an element of this type.

As an example of how such a translation works, let us describe how a mathematician reading the above English proof might simultaneously construct, in his or her head, an element of (1.11.2). The introductory phrase “Suppose not $A$ and not $B$” translates into defining a function, with an implicit application of the recursion principle for the cartesian product in its domain $(A\to\mathbf{0})\times(B\to\mathbf{0})$. This introduces unnamed variables (hypotheses) of types $A\to\mathbf{0}$ and $B\to\mathbf{0}$. When translating into type theory, we have to give these variables names; let us call them $x$ and $y$. At this point our partial definition of an element of (1.11.2) can be written as

 $f((x,y)):\!\!\equiv\;\Box\;:A+B\to\mathbf{0}$

with a “hole” $\Box$ of type $A+B\to\mathbf{0}$ indicating what remains to be done. (We could equivalently write $f:\!\!\equiv\mathsf{rec}_{(A\to\mathbf{0})\times(B\to\mathbf{0})}(A+B\to\mathbf{0},{\lambda}x.\,{\lambda}y.\,\Box)$, using the recursor instead of pattern matching.) The next phrase “also suppose $A$ or $B$; we will derive a contradiction” indicates filling this hole by a function definition, introducing another unnamed hypothesis $z:A+B$, leading to the proof state:

 $f((x,y))(z):\!\!\equiv\;\Box\;:\mathbf{0}$

Now saying “there are two cases” indicates a case split, i.e. an application of the recursion principle for the coproduct $A+B$. If we write this using the recursor, it would be

 $f((x,y))(z):\!\!\equiv\mathsf{rec}_{A+B}(\mathbf{0},{\lambda}a.\,\Box,{\lambda}b.\,\Box,z)$

while if we write it using pattern matching, it would be

 $\begin{aligned}f((x,y))({\mathsf{inl}}(a))&:\!\!\equiv\;\Box\;:\mathbf{0}\\ f((x,y))({\mathsf{inr}}(b))&:\!\!\equiv\;\Box\;:\mathbf{0}.\end{aligned}$

Note that in both cases we now have two “holes” of type $\mathbf{0}$ to fill in, corresponding to the two cases where we have to derive a contradiction. Finally, the conclusion of a contradiction from $a:A$ and $x:A\to\mathbf{0}$ is simply application of the function $x$ to $a$, and similarly in the other case. (Note the convenient coincidence of the phrase “applying a function” with that of “applying a hypothesis” or theorem.) Thus our eventual definition is

 $\begin{aligned}f((x,y))({\mathsf{inl}}(a))&:\!\!\equiv x(a)\\ f((x,y))({\mathsf{inr}}(b))&:\!\!\equiv y(b).\end{aligned}$
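Assembled in Lean (a sketch, with `Empty` for $\mathbf{0}$ and `Sum` for $+$; the name `deMorgan` is ours), the finished element of (1.11.2) is the same two-line pattern match:

```lean
-- De Morgan's law (1.11.1) as a term: case split on the coproduct,
-- then apply the matching refutation to the witness it carries.
def deMorgan {A B : Type} :
    (A → Empty) × (B → Empty) → (Sum A B → Empty)
  | (x, _), Sum.inl a => x a
  | (_, y), Sum.inr b => y b
```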

As an exercise, you should verify the converse tautology “If not ($A$ or $B$), then (not $A$) and (not $B$)” by exhibiting an element of

 $((A+B)\to\mathbf{0})\to(A\to\mathbf{0})\times(B\to\mathbf{0}),$

for any types $A$ and $B$, using the rules we have just introduced.

However, not all classical tautologies hold under this interpretation. For example, the rule “If not ($A$ and $B$), then (not $A$) or (not $B$)” is not valid: we cannot, in general, construct an element of the corresponding type

 $((A\times B)\to\mathbf{0})\to(A\to\mathbf{0})+(B\to\mathbf{0}).$

This reflects the fact that the “natural” propositions-as-types logic of type theory is constructive. This means that it does not include certain classical principles, such as the law of excluded middle ($\mathsf{LEM}$) or proof by contradiction, and others which depend on them, such as this instance of de Morgan’s law.

Philosophically, constructive logic is so-called because it confines itself to constructions that can be carried out effectively, which is to say those with a computational meaning. Without being too precise, this means there is some sort of algorithm specifying, step-by-step, how to build an object (and, as a special case, how to see that a theorem is true). This requires omission of $\mathsf{LEM}$, since there is no effective procedure for deciding whether a proposition is true or false.

The constructivity of type-theoretic logic means it has an intrinsic computational meaning, which is of interest to computer scientists. It also means that type theory provides axiomatic freedom. For example, while by default there is no construction witnessing $\mathsf{LEM}$, the logic is still compatible with the existence of one (see §3.4 (http://planetmath.org/34classicalvsintuitionisticlogic)). Thus, because type theory does not deny $\mathsf{LEM}$, we may consistently add it as an assumption, and work conventionally without restriction. In this respect, type theory enriches, rather than constrains, conventional mathematical practice.

We encourage the reader who is unfamiliar with constructive logic to work through some more examples as a means of getting familiar with it. See Exercise 1.12 (http://planetmath.org/node/87565) and Exercise 1.13 (http://planetmath.org/node/87566) for some suggestions.

So far we have discussed only propositional logic. Now we consider predicate logic, where in addition to logical connectives like “and” and “or” we have quantifiers “there exists” and “for all”. In this case, types play a dual role: they serve as propositions and also as types in the conventional sense, i.e., domains we quantify over. A predicate over a type $A$ is represented as a family $P:A\to\mathcal{U}$, assigning to every element $a:A$ a type $P(a)$ corresponding to the proposition that $P$ holds for $a$. We now extend the above translation with an explanation of the quantifiers:

| English | Type theory |
| --- | --- |
| For all $x:A$, $P(x)$ holds | $\prod_{x:A}P(x)$ |
| There exists $x:A$ such that $P(x)$ | $\sum_{x:A}P(x)$ |
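As a Lean sketch, a universal statement is a dependent function and an existential statement is a dependent pair whose first component is the witness (`Σ'` is Lean's dependent pair over arbitrary sorts; the names `addZero` and `existsGap` are ours):

```lean
-- "For all n : ℕ, n + 0 = n": a dependent function.
def addZero : ∀ n : Nat, n + 0 = n := fun _ => rfl

-- "There exists k : ℕ with 2 + k = 5": a pair of the witness 3
-- and a proof of the corresponding equation.
def existsGap : Σ' k : Nat, 2 + k = 5 := ⟨3, rfl⟩
```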

As before, we can show that tautologies of (constructive) predicate logic translate into inhabited types. For example, “if for all $x:A$, $P(x)$ and $Q(x)$, then (for all $x:A$, $P(x)$) and (for all $x:A$, $Q(x)$)” translates to

 $\Big(\prod_{x:A}P(x)\times Q(x)\Big)\to\Big(\prod_{x:A}P(x)\Big)\times\Big(\prod_{x:A}Q(x)\Big).$

An informal proof of this tautology might go as follows:

Suppose for all $x$, $P(x)$ and $Q(x)$. First, we suppose given $x$ and prove $P(x)$. By assumption, we have $P(x)$ and $Q(x)$, and hence we have $P(x)$. Second, we suppose given $x$ and prove $Q(x)$. Again by assumption, we have $P(x)$ and $Q(x)$, and hence we have $Q(x)$.

The first sentence begins defining an implication as a function, by introducing a witness for its hypothesis:

 $f(p):\!\!\equiv\;\Box\;:\Big(\prod_{x:A}P(x)\Big)\times\Big(\prod_{x:A}Q(x)\Big).$

At this point there is an implicit use of the pairing constructor to produce an element of a product type, which is somewhat signposted in this example by the words “first” and “second”:

 $f(p):\!\!\equiv\Big(\;\Box\;:\prod_{x:A}P(x)\;,\;\Box\;:\prod_{x:A}Q(x)\;\Big).$

The phrase “we suppose given $x$ and prove $P(x)$” now indicates defining a dependent function in the usual way, introducing a variable for its input. Since this is inside a pairing constructor, it is natural to write it as a $\lambda$-abstraction:

 $f(p):\!\!\equiv\Big(\;{\lambda}x.\,\big(\Box\;:P(x)\big)\;,\;\Box\;:\prod_{x:A}Q(x)\;\Big).$

Now “we have $P(x)$ and $Q(x)$” invokes the hypothesis, obtaining $p(x):P(x)\times Q(x)$, and “hence we have $P(x)$” implicitly applies the appropriate projection:

 $f(p):\!\!\equiv\Big(\;{\lambda}x.\,\mathsf{pr}_{1}(p(x))\;,\;\Box\;:\prod_{x:A}Q(x)\;\Big).$

The next two sentences fill the other hole in the obvious way:

 $f(p):\!\!\equiv\Big(\;{\lambda}x.\,\mathsf{pr}_{1}(p(x))\;,\;{\lambda}x.\,\mathsf{pr}_{2}(p(x))\;\Big).$
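The finished term translates directly into Lean (a sketch; `(x : A) → …` is Lean's notation for $\prod_{x:A}$, and the name `splitForall` is ours):

```lean
-- "If for all x, P(x) and Q(x), then (for all x, P(x)) and
-- (for all x, Q(x))": pair the two projections pointwise.
def splitForall {A : Type} {P Q : A → Type}
    (p : (x : A) → P x × Q x) :
    ((x : A) → P x) × ((x : A) → Q x) :=
  (fun x => (p x).1, fun x => (p x).2)
```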

Of course, the English proofs we have been using as examples are much more verbose than those that mathematicians usually use in practice; they are more like the sort of language one uses in an “introduction to proofs” class. The practicing mathematician has learned to fill in the gaps, so in practice we can omit plenty of details, and we will generally do so. The criterion of validity for proofs, however, is always that they can be translated back into the construction of an element of the corresponding type.

As a more concrete example, consider how to define inequalities of natural numbers. One natural definition is that $n\leq m$ if there exists a $k:\mathbb{N}$ such that $n+k=m$. (This uses again the identity types that we will introduce in the next section, but we will not need very much about them.) Under the propositions-as-types translation, this would yield:

 $(n\leq m):\!\!\equiv\sum_{k:\mathbb{N}}(n+k=m).$

The reader is invited to prove the familiar properties of $\leq$ from this definition. For strict inequality, there are a couple of natural choices, such as

 $(n<m):\!\!\equiv\sum_{k:\mathbb{N}}(n+\mathsf{succ}(k)=m)$

or

 $(n<m):\!\!\equiv(n\leq m)\times\neg(n=m).$

The former is more natural in constructive mathematics, but in this case it is actually equivalent to the latter, since $\mathbb{N}$ has “decidable equality” (see §3.4 (http://planetmath.org/34classicalvsintuitionisticlogic) and Theorem 7.2.6 (http://planetmath.org/72uniquenessofidentityproofsandhedbergstheorem#Thmprethm4)).
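In Lean the three definitions can be sketched with `Σ'` and `PProd` (which pairs a type with a proposition); `Nat.succ` plays the role of the successor, and the names `Le`, `Lt₁`, `Lt₂` are ours:

```lean
-- n ≤ m :≡ Σ (k : ℕ). n + k = m
def Le (n m : Nat) : Type := Σ' k : Nat, n + k = m

-- The two candidate definitions of strict inequality.
def Lt₁ (n m : Nat) : Type := Σ' k : Nat, n + Nat.succ k = m
def Lt₂ (n m : Nat) : Type := PProd (Le n m) (¬(n = m))

-- A witness of 2 ≤ 5: the gap k = 3 together with 2 + 3 = 5.
def twoLeFive : Le 2 5 := ⟨3, rfl⟩
```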

The representation of propositions as types also allows us to incorporate axioms into the definition of types as mathematical structures using $\Sigma$-types, as discussed in §1.6 (http://planetmath.org/16dependentpairtypes). For example, suppose we want to define a semigroup to be a type $A$ equipped with a binary operation $m:A\to A\to A$ (that is, a magma) and such that for all $x,y,z:A$ we have $m(x,m(y,z))=m(m(x,y),z)$. This latter proposition is represented by the type $\prod_{x,y,z:A}m(x,m(y,z))=m(m(x,y),z)$, so the type of semigroups is

 $\mathsf{Semigroup}:\!\!\equiv\sum_{(A:\mathcal{U})}\sum_{(m:A\to A\to A)}\prod_{(x,y,z:A)}m(x,m(y,z))=m(m(x,y),z).$

From an inhabitant of this type we can extract the carrier $A$, the operation $m$, and a witness of the axiom, by applying appropriate projections. We will return to this example in §2.14 (http://planetmath.org/214exampleequalityofstructures).
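A Lean sketch of the same iterated $\Sigma$-type (the names `Semigroup'` and `carrier` are ours; note the definition lands in `Type 1`, one universe up, since it quantifies over the carrier):

```lean
-- The type of semigroups: a carrier, an operation, associativity.
def Semigroup' : Type 1 :=
  Σ' A : Type, Σ' m : A → A → A,
    ∀ x y z : A, m x (m y z) = m (m x y) z

-- Extract the carrier by projection, as in the text.
def carrier (S : Semigroup') : Type := S.1
```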

Note also that we can use the universes in type theory to represent “higher order logic” — that is, we can quantify over all propositions or over all predicates. For example, we can represent the proposition “for all properties $P:A\to\mathcal{U}$, if $P(a)$ then $P(b)$” as

 $\prod_{P:A\to\mathcal{U}}P(a)\to P(b)$

where $A:\mathcal{U}$ and $a,b:A$. However, a priori this proposition lives in a different, higher, universe than the propositions we are quantifying over; that is

 $\Bigl(\prod_{P:A\to\mathcal{U}_{i}}P(a)\to P(b)\Bigr):\mathcal{U}_{i+1}.$
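The universe bump can be checked in Lean (a sketch; `Type u` plays the role of $\mathcal{U}_i$):

```lean
universe u
variable (A : Type u) (a b : A)

-- Quantifying over all Type-u-valued predicates on A produces a
-- type living one universe higher, in Type (u + 1).
example : Type (u + 1) := (P : A → Type u) → P a → P b
```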

We have described here a “proof-relevant” translation of propositions, where the proofs of disjunctions and existential statements carry some information. For instance, if we have an inhabitant of $A+B$, regarded as a witness of “$A$ or $B$”, then we know whether it came from $A$ or from $B$. Similarly, if we have an inhabitant of $\sum_{x:A}P(x)$, regarded as a witness of “there exists $x:A$ such that $P(x)$”, then we know what the element $x$ is (it is the first projection of the given inhabitant).
As a consequence of the proof-relevant nature of this logic, we may have “$A$ if and only if $B$” (which, recall, means $(A\to B)\times(B\to A)$), and yet the types $A$ and $B$ exhibit different behavior. For instance, it is easy to verify that “$\mathbb{N}$ if and only if $\mathbf{1}$”, and yet clearly $\mathbb{N}$ and $\mathbf{1}$ differ in important ways. The statement “$\mathbb{N}$ if and only if $\mathbf{1}$” tells us only that when regarded as a mere proposition, the type $\mathbb{N}$ represents the same proposition as $\mathbf{1}$ (in this case, the true proposition). We sometimes express “$A$ if and only if $B$” by saying that $A$ and $B$ are logically equivalent. This is to be distinguished from the stronger notion of equivalence of types to be introduced in §2.4 (http://planetmath.org/24homotopiesandequivalences) and Chapter 4 (http://planetmath.org/node/87577): although $\mathbb{N}$ and $\mathbf{1}$ are logically equivalent, they are not equivalent types.
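For instance, the logical equivalence of $\mathbb{N}$ and $\mathbf{1}$ is witnessed in Lean (with `Unit` for $\mathbf{1}$; the name `natIffUnit` is ours) by a perfectly ordinary pair of functions:

```lean
-- ℕ and 1 are logically equivalent: maps in both directions exist,
-- even though the types themselves are very different.
def natIffUnit : (Nat → Unit) × (Unit → Nat) :=
  (fun _ => (), fun _ => 0)
```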
Finally, we note that the propositions-as-types correspondence can be viewed in reverse, allowing us to regard any type $A$ as a proposition, which we prove by exhibiting an element of $A$. Sometimes we will state this proposition as “$A$ is inhabited.” That is, when we say that $A$ is inhabited, we mean that we have given a (particular) element of $A$, but that we are choosing not to give a name to that element. Similarly, to say that $A$ is not inhabited is the same as to give an element of $\neg A$. In particular, the empty type $\mathbf{0}$ is obviously not inhabited, since $\neg\mathbf{0}\equiv(\mathbf{0}\to\mathbf{0})$ is inhabited by $\mathsf{id}_{\mathbf{0}}$. (This should not be confused with the statement that type theory is consistent, which is the meta-theoretic claim that it is not possible to obtain an element of $\mathbf{0}$ by following the rules of type theory.)