5.6 The general syntax of inductive definitions
So far, we have been discussing only particular inductive types: $\mathbf{0}$, $\mathbf{1}$, $\mathbf{2}$, $\mathbb{N}$, coproducts, products, $\Sigma$-types, $W$-types, etc. However, an important aspect of type theory is the ability to define new inductive types, rather than being restricted only to some particular fixed list of them. In order to be able to do this, however, we need to know what sorts of “inductive definitions” are valid or reasonable.
To see that not everything which “looks like an inductive definition” makes sense, consider the following “constructor” of a type $C$:
• $g:(C\to\mathbb{N})\to C$.
The recursion principle for such a type $C$ ought to say that given a type $P$, in order to construct a function $f:C\to P$, it suffices to consider the case when the input $c:C$ is of the form $g(\alpha)$ for some $\alpha:C\to\mathbb{N}$. Moreover, we would expect to be able to use the “recursive data” of $f$ applied to $\alpha$ in some way. However, it is not at all clear how to “apply $f$ to $\alpha$”, since both are functions with domain $C$.
We could write down a “recursion principle” for $C$ by just supposing (unjustifiably) that there is some way to apply $f$ to $\alpha$ and obtain a function $P\to\mathbb{N}$. Then the input to the recursion rule would ask for a type $P$ together with a function
\[ h:(C\to\mathbb{N})\to(P\to\mathbb{N})\to P \tag{5.6.1} \]
where the two arguments of $h$ are $\alpha$ and “the result of applying $f$ to $\alpha$”. However, what would the computation rule for the resulting function $f:C\to P$ be? Looking at other computation rules, we would expect something like “$f(g(\alpha))\equiv h(\alpha,f(\alpha))$” for $\alpha:C\to\mathbb{N}$, but as we have seen, “$f(\alpha)$” does not make sense. The induction principle of $C$ is even more problematic; it’s not even clear how to write down the hypotheses. (See also Exercise 5.7 (http://planetmath.org/node/87788) and Exercise 5.8 (http://planetmath.org/node/87866).)
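Although the text is independent of any implementation, it may be helpful to see this failure enforced mechanically. The following is a minimal sketch in Lean 4 (our choice of proof assistant for illustration, not something the text prescribes); its positivity checker refuses the declaration:

```lean
-- Hypothetical declaration of C with constructor g : (C → Nat) → C.
-- Lean rejects this: C occurs to the left of an arrow in the type of
-- its own constructor, i.e. a non-positive occurrence.
inductive C : Type where
  | g : (C → Nat) → C
```

The declaration fails to compile with a positivity error, for essentially the reason just discussed: no computation rule could be stated for it.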
This example suggests one restriction on inductive definitions: the domains of all the constructors must be covariant functors of the type being defined, so that we can “apply $f$ to them” to get the result of the “recursive call”. In other words, if we replace all occurrences of the type being defined with a variable $X:\mathcal{U}$, then each domain of a constructor must be an expression that can be made into a covariant functor of $X$. This is the case for all the examples we have considered so far. For instance, with the constructor $\mathsf{inl}:A\to A+B$, the relevant functor is constant at $A$ (i.e. $X\mapsto A$), while for the constructor $\mathsf{succ}:\mathbb{N}\to\mathbb{N}$, the functor is the identity functor ($X\mapsto X$).
However, this necessary condition is also not sufficient. Covariance prevents the inductive type from occurring on the left of a single function type, as in the argument $C\to\mathbb{N}$ of the “constructor” $g$ considered above, since this yields a contravariant functor rather than a covariant one. However, since the composite of two contravariant functors is covariant, double function types such as $((X\to\mathbb{N})\to\mathbb{N})$ are once again covariant. This enables us to reproduce Cantorian-style paradoxes.
For instance, consider an “inductive type” $D$ with the following constructor:
• $k:((D\to\mathsf{Prop})\to\mathsf{Prop})\to D$.
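Continuing the hypothetical Lean 4 illustration, this declaration is likewise refused, even though the double function type is covariant; the strict positivity condition introduced below bans any occurrence to the left of an arrow, however deeply nested:

```lean
-- Rejected by Lean's strict positivity checker even though
-- ((X → Prop) → Prop) is covariant in X: D still occurs to the
-- left of an arrow, just nested one level deeper.
inductive D : Type where
  | k : ((D → Prop) → Prop) → D
```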
Assuming such a type exists, we define functions
\[ r:D\to(D\to\mathsf{Prop})\to\mathsf{Prop}, \qquad f:(D\to\mathsf{Prop})\to D, \qquad p:(D\to\mathsf{Prop})\to(D\to\mathsf{Prop})\to\mathsf{Prop} \]
by
\[ r(k(\theta)):\equiv\theta, \qquad f(\delta):\equiv k(\lambda\gamma.\,\gamma=\delta), \qquad p(\delta):\equiv\lambda\gamma.\,\delta(f(\gamma)). \]
Here $r$ is defined by the recursion principle of $D$, while $f$ and $p$ are defined explicitly. Then for any $\delta:D\to\mathsf{Prop}$, we have $r(f(\delta))=\lambda\gamma.\,(\gamma=\delta)$.
In particular, therefore, if $f(\delta)=f(\delta')$, then we have a path $s:(\lambda\gamma.\,(\gamma=\delta))=(\lambda\gamma.\,(\gamma=\delta'))$. Thus, $\mathsf{happly}(s,\delta):(\delta=\delta)=(\delta=\delta')$, and so in particular $(\delta=\delta')$ holds. Hence, $f$ is “injective” (although a priori $D$ may not be a set). This already sounds suspicious — we have an “injection” of the “power set” of $D$ into $D$ — and with a little more work we can massage it into a contradiction.
Suppose given $\theta:(D\to\mathsf{Prop})\to\mathsf{Prop}$, and define $\delta:D\to\mathsf{Prop}$ by
\[ \delta(d):\equiv\exists(\gamma:D\to\mathsf{Prop}).\,(f(\gamma)=d)\wedge\theta(\gamma). \tag{5.6.2} \]
We claim that $\theta=p(\delta)$. By function extensionality, it suffices to show $\theta(\gamma)=_{\mathsf{Prop}}p(\delta)(\gamma)$ for any $\gamma:D\to\mathsf{Prop}$. And by univalence, for this it suffices to show that each implies the other. Now by definition of $p$, we have
\[ p(\delta)(\gamma)\equiv\delta(f(\gamma))\equiv\exists(\gamma':D\to\mathsf{Prop}).\,(f(\gamma')=f(\gamma))\wedge\theta(\gamma'). \]
Clearly this holds if $\theta(\gamma)$, since we may take $\gamma':\equiv\gamma$. On the other hand, if we have $\gamma'$ with $f(\gamma')=f(\gamma)$ and $\theta(\gamma')$, then $\gamma'=\gamma$ since $f$ is injective, hence also $\theta(\gamma)$.
This completes the proof that $\theta=p(\delta)$. Thus, every element $\theta:(D\to\mathsf{Prop})\to\mathsf{Prop}$ is the image under $p$ of some element $\delta:D\to\mathsf{Prop}$. However, if we define $\theta$ by a classic diagonalization:
\[ \theta(\gamma):\equiv\neg\, p(\gamma)(\gamma) \quad\text{for all }\gamma:D\to\mathsf{Prop}, \]
then from $\theta=p(\delta)$ we deduce $p(\delta)(\delta)=\neg\, p(\delta)(\delta)$. This is a contradiction: no proposition can be equivalent to its negation. (Supposing $P\Leftrightarrow\neg P$, if $P$, then $\neg P$, and so $\mathbf{0}$; hence $\neg P$, but then $P$, and so $\mathbf{0}$.)
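The parenthetical argument is simple enough to check formally. Here is a sketch in Lean 4 (the theorem name is ours):

```lean
-- No proposition is equivalent to its own negation.
theorem no_prop_iff_neg (P : Prop) (h : P ↔ ¬P) : False :=
  -- if P held, then ¬P would hold, giving False; so ¬P holds outright
  have hnp : ¬P := fun hp => h.mp hp hp
  -- but then P holds as well, and we reach False after all
  hnp (h.mpr hnp)
```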
Remark 5.6.3.
There is a question of universe size to be addressed. In general, an inductive type must live in a universe that already contains all the types going into its definition. Thus if in the definition of $D$, the ambiguous notation $\mathsf{Prop}$ means $\mathsf{Prop}_{\mathcal{U}}$, then we do not have $D:\mathcal{U}$ but only $D:\mathcal{U}'$ for some larger universe $\mathcal{U}'$ with $\mathcal{U}:\mathcal{U}'$. In a predicative theory, therefore, the right-hand side of (5.6.2) lives in $\mathsf{Prop}_{\mathcal{U}'}$, not $\mathsf{Prop}_{\mathcal{U}}$. So this contradiction does require the propositional resizing axiom mentioned in §3.5 (http://planetmath.org/35subsetsandpropositionalresizing).
This counterexample suggests that we should ban an inductive type from ever appearing on the left of an arrow in the domain of its constructors, even if that appearance is nested in other arrows so as to eventually become covariant. (Similarly, we also forbid it from appearing in the domain of a dependent function type.) This restriction is called strict positivity (ordinary “positivity” being essentially covariance), and it turns out to suffice.
In conclusion, therefore, a valid inductive definition of a type $W$ consists of a list of constructors. Each constructor is assigned a type that is a function type taking some number (possibly zero) of inputs (possibly dependent on one another) and returning an element of $W$. Finally, we allow $W$ itself to occur in the input types of its constructors, but only strictly positively. This essentially means that each argument of a constructor is either a type not involving $W$, or some iterated function type with codomain $W$. For instance, the following is a valid constructor type:
\[ c:(A\to W)\to(B\to C\to W)\to D\to W\to W. \tag{5.6.4} \]
All of these function types can also be dependent functions ($\Pi$-types).¹

¹In the language of §5.4 (http://planetmath.org/54inductivetypesareinitialalgebras), the condition of strict positivity ensures that the relevant endofunctor is polynomial. It is well-known in category theory that not all endofunctors can have initial algebras; restricting to polynomial functors ensures consistency. One can consider various relaxations of this condition, but in this book we will restrict ourselves to strict positivity as defined here.
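In the running Lean 4 sketch, the analogue of (5.6.4) is accepted without complaint, since $W$ occurs only strictly positively (treating $A$, $B$, $C$, $D$ as type parameters is a choice of this illustration):

```lean
-- The constructor type (5.6.4): W occurs only as the codomain of
-- (iterated) function types, so the declaration is strictly positive.
inductive W (A B C D : Type) : Type where
  | c : (A → W A B C D) → (B → C → W A B C D) → D → W A B C D → W A B C D
```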
Note we require that an inductive definition is given by a finite list of constructors. This is simply because we have to write it down on the page. If we want an inductive type which behaves as if it has an infinite number of constructors, we can simply parametrize one constructor by some infinite type. For instance, a constructor such as $\mathbb{N}\to W\to W$ can be thought of as equivalent to countably many constructors of the form $W\to W$. (Of course, the infinity is now internal to the type theory, but this is as it should be for any foundational system.) Similarly, if we want a constructor that takes “infinitely many arguments”, we can allow it to take a family of arguments parametrized by some infinite type, such as $(\mathbb{N}\to W)\to W$, which takes an infinite sequence of elements of $W$.
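As a sketch of the second idea in Lean 4 (the type and constructor names below are our own example, not from the text), the constructor sup takes a single argument that packages an $\mathbb{N}$-indexed sequence of recursive arguments:

```lean
-- sup's one argument of type Nat → Ord stands in for "infinitely
-- many arguments": a whole sequence of elements of Ord at once.
inductive Ord : Type where
  | zero : Ord
  | succ : Ord → Ord
  | sup  : (Nat → Ord) → Ord
```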
Now, once we have such an inductive definition, what can we do with it? Firstly, there is a recursion principle stating that in order to define a function $f:W\to P$, it suffices to consider the case when the input $w:W$ arises from one of the constructors, allowing ourselves to recursively call $f$ on the inputs to that constructor. For the example constructor (5.6.4), we would require $P$ to be equipped with a function of type
\[ c':(A\to W)\to(A\to P)\to(B\to C\to W)\to(B\to C\to P)\to D\to W\to P\to P. \tag{5.6.5} \]
Under these hypotheses, the recursion principle yields $f:W\to P$, which moreover “preserves the constructor data” in the evident way — this is the computation rule, where we use covariance of the inputs. For instance, in the example (5.6.4), the computation rule says that for any $\alpha:A\to W$, $\beta:B\to C\to W$, $\delta:D$, and $\omega:W$, we have
\[ f(c(\alpha,\beta,\delta,\omega))\equiv c'(\alpha,\,f\circ\alpha,\,\beta,\,f\circ\beta,\,\delta,\,\omega,\,f(\omega)). \tag{5.6.6} \]
The induction principle for a general inductive type $W$ is only a little more complicated. Of course, we start with a type family $P:W\to\mathcal{U}$, which we require to be equipped with constructor data “lying over” the constructor data of $W$. That means the “recursive call” arguments such as $A\to P$ above must be replaced by dependent functions with types such as $\prod_{(a:A)}P(\alpha(a))$. In the full example of (5.6.4), the corresponding hypothesis for the induction principle would require
\[ c'':\prod_{(\alpha:A\to W)}\Bigl(\textstyle\prod_{(a:A)}P(\alpha(a))\Bigr)\to\prod_{(\beta:B\to C\to W)}\Bigl(\textstyle\prod_{(b:B)}\prod_{(c:C)}P(\beta(b,c))\Bigr)\to\prod_{(\delta:D)}\prod_{(\omega:W)}\Bigl(P(\omega)\to P(c(\alpha,\beta,\delta,\omega))\Bigr). \tag{5.6.7} \]
The corresponding computation rule looks identical to (5.6.6). Of course, the recursion principle is the special case of the induction principle where $P$ is a constant family. As we have mentioned before, the induction principle is also called the eliminator, and the recursion principle the non-dependent eliminator.
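In the Lean 4 sketch, this eliminator is generated automatically when W is declared; assuming the declaration of W given above, one can inspect it directly:

```lean
-- Lean generates the dependent eliminator for W on its own; the
-- hypothesis it demands for the constructor c has the shape (5.6.7).
#check @W.rec
```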
As discussed in §1.10 (http://planetmath.org/110patternmatchingandrecursion), we also allow ourselves to invoke the induction and recursion principles implicitly, writing a definitional equation with $:\equiv$ for each expression that would be the hypotheses of the induction principle. This is called giving a definition by (dependent) pattern matching. In our running example, this means we could define $f:W\to P$ by
\[ f(c(\alpha,\beta,\delta,\omega)) :\equiv \cdots \]
where $\alpha$, $\beta$, $\delta$, and $\omega$ are variables that are bound in the right-hand side. Moreover, the right-hand side may involve recursive calls to $f$ of the form $f(\alpha(a))$, $f(\beta(b,c))$, and $f(\omega)$. When this definition is repackaged in terms of the induction principle, we replace such recursive calls by $\bar\alpha(a)$, $\bar\beta(b,c)$, and $\bar\omega$, respectively, for new variables
\[ \bar\alpha:\textstyle\prod_{(a:A)}P(\alpha(a)), \qquad \bar\beta:\textstyle\prod_{(b:B)}\prod_{(c:C)}P(\beta(b,c)), \qquad \bar\omega:P(\omega). \]
Then we could write
\[ f :\equiv \mathsf{ind}_W(P,\ \lambda\alpha.\,\lambda\bar\alpha.\,\lambda\beta.\,\lambda\bar\beta.\,\lambda\delta.\,\lambda\omega.\,\lambda\bar\omega.\ \cdots) \]
where the second argument to $\mathsf{ind}_W$ has the type of (5.6.7).
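For a concrete (and entirely hypothetical) instance in the Lean 4 sketch, the function below pattern-matches on W and makes exactly the three kinds of recursive calls just described; Lean's equation compiler repackages them through the eliminator behind the scenes:

```lean
-- A toy definition by pattern matching on W. The calls on α a,
-- β b c, and ω are the "recursive calls" of the text; probing α and
-- β requires sample elements a, b, c, supplied as extra arguments.
def probe {A B C D : Type} (a : A) (b : B) (c : C) : W A B C D → Nat
  | .c α β _ ω => 1 + probe a b c (α a) + probe a b c (β b c) + probe a b c ω
```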
We will not attempt to give a formal presentation of the grammar of a valid inductive definition and its resulting induction and recursion principles and pattern matching rules. This is possible to do (indeed, it is necessary to do if implementing a computer proof assistant), but provides no additional insight. With practice, one learns to automatically deduce the induction and recursion principles for any inductive definition, and to use them without having to think twice.