diamond lemma
The diamond lemmas constitute a class of results about the existence of a unique normal form. Diamond lemmas exist in many diverse areas of mathematics, and as a result their technical contents can be quite different, but they are easily recognisable from their overall structure and basic idea — the “diamond” condition from which they inherit their name.
1 Newman’s Diamond Lemma
The basic diamond lemma, that of Newman [3], is today most easily presented in terms of binary relations.
Theorem 1.
Let $X$ be a set and let $\rightarrow$ be a binary relation on $X$. If $\rightarrow$ is terminating, then the following conditions are equivalent:
1. For all $a,b,c\in X$ such that $a\rightarrow b$ and $a\rightarrow c$, the elements $b$ and $c$ are joinable.
2. Every $a\in X$ has a unique normal form.
Condition 1 can be graphically drawn as
$\xymatrix@2ex{&a\ar[dl]\ar[dr]&\\ b\ar[dr]^{*}&&c\ar[dl]_{*}\\ &d&}$ 
where $\stackrel{*}{\rightarrow}$ denotes the reflexive transitive closure of $\rightarrow$. This is the “diamond” from which the result derives its name — not crystallised carbon, but merely a square standing on one corner. In relation to this picture, what the condition says is that whenever one has $a,b,c\in X$ forming the top two sides of such a diagram, there is a fourth corner $d\in X$ which completes the diamond.
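For a finite relation $\rightarrow$, condition 1 can be verified by brute force. The following Python sketch assumes a hypothetical encoding of $\rightarrow$ as a dictionary mapping each element to its set of one-step successors; the function names are illustrative only, not part of the theorem:

```python
def reachable(rel, x):
    """Return all y with x ->* y (reflexive-transitive closure from x)."""
    seen = {x}
    stack = [x]
    while stack:
        z = stack.pop()
        for y in rel.get(z, ()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def locally_confluent(rel):
    """Check condition 1: whenever a -> b and a -> c,
    the elements b and c are joinable."""
    for a, succs in rel.items():
        for b in succs:
            for c in succs:
                if not (reachable(rel, b) & reachable(rel, c)):
                    return False
    return True

# The diamond of the picture above: a -> b, a -> c, b -> d, c -> d.
rel = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}}
print(locally_confluent(rel))  # True
```

Joinability of $b$ and $c$ is tested here as the sets of elements reachable from $b$ and from $c$ having a common member.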
The typical case in which one applies this theorem is that $X$ is a set of formal terms on which one wishes to define an equivalence relation $\sim$. The relation $\rightarrow$ encodes a set of “elementary” equivalences that generate the wanted $\sim$; an example in standard algebra of such an elementary equivalence might be the instance $a(b+c)\rightarrow ab+ac$ of the distributivity axiom. It is technically possible to define the equivalence $\sim$ as the reflexive–transitive–symmetric closure of $\rightarrow$, but since $X$ is typically infinite that definition does not lead to an effective method for determining whether $x\sim y$. When the relation $\rightarrow$ satisfies the conditions of Theorem 1, however, there does exist an algorithm which, given $\rightarrow$, determines whether two arbitrary elements are related by $\sim$.
In a computational setting, the interpretation of $x\rightarrow y$ is usually that there exists a “one-step simplification” converting the expression $x$ into the expression $y$. The corresponding interpretation of $x\stackrel{*}{\rightarrow}y$ is thus that there exists a finite sequence of such simplifications converting $x$ into $y$. The formal term for “simplifications” in this sense is reductions, so $x\stackrel{*}{\rightarrow}y$ is read as “$x$ reduces to $y$” or “$x$ can be reduced to $y$”.
The theorem has several immediate applications, first of all that there is a unique function $N\colon X\longrightarrow X$ which sends every element of $X$ to its normal form, and this leads to a test for whether two arbitrary elements are equivalent.
Lemma 1.
Let $X$ be a set and $\rightarrow$ a binary relation on $X$ such that every element of $X$ has a unique normal form with respect to $\rightarrow$. Denote by $\sim$ the reflexive–transitive–symmetric closure of $\rightarrow$ and by $N$ the function $X\longrightarrow X$ which sends every element to its normal form with respect to $\rightarrow$. Then for all $x,y\in X$,
$x\sim y\quad\text{if and only if}\quad N(x)=N(y)\text{.}$  (1) 
Proof.
Denote by $\stackrel{*}{\rightarrow}$ the reflexive–transitive closure of $\rightarrow$ and denote by $\leftrightarrow$ the symmetrisation of $\rightarrow$. If $N(x)=N(y)$ then obviously $x\stackrel{*}{\rightarrow}N(x)=N(y)\stackrel{*}{\leftarrow}y$ and thus $x\sim N(x)\sim y$ as claimed. If conversely $x\sim y$ then there must exist some sequence
$z_{0}\leftrightarrow z_{1}\leftrightarrow\cdots\leftrightarrow z_{n-1}\leftrightarrow z_{n}$ 
where $z_{0},\ldots,z_{n}\in X$, $z_{0}=x$, and $z_{n}=y$. For every $k<n$, one of $z_{k}\rightarrow z_{k+1}$ and $z_{k}\leftarrow z_{k+1}$ must hold. In the former case $z_{k}\stackrel{*}{\rightarrow}z_{k+1}\stackrel{*}{\rightarrow}N(z_{k+1})$ and thus by transitivity $z_{k}\stackrel{*}{\rightarrow}N(z_{k+1})$, but this means that $N(z_{k+1})$ is a normal form also of $z_{k}$. In the latter case one similarly shows that $N(z_{k})$ is a normal form of $z_{k+1}$. Either way it follows from the uniqueness of the normal form that $N(z_{k})=N(z_{k+1})$. Thus $N(z_{0})=N(z_{1})=\cdots=N(z_{n-1})=N(z_{n})$ and hence $N(x)=N(y)$ as claimed. ∎
Another application of normal forms is to serve as representatives of the equivalence classes of $X$. Many mathematical constructions of algebraic structures (e.g. that of a quotient of a group) end up producing a set $X/{\sim}$ of equivalence classes of some given set $X$, but these are as a rule infeasible for computational purposes. What one can do instead is to choose an element from each equivalence class and use these as representatives of their classes. When normal forms are known to be unique, there trivially exists a bijection from
$X_{\mathrm{nf}}=\bigl\{\,x\in X\bigm|\text{$x$ is in normal form}\,\bigr\}$ 
to $X/{\sim}$ given by $x\mapsto[x]_{\sim}$, and its inverse can, thanks to Lemma 1, be defined in terms of the normal form map $N$ as $[x]_{\sim}\mapsto N(x)$.
How does one know that the normal form map $N$ is effective, though? This follows from the terminating property of the binary relation in Theorem 1. The primary consequence of $\rightarrow$ being terminating is that $\stackrel{*}{\rightarrow}$ is well-founded, and it is no surprise that Theorem 1 is proved using well-founded induction. Termination is also what guarantees that the following algorithm terminates.
Algorithm 1 (Normal form).
Let $X$ be a set and $\rightarrow$ a strict binary relation on $X$ which satisfies the descending chain condition.
Input: An element $x\in X$.
Output: A normal form of $x$.
Procedure: If $x$ is in normal form with respect to $\rightarrow$ then return $x$. Otherwise there is some $y\in X$ such that $x\rightarrow y$; pick one such $y$. Set $x:=y$ and repeat the procedure from the start.
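Algorithm 1 translates directly into code. Below is a minimal Python sketch, assuming the relation is supplied as a hypothetical function `step` that returns some one-step reduct of $x$, or `None` when $x$ is in normal form:

```python
def normal_form(step, x):
    """Algorithm 1: reduce x until no reduction applies.
    Terminates whenever -> satisfies the descending chain condition."""
    while True:
        y = step(x)
        if y is None:      # x is in normal form
            return x
        x = y              # set x := y and repeat the procedure

# Toy example: halve even numbers; the odd numbers are the normal forms.
print(normal_form(lambda n: n // 2 if n % 2 == 0 else None, 40))  # 5
```

Note that `step` is free to pick any reduct when several exist; uniqueness of the normal form is exactly what makes the result independent of that choice.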
For relations $\rightarrow$ which satisfy the descending chain condition and have unique normal forms, Algorithm 1 and Lemma 1 combine into an algorithm for deciding the relation $\sim$. A typical problem, however, is that uniqueness of the normal form is a global condition that is very hard to establish by elementary methods. Theorem 1 offers an equivalent local condition that is often straightforward to verify in each particular case by explicit calculations.
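Concretely, the combination of Algorithm 1 with Lemma 1 can be sketched as follows; the dictionary encoding of $\rightarrow$ as successor sets is an illustrative assumption:

```python
def normal_form(rel, x):
    """Reduce x until no one-step reduction applies."""
    while rel.get(x):
        x = next(iter(rel[x]))  # pick any one-step reduct
    return x

def equivalent(rel, x, y):
    """Decide x ~ y, valid when -> terminates and normal forms are unique
    (Lemma 1: x ~ y if and only if N(x) = N(y))."""
    return normal_form(rel, x) == normal_form(rel, y)

# a -> b -> d and c -> d, so a ~ c.
rel = {"a": {"b"}, "b": {"d"}, "c": {"d"}}
print(equivalent(rel, "a", "c"))  # True
```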
2 Diamond lemmas for algebraic structures
One disadvantage of the basic diamond lemma (Theorem 1) is that it requires all reductions to be explicit, whereas the average mathematician probably expects it to handle reduction schemas transparently. To illustrate the difference, consider again the reduction $a(b+c)\rightarrow ab+ac$. This does not imply $(d+e)(b+c)\rightarrow(d+e)b+(d+e)c$, or even $2(b+c)\rightarrow 2b+2c$, because all three left hand sides are distinct as formulae (i.e., as elements of $X$). In order to let $a$, $b$, and $c$ be arbitrary, one needs to state explicitly that, e.g., “$a(b+c)\rightarrow ab+ac$ for all well-formed formulae $a$, $b$, and $c$”. Even this may be less than what was intended, since one often wants it to follow from $a(b+c)\rightarrow ab+ac$ also that $f\bigl(a(b+c)\bigr)\rightarrow f(ab+ac)$ for arbitrary functionalised expressions $f$. Theorem 1 requires its user to take care of such details explicitly.
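The difference between a single explicit reduction and a reduction schema can be made concrete. The sketch below implements the schema “$a(b+c)\rightarrow ab+ac$ for all well-formed formulae $a$, $b$, $c$, at any position in a term”; the encoding of formulae as nested tuples is an illustrative assumption:

```python
def rewrite_once(t):
    """Return a one-step reduct of t under the schema a*(b+c) -> a*b + a*c,
    applied at any position, or None if no instance of the schema applies."""
    if isinstance(t, str):               # atoms are irreducible
        return None
    op, l, r = t
    if op == "*" and isinstance(r, tuple) and r[0] == "+":
        _, b, c = r
        return ("+", ("*", l, b), ("*", l, c))
    for i, sub in ((1, l), (2, r)):      # otherwise look inside the subterms
        s = rewrite_once(sub)
        if s is not None:
            new = list(t)
            new[i] = s
            return tuple(new)
    return None

def reduce_fully(t):
    """Reduce t to a normal form of the schema."""
    while (s := rewrite_once(t)) is not None:
        t = s
    return t

term = ("*", ("+", "d", "e"), ("+", "b", "c"))   # (d+e)(b+c)
print(reduce_fully(term))   # the term (d+e)b + (d+e)c
```

With the schema, $(d+e)(b+c)$ does reduce to $(d+e)b+(d+e)c$, whereas the single explicit reduction $a(b+c)\rightarrow ab+ac$ says nothing about it.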
Moreover, the conditions in Theorem 1 are stated about arbitrary elements of $X$, so even if one employs schemas in the definition of $\rightarrow$, it is necessary to verify that condition 1 holds for every concrete triple $(a,b,c)$ of elements of $X$ such that $b\leftarrow a\rightarrow c$. This too can be done collectively using proof schemas, but the situation is complicated enough that it is often far from clear whether all cases have been checked. It is furthermore common that trivial verifications of a syntactic origin outnumber the verifications that have interesting mathematical content. This makes it less likely that mathematicians will actually bother to produce a complete proof, and more likely that they will skip it altogether by resorting to the notorious “It is easy to see that …”
Finally, it is not always the case that one wants just an equivalence relation. When the base set $X$ has an established algebraic structure, one typically wants $\sim$ to be a congruence relation, but the basic diamond lemma cannot take such sophistications into account. Therefore there also exist a number of specialised diamond lemmas which are tailored to particular algebraic structures and can make do with verifications of significantly fewer cases than the basic diamond lemma.
The most familiar such result is probably the diamond lemma for ring theory of Bergman [1], also known as the composition lemma for associative algebras [2].
[Give full statement of theorem? It’s quite a mouthful.]
[Write about relation to Gröbner basis theory.]
References
 1 G. M. Bergman: The Diamond Lemma for Ring Theory, Adv. Math. 29 (1978), 178–218.
 2 L. A. Bokut${}^{\prime}$: Embeddings into simple associative algebras (Russian), Algebra i Logika 15, no. 2 (1976), pp. 117–142 and 245. English translation in Algebra and Logic, pp. 73–90.
 3 M. H. A. Newman: On theories with a combinatorial definition of “equivalence”, Ann. of Math. 43 (1942), 223–243.
Mathematics Subject Classification
68Q42, 03C05