# logical graph : introduction

A logical graph is a graph-theoretic (http://planetmath.org/Graph) structure in one of the systems of graphical syntax that Charles Sanders Peirce developed for logic.

In his papers on qualitative logic, entitative graphs, and existential graphs, Peirce developed several versions of a graphical formalism, or a graph-theoretic formal language, designed to be interpreted for logic.

In the century since Peirce initiated this line of development, a variety of formal systems have branched out from what is abstractly the same formal base of graph-theoretic structures. This article examines the common basis of these formal systems from a bird’s eye view, focusing on those aspects of form that are shared by the entire family of algebras, calculi, or languages, however they happen to be viewed in a given application.

## 1 Abstract point of view

The bird’s eye view in question is more formally known as the perspective of formal equivalence, a vantage from which one cannot see many of the distinctions that appear momentous at lower levels of abstraction. In particular, expressions of different formalisms whose syntactic structures are isomorphic from the standpoint of algebra or topology are not recognized as being different from each other in any significant sense. Though we may note in passing such historical details as the circumstance that Charles Sanders Peirce used a streamer-cross symbol where George Spencer Brown used a carpenter’s square marker, the theme of principal interest at the abstract level of form is indifferent to variations of that order.

## 2 In lieu of a beginning

Consider the formal equations indicated in Figures 1 and 2.

(Figure 1) (Figure 2)

For the time being these two forms of transformation may be referred to as axioms or initial equations.

## 3 Duality : logical and topological

There are two types of duality that have to be kept separately in mind in the use of logical graphs: logical duality and topological duality.

There is a standard way that graphs of the order that Peirce considered, those embedded in a continuous manifold like that commonly represented by a plane sheet of paper — with or without the paper bridges that Peirce used to augment its topological genus — can be represented in linear text as what are called parse strings or traversal strings and parsed into pointer structures in computer memory.

A blank sheet of paper can be represented in linear text as a blank space, but that way of doing it tends to be confusing unless the logical expression under consideration is set off in a separate display.

For example, consider the formal equation drawn in planar form below:

(Figure 3)

This may be written as “$((~{}))~{}=~{}$ ” in linear text or set off in the following way:

$\begin{matrix}(~{}(~{})~{})&=&\end{matrix}$

When we turn to representing the corresponding expressions in computer memory, where they can be manipulated with utmost facility, we begin by transforming the planar graphs into their topological duals. The planar regions of the original graph correspond to nodes (or points) of the dual graph, and the boundaries between planar regions in the original graph correspond to edges (or lines) between the nodes of the dual graph.

For example, overlaying the corresponding dual graphs on the plane-embedded graphs shown above, we get the following composite picture:

(Figure 4)

Though it’s not really there in the most abstract topology of the matter, for all sorts of pragmatic reasons we find ourselves compelled to single out the outermost region of the plane in a distinctive way and to mark it as the root node of the corresponding dual graph. In the present style of figure, the root nodes are marked by horizontal strike-throughs.

Extracting the dual graphs from their composite matrix, we get this picture:

(Figure 5)

It is easy to see the relationship between the parenthetical representations of Peirce’s logical graphs, which compactly picture the ordered containments of their formal contents, and the associated dual graphs, which form a species of rooted trees to be described in greater detail below.

In the case of our last example, a moment’s contemplation of the following picture will lead us to see that we can get the corresponding parenthesis string by starting at the root of the tree, climbing up the left side of the tree until we reach the top, then climbing back down the right side of the tree until we return to the root, all the while reading off the symbols, in this case either “(” or “)”, that we happen to encounter in our travels.

(Figure 6)

This ritual is called traversing the tree, and the string read off is often called the traversal string of the tree. The reverse ritual, that passes from the string to the tree, is called parsing the string, and the tree constructed is often called the parse graph of the string. The users of this jargon tend to use it loosely, though, often using parse string to mean the string that gets parsed into the associated graph.
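Both rituals are easy to mechanize. The following Python sketch (the nested-list representation of trees and the function names are our own, chosen purely for illustration) parses a balanced parenthesis string into a rooted tree and reads the traversal string back off by the climbing procedure just described:

```python
def parse(string):
    """Parse a balanced parenthesis string into a rooted tree.

    A tree is represented as the list of its subtrees, so the root
    of the whole expression is the outermost list.
    """
    root = []
    stack = [root]          # path of cuts from the root to the current cut
    for symbol in string:
        if symbol == "(":   # open a new cut inside the current one
            node = []
            stack[-1].append(node)
            stack.append(node)
        elif symbol == ")": # close the current cut, climb back down
            stack.pop()
    return root

def traverse(tree):
    """Climb up and down the tree, reading off '(' and ')' en route."""
    return "".join("(" + traverse(subtree) + ")" for subtree in tree)

# Round trip: parsing and traversing are mutually inverse.
tree = parse("(())()")
assert traverse(tree) == "(())()"
```

Because the two procedures are mutually inverse on well-formed strings, it does little harm in practice to call the same string a parse string or a traversal string.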

We have now treated in some detail various forms of the axiom that is formulated in string form as “$((~{}))~{}=~{}$ ”. For the sake of comparison, let’s record the planar and dual forms of the axiom that is formulated in string form as “$(~{})(~{})~{}=~{}(~{})$”.

First the plane-embedded maps:

(Figure 7)

Next the plane maps and their dual trees superimposed:

(Figure 8)

Finally the rooted trees by themselves:

(Figure 9)

And here are the parse trees with their traversal strings indicated:

(Figure 10)

We have at this point enough material to begin thinking about the forms of analogy, iconicity, metaphor, morphism, whatever you want to call them, that are pertinent to the use of logical graphs in their various logical interpretations, for instance, those that Peirce described as entitative graphs and existential graphs.

## 4 Computational representation

The parse graphs that we’ve been looking at so far bring us one step closer to the pointer graphs that it takes to make these maps and trees live in computer memory, but they are still a couple of steps too abstract to detail the concrete species of dynamic data structures that we need. The time has come to flesh out the skeletons that we have drawn up to this point.

Nodes in a graph represent records in computer memory. A record is a collection of data that can be conceived to reside at a specific address. The address of a record is analogous to a demonstrative pronoun, on which account programmers commonly describe it as a pointer and semioticians recognize it as a type of sign called an index.

At the next level of concreteness, a pointer data structure can be represented as follows:

(Figure 11)

This portrays $index_{0}$ as the address of a record that contains the following data:

 $datum_{1},datum_{2},datum_{3},\ldots,\mbox{ and so on}.$

What makes it possible to represent graph-theoretical structures as data structures in computer memory is the fact that an address is just another datum, and so we may have a state of affairs like the following:

(Figure 12)

Returning to the abstract level, it takes three nodes to represent the three data records illustrated above: one root node connected to a couple of adjacent nodes. The items of data that do not point any further up the tree are then treated as labels on the record-nodes where they reside, as shown below:

(Figure 13)

Notice that drawing the arrows is optional with rooted trees like these, since singling out a unique node as the root induces a unique orientation on all the edges of the tree, with up being the same direction as away from the root.
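The record-and-pointer story can be made concrete with a small sketch. In the Python fragment below (a minimal illustration of the idea, not any particular library’s API, and with hypothetical labels echoing the data items above), each node is a record holding a plain datum as its label together with references, which play the role of addresses, to its child records:

```python
class Node:
    """A record in memory: a label (plain datum) plus pointers to children."""
    def __init__(self, label=None, children=None):
        self.label = label
        self.children = children or []

# The three-record example: one root node pointing to two adjacent nodes.
root = Node(label="root", children=[Node(label="datum_1"),
                                    Node(label="datum_2")])

# No arrows need to be stored: fixing the root orients every edge, with
# "up" being the direction away from the root along the child pointers.
def height(node):
    return 1 + max((height(child) for child in node.children), default=0)

assert height(root) == 2
```

The `height` function works by following the child pointers alone, which illustrates the point about orientation: the rooted structure already knows which way is up.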

## 5 Quick tour of the neighborhood

This much preparation allows us to take up the founding axioms or initial equations that determine the entire system of logical graphs.

### 5.1 Primary arithmetic as semiotic system

Though it may not seem too exciting, logically speaking, there are many reasons to make oneself at home with the system of forms that is represented indifferently, topologically speaking, by rooted trees, by balanced strings of parentheses, and by finite sets of non-intersecting simple closed curves in the plane.

• One reason is that it gives us a respectable example of a sign domain on which to cut our semiotic teeth, non-trivial in the sense that it contains a countable infinity of signs.

• Another reason is that it allows us to study a simple form of computation that is recognizable as a species of semiosis, or sign-transforming process.

This space of forms, along with the two axioms that induce its partition into exactly two equivalence classes, is what George Spencer Brown called the primary arithmetic.

The axioms of the primary arithmetic are shown below, as they appear in both graph and string forms, along with pairs of names that come in handy for referring to the two opposing directions of applying the axioms.

(Figure 14) (Figure 15)

Let $S$ be the set of rooted trees and let $S_{0}\subset S$ be the 2-element subset consisting of a rooted node and a rooted edge. We may express these definitions more briefly as $S=\{\operatorname{rooted~{}trees}\}$ and $S_{0}=\{\ominus,|\}.$ Simple intuition, or a simple inductive proof, will assure us that any rooted tree can be reduced by means of the axioms of the primary arithmetic either to a root node $\ominus$ or else to a rooted edge $|\ .$

For example, consider the reduction that proceeds as follows:

(Figure 16)

Regarded as a semiotic process, this amounts to a sequence of signs, every one after the first serving as the interpretant of its predecessor, ending in a final sign that may be taken as the canonical sign for their common object, the upshot being the result of the computation process. Simple as it is, this exhibits the main features of any computation, namely, a semiotic process that proceeds from an obscure sign to a clear sign of the same object, being in its aim and effect an action on behalf of clarification.
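A reduction of this kind can be carried out mechanically. The following Python sketch uses a nested-list encoding of rooted trees that is our own convention for illustration: a tree is the list of its subtrees, so `[]` is the bare root node $\ominus$ and `[[]]` is the rooted edge $|$. The two arithmetic axioms are applied from the leaves downward until one of the two canonical signs remains:

```python
def reduce_tree(tree):
    """Reduce a rooted tree to [] (root node) or [[]] (rooted edge)
    by the two axioms of the primary arithmetic."""
    # First reduce the contents of every cut, working from the leaves down,
    # so each reduced child is itself either [] or [[]].
    kids = [reduce_tree(kid) for kid in tree]
    out = []
    for kid in kids:
        if kid == [[]]:
            # (( )) =   : a cut around a single empty cut vanishes.
            continue
        if kid == [] and [] in out:
            # ( )( ) = ( ) : adjacent empty cuts condense to one.
            continue
        out.append(kid)
    return out

# (( )( )) reduces to the blank; ((( ))) reduces to the empty cut.
assert reduce_tree([[[], []]]) == []
assert reduce_tree([[[[]]]]) == [[]]
```

Termination is immediate, since each application of an axiom strictly shrinks the tree, and the result is always one of the two canonical signs, in agreement with the inductive argument above.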

### 5.2 Primary algebra as pattern calculus

Experience teaches that complex objects are best approached in a gradual, laminar, modular fashion, one step, one layer, one piece at a time, and it’s just as much the case when the complexity of the object is irreducible, that is, when the articulations of the representation are necessarily at joints that are cloven disjointedly from nature, with some assembly required in the synthetic integrity of the intuition.

That’s one good reason for spending so much time on the first half of zeroth order logic, represented here by the primary arithmetic, a level of formal structure that C.S. Peirce verged on intuiting at numerous points and times in his work on logical graphs, and that Spencer Brown named and brought more completely to life.

There is one other reason for lingering a while longer in these primitive forests, and this is that an acquaintance with “bare trees”, those as yet unadorned with literal or numerical labels, will provide a firm basis for understanding what’s really at issue in such problems as the ‘ontological status of variables’.

It is probably best to illustrate this theme in the setting of a concrete case, which we can do by reviewing the previous example of reductive evaluation shown in Figure 16.

The observation of several semioses, or sign-transformations, of roughly this shape will most likely lead an observer with any observational facility whatever to notice that it doesn’t really matter what sorts of branches happen to sprout from the side of the root aside from the lone edge that also grows there — the end result will always be the same.

Our observer might think to summarize the results of many such observations by introducing a label or variable to signify any shape of branch whatever, writing something like the following:

(Figure 17)

Observations like that, made about an arithmetic of any variety, germinated by their summarizations, are the root of all algebra.

Speaking of algebra, and having encountered already one example of an algebraic law, we might as well introduce the axioms of the primary algebra, once again deriving their substance and their name from the works of Charles Sanders Peirce and George Spencer Brown, respectively.

(Figure 18) (Figure 19)

The choice of axioms for any formal system is to some degree a matter of aesthetics, as it is commonly the case that many different selections of formal rules will serve as axioms to derive all the rest as theorems. As it happens, the example of an algebraic law that we noticed first, $a~{}(~{})~{}=~{}(~{})$, simple as it appears, turns out to be provable as a theorem on the grounds of the foregoing axioms.
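Since a variable of the primary algebra ranges over just the two values of the primary arithmetic, a law like $a~{}(~{})~{}=~{}(~{})$ can also be checked semantically by brute force: substitute each value for $a$ and reduce. The Python sketch below does this with naive string rewriting (`reduce_string` is our own hypothetical helper, adequate at this scale); note that such a case check is a different matter from the syntactic proof of the law from the algebraic axioms.

```python
def reduce_string(s):
    """Reduce a parenthesis string by the arithmetic axioms
    (( )) = [blank]  and  ( )( ) = ( ), until no rule applies."""
    prev = None
    while s != prev:            # each rewrite strictly shortens the string
        prev = s
        s = s.replace("(())", "").replace("()()", "()")
    return s

# The two values of the primary arithmetic: the blank and the empty cut.
# Substitute each one for a in  a ( )  and reduce; both cases yield ( ).
for a in ["", "()"]:
    assert reduce_string(a + "()") == "()"
```

The same two-case method checks any proposed equation of the primary algebra against the arithmetic, which is one way of seeing how the algebra derives its justice from the arithmetic spectrum.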

We might also notice at this point a subtle difference between the primary arithmetic and the primary algebra with respect to the grounds of justification that we have naturally if tacitly adopted for their respective sets of axioms.

The arithmetic axioms were introduced by fiat, in a quasi-apriori fashion, though of course it is only long prior experience with the practical uses of comparably developed generations of formal systems that would actually induce us to such a quasi-primal move. The algebraic axioms, in contrast, can be seen to derive their motive and their justice from the observation and summarization of patterns that are visible in the arithmetic spectrum.

## 6 Formal development

Discussion of this topic continues at Logical Graph : Formal Development (http://planetmath.org/LogicalGraphFormalDevelopment).

Title: logical graph : introduction
Canonical name: LogicalGraphIntroduction
Date of creation: 2013-03-22 17:47:00
Last modified on: 2013-03-22 17:47:00
Owner: Jon Awbrey (15246)
Last modified by: Jon Awbrey (15246)
Numerical id: 87
Author: Jon Awbrey (15246)
Entry type: Topic
Classification: msc 03B70
Classification: msc 03B35
Classification: msc 03B22
Classification: msc 03B05
Related topic: DifferentialLogic
Related topic: MinimalNegationOperator
Related topic: PropositionalCalculus
Related topic: ZerothOrderLogic
Related topic: Ampheck
Defines: entitative interpretation
Defines: existential interpretation