## TTT: Exercise 1.1.$$\text{IIT}^\diamond$$


Describe the isomorphisms, initial objects, and terminal objects (if they exist) in each of the categories of exercise $$1.1.\text{CCON}^\diamond$$. I think exercise $$1.1.\text{ISO}^\diamond$$ should have come before this exercise in the book, because this one requires the definition of isomorphism for slice categories.

• Arrow Category.
• Isomorphisms. An isomorphism in the category $$Ar(\cat{C})$$ is a pair of morphisms (described by the following diagrams):

These two morphisms must satisfy the following equations:

Thus, an isomorphism in $$Ar(\cat{C})$$ corresponds to a pair of isomorphisms in $$\cat{C}$$.
• Initial Objects. Recall that an initial object in a category is an object with a unique arrow to every other object. Here the objects are the arrows of $$\cat{C}$$, so we must construct a unique arrow from the candidate initial object to every arrow of $$\cat{C}$$. Suppose $$g : C \to D$$ is an arrow of $$\cat{C}$$. The following diagram defines a unique arrow to $$g$$:

• Terminal Objects. The construction for terminal objects is similar to that for initial objects. The unique diagram is as follows:

• Twisted Arrow Category.
• Isomorphisms. An isomorphism in the category $$Tar(\cat{C})$$ is a pair of morphisms (described by the following diagrams):

These two morphisms must satisfy the following equations:

Thus, an isomorphism in $$Tar(\cat{C})$$ corresponds to a pair of isomorphisms in $$\cat{C}$$ and $$\cat{C}^{\mathsf{op}}$$ respectively.
• Initial Objects. We cannot use the trick with identities to get an initial object in the twisted arrow category, but if $$\cat{C}$$ has an initial object, then so does $$Tar(\cat{C})$$. The morphisms of $$Tar(\cat{C})$$ are pairs of morphisms from $$\cat{C}$$ and $$\cat{C}^{\mathsf{op}}$$ respectively, and if $$\cat{C}$$ has an initial object, then $$\cat{C}^{\mathsf{op}}$$ has a terminal object. This lets us define the initial object of $$Tar(\cat{C})$$ as the identity arrow on the initial object of $$\cat{C}$$. The following diagram defines the unique arrow out of the initial object:

• Terminal Objects. We can use our identity trick to get terminal objects in any twisted arrow category. The following defines the terminal arrow:

• Slice Category.
• Isomorphisms. Isomorphisms in slice categories were defined in the post $$1.1.\text{ISO}^\diamond$$.
• Initial Objects. Similarly to the twisted arrow category, initial objects do not exist in an arbitrary slice category, but if $$\cat{C}$$ has an initial object, then $$\cat{C}/A$$ does as well. The unique arrow is defined as follows:

This diagram commutes, because of the uniqueness of $$\diamond_A$$.
• Terminal Objects. Terminal objects do exist, and correspond to the identity arrow on $$A$$. The unique arrow is defined as follows:

## TTT: Exercise 1.1.$$\text{ISO}^\diamond$$


The category of objects of $$\cat{C}$$ over $$A$$ (the slice category $$\cat{C}/A$$) has as objects the arrows of $$\cat{C}$$ into $$A$$, and an arrow from $$f : B \to A$$ to $$g : C \to A$$ is an arrow $$h : B \to C$$ making the following diagram commute:

• a. Show that if $$h : f \to g$$ is an isomorphism in $$\cat{C}/A$$, then $$h$$ is an isomorphism in $$\cat{C}$$. Note that this question is an iff in the book, but in light of part b I changed it to only an if, because the only-if direction would mean that whenever we have an isomorphism in $$\cat{C}$$, any two slices it connects would be isomorphic, and this is not the case.

Suppose $$h : f \to g$$ is an isomorphism. Then there exists another morphism $$h^{-1} : g \to f$$ such that all of the following are true:

Both of the previous diagrams imply that $$h$$ and $$h^{-1}$$ are mutual inverses in $$\cat{C}$$, and hence, define an isomorphism.
• b. Give an example of objects $$A$$, $$B$$, and $$C$$ of some category $$\cat{C}$$ and arrows $$f : B \to A$$ and $$g : C \to A$$ such that $$B \cong C$$ in $$\cat{C}$$, but $$f$$ and $$g$$ are not isomorphic in $$\cat{C}/A$$.

Set $$A = \mathbb{N}$$, $$B = \mathbb{N} \times \top$$, and $$C = \mathbb{N}$$, then take $$f = \pi_1 : A \times \top \to A$$ and $$g : A \to A$$ defined by $$g(x) = 2x$$. In this case $$B \cong C$$, but $$f$$ and $$g$$ are not isomorphic in $$\cat{C}/A$$: a morphism $$h : f \to g$$ would have to satisfy $$2 \cdot h(x, \star) = x$$ for every $$x$$, which is impossible when $$x$$ is odd.
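The parity obstruction in this counterexample can be checked mechanically; below is a small Python sketch in $$\mathsf{Set}$$, with names of my own choosing:

```python
# Counterexample check: f = pi1 : N x {*} -> N and g(x) = 2*x : N -> N.
# A slice morphism h : f -> g must satisfy g(h(b)) = f(b), i.e.
# 2 * h((x, star)) = x, which fails whenever x is odd.
star = ()

def f(b):          # b = (x, star); f is the first projection
    return b[0]

def g(x):          # multiplication by two
    return 2 * x

# For x = 3 there is no candidate value y with g(y) = f((3, star)) = 3.
candidates = [y for y in range(100) if g(y) == f((3, star))]
print(candidates)  # []: no y satisfies 2*y == 3
```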

## TTT: Exercise 1.1.$$\text{CCON}^\diamond$$

Starting now I am going to be doing nearly every exercise in Barr and Wells’ book “Toposes, Triples, and Theories.” I might skip some here and there. I am doing this to start closing the gaps in my category theory knowledge. It should be fun. Future posts will not have much commentary, but will state the problem and my solution. Perhaps I will use these again in the future, or someone out there in the cloud might find them helpful. It gives me an excuse to do them, and write them up completely.

Verify the following constructions are categories.

• a. For any category, $$\cat{C}$$, the arrow category denoted, $$Ar(\cat{C})$$, has as objects the arrows of $$\cat{C}$$ and an arrow from $$f : A \to C$$ to $$g : B \to D$$ is a pair of arrows $$h : A \to B$$ and $$k : C \to D$$ making the following diagram commute:

Identities. Suppose $$f : A \to B$$ is an arrow of $$\cat{C}$$. We must define two arrows $$h : A \to A$$ and $$k : B \to B$$ making the following diagram commute:

Choose $$h = \id_A$$ and $$k = \id_B$$. Clearly, the previous diagram commutes.

Composition. Suppose we have the diagrams:

It is easy to show that the previous two diagrams imply that the following diagram will commute:

We prove this diagram commutes by the following equational reasoning:

The previous diagram corresponds to the composition of the two arrows $$(h_1,k_1) : f \to g$$ and $$(h_2,k_2) : g \to h$$. We can see that this turns out to be $$(h_1;h_2,k_1;k_2)$$. Thus composition in $$Ar(\cat{C})$$ is inherited from composition in $$\cat{C}$$. This then implies that composition in $$Ar(\cat{C})$$ respects identities as well as associativity.
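As a sanity check, the commuting-square condition and componentwise composition can be tested on small finite functions; this Python sketch uses example data of my own choosing, not anything from the book:

```python
# A finite sketch of Ar(C) with C taken to be small sets and functions.
# A morphism (h, k) : f -> g must satisfy k . f = g . h, and composition
# is componentwise: (h1,k1);(h2,k2) = (h1;h2, k1;k2).

def seq(p, q):
    # diagrammatic composition: (p ; q)(x) = q(p(x))
    return lambda x: q(p(x))

def is_square(f, g, h, k, dom):
    # does the square for (h, k) : f -> g commute on dom?
    return all(k(f(x)) == g(h(x)) for x in dom)

A = [0, 1, 2]
f = lambda x: x % 2                 # object f : A -> C
g = lambda x: x + 1                 # object g : B -> D
e = lambda x: x                     # object e : E -> E

h1, k1 = (lambda x: x % 2), (lambda x: x % 2 + 1)   # (h1,k1) : f -> g
h2, k2 = (lambda x: x + 1), (lambda x: x)           # (h2,k2) : g -> e

sq1 = is_square(f, g, h1, k1, A)
sq2 = is_square(g, e, h2, k2, [0, 1])
sq3 = is_square(f, e, seq(h1, h2), seq(k1, k2), A)
print(sq1, sq2, sq3)  # True True True: the composite square commutes
```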
• b. For any category, $$\cat{C}$$, the twisted arrow category denoted, $$Tar(\cat{C})$$, has as objects the arrows of $$\cat{C}$$ and an arrow from $$f : A \to C$$ to $$g : B \to D$$ is a pair of arrows $$h : A \to B$$ and $$k : D \to C$$ making the following diagram commute:

Identities. Defined as they were for $$Ar(\cat{C})$$.

Composition. Suppose we have the diagrams:

We must show that the previous diagrams imply the following diagram commutes:

We prove this diagram commutes by the following equational reasoning:

Again, associativity of arrows will come from associativity from $$\cat{C}$$ and composition will respect identities because $$\cat{C}$$ respects identities.

## The Collapse: When Matter Meets Antimatter

One characteristic which sets category theory apart from other theories is that for any concept there exists a dual. This makes category theory an obvious choice when setting out on an investigation of a dual concept. In this post we take a look at the less well-known dual of implication which we call coimplication, but is also known as subtraction or exclusion.

Implication corresponds to the right adjoint of the cartesian product. That is, a cartesian category $$\cat{C}$$ models implication if the functor $$A \times (-) : \cat{C} \to \cat{C}$$ has a right adjoint, which we denote $$A \Rightarrow B$$ for objects $$A$$ and $$B$$. By the functor $$A \times (-) : \cat{C} \to \cat{C}$$ having a right adjoint we mean that there is a natural bijection $$\mathsf{cur} : \homs{A \times B}{C} \cong \homs{A}{B \Rightarrow C}$$. This gives rise to the necessary tools for working with implication: currying, which is given by the bijection itself, and evaluation $$\mathsf{eval} : (A \Rightarrow B) \times A \to B$$. As an example, using these morphisms along with the associator, $$\alpha : (A \times B) \times C \to A \times (B \times C)$$, and the symmetry, $$\beta : A \times B \to B \times A$$, of the cartesian product we can define composition internally: $$\mathsf{cur}(\alpha;(\mathsf{id} \times \beta);\alpha^{-1};(\mathsf{eval} \times \mathsf{id});\beta;\mathsf{eval}) : (A \Rightarrow B) \times (B \Rightarrow C) \to (A \Rightarrow C)$$. A category with products, a terminal object, and the right adjoint to the product functor is called a cartesian closed category (CCC), and the right adjoint to the product is often called the internal hom, because it can be seen as a bifunctor $$(-) \Rightarrow (-) : \cat{C}^{\mathsf{op}} \times \cat{C} \to \cat{C}$$ which is an internalization of the homspace functor.
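In $$\mathsf{Set}$$ (and in an ordinary programming language) $$\mathsf{cur}$$ and $$\mathsf{eval}$$ are just currying and application, and the composite morphism above is function composition. A small illustrative Python sketch, with names of my own choosing:

```python
# cur and eval in Set, modeled on functions; the internal composition
# morphism (A => B) x (B => C) -> (A => C) is ordinary composition.

def cur(f):                 # cur : Hom(A x B, C) -> Hom(A, B => C)
    return lambda a: lambda b: f((a, b))

def ev(pair):               # eval : (A => B) x A -> B
    g, a = pair
    return g(a)

def compose_internal(pair):  # (A => B) x (B => C) -> (A => C)
    ab, bc = pair
    return lambda a: bc(ab(a))

inc = lambda n: n + 1       # an element of A => B with A = B = int
dbl = lambda n: 2 * n       # an element of B => C

r1 = ev((cur(lambda p: p[0] + p[1])(3), 4))   # eval after currying
r2 = compose_internal((inc, dbl))(5)          # (5 + 1) * 2
print(r1, r2)  # 7 12
```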

Now coimplication corresponds to the left adjoint of the coproduct functor. This is dual to implication. Thus, we have a bijection $$\mathsf{cocur} : \homs{B}{C + A} \cong \homs{A - B}{C}$$. We also have the coevaluator $$\mathsf{coeval} : B \to (A - B) + A$$. The left adjoint to the coproduct functor is called the cointernal hom, and a category with coproducts, an initial object, and a cointernal hom is called a cocartesian coclosed category (coCCC).

The question this post gives an answer to is "does it make sense to have a category that is both cartesian closed and cocartesian coclosed?" The answer happens to be negative and is due to Tristan Crolard. The main result that we will need is due to André Joyal. We will show that any category which is both cartesian closed and cocartesian coclosed is degenerate in the sense that it is a preorder, that is, for any two objects $$A$$ and $$B$$ the homspace $$\homs{A}{B}$$ has at most one element. We call a category that is both a CCC and a coCCC a bi-cartesian closed category (biCCC).

First, we need some basic results.

In any CCC $$\cat{C}$$, if $$\perp$$ is an initial object in $$\cat{C}$$ and $$\homs{A}{\perp}$$ is non-empty, then $$A \cong A \times \perp$$.
This follows easily from the universal mapping property for products. ▌
In any CCC $$\cat{C}$$, if $$\perp$$ is an initial object in $$\cat{C}$$, then so is $$\perp \times A$$ for any object $$A$$ of $$\cat{C}$$.
We know that the universal morphism for the initial object is unique, and hence, the homspace $$\homs{\perp}{A \Rightarrow B}$$ has exactly one element for any object $$B$$ of $$\cat{C}$$. Then using the right adjoint to the product functor we know that $$\homs{\perp}{A \Rightarrow B} \cong \homs{\perp \times A}{B}$$, and hence, there is only one arrow from $$\perp \times A$$ to $$B$$. ▌
In any CCC $$\cat{C}$$, if $$\top$$ is a terminal object in $$\cat{C}$$, then for any object $$A$$, $$\top \times A \cong A \cong A \times \top$$.
This follows easily from the universal mapping property for products. ▌

Observe the following.

A biCCC is a CCC whose dual is also a CCC.
Follows from a straightforward use of duality. ▌

Thus, the three basic results we gave apply to any biCCC.

We have now arrived at the main result by Joyal.

In any CCC $$\cat{C}$$, if $$\perp$$ is an initial object in $$\cat{C}$$ and $$\homs{A}{\perp}$$ is non-empty, then $$A$$ is an initial object in $$\cat{C}$$.
Suppose $$\cat{C}$$ is a CCC such that $$\perp$$ is an initial object in $$\cat{C}$$, and $$A$$ is an arbitrary object in $$\cat{C}$$. Furthermore, suppose $$\homs{A}{\perp}$$ is non-empty. By the first basic lemma above we know that $$A \cong A \times \perp$$, and by the second $$A \times \perp$$ is initial, thus $$A$$ is initial. ▌

Notice that the proof of the previous lemma hinges on the fact that the universal morphism is unique. If we took the initial object to be weak instead, then the previous result would not hold.

Finally, we have the main result of this post due to Crolard.

If $$\cat{C}$$ is a biCCC, then for any two objects $$A$$ and $$B$$ of $$\cat{C}$$, $$\homs{A}{B}$$ has at most one element.
Suppose $$\cat{C}$$ is a biCCC, and $$A$$ and $$B$$ are objects of $$\cat{C}$$. We know from above that $$B \cong \top \times B$$. Thus, we know that $$\homs{B}{A} \cong \homs{\top \times B}{A}$$. Using the internal hom adjunction we know the following:

$$\homs{B}{A} \cong \homs{\top \times B}{A} \cong \homs{\top}{B \Rightarrow A}$$

Taking the dual of this chain of isomorphisms yields:

$$\homs{A}{B} \cong \homs{A}{\perp + B} \cong \homs{B - A}{\perp}$$

Therefore, by Joyal’s theorem above $$\homs{A}{B}$$ has at most one element. ▌

Again, the previous result hinges on Joyal’s result, but can be prevented by using weak initial objects.

So, what are the repercussions of all of this? We do not have a categorical proof theory for languages containing both implication and coimplication, for example subtractive logic, L, and dualized intuitionistic logic. This is not desirable.

Are there any ways around this? Well, taking weak initial objects is one way. Interestingly enough, it is the structural rules that are causing havoc here. If instead of cartesian closed and cocartesian coclosed categories we use monoidal closed and monoidal coclosed categories, we do not have any issues. The internal hom then would be a right adjoint to the tensor product, and the cointernal hom would be the left adjoint of par. So an option is to work with linear logic instead. In fact, Lambek used the cointernal hom in his work on categorical grammars. Another option is to split the atom and keep the cartesian closed part separate from the cocartesian coclosed part. This is the direction I am working on right now. I plan to talk more about this in a future post.

## Categorical Note: Natural Numbers as a Coequalizer

There is a nice definition of the natural numbers monoid as a coequalizer. Most people know that the natural numbers define a monoid $$(\mathbf{N},0,+)$$ where $$0$$ is the identity element, and $$+ : \mathbf{N} \times \mathbf{N} \to \mathbf{N}$$ is the binary operation. We will show that this monoid arises as a coequalizer in the category $$\mathsf{Cat}$$ of categories and functors.

First, define the discrete category, $$\mathbf{1}$$, with one object we call $$0$$, and the category $$\mathbf{2}$$ with two objects we call $$0$$ and $$1$$ and one non-identity morphism we call $$g : 0 \to 1$$. There are exactly two parallel functors $$0,1 \in \mathsf{Hom}(\mathbf{1},\mathbf{2})$$. We have given these two functors the same names as the objects of $$\mathbf{2}$$ because we can view them as generalized elements, and they correspond to the objects.

Now we must find the coequalizer, $$\mathsf{CoEq}(0,1)$$, making the following diagram commute:

The previous diagram defining a coequalizer implies that $$0;h = 1;h$$, and thus, $$0$$ and $$1$$ of $$\mathbf{2}$$ are collapsed into the same object in $$\mathsf{CoEq}(0,1)$$. This then implies that $$h(g) : h(0) \to h(1)$$ can be composed with itself. Define $$\mathsf{CoEq}(0,1)$$ to be the category with one object and one generating non-identity morphism, such that $$h(0) = h(1)$$ and $$h(g)$$ is the generating morphism. This morphism composes with itself: $$h(g),\,h(g);h(g),\,h(g);h(g);h(g),\ldots$$. We do not constrain these composites in any way, because we need each composite to be a unique natural number.

We must show that $$\mathsf{CoEq}(0,1)$$ is indeed a coequalizer of $$0$$ and $$1$$. We already know that $$0;h = 1;h$$ by definition of $$h$$. Suppose $$k : \mathbf{2} \to N$$ is a functor such that $$0;k = 1;k$$. Then we must find a unique morphism $$l : \mathsf{CoEq}(0,1) \to N$$ such that $$k = h;l$$. Because we know $$0;k = 1;k$$ we know that $$0$$ and $$1$$ are collapsed in $$N$$ at some object $$x$$ of $$N$$. Define $$l$$ to take the object of $$\mathsf{CoEq}(0,1)$$ to $$x$$ and the generating morphism of $$\mathsf{CoEq}(0,1)$$ to $$k(g)$$. At this point we can see that $$k = h;l$$. As for uniqueness, $$h$$ is an epimorphism, and thus $$l$$ is unique.

It is now easy to show that $$\mathsf{CoEq}(0,1)$$ is the natural number monoid. Take the identity morphism as the natural number $$0$$, the only non-identity-morphism as the natural number $$1$$, and composition as addition. Clearly, the axioms of a monoid are satisfied by the fact that $$\mathsf{CoEq}(0,1)$$ is a category.
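Encoding a morphism of $$\mathsf{CoEq}(0,1)$$ by the number of copies of $$h(g)$$ it contains, composition becomes addition; a quick Python check of the monoid laws on a few samples (my own encoding):

```python
# The coequalizer category has one object and morphisms id, g, g;g, g;g;g, ...
# Encoding the morphism g^m by the natural number m, composition is addition
# and the identity is 0 -- exactly the monoid (N, 0, +).

def comp(m, n):   # composition of g^m and g^n is g^(m+n)
    return m + n

ident = 0

samples = range(6)
assoc = all(comp(a, comp(b, c)) == comp(comp(a, b), c)
            for a in samples for b in samples for c in samples)
unit = all(comp(a, ident) == a == comp(ident, a) for a in samples)
print(assoc, unit)  # True True
```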

I am reading the paper Computability and Complexity of Categorical Structures, and the author takes as the definition of the natural numbers the coequalizer defined above. This was the first time I had seen this, and so naturally I had to figure out the details. I got most of them, but then I also saw this email from Vaughan Pratt that filled in the gaps.

## Chu Spaces, Dialectica Spaces, and n-ary Relations

Dialectica spaces were first developed by Valeria de Paiva in her thesis. They happen to be a model of linear logic; in particular, they are a model of full intuitionistic linear logic. This fact came as quite a surprise to Valeria and her Ph.D. advisor Martin Hyland. The project they were working on was to give a categorical interpretation of Gödel's dialectica interpretation. Naturally, they thought it would give rise to a cartesian closed category, but it turned out to be a model of linear logic!

There is a close cousin to dialectica spaces called Chu spaces. These spaces were developed by Michael Barr and his master's student Po-Hsiang Chu as a model of linear logic. They have also been widely studied by Vaughan Pratt and his group at Stanford. One of the most interesting computational applications of Chu spaces is in concurrency theory.

This post can be thought of as some of my notes on understanding the relationship between dialectica spaces and Chu spaces. The results in this post are mostly due to Valeria de Paiva and Vaughan Pratt; citations will be given below. The main objective of this post is to highlight the differences between the two types of spaces, and to show that Chu spaces can be embedded into dialectica spaces using Valeria’s work. Then we show that all $$n$$-ary relational structures can be embedded into dialectica spaces using Pratt’s work. Finally, we conclude with an application.

Consider the following definitions.

A dialectica space is a function $$r : A \times X \to \Sigma$$ such that $$\Sigma$$ is a lineale. We call $$\Sigma$$ the alphabet and $$X$$ the set of states of the dialectica space. Given a dialectica space we can define a new function that returns the columns of the space given a state by $$r_c = \mathsf{cur}(\beta;r) : X \to (A \to \Sigma)$$ where $$\beta$$ is the canonical isomorphism $$\beta : X \times A \to A \times X$$ and $$\mathsf{cur}$$ is the canonical currying isomorphism. When $$r_c$$ is injective we call $$r$$ extensional.
A Chu space is a function $$r : A \times X \to \Sigma$$. We call $$\Sigma$$ the alphabet and $$X$$ the set of states of the Chu space. Chu spaces can be extensional which is defined exactly how it is defined for dialectica spaces.

The previous definitions imply that we can think of dialectica and Chu spaces as matrices where the set of values in their cells are from $$\Sigma$$. Extensional spaces are often easier to work with because the states of the space are unique.
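The matrix view is easy to make concrete: a space is a function of two arguments, and extensionality says its columns are pairwise distinct. A small illustrative Python sketch with data of my own choosing:

```python
# A Chu/dialectica space r : A x X -> Sigma viewed as a matrix over
# Sigma = {0, 1}; extensionality means the column map x |-> r(-, x)
# is injective, i.e. the columns are pairwise distinct.

A = ['a0', 'a1']
X = ['x0', 'x1', 'x2']

def r(a, x):  # an example space over Sigma = {0, 1}
    ones = {('a0', 'x0'), ('a1', 'x1'), ('a0', 'x2'), ('a1', 'x2')}
    return 1 if (a, x) in ones else 0

def column(x):
    return tuple(r(a, x) for a in A)

cols = [column(x) for x in X]
extensional = len(set(cols)) == len(cols)
print(cols)         # [(1, 0), (0, 1), (1, 1)]
print(extensional)  # True: all columns distinct, so the space is extensional
```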

There is only one difference between the two previous definitions, which is the requirement that $$\Sigma$$ is a lineale in the definition of dialectica spaces. What is a lineale, and why do we need it? Consider the first question.

A monoidal poset is a poset $$(L,\leq)$$ with a given compatible monoidal structure $$(L,\circ,e)$$. That is, $$L$$ is a set with a reflexive, antisymmetric, and transitive relation $$\leq\,\subseteq L \times L$$ together with a monoidal multiplication $$\circ : L \times L \to L$$ and distinguished element $$e \in L$$ satisfying the following:

• Associativity: $$a \circ (b \circ c) = (a \circ b) \circ c$$
• Symmetry: $$a \circ b = b \circ a$$
• Identity: $$a \circ e = e \circ a = a$$
• Compatibility: if $$a \leq b$$ then $$a \circ c \leq b \circ c$$, for all $$c \in L$$

A lineale is a monoidal poset $$(L, \leq, \circ, e)$$ such that for any $$a,b \in L$$, there exists a largest $$x \in L$$ where $$a \circ x \leq b$$. This element is denoted by $$a \Rightarrow b$$ and is called the relative pseudocomplement of $$a$$ wrt $$b$$. This definition implies that the following hold:

• Application: $$a \circ (a \Rightarrow b) \leq b$$
• Currying: $$a \circ c \leq b$$ iff $$c \leq a \Rightarrow b$$
• If $$a \leq b$$, then for any $$c \in L$$, $$c \Rightarrow a \leq c \Rightarrow b$$ and $$b \Rightarrow c \leq a \Rightarrow c$$

Thus, lineales can be seen as a trivialization of a symmetric monoidal closed category.
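For a finite carrier the relative pseudocomplement can be computed by brute force, and the laws above checked exhaustively. A Python sketch over the two-element Boolean lineale, with an encoding of my own choosing:

```python
# The Boolean lineale (2, <=, and, 1): compute a => b as the largest x
# with a o x <= b, then verify the application and currying laws.

L = [0, 1]
leq = lambda a, b: a <= b
circ = lambda a, b: a & b           # monoidal multiplication: conjunction

def impl(a, b):                     # largest x in L with a o x <= b
    return max(x for x in L if leq(circ(a, x), b))

application = all(leq(circ(a, impl(a, b)), b) for a in L for b in L)
currying = all(leq(circ(a, c), b) == leq(c, impl(a, b))
               for a in L for b in L for c in L)
table = [impl(a, b) for a in L for b in L]
print(application, currying)  # True True
print(table)                  # [1, 1, 0, 1]: exactly Boolean implication
```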

Lineales are needed to define transformations between dialectica spaces. We define these next along with transformations between Chu spaces. These are what really set the two types of spaces apart.

Suppose $$r : A \times X \to \Sigma$$ and $$s : B \times Y \to \Sigma$$ are two dialectica spaces over $$\Sigma$$. Then a dialectica transformation is a pair of functions $$(f,F) : r \to s$$ where $$f : A \to B$$, $$F : Y \to X$$, and the following constraint holds:
$$\forall a \in A.\forall y \in Y.r(a,F(y)) \leq_\Sigma s(f(a),y)$$.
We call the previous constraint the semi-adjointness condition, and this condition is the reason why we need the alphabet to be a lineale.
Suppose $$r : A \times X \to \Sigma$$ and $$s : B \times Y \to \Sigma$$ are two Chu spaces over $$\Sigma$$. Then a Chu transformation is a pair of functions $$(f,F) : r \to s$$ where $$f : A \to B$$, $$F : Y \to X$$, and the following constraint holds:
$$\forall a \in A.\forall y \in Y.r(a,F(y)) = s(f(a),y)$$.
We call the previous constraint the adjointness condition.
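The gap between the two conditions is easy to exhibit on a tiny example: the pair below satisfies semi-adjointness but not adjointness. This Python sketch uses spaces of my own devising:

```python
# A dialectica transformation need only satisfy r(a, F(y)) <= s(f(a), y);
# a Chu transformation demands equality. The identity pair (f, F) between
# the spaces below is a dialectica map but not a Chu map.

A, X = [0, 1], [0, 1]
B, Y = [0, 1], [0, 1]

r = lambda a, x: a & x       # space r : A x X -> 2 (conjunction matrix)
s = lambda b, y: b | y       # space s : B x Y -> 2 (disjunction matrix)

f = lambda a: a              # f : A -> B
F = lambda y: y              # F : Y -> X

semi = all(r(a, F(y)) <= s(f(a), y) for a in A for y in Y)
chu = all(r(a, F(y)) == s(f(a), y) for a in A for y in Y)
print(semi, chu)  # True False: semi-adjoint but not adjoint
```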

Taking dialectica spaces over $$\Sigma$$ as objects and dialectica transformations as morphisms we obtain a category called $$\mathsf{Dial}(\Sigma)$$, and doing the same for Chu spaces we obtain a category called $$\mathsf{Chu}(\Sigma)$$. We have arrived at the following result.

Suppose $$\Sigma$$ is a lineale. Then there is an embedding $$E : \mathsf{Chu}(\Sigma) \to \mathsf{Dial}(\Sigma)$$.
Take $$E$$ to be the identity functor. ◼

The results up to now are not new. They were first investigated by Valeria de Paiva in her TAC paper. She makes a lot more interesting connections between the two types of spaces. The interested reader should see her paper for more fun! We now use her result to show that every $$n$$-ary relation can be represented by a dialectica space.

### $$n$$-ary Relations in Dialectica Spaces

Vaughan Pratt showed that any $$n$$-ary relation can be represented by a Chu space over $$\mathsf{2}^n = \{0,1\}^n$$ for some $$n \in \mathbb{N}$$. This results in Chu spaces being able to represent a large number of mathematical structures. We would like this same result for dialectica spaces, and we can obtain it by using the embedding discussed above. However, we must show that $$\mathsf{2}^n$$ is a lineale in order to obtain our result. We do this now.

First, we show that we can construct a category of lineales by defining morphisms between lineales.

Suppose $$(L_1,\leq_1, \circ_1, e_1,\Rightarrow_1)$$ and $$(L_2,\leq_2, \circ_2, e_2,\Rightarrow_2)$$ are two lineales. Then a lineale map between $$L_1$$ and $$L_2$$ is a function $$f : L_1 \to L_2$$ satisfying the following properties:

• If $$a \leq_1 b$$, then $$f(a) \leq_2 f(b)$$
• $$f(a \circ_1 b) = f(a) \circ_2 f(b)$$
• $$f(e_1) = e_2$$
• $$f(a \Rightarrow_1 b) = f(a) \Rightarrow_2 f(b)$$

It is easy to see that we can now form a category of lineales where composition and identities are set theoretic. We denote this category by $$\mathsf{Lin}$$. Next we show that this category has products.

$$\mathsf{Lin}$$ has all finite products.
Suppose $$(L_1,\leq_1, \circ_1, e_1,\Rightarrow_1)$$ and $$(L_2,\leq_2, \circ_2, e_2,\Rightarrow_2)$$ are two lineales. Then we can construct the monoidal poset $$(L_1 \times L_2,\leq_\times, \circ_1 \times \circ_2, (e_1,e_2))$$ where $$\leq_\times$$ is the componentwise combination of $$\leq_1$$ and $$\leq_2$$. Now we must prove that for any $$(a,c),(b,d) \in L_1 \times L_2$$, there exists a largest $$(x_1,x_2) \in L_1 \times L_2$$ where $$(a,c)\,(\circ_1 \times \circ_2)\,(x_1,x_2) \leq_\times (b,d)$$. Choose $$a \Rightarrow_1 b$$ for $$x_1$$ and $$c \Rightarrow_2 d$$ for $$x_2$$. Certainly, each of these is largest within its respective component, and hence, their pairing is largest in the componentwise order. Therefore, $$(L_1 \times L_2,\leq_\times, \circ_1 \times \circ_2, (e_1,e_2),\Rightarrow_1 \times \Rightarrow_2)$$ is a lineale.

Clearly, there are two lineale maps $$\pi_1 : (L_1 \times L_2,\leq_\times, \circ_1 \times \circ_2, (e_1,e_2),\Rightarrow_1 \times \Rightarrow_2) \to (L_1,\leq_1, \circ_1, e_1,\Rightarrow_1)$$ and $$\pi_2 : (L_1 \times L_2,\leq_\times, \circ_1 \times \circ_2, (e_1,e_2),\Rightarrow_1 \times \Rightarrow_2) \to (L_2,\leq_2, \circ_2, e_2,\Rightarrow_2)$$ defined by the set theoretic projections. Finally, we must show that the following diagram commutes:

where $$(L_3,\leq_3, \circ_3, e_3,\Rightarrow_3)$$ is a lineale. Suppose $$f : L_3 \to L_1$$ and $$g : L_3 \to L_2$$ are two lineale maps. Then choose the set theoretic function $$\langle f,g \rangle : L_3 \to L_1 \times L_2$$ for $$h$$. Clearly, in sets the previous diagram commutes, but we must verify that $$\langle f,g \rangle$$ is a lineale map. We do this verification as follows:

• Suppose $$a,b \in L_3$$ and that $$a \leq_3 b$$. Then $$\langle f,g \rangle(a) = (f(a),g(a))$$ and $$\langle f,g \rangle(b) = (f(b),g(b))$$. By assumption $$f$$ and $$g$$ are lineale maps, and thus, $$f(a) \leq_1 f(b)$$ and $$g(a) \leq_2 g(b)$$ which implies that $$(f(a),g(a)) \leq_\times (f(b),g(b))$$.
• Suppose $$a,b \in L_3$$. Then
$$\begin{array}{llllll} \langle f,g \rangle(a \circ_3 b) & = & (f(a \circ_3 b),g(a \circ_3 b))\\ & = & (f(a) \circ_1 f(b),g(a) \circ_2 g(b))\\ & = & (f(a),g(a))\,(\circ_1 \times \circ_2)\,(f(b),g(b))\\ & = & \langle f,g \rangle(a)\,(\circ_1 \times \circ_2)\,\langle f,g \rangle(b).\\ \end{array}$$
• First, $$\langle f,g \rangle(e_3) = (f(e_3),g(e_3))$$, and because $$f$$ and $$g$$ are lineale maps $$(f(e_3),g(e_3)) = (e_1,e_2)$$.
• Suppose $$a,b \in L_3$$. Then
$$\begin{array}{llllll} \langle f,g \rangle(a \Rightarrow_3 b) & = & (f(a \Rightarrow_3 b),g(a \Rightarrow_3 b))\\ & = & (f(a) \Rightarrow_1 f(b),g(a) \Rightarrow_2 g(b))\\ & = & (f(a),g(a))\,(\Rightarrow_1 \times \Rightarrow_2)\,(f(b),g(b))\\ & = & \langle f,g \rangle(a)\,(\Rightarrow_1 \times \Rightarrow_2)\,\langle f,g \rangle(b).\\ \end{array}$$

At this point all that is left to do is show that there is a terminal object. Let $$L_T$$ be any one element set where $$\star \in L_T$$ is its distinguished object. Then we can construct the lineale $$(L_T, \leq_T, \circ_T, \star, \Rightarrow_T)$$ as follows:

• Set $$\leq_T = \{(\star,\star)\}$$.
• Set $$\star \circ_T \star = \star$$.
• Set $$\star \Rightarrow_T \star = \star$$.

It is trivial to see that $$L_T$$ is a lineale given the previous definitions. Suppose $$(L, \leq, \circ, e, \Rightarrow)$$ is a lineale. Then the lineale map $$\diamond : L \to L_T$$ is simply the terminal map of sets. The verification that it is a lineale map is straightforward. ◼

If $$L$$ is a lineale, then $$L^n = \underbrace{L \times \cdots \times L}_n$$ is a lineale for any $$n \in \mathbb{N}$$.

To conclude that $$\mathsf{2}^n$$ is a lineale it is enough to show that $$(\mathsf{2} = \{0,1\},\leq_B\,=\{(0,0),(0,1),(1,1)\},\land,1,\Longrightarrow)$$, where $$\land$$ and $$\Longrightarrow$$ are binary conjunction and implication respectively, is a lineale. It is easy to see via truth tables that conjunction is symmetric, associative, and $$1$$ is its identity. Thus, $$\mathsf{2}$$ is a monoidal poset. Now consider the following truth table:

$$\begin{array}{ccccc} a & x & a \land x & b\\ 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 1\\ \end{array}$$

This truth table proves that for any $$a,b \in \mathsf{2}$$, there exists a greatest $$x \in \mathsf{2}$$ such that $$a \land x \leq_B b$$. Furthermore, notice that the columns for $$a$$, $$b$$, and $$x$$ defines binary implication where $$x = a \Longrightarrow b$$. Therefore, $$(\mathsf{2},\leq_B,\land,1,\Longrightarrow)$$ is a lineale. By the fact that $$\mathsf{Lin}$$ is cartesian we may conclude that $$\mathsf{2}^n$$ is a lineale.
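A quick exhaustive check, with an encoding of my own choosing, that the componentwise implication on $$\mathsf{2}^2$$ really is the largest solution, as the product construction requires:

```python
# The product lineale 2^2: componentwise order, componentwise conjunction,
# and componentwise Boolean implication. Check that impl(p, q) both solves
# p o x <= q and dominates every other solution.

from itertools import product

L2 = list(product([0, 1], repeat=2))
leq = lambda p, q: p[0] <= q[0] and p[1] <= q[1]   # componentwise order
circ = lambda p, q: (p[0] & q[0], p[1] & q[1])     # componentwise conjunction

def impl(p, q):  # componentwise Boolean implication: a => b is (not a) or b
    return tuple((1 - a) | b for a, b in zip(p, q))

ok = all(leq(circ(p, impl(p, q)), q) and
         all(leq(x, impl(p, q)) for x in L2 if leq(circ(p, x), q))
         for p in L2 for q in L2)
print(ok)  # True: impl(p, q) is the largest solution in 2^2
```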

At this point we simply replay Vaughan Pratt’s result. First, we define $$n$$-ary relations.

An $$n$$-ary relational structure $$(X,\rho)$$ consists of a set $$X$$ and an $$n$$-ary relation $$\rho \subseteq X^n$$ on $$X$$. A homomorphism $$f : (X,\rho) \to (Y,\sigma)$$ between $$n$$-ary relational structures is a function $$f : X \to Y$$ such that $$f(\rho) = \{(f(a_1),\cdots,f(a_n)) \mid \overrightarrow{a_i} = (a_1,\cdots,a_n) \in \rho \} \subseteq \sigma$$. Taking the relational structures as objects and their homomorphisms as morphisms defines a category called $$\mathsf{Str}_n$$.

Next we define an embedding $$E’ : \mathsf{Str}_n \to \mathsf{Dial}(\mathsf{2}^n)$$.

We define the object part of $$E' : \mathsf{Str}_n \to \mathsf{Dial}(\mathsf{2}^n)$$ as taking the $$n$$-ary relational structure $$(A,\rho)$$ to the dialectica space $$r : A \times X \to \mathsf{2}^n$$ defined as follows. Take the set of states, $$X$$, to be the subset of $$n$$-tuples $$x \in \mathcal{P}(A)^n$$ for which $$\Pi_i x_i \subseteq \overline{\rho} = A^n - \rho$$. Finally, define $$r : A \times X \to \mathsf{2}^n$$ by $$r(a,x)_i = 1$$ if $$a \in x_i$$, and $$0$$ otherwise.

The object part as defined above is injective.

For all $$\overrightarrow{a_i} \in A^n$$, $$\overrightarrow{a_i} \in \rho$$ iff for every state $$x \in X$$, there exists an $$i \leq n$$ such that $$r(a_i,x)_i = 0$$.

It also turns out that $$r : A \times X \to \mathsf{2}^n$$ is extensional. See lemma 12 of Pratt’s paper.
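The encoding and the recovery lemma can be tested on a tiny binary relation; the following Python sketch is my own rendering of the construction, not Pratt's code:

```python
# Pratt-style encoding of a binary relation rho on A as a space over 2^2:
# states are pairs of subsets whose product avoids rho, and r(a, x)_i = 1
# iff a is in the i-th component. Then t is in rho iff every state "kills"
# t in some coordinate.

from itertools import product

A = [0, 1]
n = 2
rho = {(0, 1), (1, 0)}                        # the "not equal" relation
comp = set(product(A, repeat=n)) - rho        # complement of rho

subsets = [frozenset(s) for s in ([], [0], [1], [0, 1])]
X = [x for x in product(subsets, repeat=n)
     if set(product(*[list(s) for s in x])) <= comp]

def r(a, x):                                  # r(a, x)_i = 1 iff a in x_i
    return tuple(1 if a in s else 0 for s in x)

def in_rho(t):                                # recovery via the lemma
    return all(any(r(t[i], x)[i] == 0 for i in range(n)) for x in X)

recovered = all(in_rho(t) == (t in rho) for t in product(A, repeat=n))
print(recovered)  # True: the relation is recovered from the space
```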

We still need to define the morphism part of $$E’$$.

We define the morphism part of $$E' : \mathsf{Str}_n \to \mathsf{Dial}(\mathsf{2}^n)$$. Suppose $$f : (A, \rho) \to (B, \sigma)$$ is a homomorphism between $$n$$-ary relational structures, where $$E'(A,\rho) = r : A \times X \to \mathsf{2}^n$$ and $$E'(B,\sigma) = s : B \times Y \to \mathsf{2}^n$$. Define $$f^{-1} : (\mathsf{2}^n)^B \to (\mathsf{2}^n)^A$$ by $$f^{-1}(g : B \to \mathsf{2}^n) = f;g : A \to \mathsf{2}^n$$. Finally, define $$E'(f) = (f,f^{-1})$$, where a state $$y \in Y$$, viewed as its characteristic function $$B \to \mathsf{2}^n$$, is sent to the state $$f^{-1}(y) \in X$$. The correctness of this definition can be found in Pratt's paper in definition 13.

By theorem 14 of Pratt’s paper $$E’$$ is concrete, full, and faithful. Thus, it is an embedding.

### Application: Multisets in Dialectica Spaces

This line of inquiry of mine was started because I wanted a way to represent multisets by dialectica spaces over a finite alphabet. A multiset, $$m : A \to \mathbb{N}$$, can be represented by an equivalence relation, $$\cong_m \subseteq A \times A$$, such that, for any $$a \in A$$, $$|[a]_{\cong_m}| = m(a)$$. That is, the cardinality of the equivalence class of a particular element in the multiset is the element's multiplicity. Taking this representation of multisets and the previous result we now have a means of interpreting multisets into dialectica spaces over a finite alphabet.
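A concrete check of this representation on a two-element multiset, with an encoding of my own choosing (tagged copies stand in for the underlying set):

```python
# Representing the multiset m with m('a') = 2, m('b') = 1 by an equivalence
# relation whose classes have cardinality equal to each multiplicity.

m = {'a': 2, 'b': 1}

# Carrier: one tagged copy per unit of multiplicity.
carrier = [(e, i) for e in m for i in range(m[e])]

def related(u, v):
    # two tagged elements are equivalent iff they copy the same element
    return u[0] == v[0]

def cls(u):
    # the equivalence class of u
    return {v for v in carrier if related(u, v)}

okm = all(len(cls((e, 0))) == m[e] for e in m)
print(okm)  # True: each class has cardinality equal to the multiplicity
```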

## On the Categorical Model of Fragmented Y

$$\newcommand{\interp}[1]{\unicode{x27e6} #1 \unicode{x27e7}}$$
I recently published a post on the categorical model of fragmented system T. Aaron Stump — my Ph.D. advisor — asked a very good question. One property the Trellys team wants in a PL is a non-terminating fragment to conduct general purpose programming. However, as Aaron pointed out fragmented system T has a terminating programmatic fragment, but what if we make it non-terminating? He asked particularly about adding the Y combinator.
My response was that I did not know how to add just the Y combinator, but I argued that the programmatic fragment could be non-terminating by making it uni-typed.

I took his question as a challenge to figure out how to add just the Y combinator, and then obtain a categorical model. One thing had to be true of our model: it must have a cartesian closed subcategory to model the logical fragment in. Adding the Y combinator to STLC is simple enough, and since STLC is modeled by cartesian closed categories, there should be an extension of cartesian closed categories with fixpoint operators. Keeping this idea in mind I did some thinking, and looked into the literature. It turns out that we can extend a cartesian closed category with fixpoint operators to obtain a model of the Y combinator. This post describes how this is done.

### Fragmented Y

First things first: we need to define the functional PL we will be analyzing.

Just as in the previous post, the programmatic fragment contains all typable terms using the rules annotated with $$\mathsf{Y}$$, and the logical fragment contains all typable terms using the rules annotated with $$\mathsf{S}$$. We can easily see that the programmatic fragment is non-terminating. Clearly, $$\cdot \vdash_{\mathsf{Y}} \mathsf{Y}\,(\lambda x:T.x) : T$$, and

$$\mathsf{Y}\,(\lambda x:T.x) \leadsto (\lambda x:T.x)\,(\mathsf{Y}\,(\lambda x:T.x)) \leadsto \mathsf{Y}\,(\lambda x:T.x)$$

In addition, it is well known that the logical fragment is terminating. Thus, the logical fragment is the isolation of a terminating subtheory of a non-terminating theory.
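For intuition, the behavior of $$\mathsf{Y}$$ can be sketched in an ordinary programming language. Under Python's eager evaluation the pure Y combinator diverges (exactly as the reduction above suggests), so this sketch (my own, not part of the formal development) uses the eta-expanded call-by-value variant, which satisfies the same unfolding equation $$\mathsf{Y}\,f = f\,(\mathsf{Y}\,f)$$:

```python
def fix(f):
    # one unfolding of Y f = f (Y f), delayed behind a lambda so that
    # eager evaluation does not loop forever
    return lambda x: f(fix(f))(x)

# factorial defined without explicit self-reference
factorial = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(factorial(5))  # -> 120
```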

### Categorical Model

The idea is to argue how we can model the theory in a cartesian closed category with some additional structure. Just as we said above, the logical fragment can be modeled by a cartesian closed category. The subtheory rule $${\rm Y\_{\small sub}}$$ can be modeled by a functor from the model of the logical fragment to the model of the programmatic fragment. This is the same as it was for fragmented system T. In addition, it is easy to handle $$\mathsf{zero}$$ and $$\mathsf{suc}$$. So the most interesting question here is how do we model the programmatic fragment? The answer is well known, and can be found here — this book has a lot of great things in it, but its presentation is lacking and it omits details in some of the equational reasoning.

We model the programmatic fragment in a cartesian closed category extended with fixpoints for each object.

In any cartesian closed category we have the morphism $$\pi_2 : \Gamma \times (T \Rightarrow T) \to (T \Rightarrow T)$$. Using this morphism we can define the following morphism in a cartesian closed category with fixpoint operators:

$$\mathsf{cur}(\pi_2;\mathsf{fix}_T) : \Gamma \to (T \Rightarrow T) \Rightarrow T$$

This morphism is the $$\mathsf{Y}$$ combinator. That is, we can define

$$\interp{\Gamma \vdash_{\mathsf{Y}} Y : (T \to T) \to T}\,=\,\mathsf{cur}(\pi_2;\mathsf{fix}_{\interp{T}})$$

A categorical model must have enough structure to model every typable term, but it must also model the reduction rules. That is, for any $$\Gamma \vdash_{\theta} t_1 : T$$ and $$\Gamma \vdash_{\theta} t_2 : T$$, if $$t_1 \leadsto t_2$$, then $$\interp{\Gamma \vdash_{\theta} t_1 : T} = \interp{\Gamma \vdash_{\theta} t_2 : T}$$, where $$\theta \in \{\mathsf{S},\mathsf{Y}\}$$. It is well-known that if $$\theta = \mathsf{S}$$, then the reduction rules of STLC can be modeled by a cartesian closed category. In fact, we have $$\beta$$-equality in the model:

($$\beta$$-cat) $$\langle \mathsf{cur}(f),g \rangle;\mathsf{app} = \langle \mathsf{id},g \rangle;(\mathsf{cur}(f) \times \mathsf{id});\mathsf{app} = \langle \mathsf{id},g\rangle;f$$

Using $$\beta$$-cat we can prove the following:
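The original calculation was given as an image; the following is my own reconstruction of its key step, under the assumption that the fixpoint operator satisfies the common axiom $$\mathsf{fix}_T = \langle \mathsf{id}, \mathsf{fix}_T \rangle;\mathsf{app}$$. For any $$g : \Gamma \to (T \Rightarrow T)$$:

$$\langle \mathsf{cur}(\pi_2;\mathsf{fix}_T), g\rangle;\mathsf{app} \,=\, \langle \mathsf{id}, g\rangle;(\pi_2;\mathsf{fix}_T) \,=\, g;\mathsf{fix}_T \,=\, g;\langle \mathsf{id},\mathsf{fix}_T\rangle;\mathsf{app} \,=\, \langle g, g;\mathsf{fix}_T\rangle;\mathsf{app}$$

The first equation is $$\beta$$-cat with $$f = \pi_2;\mathsf{fix}_T$$, and the last two unfold the fixpoint axiom, showing that applying the interpretation of $$\mathsf{Y}$$ to $$g$$ yields the application of $$g$$ to its own fixed point.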

We call the previous fact $$\mathsf{Y}$$-cat. Now we can show that a cartesian closed category with fixpoint operators models the reduction rule for the $$\mathsf{Y}$$-combinator:

It took me the better part of the weekend to get the equational reasoning right, so hopefully I did not make any mistakes.

The way we model the Y combinator is slightly different from the book here. There the authors use the morphism $$!_{\Gamma};\mathsf{cur}(\pi_2;\mathsf{fix}_T) : \Gamma \to (T \Rightarrow T) \Rightarrow T$$, where $$!_\Gamma : \Gamma \to \top$$ is the terminal morphism, but I do not see a reason to use the terminal morphism when we can just use $$\pi_2$$. We obtain the same result here, but I might be missing something.

Finally, we arrive at the following main theorem:

Suppose that $$\mathcal{S}$$ is a cartesian closed category (a model of the logical fragment of fragmented Y), and that $$\mathcal{Y}$$ is a cartesian closed category with fixpoint operators (a model of the programmatic fragment of fragmented Y). Then $$(\mathcal{S}, S, \mathcal{Y})$$ is a model of fragmented Y, where $$S : \mathcal{S} \to \mathcal{Y}$$ is a full and faithful functor.

It was a lot of fun figuring this out!

## On the Categorical Model of Fragmented System T

Recently, I have been thinking about something I have wanted to try for a while, but my categorical skills were not good enough to attempt it until now. In this short, simple note I show how a fragmented type theory can be modeled using submodels and a functor between them. The theory I discuss is very basic, but contains the setup I wanted. I hope to build on this in the future.

The Trellys project originally set out in 2008 to design a dependently typed functional programming language with a vast number of features. The most important of these is the isolation of a logical fragment from a non-terminating fragment called the programmatic fragment. The former amounts to a terminating fragment of the overall language with a new property called freedom of speech. This new property allows the types of the logical fragment to depend on programs of the programmatic fragment. Thus, the logical fragment can be used to verify properties of the programs of the programmatic fragment. In addition, one can use the logical fragment to one’s advantage. If one wishes to implement a program that must be terminating, then this can be internally verified by simply implementing it in the logical fragment, because the latter is simply a fragment of the set of overall programs.

This fragmentation was accomplished by annotating the sequent with either an $$\mathsf{L}$$ or a $$\mathsf{P}$$ called a consistency classifier. Maintaining consistency of the logical fragment was then achieved by placing value restrictions at key locations in the definition of the type-checking algorithm. Then a rule similar to:

was added to the programmatic fragment. However, this formalization of the language resulted in a theory that was very difficult to study metatheoretically. New attempts are underway to modify the language so as to allow for a metatheoretic study. There are a number of references the interested reader can find with respect to this work. See my thesis and Chris Casinghino’s publications, Vilhelm Sjöberg’s publications, Stephanie Weirich’s publications, and Aaron Stump’s publications.

In this note we examine how one might model a fragmented language such as the Trellys language categorically. We start with a very simple language, and then show that there exists a straightforward functorial semantics.

### Fragmented System T

We begin with the definition of the simply typed $$\lambda$$-calculus (STLC).

STLC will be considered our logical fragment. Its terms may be considered proofs while its types formulas. We now enrich this definition with a programmatic fragment resulting in our final system called fragmented system T.

The programmatic fragment contains all typable terms using the rules annotated with $$\mathsf{G}$$. This fragment does not make for a very good logic, but it is a very good statically typed pure functional programming language. In fact, the programmatic fragment is very powerful even if it seems simplistic. For example, we can type Ackermann’s function:

$$\lambda m : \mathsf{Nat}.\lambda n : \mathsf{Nat}.((\mathsf{rec}\,m\,(\lambda y : \mathsf{Nat}.(\mathsf{suc}\,y)) \\ \,\,\,\,\,(\lambda h : \mathsf{Nat} \to \mathsf{Nat}.\lambda i : \mathsf{Nat}.\lambda y : \mathsf{Nat}.(\mathsf{rec}\,y\,(h\,(\mathsf{suc}\,0))\,(\lambda k : \mathsf{Nat}.\lambda j:\mathsf{Nat}.(h\,k)))))\,n)$$
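To see that this term really computes Ackermann’s function, here is a direct transcription (my own sketch, not from the post) of the recursor and the term above, under the convention $$\mathsf{rec}\,0\,b\,s = b$$ and $$\mathsf{rec}\,(\mathsf{suc}\,k)\,b\,s = s\,(\mathsf{rec}\,k\,b\,s)\,k$$:

```python
def rec(n, b, s):
    """System T recursor: rec 0 b s = b, rec (suc k) b s = s (rec k b s) k."""
    acc = b
    for k in range(n):
        acc = s(acc, k)
    return acc

def ack(m):
    # the term above: rec m (\y. suc y)
    #   (\h. \i. \y. rec y (h (suc 0)) (\k. \j. h k))
    return rec(m,
               lambda y: y + 1,
               lambda h, _i: lambda y: rec(y, h(1), lambda k, _j: h(k)))

print(ack(2)(3))  # -> 9
```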

### Categorical Model

The purpose of this note is to define the categorical model of fragmented system T. The logical fragment (STLC) can easily be modeled by a cartesian closed category. See here for a detailed explanation of this. The programmatic fragment is also well known to be modeled by a cartesian closed category extended with a natural number object and the necessary morphisms to model zero and successor. See here for more information.

So the only interesting question we have is how do we model the subtheory rule:

This requires an operation on homspaces:

Clearly, this corresponds to the morphism part of a functor from the model of the logical fragment to the model of the programmatic fragment. It just so happens that such a functor exists.

Suppose that $$\mathcal{S}$$ is the cartesian closed category modeling the logical fragment, and that $$\mathcal{G}$$ is the extended cartesian closed category with the necessary structure modeling the programmatic fragment. Clearly, there exists a functor $$S : \mathcal{S} \to \mathcal{G}$$. In addition, this functor is full and faithful, because $$\mathcal{S}$$ is isomorphic to the cartesian closed proper subcategory of $$\mathcal{G}$$. The latter ensures that all of the typeable terms of the logical fragment are indeed modeled within $$\mathcal{S}$$. Using this reasoning we arrive at the following result:

Suppose that $$\mathcal{S}$$ is a model of the logical fragment of fragmented system T, and that $$\mathcal{G}$$ is a model of the programmatic fragment. Then $$(\mathcal{S}, S, \mathcal{G})$$ is a model of fragmented system T, where $$S : \mathcal{S} \to \mathcal{G}$$ is a full and faithful functor.

The point of the previous result is that if one fragments a language into parts where the constituent parts can be collapsed back down — e.g. the subtheory rule — then there is a functor between the constituent parts. More on this later.

## Actor-networks: Social Processes as Computations and Computations as Social Processes

The phone rings at the desk of a secretary for a top-level employee at a medium sized company. The following conversation ensues:

Secretary: You have reached the desk of Big Wig at medium sized company. I am S how may I help you?

Caller: Hey S, how you doing today? It’s been a while since we have talked. This is E from the department of information technology.

Secretary: Oh, hey, yeah it has been a while. Is there anything I can do for you?

Caller: We are trying to get all of our internal software updated before the holiday. The bosses down here are getting agitated that we haven’t got some of the workstations updated yet. So I am on the hook to call up everyone at an out of date workstation, and have them install the update. Think you can help me out with that right now?

Secretary: Sure, that’s no problem! What do I need to do?

Caller: Great! The process is simple. I just sent you an email. All you have to do is click the link, and save the download to your desktop.

Caller: Now simply click the file you downloaded and hit next a few times. Then it should say that it successfully installed.

Secretary: Just finished.

Caller: Excellent! Go ahead and open the internal software and make sure everything is still running okay.

Secretary: Everything seems to be working.

Caller: Perfect. So that we don’t get confused when we update again later, go ahead and delete that file. Then empty your trash. Having files in the trash can make your machine run slower.

Secretary: I didn’t know that it could make my machine run slower. I deleted it and emptied the trash.

Caller: That’s all I needed. Say hello to Big Wig for me, and if you have any issues please do not hesitate to call our department. Thanks for helping.

Secretary: No problem at all. Have a great holiday.

In the above scenario the caller, E, was not actually an employee of the information technology department. They were a malicious hacker trying to gain access to the company’s network. They did this by posing as an employee to gain the confidence of the secretary. Then they had them install an “update to the internal software”, but this “update” was actually a virus designed to give E access to the company’s network. The virus could have done much more, like stealing passwords and credit card numbers, and even moving from machine to machine by hiding in file attachments. This style of virus is not meant to destroy the machine, but to act as a mole and transmit information from the machine to E. It could hide and go unnoticed for a substantial amount of time.

The type of attack E used is called social engineering. It entails manipulating another person into granting access to confidential information or resources. Social engineering is often considered the easiest method to circumvent the security of a workstation on a computer network. Once access is obtained, it becomes easier to gain access to the other network resources. The notorious computer hacker turned computer security consultant Kevin Mitnick used this technique to gain access to the networks of various prominent technology companies.

The study of computer security uses mathematical models of various styles of networks, and then studies various attacks on these models using various mathematical tools. For example, a network of two machines can be modeled by two Turing machines, and then an additional tape that both Turing machines have access to. Then shared communication takes place by reading and writing on the shared tape.

In this blog post I am going to summarize some of the work of Dusko Pavlovic. He has started working on solving two major problems in computer security. I hold the same beliefs that he does with respect to the first problem. See this email for a list of papers in this area.

The first problem with the study of computer security is that it largely ignores the social aspect of computer security. There is a human element that must be considered. However, one may object that capturing the human involvement will be too difficult, because there are just too many variables to keep track of. I hope that this blog post will convince the reader that this objection is just not true.

The second problem with computer security is that practical models of various security scenarios are very complex. They usually entail lots of crazy diagrams, and even more complex reasoning. This implies that there are most likely errors in reasoning that go unnoticed! Thus, the community needs more formal methods that strip away this complexity, and where this complexity cannot be removed perhaps automated or semi-automated — e.g. interactive theorem provers — reasoning can be used to provide a higher confidence in the results obtained using the models.

I believe — and I am sure Dusko does as well judging by his papers — that category theory, type theory, verification of programs, and automated theorem proving can contribute to solving the second problem. Category theory is very good at stripping away unneeded complexity, and the various graphical languages available in category theory can be used to strip away the complexity of security models and provide easier methods of reasoning. In addition, type theory, verification of programs, and automated theorem proving can be used to automate some of the repetitive complex reasoning.

In the remainder of this blog post I would like to describe an effort to study the first problem. That is, how do we add the human element formally? Dusko Pavlovic and Catherine Meadows have a paper called “Actor-network procedures: Modeling multi-factor authentication, device pairing, and social interaction.” They introduce the concept of an actor network that gives a way to formally connect humans and devices. The rest of this blog post is a summary of what an actor network is.

There are lots of objects that must be paired with a human in order for their full potential to be reached. One analogy the authors give is that of a musical instrument. Suppose an acoustic guitar is leaning against a wall. The guitar sits idly waiting to be played, because a musical instrument cannot make music on its own. It must be played by a human. The instrument is the actor. Humans control actors. The coming together of several such actors controlled by various humans is called an actor network.

Computers provide another example. There is very little we as a society do with our computers that does not require a human element. Modern day computer security uses this to its advantage when authorizing use of resources. Consider my old bank, the University of Iowa Community Credit Union in Iowa City, IA. In order to access the online banking website one had to first enter a username and password, but if I had not logged in since clearing my browser data, then the website would send a code to my cellphone — actually it was my wife’s phone, which made it even harder for me to log in from work — and after receiving this code I was able to log in. This is called two factor authentication. The actor network here consists of the human (me), my computer, my cell phone, and the bank’s webserver.

So what is the formal definition of an actor network? Start with the following precategory:

where $$\mathcal{C} = \{f,g,\ldots\}$$ is a collection of channels, $$\mathcal{P} = \{P,Q,\ldots\}$$ is a collection of configurations, $$\delta : \mathcal{C} \to \mathcal{P}$$ maps each channel to an entry configuration, and $$\varrho : \mathcal{C} \to \mathcal{P}$$ maps each channel to an exit configuration. So we can denote channels by $$f : \delta(f) \to \varrho(f)$$. Thus, channels are premorphisms and configurations are objects.

We pair the previous structure with a collection of nodes denoted by $$\mathcal{N} = \{M,N,\ldots\}$$. Then we define configurations to be trees. Each $$P \in \mathcal{P}$$ can either be a finite set of nodes or a finite set of configurations. We define an actor to be an element of a configuration. Nodes represent canonical configurations. Another example might be that a paperback book is a configuration consisting of paper, glue, and ink.

Channels connect configurations together. For example, there might be a channel between a phone and a webserver. A secure channel might be a channel between two devices that is encrypted.

Our definition so far can account for devices (configurations) and the channels connecting them, but what about all this human business? We now extend the previous definition to the following structure:

where $$\mathcal{I} = \{A,B,\ldots\}$$ is a collection of identities or principals, and $$© : \mathcal{P} \to \mathcal{I}$$ is a partial function assigning identities to configurations. Now we can talk about a person controlling a configuration or node.

One last addition. As of right now channels are left very abstract, and we have no way of specifying what type of channels we are dealing with. So we extend the previous structure as follows:

where $$\Theta = \{\tau_1, \tau_2, \ldots \}$$ is a collection of channel types, and $$\vartheta : \mathcal{C} \to \Theta$$ is a map assigning each channel a type. This final structure defines an actor network.

Let’s consider an example. It has recently been announced that the United States Army Cyber Command headquarters is moving to Augusta, GA — in fact, to Fort Gordon, only two miles from where I live. So in honor of this new move I share the example from Dusko and Catherine’s paper. We define the actor network modeling a cyber network as follows:

• $$\mathcal{I} = \{A_1,\ldots,A_n\}$$, i.e. an arbitrary set of identities,
• $$\mathcal{N} = \{N_1,\ldots,N_n\}$$, i.e. an arbitrary set of nodes,
• $$\mathcal{P} = \mathcal{N}$$, i.e. the only configurations are nodes,
• $$\mathcal{C} = \{M \to N \mid M,N \in \mathcal{N}\}$$, i.e. there is exactly one channel between any two nodes,
• $$\Theta = \{\mathsf{cyber}\}$$, i.e. there is only one type of channel called $$\mathsf{cyber}$$ representing an insecure cyber connection,
• All communication is done by broadcast from the sender to all nodes in the network. The recipient of a message does not observe the sender directly. Finally, we assume a person only controls one node.
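The bullet points above can be transcribed into a small data structure. The following is my own encoding (an illustration, not from the paper) of this cyber actor network, with configurations identified with nodes:

```python
def cyber_network(k):
    """Build the cyber actor network on k nodes: one channel between any
    two nodes, a single channel type 'cyber', and principal i controlling
    node i."""
    nodes = list(range(1, k + 1))
    return {
        "identities":     nodes,                 # I = {A_1, ..., A_n}
        "nodes":          nodes,                 # N = {N_1, ..., N_n}
        "configurations": nodes,                 # P = N
        "channels":       [(m, n) for m in nodes for n in nodes],
        "control":        {n: n for n in nodes}, # each person controls one node
        "channel_type":   lambda chan: "cyber",  # the single channel type
    }

net = cyber_network(3)
print(len(net["channels"]))  # -> 9
```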

For more examples see the paper. This is only the first part of the paper. I will be posting a second post talking about more things I learn from it.

## Categorical Proof Theory of Dualized Type Theory: Filtered Multicategories


A filtered category (FC) can be thought of as starting with an arbitrary multicategory, say $$\mathcal{C}$$, and layering over the top of it another category, say $$\mathcal{F}$$ (the filter). The following image illustrates this idea:

The job of this layer is to act as a filter, and only let through specific structures. The above image captures this idea by placing the filter $$\mathcal{F}$$ over $$\mathcal{C}$$; the filter layer is drawn slightly transparent so that we can only make out some of the bottom layer. The way in which the filter restricts $$\mathcal{C}$$ will become more apparent when we give the definition of a FC. In fact, let’s do that now.

### Filtered Categories

A filtered category (FC) is a multicategory defined in terms of two structures given by a pair denoted $$\ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}$$ where $$\mathcal{C}$$ is a multicategory called the lax component, $$\mathcal{W}$$ is a collection of worlds, and $$\mathsf{Cat}( \mathcal{W} )$$ is $$\mathcal{W} \times \mathcal{W}$$ considered as a category which we call the constraint universe of the FC. A FC, $$( \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) )$$, is defined as follows:

• (Objects) A collection of objects each denoted $$\ottmv{A} \ottsym{@} \ottmv{w}$$, where $$\ottmv{A} \in \mathsf{Obj}( \mathcal{C} )$$ and $$\ottmv{w} \in \mathcal{W}$$.
• (Morphisms) A collection of multimorphisms. Each morphism is a triple $$(f , M, \{(\ottmv{A_{{\mathrm{1}}}},\ottmv{w_{{\mathrm{1}}}}),\ldots,(\ottmv{A_{\ottmv{m}}},\ottmv{w_{\ottmv{m}}}),(B,w)\})$$ where $$f \in \mathcal{C} \ottsym{(} \ottmv{A_{{\mathrm{1}}}} \ottsym{,} \, … \, \ottsym{,} \ottmv{A_{\ottmv{m}}} \ottsym{,} \ottmv{B} \ottsym{)}$$, $$M$$ — the morphism constraints — is a subcategory of $$\mathsf{Cat}( \mathcal{W} )$$, such that, $$\{\ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{w_{\ottmv{m}}},\ottmv{w}\} \subseteq \mathsf{Obj}( \ottmv{M} )$$. We denote each morphism by $$f : \ottmv{M}; \ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{A_{\ottmv{m}}} \ottsym{@} \ottmv{w_{\ottmv{m}}} \to \ottmv{B} \ottsym{@} \ottmv{w}$$.
• (Identities) For any list of objects, $$\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}}, \ldots, \ottmv{A} \ottsym{@} \ottmv{w}, \ldots, \ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}}$$ there is a morphism $$\mathsf{id} : M;\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}}, \ldots, \ottmv{A} \ottsym{@} \ottmv{w}, \ldots, \ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}} \to \ottmv{A} \ottsym{@} \ottmv{w}$$.
• (Vertical Composition) For any two morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)} ( \ottmv{M} ; \ottmv{A_{{\mathrm{1}}}} @ \ottmv{w_{{\mathrm{1}}}} , \ldots, \ottmv{A_{\ottmv{i}}} @ \ottmv{w_{\ottmv{i}}} , \ottmv{B} @ \ottmv{w'} )$$ and
$$g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M';C_{{\mathrm{1}}} \ottsym{@} \ottmv{w'_{{\mathrm{1}}}}, \ldots, \ottmv{B} \ottsym{@} \ottmv{w'}, \ldots,C_{\ottmv{j}} \ottsym{@} \ottmv{w'_{\ottmv{j}}}, \ottmv{D} \ottsym{@} \ottmv{w''})$$ their composition is also a morphism, that is, $$f;g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M,M';C_{{\mathrm{1}}} \ottsym{@} \ottmv{w'_{{\mathrm{1}}}},\ldots,\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}},\ldots,C_{\ottmv{j}} \ottsym{@} \ottmv{w'_{\ottmv{j}}},\ottmv{D} \ottsym{@} \ottmv{w''})$$.
• (Horizontal Source Composition) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w'')$$ and $$\alpha \in M(w,w')$$ the horizontal source composition $$f \rightharpoonup \alpha : M;A_1@w_1, \ldots, A@w', \ldots,A_n@w_n \to B@w''$$ is also a morphism.
• (Horizontal Target Composition) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A_n@w_n,B@w)$$ and $$\alpha \in M(w,w')$$ the horizontal target composition $$f \rightharpoondown \alpha : M;A_1@w_1, \ldots, A_n@w_n \to B@w'$$ is also a morphism.
• Finally, we have the following axioms:
• (Vertical Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots, A_i@w_i,B@w')$$ we have $$f ; \mathsf{id}_{B@w'} = f = \mathsf{id}_{A@w};f$$.
• (Associativity of Vertical Composition) For any morphisms $$f \in ( \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) )(M_1;A_1@w_1, \ldots, A_i@w_i,B@w')$$, $$g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M_2;B_1@w_1, \ldots, B@w', \ldots, B_i@w_i,C@w'')$$, and $$h \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M_3;C_1@w_1, \ldots, C@w'', \ldots, C_i@w_i,D@w''')$$ we have $$f;(g;h) = (f;g);h$$.
• (Horizontal Source Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots, A_i@w_i,B@w')$$ we have $$f \rightharpoonup \mathsf{id}_w = f$$.
• (Horizontal Source Unfold) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w''')$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w'')$$ we have $$f \rightharpoonup (\alpha_1;\alpha_2) = (f \rightharpoonup \alpha_1) \rightharpoonup \alpha_2$$.
• (Horizontal Source Symmetry) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w''')$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w)$$ we have $$f = f \rightharpoonup (\alpha_1;\alpha_2)$$.
• (Horizontal Target Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A_i@w_i,B@w)$$ we have $$f \rightharpoondown \mathsf{id}_w = f$$.
• (Horizontal Target Unfold) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w'')$$ we have $$f \rightharpoondown (\alpha_1;\alpha_2) = (f \rightharpoondown \alpha_1) \rightharpoondown \alpha_2$$.
• (Horizontal Target Symmetry) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w)$$ we have $$f = f \rightharpoondown (\alpha_1;\alpha_2)$$.
• (Horizontal-Vertical Symmetry) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$, $$\alpha_1 \in M(w_i,w'_i)$$, and $$\alpha_2 \in M(w,w')$$ we have $$(f \rightharpoonup \alpha_1) \rightharpoondown \alpha_2 = (f \rightharpoondown \alpha_2) \rightharpoonup \alpha_1$$.

The categories on morphisms should be thought of as constraints on particular morphisms, and thus the only part of the category that matters is its set of morphisms. So we will denote these categories simply as lists of morphisms, that is, $$\ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}$$, with the additional requirement that such a list form a category, and hence identity and composition morphisms must be present. These constraint categories could get large, since identities for every object and all composition morphisms must be included, so we will often only list the non-trivial morphisms and leave the existence of the identity and composition morphisms implicit. The notation $$\ottmv{M_{{\mathrm{1}}}},\ottmv{M_{{\mathrm{2}}}}$$ for categories $$\ottmv{M_{{\mathrm{1}}}}$$ and $$\ottmv{M_{{\mathrm{2}}}}$$ is a combination operation. This notation stands for the category defined as the union of the two categories.

The category $$\ottmv{M_{{\mathrm{1}}}},\ottmv{M_{{\mathrm{2}}}}$$ is defined as follows:

• $$\mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} ) = \mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} ) \cup \mathsf{Obj}( \ottmv{M_{{\mathrm{2}}}} )$$
• For any objects $$A,B \in \mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} )$$, either $$(\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}})(A,B) = \ottmv{M_{{\mathrm{1}}}}(A,B)$$ if $$A,B \in \mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} )$$, or $$(\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}})(A,B) = \ottmv{M_{{\mathrm{2}}}}(A,B)$$ if $$A,B \in \mathsf{Obj}( \ottmv{M_{{\mathrm{2}}}} )$$.

Keeping with the theme of representing the constraint categories as lists of non-trivial morphisms we define the union — the operation defined above — of $$\ottmv{M_{{\mathrm{1}}}} = \ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}$$ and $$\ottmv{M_{{\mathrm{2}}}} = \ottmv{w'_{{\mathrm{1}}}} \leq \ottmv{w'_{{\mathrm{2}}}} , \ldots, \ottmv{w'_{\ottmv{j}}} \leq \ottmv{w'}_{j+1}$$ as $$\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} = \ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}, \ottmv{w'_{{\mathrm{1}}}} \leq \ottmv{w'_{{\mathrm{2}}}} , \ldots, \ottmv{w'_{\ottmv{j}}} \leq \ottmv{w'}_{j+1}$$.
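The convention of listing only non-trivial morphisms can be made concrete. The sketch below (my own encoding, an assumption about representation) treats a constraint category as a list of pairs $$w \leq w'$$, implements the union operation, and recovers the implicit identities and composites by saturation:

```python
def combine(m1, m2):
    """The combination M1, M2: the union of two lists of non-trivial morphisms."""
    return m1 + [e for e in m2 if e not in m1]

def saturate(objects, edges):
    """Add the implicit identity and composition morphisms, recovering the
    category that a list of non-trivial morphisms denotes."""
    closed = {(w, w) for w in objects} | set(edges)
    while True:
        new = {(a, c)
               for (a, b) in closed for (b2, c) in closed
               if b == b2 and (a, c) not in closed}
        if not new:
            return closed
        closed |= new

print(sorted(saturate([1, 2, 3], [(1, 2), (2, 3)])))
# includes the composite (1, 3) and the identities (1, 1), (2, 2), (3, 3)
```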

### Some Intuition

The best intuition I can give for a FC, $$\mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} )$$, is to think about the elements of $$\mathcal{W}$$ as worlds where objects of $$\mathcal{C}$$ live. Then one should think of the morphisms of $$\mathsf{Cat}( \mathcal{W} )$$ as transitions between the worlds. Thus, the reader familiar with Kripke models should be able to see that we are considering $$\mathsf{Cat}( \mathcal{W} )$$ as imposing a Kripke like structure over $$\mathcal{C}$$.

Now horizontal composition corresponds to monotonicity if we think of a morphism as a proof of its target assuming its list of source objects. Notice that horizontal composition only moves forward; that is because, when a formula is true at a world in a Kripke model, we can only move it forward along the transition relation. Since there is no negation here, we can only move forward.

### Examples

Every category, $$\mathcal{C}$$, is isomorphic to a FC, $$(\mathcal{C}\downarrow\mathsf{Cat}(\mathcal{W}))$$ where $$\mathsf{Cat}(\mathcal{W})$$ is a discrete category with only one object $$w$$. The constraint universe has only one morphism, the identity on $$w$$, and hence, the only constraints on morphisms in the FC is this single identity morphism. So we can define bijective functors as follows:

• $$\mathsf{Lax}(A@w) = A$$
• $$\mathsf{Lax}(f : \mathsf{Cat}(\mathcal{W});A_1@w,\ldots,A_i@w \to B@w) = f : A_1,\ldots,A_i \to B$$
• $$\mathsf{Filtered}(A) = A@w$$
• $$\mathsf{Filtered}(f : A_1,\ldots,A_i \to B) = f : \mathsf{Cat}(\mathcal{W});A_1@w,\ldots,A_i@w \to B@w$$

### Functors

One can define functors between filtered categories:

A functor, $$H$$, between two FCs $$(\mathcal{C_1}\downarrow\mathsf{Cat}(\mathcal{W}_1))$$ and $$(\mathcal{C_2}\downarrow\mathsf{Cat}(\mathcal{W}_2))$$ is a pair of functors $$(F : \mathcal{C}_1 \to \mathcal{C}_2, G : \mathsf{Cat}(\mathcal{W}_1) \to \mathsf{Cat}(\mathcal{W}_2) )$$ defined as follows:

• (Objects) $$H(A@w) = F(A)@G(w)$$
• (Morphisms) $$H(f : M;A_1@w_1,\ldots,A_i@w_i \to B@w) =$$
$$F(f) : G(M);F(A_1)@G(w_1),\ldots,F(A_i)@G(w_i) \to F(B)@G(w)$$

Since $$F$$ and $$G$$ are functors $$H$$ will respect both vertical and horizontal compositions as well as identities.

### Structures in FCs

FCs can have two different types of structures: structures inherited from the lax component, and structures global to the FC. The latter are the usual structures built from morphisms such as products, co-products, and exponentials. However, the former are structures that are constrained versions of the structures in the lax component. Defining these is where the constraints really come into play. Now there are lots of ways one could constrain the structures of the lax component using the constraint universe. The following are examples of the types of structures I will need when interpreting DIL/DTT. Throughout this section I will denote lists of objects of an FC by $$\Gamma$$ and $$\Delta$$.

An FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has a pointed initial object, denoted $$\perp @ w$$, if for any category $$M$$, worlds $$w,w' \in \mathsf{Obj}(M)$$, sequences of objects $$\Gamma$$ and $$\Delta$$, and object $$A@w' \in \mathsf{Obj}(\mathcal{C}\downarrow\mathcal{W})$$, there exists a unique morphism $$\triangle : M; \Gamma,\perp@w,\Delta \to A@w'$$.
An FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has a pointed final object, denoted $$\top @ w$$, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, and sequence of objects $$\Gamma$$, there exists a unique morphism $$\nabla : M; \Gamma \to \top @ w$$.
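For intuition, the uniqueness of $$\triangle$$ and $$\nabla$$ mirrors the familiar facts that `Void` admits a unique map out and `()` a unique map in. A Haskell sketch (my own names, fixed worlds, morphisms as functions):

```haskell
import Data.Void (Void, absurd)

data At a w = a :@ w deriving (Eq, Show)

-- ∇ : Γ -> ⊤@w is unique because () has exactly one element.
nabla :: w -> At c w' -> At () w
nabla w _ = () :@ w

-- △ : ⊥@w -> A@w' is unique because Void has no elements at all.
triangle :: w' -> At Void w -> At a w'
triangle _ (v :@ _) = absurd v
```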
An FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has pointed products, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, two objects $$A@w$$ and $$B@w$$ of $$(\mathcal{C}\downarrow\mathcal{W})$$, and morphisms $$f : M_1; \overrightarrow{C@w'} \to A@w$$ and $$g : M_2 ; \overrightarrow{C@w'} \to B@w$$, there is an object denoted $$(A \times B)@w$$, two morphisms $$\pi_1 : M ; (A \times B)@w \to A@w$$ and $$\pi_2 : M ; (A \times B)@w \to B@w$$, and a unique morphism
$$\langle f,g\rangle : M_1,M_2; \overrightarrow{C@w'} \to (A \times B)@w$$ in $$(\mathcal{C}\downarrow\mathcal{W})$$ such that the following diagram commutes:

We call an FC pointed cartesian if all finite pointed products exist, and there exists a pointed final object.
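Restricting to a single world, the pairing and projections can be sketched in Haskell (my own names; constraint categories are dropped and the world tag is simply carried along):

```haskell
-- Sketch of pointed products at a fixed world, with morphisms
-- simplified to plain functions on the underlying objects.
data At a w = a :@ w deriving (Eq, Show)

pi1 :: At (a, b) w -> At a w            -- pi_1 : (A x B)@w -> A@w
pi1 ((a, _) :@ w) = a :@ w

pi2 :: At (a, b) w -> At b w            -- pi_2 : (A x B)@w -> B@w
pi2 ((_, b) :@ w) = b :@ w

-- <f,g> : C@w -> (A x B)@w, the unique morphism making both
-- projection triangles commute.
pairFC :: (c -> a) -> (c -> b) -> At c w -> At (a, b) w
pairFC f g (c :@ w) = (f c, g c) :@ w
```

The commuting diagram then amounts to `pi1 . pairFC f g` agreeing with `f`, and symmetrically for `pi2`.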
An FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has pointed co-products, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, two objects $$A@w$$ and $$B@w$$ of $$(\mathcal{C}\downarrow\mathcal{W})$$, and morphisms $$f : M_1; A@w \to C@w'$$ and $$g : M_2 ; B@w \to C@w'$$, there is an object denoted $$(A + B)@w$$, two morphisms $$\iota_1 : M ; A@w \to (A + B)@w$$ and $$\iota_2 : M ; B@w \to (A + B)@w$$, and a unique morphism
$$[f,g] : M_1,M_2; (A + B)@w \to C@w'$$ in $$(\mathcal{C}\downarrow\mathcal{W})$$ such that the following diagram commutes:

We call an FC pointed bi-cartesian if all finite pointed co-products exist, and there exists a pointed initial object.
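Dually to the product sketch, the co-product structure at a fixed world can be sketched with `Either` playing the role of $$A + B$$ (again, my own names and simplifications):

```haskell
-- Sketch of pointed co-products at a fixed world, with morphisms
-- simplified to plain functions on the underlying objects.
data At a w = a :@ w deriving (Eq, Show)

iota1 :: At a w -> At (Either a b) w    -- iota_1 : A@w -> (A + B)@w
iota1 (a :@ w) = Left a :@ w

iota2 :: At b w -> At (Either a b) w    -- iota_2 : B@w -> (A + B)@w
iota2 (b :@ w) = Right b :@ w

-- [f,g] : (A + B)@w -> C@w, the unique morphism out of the co-product.
copairFC :: (a -> c) -> (b -> c) -> At (Either a b) w -> At c w
copairFC f g (x :@ w) = either f g x :@ w
```

Here the commuting diagram says `copairFC f g . iota1` agrees with `f`, and `copairFC f g . iota2` with `g`.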

I think I will leave this post here; this is just a first look at what I am considering. In the near future I will be thinking about internal homs in FCs, and about generalizing to polycategories. Once these are figured out, we will be able to interpret DIL and DTT in a categorical framework.