## Categorical Proof Theory of Dualized Type Theory: Filtered Multicategories


In the last post I talked about the need for a categorical proof theory of BINT. In this post I would like to start laying out an idea for how we can obtain one for Dualized Type Theory (DTT). At first I had planned on writing this up as a private note, but I decided to make it public to try to get feedback. I am not sure whether my idea already exists, and if the reader knows, please do tell me! In this post I only detail the initial categorical framework I will interpret DTT in, and so I will be presenting filtered categories as multicategories; the final model we will eventually arrive at in a future post will be a polycategorical version of what is described here. However, the latter has its own hurdles to get through, and I am still trying to understand them.

A filtered category (FC) can be thought of as starting with an arbitrary multicategory, say $$\mathcal{C}$$, and layering over the top of it another category, say $$\mathcal{F}$$ (the filter). The following image illustrates this idea:

The job of this layer is to act as a filter and only let through specific structures. The above image captures this idea by placing the filter, $$\mathcal{F}$$, over $$\mathcal{C}$$, and making the filter layer a little transparent so that we can only make out some of the bottom layer. The way in which the filter restricts $$\mathcal{C}$$ will become more apparent when we give the definition of a FC. In fact, let's do that now.

### Filtered Categories

A filtered category (FC) is a multicategory defined in terms of two structures given by a pair denoted $$\ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}$$ where $$\mathcal{C}$$ is a multicategory called the lax component, $$\mathcal{W}$$ is a collection of worlds, and $$\mathsf{Cat}( \mathcal{W} )$$ is $$\mathcal{W} \times \mathcal{W}$$ considered as a category which we call the constraint universe of the FC. A FC, $$( \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) )$$, is defined as follows:

• (Objects) A collection of objects each denoted $$\ottmv{A} \ottsym{@} \ottmv{w}$$, where $$\ottmv{A} \in \mathsf{Obj}( \mathcal{C} )$$ and $$\ottmv{w} \in \mathcal{W}$$.
• (Morphisms) A collection of multimorphisms. Each morphism is a triple $$(f , M, \{(\ottmv{A_{{\mathrm{1}}}},\ottmv{w_{{\mathrm{1}}}}),\ldots,(\ottmv{A_{\ottmv{m}}},\ottmv{w_{\ottmv{m}}}),(B,w)\})$$ where $$f \in \mathcal{C} \ottsym{(} \ottmv{A_{{\mathrm{1}}}} \ottsym{,} \, … \, \ottsym{,} \ottmv{A_{\ottmv{m}}} \ottsym{,} \ottmv{B} \ottsym{)}$$, $$M$$ — the morphism constraints — is a subcategory of $$\mathsf{Cat}( \mathcal{W} )$$, such that, $$\{\ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{w_{\ottmv{m}}},\ottmv{w}\} \subseteq \mathsf{Obj}( \ottmv{M} )$$. We denote each morphism by $$f : \ottmv{M}; \ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{A_{\ottmv{m}}} \ottsym{@} \ottmv{w_{\ottmv{m}}} \to \ottmv{B} \ottsym{@} \ottmv{w}$$.
• (Identities) For any subcategory $$M$$ of $$\mathsf{Cat}( \mathcal{W} )$$ whose objects include the worlds below, and any list of objects $$\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}}, \ldots, \ottmv{A} \ottsym{@} \ottmv{w}, \ldots, \ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}}$$, there is a morphism $$\mathsf{id} : M;\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}}, \ldots, \ottmv{A} \ottsym{@} \ottmv{w}, \ldots, \ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}} \to \ottmv{A} \ottsym{@} \ottmv{w}$$.
• (Vertical Composition) For any two morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)} ( \ottmv{M} ; \ottmv{A_{{\mathrm{1}}}} @ \ottmv{w_{{\mathrm{1}}}} , \ldots, \ottmv{A_{\ottmv{i}}} @ \ottmv{w_{\ottmv{i}}} , \ottmv{B} @ \ottmv{w'} )$$ and
$$g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M';C_{{\mathrm{1}}} \ottsym{@} \ottmv{w'_{{\mathrm{1}}}}, \ldots, \ottmv{B} \ottsym{@} \ottmv{w'}, \ldots,C_{\ottmv{j}} \ottsym{@} \ottmv{w'_{\ottmv{j}}}, \ottmv{D} \ottsym{@} \ottmv{w''})$$ their composition is also a morphism, that is, $$f;g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M,M';C_{{\mathrm{1}}} \ottsym{@} \ottmv{w'_{{\mathrm{1}}}},\ldots,\ottmv{A_{{\mathrm{1}}}} \ottsym{@} \ottmv{w_{{\mathrm{1}}}},\ldots,\ottmv{A_{\ottmv{i}}} \ottsym{@} \ottmv{w_{\ottmv{i}}},\ldots,C_{\ottmv{j}} \ottsym{@} \ottmv{w'_{\ottmv{j}}},\ottmv{D} \ottsym{@} \ottmv{w''})$$.
• (Horizontal Source Composition) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w'')$$ and any $$\alpha \in M(w,w')$$ the horizontal source composition $$f \rightharpoonup \alpha : M;A_1@w_1, \ldots, A@w', \ldots,A_n@w_n \to B@w''$$ is also a morphism.
• (Horizontal Target Composition) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A_n@w_n,B@w)$$ and any $$\alpha \in M(w,w')$$ the horizontal target composition $$f \rightharpoondown \alpha : M;A_1@w_1, \ldots, A_n@w_n \to B@w'$$ is also a morphism.
• Finally, we have the following axioms:
• (Vertical Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots, A_i@w_i,B@w')$$ we have $$f ; \mathsf{id}_{B@w'} = f = \mathsf{id}_{A@w};f$$.
• (Associativity of Vertical Composition) For any morphisms $$f \in ( \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) )(M_1;A_1@w_1, \ldots, A_i@w_i,B@w')$$, $$g \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M_2;B_1@w_1, \ldots, B@w', \ldots, B_i@w_i,C@w'')$$, and $$h \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M_3;C_1@w_1, \ldots, C@w'', \ldots, C_i@w_i,D@w''')$$ we have $$f;(g;h) = (f;g);h$$.
• (Horizontal Source Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots, A_i@w_i,B@w')$$ we have $$f \rightharpoonup \mathsf{id}_w = f$$.
• (Horizontal Source Unfold) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w''')$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w'')$$ we have $$f \rightharpoonup (\alpha_1;\alpha_2) = (f \rightharpoonup \alpha_1) \rightharpoonup \alpha_2$$.
• (Horizontal Source Symmetry) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A@w, \ldots,A_n@w_n,B@w''')$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w)$$ we have $$f = f \rightharpoonup (\alpha_1;\alpha_2)$$.
• (Horizontal Target Identity) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots, A_i@w_i,B@w)$$ we have $$f \rightharpoondown \mathsf{id}_w = f$$.
• (Horizontal Target Unfold) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w'')$$ we have $$f \rightharpoondown (\alpha_1;\alpha_2) = (f \rightharpoondown \alpha_1) \rightharpoondown \alpha_2$$.
• (Horizontal Target Symmetry) For any morphisms $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$, $$\alpha_1 \in M(w,w')$$, and $$\alpha_2 \in M(w',w)$$ we have $$f = f \rightharpoondown (\alpha_1;\alpha_2)$$.
• (Horizontal-Vertical Symmetry) For any morphism $$f \in \ottsym{(} \mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} ) \ottsym{)}(M;A_1@w_1, \ldots,A_n@w_n,B@w)$$ and morphisms $$\alpha_1 \in M(w_i,w'_i)$$ and $$\alpha_2 \in M(w,w')$$ we have $$(f \rightharpoonup \alpha_1) \rightharpoondown \alpha_2 = (f \rightharpoondown \alpha_2) \rightharpoonup \alpha_1$$.
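To make the data of this definition concrete, here is a small Python sketch of FC objects, morphisms, and vertical composition. All names here (`FCObj`, `Mor`, `vcomp`) are my own, and the encoding is a toy: it ignores the underlying multicategory $$\mathcal{C}$$ and treats a constraint category only as its list of non-trivial morphisms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FCObj:
    """An FC object A @ w: a lax-component object at a world."""
    obj: str
    world: str

@dataclass
class Mor:
    """A multimorphism f : M; A1@w1, ..., Am@wm -> B@w."""
    name: str
    constraints: list   # list of (w, w') pairs: the non-trivial morphisms of M
    sources: list       # list of FCObj
    target: FCObj

def vcomp(f, g):
    """Vertical composition f;g: replace the occurrence of f's target in
    g's source list by f's sources, and union the constraint lists."""
    if f.target not in g.sources:
        return None  # the composite is undefined
    i = g.sources.index(f.target)
    return Mor(f.name + ";" + g.name,
               f.constraints + [c for c in g.constraints if c not in f.constraints],
               g.sources[:i] + f.sources + g.sources[i + 1:],
               g.target)
```

For instance, composing $$f : M; A@w \to B@w'$$ with $$g : M'; B@w', C@u \to D@v$$ yields a morphism with sources $$A@w, C@u$$ and target $$D@v$$, matching the (Vertical Composition) clause.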

The categories attached to morphisms should be thought of as constraints on those morphisms, and thus the only part of the category that matters is its set of morphisms. So we will denote these categories simply as lists of morphisms, that is, $$\ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}$$, with the additional requirement that such a list forms a category, and hence the identity and composition morphisms must be present. That being said, these constraint categories could get large, because identities are needed for every object along with all the composites, so we will often list only the non-trivial morphisms and leave the identity and composition morphisms implicit. The notation $$\ottmv{M_{{\mathrm{1}}}},\ottmv{M_{{\mathrm{2}}}}$$, for categories $$\ottmv{M_{{\mathrm{1}}}}$$ and $$\ottmv{M_{{\mathrm{2}}}}$$, is a combination operation: it stands for the union of the two categories.

The category $$\ottmv{M_{{\mathrm{1}}}},\ottmv{M_{{\mathrm{2}}}}$$ is defined as follows:

• $$\mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} ) = \mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} ) \cup \mathsf{Obj}( \ottmv{M_{{\mathrm{2}}}} )$$
• For any objects $$A,B \in \mathsf{Obj}( \ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} )$$, either $$(\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}})(A,B) = \ottmv{M_{{\mathrm{1}}}}(A,B)$$ if $$A,B \in \ottmv{M_{{\mathrm{1}}}}$$, or $$(\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}})(A,B) = \ottmv{M_{{\mathrm{2}}}}(A,B)$$ if $$A,B \in \ottmv{M_{{\mathrm{2}}}}$$.

Keeping with the theme of representing the constraint categories as lists of non-trivial morphisms, we define the union — the operation defined above — of $$\ottmv{M_{{\mathrm{1}}}} = \ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}$$ and $$\ottmv{M_{{\mathrm{2}}}} = \ottmv{w'_{{\mathrm{1}}}} \leq \ottmv{w'_{{\mathrm{2}}}} , \ldots, \ottmv{w'_{\ottmv{j}}} \leq \ottmv{w'}_{j+1}$$ as $$\ottmv{M_{{\mathrm{1}}}} \ottsym{,} \ottmv{M_{{\mathrm{2}}}} = \ottmv{w_{{\mathrm{1}}}} \leq \ottmv{w_{{\mathrm{2}}}} , \ldots, \ottmv{w_{\ottmv{i}}} \leq \ottmv{w}_{i+1}, \ottmv{w'_{{\mathrm{1}}}} \leq \ottmv{w'_{{\mathrm{2}}}} , \ldots, \ottmv{w'_{\ottmv{j}}} \leq \ottmv{w'}_{j+1}$$.
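On this list-of-non-trivial-morphisms representation the combination operation is just list union; a one-line sketch (the function name is mine):

```python
def combine(m1, m2):
    """Union of two constraint lists, keeping order of first appearance."""
    return m1 + [c for c in m2 if c not in m1]
```

Note that duplicates shared by the two lists appear only once, exactly as in a set-theoretic union.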

### Some Intuition

The best intuition I can give for a FC, $$\mathcal{C} \downarrow \mathsf{Cat}( \mathcal{W} )$$, is to think of the elements of $$\mathcal{W}$$ as worlds where objects of $$\mathcal{C}$$ live. Then one should think of the morphisms of $$\mathsf{Cat}( \mathcal{W} )$$ as transitions between the worlds. Thus, the reader familiar with Kripke models should be able to see that we are considering $$\mathsf{Cat}( \mathcal{W} )$$ as imposing a Kripke-like structure over $$\mathcal{C}$$.

Now horizontal composition corresponds to monotonicity if we think of a morphism as a proof of its target assuming its list of source objects. Notice that horizontal composition only moves forward; that is because when a formula is true in a Kripke model we can only move it forward along the transition relation, and since there is no negation here, forward is the only direction available.

### Examples

Every category, $$\mathcal{C}$$, is isomorphic to a FC, $$(\mathcal{C}\downarrow\mathsf{Cat}(\mathcal{W}))$$, where $$\mathsf{Cat}(\mathcal{W})$$ is the discrete category with a single object $$w$$. The constraint universe then has only one morphism, the identity on $$w$$, and hence the only constraint on morphisms in the FC is this single identity morphism. So we can define bijective functors as follows:

• $$\mathsf{Lax}(A@w) = A$$
• $$\mathsf{Lax}(f : \mathsf{Cat}(\mathcal{W});A_1@w,\ldots,A_i@w \to B@w) = f : A_1,\ldots,A_i \to B$$
• $$\mathsf{Filtered}(A) = A@w$$
• $$\mathsf{Filtered}(f : A_1,\ldots,A_i \to B) = f : \mathsf{Cat}(\mathcal{W});A_1@w,\ldots,A_i@w \to B@w$$

### Functors

One can define functors between filtered categories:

A functor, $$H$$, between two FCs $$(\mathcal{C_1}\downarrow\mathsf{Cat}(\mathcal{W}_1))$$ and $$(\mathcal{C_2}\downarrow\mathsf{Cat}(\mathcal{W}_2))$$ is a pair of functors $$(F : \mathcal{C}_1 \to \mathcal{C}_2, G : \mathsf{Cat}(\mathcal{W}_1) \to \mathsf{Cat}(\mathcal{W}_2) )$$ defined as follows:

• (Objects) $$H(A@w) = F(A)@G(w)$$
• (Morphisms) $$H(f : M;A_1@w_1,\ldots,A_i@w_i \to B@w) =$$
$$F(f) : G(M);F(A_1)@G(w_1),\ldots,F(A_i)@G(w_i) \to F(B)@G(w)$$

Since $$F$$ and $$G$$ are functors, $$H$$ respects both vertical and horizontal composition as well as identities.

### Structures in FCs

FCs can have two different types of structures: structures inherited from the lax component, and structures global to the FC. The latter are the usual structures built from morphisms, such as products, co-products, and exponentials. The former, however, are constrained versions of the structures in the lax component. Defining these is where the constraints really come into play. Now there are lots of ways one could constrain the structures of the lax component using the constraint universe; the following are examples of the types of structures I will need when interpreting DIL/DTT. Throughout this section I will denote lists of objects of an FC by $$\Gamma$$ and $$\Delta$$.

A FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has a pointed initial object, denoted $$\perp @ w$$, if for any category $$M$$, worlds $$w,w' \in \mathsf{Obj}(M)$$, sequences of objects $$\Gamma$$ and $$\Delta$$, and object $$A@w' \in \mathsf{Obj}(\mathcal{C}\downarrow\mathcal{W})$$, there exists a unique morphism $$\triangle : M; \Gamma,\perp@w,\Delta \to A@w'$$.
A FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has a pointed final object, denoted $$\top @ w$$, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, and sequence of objects $$\Gamma$$, there exists a unique morphism $$\nabla : M; \Gamma \to \top @ w$$.
A FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has pointed products, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, two objects $$A@w$$ and $$B@w$$ of $$(\mathcal{C}\downarrow\mathcal{W})$$, and morphisms $$f : M_1; \overrightarrow{C@w'} \to A@w$$ and $$g : M_2 ; \overrightarrow{C@w'} \to B@w$$, there is an object denoted $$(A \times B)@w$$, two morphisms $$\pi_1 : M ; (A \times B)@w \to A@w$$ and $$\pi_2 : M ; (A \times B)@w \to B@w$$, and there exists a unique morphism
$$\langle f,g\rangle : M_1,M_2; \overrightarrow{C@w'} \to (A \times B)@w$$ in $$(\mathcal{C}\downarrow\mathcal{W})$$ where the following diagram commutes:

We call a FC pointed cartesian if all finite pointed products exist, and there exists a pointed final object.
A FC, $$(\mathcal{C}\downarrow\mathcal{W})$$, has pointed co-products, if for any category $$M$$, world $$w \in \mathsf{Obj}(M)$$, two objects $$A@w$$ and $$B@w$$ of $$(\mathcal{C}\downarrow\mathcal{W})$$, and morphisms $$f : M_1; A@w \to C@w'$$ and $$g : M_2 ; B@w \to C@w'$$, there is an object denoted $$(A + B)@w$$, two morphisms $$\iota_1 : M ; A@w \to (A + B)@w$$ and $$\iota_2 : M ; B@w \to (A + B)@w$$, and there exists a unique morphism
$$[f,g] : M_1,M_2; (A + B)@w \to C@w'$$ in $$(\mathcal{C}\downarrow\mathcal{W})$$ where the following diagram commutes:

We call a FC pointed bi-cartesian if all finite pointed co-products exist, and there exists a pointed initial object.

I think I will leave this post here; this is just a first look at what I am considering. In the near future I will be thinking about internal homs in FCs and generalizing to polycategories; after these are figured out, we will be able to interpret DIL and DTT in a categorical framework.

## Linear bi-Intuitionistic logic cannot be full

$$\newcommand{\interp}[1]{[\negthinspace[#1]\negthinspace]}$$
This post was written while I was attending VSL from July 12 – July 17 2014.

This week I am in Vienna — the capital of Austria — attending the Vienna Summer of Logic (VSL). It has been a blast! The feeling I get when attending a conference is pure unadulterated motivation, creativity, and excitement. I just love this atmosphere. So many great researchers giving great talks about interesting work.

So what do I do with this excitement, you might ask? I start to reflect on my own research quite deeply and ponder whether what I am doing is the best I can do, and if the answer is no, then I begin fixing it so that it is. Now, by “the best I can do” I do not mean “will get into the best conference/journal” or “will make me famous”, but rather only that it meets my personal standards; basically, I want to be able to be proud that my work is my work, and anything less is a waste. An example of this was when I attended a three-week summer school in Eugene, Oregon called the Oregon Programming Languages Summer School. That was the first time I got this feeling, and I ended up reworking, from the ground up, how proofs of normalization using hereditary substitution worked.

So what am I thinking about now that I have this feeling again? My research is focused on understanding bi-intuitionistic simple type theory with a non-trivial categorical model. The work I have done so far — in collaboration with Aaron Stump and Ryan McCleeary — can be found here and was presented at the 5th International Workshop on Classical Logic and Computation (CL&C) 2014 on Sunday, July 13.

Currently, Dualized Type Theory (DTT) has a sequent that looks like $$G;x_1 : p_1 A_1@n_1,\ldots,x_j : p_j A_j@n_j \vdash t : p B @ n$$. Here each $$x_i$$ is a variable and $$t$$ is a term, $$G$$ is a graph — defined as a list of ordered pairs — each $$n_i$$ and $$n$$ is a node in $$G$$, and each $$p_i$$ and $$p$$ is a polarity — either $$+$$ or $$-$$. So what is the deal with the graph and node labels on types?
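To pin down the data involved, here is a small Python sketch of the shape of such a sequent. The class and field names are my own and purely illustrative; the `well_scoped` check is just one sanity condition the labeling suggests, not a rule of DTT.

```python
from dataclasses import dataclass

POS, NEG = "+", "-"   # the two polarities

@dataclass
class Hyp:
    """x : p A @ n — a variable with polarity p and type A, living at node n."""
    var: str
    pol: str
    ty: str
    node: int

@dataclass
class Sequent:
    """G; x1 : p1 A1@n1, ..., xj : pj Aj@nj |- t : p B @ n"""
    graph: list   # the graph G, as a list of ordered pairs (edges)
    ctx: list     # list of Hyp
    term: str     # the term t
    pol: str      # the polarity p of the conclusion
    ty: str       # the type B
    node: int     # the node n

    def well_scoped(self):
        """Every node label must actually occur in the graph G."""
        nodes = {n for edge in self.graph for n in edge}
        return self.node in nodes and all(h.node in nodes for h in self.ctx)
```

For example, a sequent over the one-edge graph $$[(0,1)]$$ with a hypothesis at node 0 and conclusion at node 1 is well scoped, while one mentioning node 2 is not.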

Tristan Crolard was the first to study bi-intuitionistic logic and type theory in the context of computer science. He proposed a multi-succedent sequent calculus called Subtractive Logic, but this sequent calculus has a defect: it requires the Dragalin restriction, which demands that the premise of the right-implication inference rule have only one formula on the right side of the sequent. Just as linear logicians observed, this breaks cut-elimination! Luis Pinto and Tarmo Uustalu proposed a means of fixing this issue by starting with LK plus subtraction, and then adding a labeling system using graphs that semantically correspond to constraints on the class of Kripke models. They called this system labeled BINT. DTT can be seen as a simple type theory for a simplified version of labeled BINT called Dualized Intuitionistic Logic (DIL).

At this point I should quickly explain — intuitively — what BINT is, but I am not going to go into too much detail; see the references I have been linking to. My slogan is “BINT is intuitionistic logic with perfect duality.” Now “perfect” may be too strong a word, but it gets my point across. BINT is a logic where, for every logical connective of the logic, its dual is also a logical connective of the logic. That is, BINT consists of true and false, conjunction and disjunction, and implication and its dual, called subtraction or exclusion. Cecylia Rauszer was the first person to study BINT; she proposed an extended version of the Kripke semantics for intuitionistic logic.

Let's consider the extended Kripke semantics proposed by Rauszer; it may help with understanding subtraction. The interpretation of implication in a Kripke model is defined as follows:

$$\interp{A \to B}_w = \forall w'.wRw' \Rightarrow \interp{A}_{w'} \Rightarrow \interp{B}_{w'}$$

This states that $$A \to B$$ holds in an arbitrary Kripke model at world $$w$$ if for any future world $$w'$$ reachable from $$w$$, whenever $$A$$ holds at $$w'$$, $$B$$ also holds at $$w'$$. The key observation to make regarding this interpretation is that it is modal: it talks about all future worlds. This modal aspect is what underlies the connection between intuitionistic logic and the modal logic S4. The interpretation of subtraction in Kripke semantics is then obtained by simply taking the dual of the above definition:

$$\interp{A - B}_w = \exists w'.w'Rw \land \interp{A}_{w'} \land \lnot\interp{B}_{w'}$$

This shows that we move from talking about all future worlds to talking about the existence of a past world!
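These two clauses are easy to test on a finite model. Below is a small, purely illustrative Python checker; the encoding of formulas as tagged tuples and the example model are my own, not from the papers being discussed.

```python
def holds(R, val, w, form):
    """Check a formula at world w.
    R   : the accessibility relation, a set of (w, w') pairs
    val : dict mapping each world to the set of atoms true there
    form: ("atom", p) | ("imp", a, b) | ("sub", a, b)"""
    tag = form[0]
    if tag == "atom":
        return form[1] in val[w]
    if tag == "imp":    # forall w'. w R w'  =>  [[A]]w' => [[B]]w'
        _, a, b = form
        return all(not holds(R, val, w2, a) or holds(R, val, w2, b)
                   for (w1, w2) in R if w1 == w)
    if tag == "sub":    # exists w'. w' R w  and  [[A]]w' and not [[B]]w'
        _, a, b = form
        return any(holds(R, val, w1, a) and not holds(R, val, w1, b)
                   for (w1, w2) in R if w2 == w)
    raise ValueError(tag)
```

In a two-world model with $$w_0 R w_1$$ (and $$R$$ reflexive) where $$p$$ holds only at $$w_1$$, the checker confirms that $$p \to q$$ fails at $$w_0$$ because of the future world $$w_1$$, while $$p - q$$ holds at $$w_1$$ but fails at $$w_0$$, which has no suitable past world.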

It is well-known that the simply-typed $$\lambda$$-calculus can be interpreted in cartesian closed categories (CCCs). Tristan Crolard showed that if one extends a CCC with co-products (which correspond to disjunction) and co-exponentials (the left adjoint of the co-product, which corresponds to subtraction), then one obtains what he named a bi-[CCC]. But even worse, every bi-[CCC] is equivalent to a preorder; that is, there is at most one morphism between any two objects of the bi-[CCC]. Therefore, this does not provide a very good proof theory for BINT!

So I pose the question: does there actually exist a non-trivial proof theory for BINT? The answer may very well be no, but I have some ideas to explore before making that bold claim. That's the topic of my next post. If the answer is indeed no, then that implies that we must move away from trying to come up with a satisfactory proof theory of bi-intuitionistic logic in the sense of a bi-[CCC]. Instead we must insist that bi-intuitionistic logic be split into two fragments: a positive fragment (or, as Paul-André Melliès calls it, the prover) and a negative fragment (the denier). We can also consider the positive fragment to be intuitionistic logic and the negative fragment co-intuitionistic logic — actually this will be slightly different, because we cannot have an isomorphism between both sides. However, before going to that extreme there are a few ideas I want to explore first.

The first idea I explored was moving to linear logic and combining the idea of Crolard's bi-[CCC]s with Nick Benton's mixed linear/non-linear models of linear logic. I thought that by doing this we could obtain a non-trivial categorical model of bi-intuitionistic linear logic (linear BINT), and, by way of Girard's translation, a categorical model of BINT. However, this does not fix the problem: the embedding of BINT into linear BINT is degenerate by the same argument as Crolard's. Let's consider a diagram that may help situate our minds on this idea:

The diagram shows that we can view the model of linear BINT as the union of a symmetric monoidal closed category and its opposite. Then using Girard’s embedding we can obtain an adjunction between a bi-[CCC] (full BINT) and the previous union.

To use Crolard's argument, all we have to show is that, by way of Girard's translation, the co-Kleisli category is cartesian closed and the Kleisli category is co-cartesian co-closed. The former follows from Seely's theorem as presented in Gavin Bierman's thesis (Proposition 17 on p. 156), and the latter follows by duality. We briefly show how to obtain the fact that intuitionistic implication is modeled by the right adjoint to the product functor:

The co-Kleisli category, $$\mathcal{C}_!$$, is cartesian closed.
We have the following chain of isomorphisms:

$$\begin{array}{llll} \mathcal{C}_!(A \times B, C) & \cong & \mathcal{C}(!(A \times B), C) & (\text{Def.})\\ & \cong & \mathcal{C}(!A \otimes !B, C) & (\text{Seely's iso.})\\ & \cong & \mathcal{C}(!A, !B \multimap C) & (\text{Closure})\\ & \cong & \mathcal{C}_!(A, !B \multimap C) & (\text{Def.})\\ \end{array}$$

Note that the symbol $$\multimap$$ is linear implication.
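The chain of isomorphisms lives in category theory, but the co-Kleisli construction itself is easy to sketch concretely. Here is a minimal Python analogy using the environment comonad $$(e, -)$$ in place of the $$!$$ exponential (my own illustrative choice, not a model of linear logic): a co-Kleisli arrow is an ordinary function out of a pair $$(e, a)$$, and composition goes through `extend`.

```python
def extract(wa):
    """Counit: W a -> a, forgetting the environment."""
    (e, a) = wa
    return a

def extend(f, wa):
    """Lift a co-Kleisli arrow f : W a -> b to W a -> W b."""
    (e, a) = wa
    return (e, f((e, a)))

def cokleisli(g, f):
    """Composition in the co-Kleisli category: (g after f) = g . extend f."""
    return lambda wa: g(extend(f, wa))
```

The comonad laws show up as expected: composing with `extract` is the identity, so `cokleisli(extract, f)` behaves exactly like `f`.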

Now by duality we have the following result.

The Kleisli category, $$\mathcal{C}_?$$, is co-cartesian co-closed.

Therefore, any categorical model of linear BINT with linear exponentials is degenerate just as Crolard established.

We could either remove the $$!$$-exponential (of-course) or the $$?$$-exponential (why-not) to obtain a model that looks like one of these:

Both of these recover the fact that we can embed either intuitionistic logic (the first diagram) or co-intuitionistic logic (the second diagram) into linear logic. However, both models are more restrictive than our initial idea, but this is to be expected, because something has to give or we would end up with a degenerate model after all.

We could also take the union of a CCC and a SMCC, which would mean that the majority of the logic is non-linear while subtraction is kept linear. This is less appealing, because it is not as elegant as the previous two linear-logic formulations, and it is no more expressive than they are, especially the first of the two.

All of the approaches I have discussed so far have tried to keep the model — and also the logic — as collapsed as possible. I mentioned above that we could also separate the positive fragment from the negative fragment, and then relate them using a weak equivalence — an adjunction. Paul-André Melliès calls this a chirality. The term comes from chemistry and means something to the effect of “mirror symmetry.”

This is all I want to say in this post, so I now conclude with some final remarks. My reflecting, I think, has paid off, as it usually does, and I believe I now have a novel idea for two different, but related, non-trivial categorical models of DIL and DTT. These models will indeed be collapsed, but constrained by an external structure. I will leave the reader with a hint about my latest idea: DIL and DTT may have a categorical model where the external structure is a coalgebra. Exciting, I know!

## On to the next big adventure!

Over the course of the last five years I have had the honor of being one of Aaron Stump’s Ph.D. students. I am happy to report that I have successfully defended my dissertation, and have accepted a new academic position.

My dissertation is titled “The Semantic Analysis of Advanced Programming Languages” and contains most of my published and unpublished research over the course of my graduate career. Here is the abstract:

We live in a time where computing devices power essential
systems of our society: our automobiles, our airplanes and even our
medical services. In these safety-critical systems, bugs do not
just cost money to fix; they have a potential to cause harm, even
death. Therefore, software correctness is of paramount importance.

Existing mainstream programming languages do not support
software verification as part of their design, but rely on testing,
and thus cannot completely rule out the possibility of bugs during
software development. To fix this problem we must reshape the very
foundation on which programming languages are based. Programming
languages must support the ability to verify the correctness of the
software developed in them, and this software verification must be
possible using the same language the software is developed in. In
the first half of this dissertation we introduce three new
programming languages: Freedom of Speech, Separation of Proof from
Program, and Dualized Type Theory. The Freedom of Speech language
separates a logical fragment from a general recursive programming
language, while still allowing the types of the logical fragment
to depend on general recursive programs and maintaining logical
consistency. Thus, we obtain the ability to verify properties of
general recursive programs. Separation of Proof from Program builds
on the Freedom of Speech language by relieving several restrictions,
and adding a number of extensions. Finally, Dualized Type Theory is
a terminating functional programming language rich in constructive
duality, and shows promise of being a logical foundation of
induction and coinduction.

These languages have the ability to verify properties of
software, but how can we trust this verification? To be able to put
our trust in these languages requires that the language be
rigorously and mathematically defined so that the programming
language itself can be studied as a mathematical object. Then we
must show one very important property, logical consistency of the
fragment of the programming language used to verify mathematical
properties of the software. In the second half of this dissertation
we introduce a well-known proof technique for showing logical
consistency called hereditary substitution. Hereditary substitution
shows promise of being less complex than existing proof techniques
like the Tait-Girard Reducibility method. However, we are unsure
which programming languages can be proved terminating using
hereditary substitution. Our contribution to this line of work is
the application of the hereditary substitution technique to
predicative polymorphic programming languages, and the first proof
of termination using hereditary substitution for a classical type
theory.

You can find my dissertation on my webpage here. The physical beast:

My defense was a lot of fun. My wife came, which was the first time she has seen me give a talk. In addition, some past professors from my alma mater — Millikin University — virtually attended using Google Hangouts. The title slide from my defense:

All in all I am extremely excited to have these:

I am now off on the next big adventure as an Assistant Professor of Computer Science in the Computer and Information Sciences Department at Georgia Regents University in Augusta, GA.

We packed all of our stuff into these big pods:

They are operated by ABF Freight and I highly recommend them. They give you the best part of hiring movers, but at half the price. ABF delivers the pods to your house, and you pack them up; when you are ready for them to be shipped to your destination, you call ABF to come and get them. Now, these pods are shipped like a package, so they take 5 – 7 days to reach their destination, and you have to plan accordingly. They arrived in GA a day later than we had anticipated, so we had to wait around a bit. Using ABF to ship our stuff was a lot easier than having to drive a huge moving truck from IA to GA.

We are tired of living in an apartment so we decided to get a rental house:

We have been here for about a week and a half, and we absolutely love it. The neighborhood is very quiet, and it's only about 12 miles from campus.

I officially start work on August 11, and I am going to be teaching two courses: Programming Languages and Theory of Computation. The former is going to be divided into two halves: a theory half and a practical half. The theory half will explore basing functional programming languages on typed $$\lambda$$-calculi and using abstract mathematics to become a better programmer, while the practical half will primarily be about learning to implement prototype PLs in Haskell. The course project is an implementation of Girard/Reynolds' System F, which is at the core of Haskell. Theory of Computation is going to be a lot of fun as well. The main project there is going to be a Turing machine simulator that will then be used as a compilation target for a prototype imperative PL. Very exciting indeed!

That concludes this update. I am very excited about the future and being able to teach and continue doing research. It really is a dream come true.


## Diagram Schemes Need Not Be Posets

My future research will — as much as possible — be at least partially formalized in a proof assistant like Coq or Agda. I am a big Agda fan, so naturally, I will stick with it, but Coq definitely has some advantages and may be used for some projects.

I am graduating this summer and so I have been pondering what direction I want to go in my research, and I have decided that I want to try and contribute to areas where category theory plays a major role. So areas like categorical logic, and categorical semantics of programming languages. However, I am also interested in new areas of exploration using category theory. For example, Dusko Pavlovic’s initiative to study security using category theory seems very promising.

So to help with my future research I have been developing my own library for doing categorical logic in Agda. I call it the Lawvere categorical logic library, or law for short. It can be found here: https://github.com/heades/law. Law is based on total setoids, because they allow for the definition of quotients and — more importantly for my upcoming work — subsets in type theory. See the paper “Setoids in Type Theory” by Barthe et al. for more about that. Now I am well aware that there are a number of category theory libraries out there in the cloud, but rolling my own allowed me to learn a lot and concentrate on what I am interested in. I am only going to formalize the category theory I need, and I wanted to learn how to use setoids. So this has many benefits for me.

I have a bunch of definitions implemented so far. See the README for a list. A week or so ago I wanted to formalize something small using law that would stress test some of my definitions. So I decided to formalize the notion of a diagram as a functor from an index category — the scheme — to some other category.

The properties of the index category determine the type of diagram one obtains. According to Wikipedia, a commutative diagram is a diagram whose scheme is a poset. Now a poset is a preorder with one additional property. A preorder (PO) is a category where for any two objects $$A$$ and $$B$$ there is at most one morphism from $$A$$ to $$B$$. This tells us that if a morphism exists then it is unique. A poset has the additional property that if there are two morphisms $$f \in Hom(A,B)$$ and $$g \in Hom(B,A)$$, then $$A = B$$. Certainly, one can see that commutativity of a diagram is implied by the PO property, because if we have two or more compositions between two objects $$A$$ and $$B$$, then those compositions must be equal.

I claim that posets are too strong in general for commutative diagrams. All we really need is a functor from a PO to some other category. To convince the reader I will now show that any commutative square in a category $$\mathcal{C}$$ can be modeled by a functor from a PO. All of this post has been formalized in Agda, and I will do my best to point to the respective files throughout the remainder of this post.

First, we define a PO with exactly four objects and five non-identity morphisms. In fact, we can depict this category as a graph:

I left identity arrows out of the previous graph, but we do have them, and we name them $$id_n : i_n \to i_n$$ for $$n \in \{1,2,3,4\}$$. I call this category $$4\text{PO}$$. Its definition in Agda can be found in the module of the same name here. It is easy to see that the above graph is a commutative square, because there is only one morphism from $$i_1$$ to $$i_4$$, namely $$f_5$$. Hence, $$f_1 ; f_3 = f_2 ; f_4 = f_5$$ — I will use diagrammatic composition throughout this post, e.g. given $$m_1 : A \to B$$ and $$m_2 : B \to C$$, then $$m_1;m_2 : A \to C$$. The proof that $$4\text{PO}$$ is in fact a PO is trivial; we can see that it is from the graph above.
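As a quick sanity check, the composition structure of $$4\text{PO}$$ can be sketched in Python (the encoding and names below are mine, not from the Agda development): since there is at most one morphism from $$i_1$$ to $$i_4$$, both composites around the square are forced to be $$f_5$$.

```python
# Morphisms of the 4PO index category: identities plus f1..f5 from the graph.
# Diagrammatic composition m1 ; m2 on the composable pairs we need here;
# the preorder property forces both paths around the square to equal f5.
COMP = {
    ("f1", "f3"): "f5",   # i1 -> i2 -> i4
    ("f2", "f4"): "f5",   # i1 -> i3 -> i4
    ("id1", "f5"): "f5",  # identities are neutral
    ("f5", "id4"): "f5",
}

def comp(m1, m2):
    """Diagrammatic composition; returns None on non-composable pairs."""
    return COMP.get((m1, m2))

# Both composites agree, so the square commutes in 4PO.
square_commutes = comp("f1", "f3") == comp("f2", "f4") == "f5"
```

This is only a fragment of the full composition table, but it captures the point: uniqueness of morphisms makes commutativity automatic.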

Now suppose we have a functor $$\mathsf{SQ} : 4\text{PO} \to \mathcal{C}$$ for some category $$\mathcal{C}$$, such that $$\mathsf{SQ}(i_1) = A$$, $$\mathsf{SQ}(i_2) = B$$, $$\mathsf{SQ}(i_3) = D$$, and $$\mathsf{SQ}(i_4) = C$$. Furthermore, suppose $$\mathsf{SQ}(f_1) = g : A \to B$$, $$\mathsf{SQ}(f_2) = h : A \to D$$, $$\mathsf{SQ}(f_3) = j : B \to C$$, and $$\mathsf{SQ}(f_4) = k : D \to C$$. This tells us that we have the following picture in $$\mathcal{C}$$:

Now we can show that the existence of the $$\mathsf{SQ}$$ functor implies that the previous square commutes. The lemma in Agda is called Comm-Square-Commutes and can be found here.

First, notice that we know $$f_1;f_3 = f_2;f_4$$, by the PO property of $$4\text{PO}$$. Thus, we know $$\mathsf{SQ}(f_1;f_3) = \mathsf{SQ}(f_2;f_4)$$. Now consider the following:

$$\begin{array}{l} g;j \\ \quad = \mathsf{SQ}(f_1);\mathsf{SQ}(f_3) \\ \quad = \mathsf{SQ}(f_1;f_3) \end{array}$$
and

$$\begin{array}{l} \mathsf{SQ}(f_2;f_4) \\ \quad = \mathsf{SQ}(f_2);\mathsf{SQ}(f_4) \\ \quad = h ; k \end{array}$$
Therefore, $$g;j = h;k$$. This result shows that in general posets are too strong and that POs are enough. In fact, we could use even weaker categories than POs as the index category. This is one of the benefits of working with diagrams defined as functors from index categories.

If anyone has any comments about my library or about diagrams as functors I would love to chat about it. Hopefully, I will have my post on LNL models done for next time.


## What have I been thinking about?

So Beeminder is about to go off on me if I do not post something soon. Since I am graduating this semester, I have been very busy writing my thesis. So blog posts have been put on the back burner for a bit, but I want to stay active anyway. I decided to post a snapshot of what I have been thinking about research-wise.

The snapshot is in the form of a diagram. This diagram illustrates all of the topics of my ongoing research program on developing a core logic where induction and coinduction can live harmoniously, and how they fit together. A major property of this core logic is that it must have non-trivial categorical models. This cannot currently be said of bi-intuitionistic logic.

Finally, here is what I am thinking about in a single diagram:

I am currently drafting a post explaining — in most likely too much detail — LNL models. So stay tuned for that.

## The Category of Deterministic Automata

One thing I have recently decided is that category theory provides a unique perspective on computation. Thus, it can be used throughout computer science. Many people in my field — theory of programming languages — already know this, or at least know that category theory can be used in the study and implementation of programming languages, but I claim more! I also claim that not every computer scientist knows that category theory can provide this alternate perspective. In a number of posts I am going to share some discoveries I have found in the literature that support this claim. I am not sure how many posts there will be, but I plan to write a bunch. I really want to share some little nuggets of fun that I have been finding. In this post I want to define the category of deterministic automata. I learned of this from the wonderful reports titled “A Junction between Computer Science and Category Theory” by Goguen, Thatcher, Wagner, and Wright.

## Automaton

Throughout this post I often use the word “automaton” to mean “deterministic automaton.” Given a set of inputs $$X$$, a set of states $$S$$, and a function $$\delta : X \times S \to S$$, called the transition function, we can define the deterministic automaton $$\mathcal{A} = \langle X,S,\delta \rangle$$. The transition function $$\delta$$ tells us how to move from one state to the next. We call $$\mathcal{A}$$ a finite automaton if and only if $$X$$ and $$S$$ are finite sets.

Let's consider a quick example. Suppose we have the following:

• $$X = \{ A,B\}$$
• $$S = \{ S_0,S_1,S_2\}$$
• $$\delta(A, S_0) = S_1$$
• $$\delta(A, S_1) = S_1$$
• $$\delta(B, S_1) = S_2$$
• $$\delta(A, S_2) = S_1$$
• $$\delta(B, S_2) = S_2$$
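This data can be written down directly in Python as a sketch. Note that the bullet list above leaves $$\delta(B, S_0)$$ unspecified; purely as my own assumption, the code keeps that case at $$S_0$$ so that $$\delta$$ is total.

```python
# The running example automaton: inputs X, states S, and transition function.
X = {"A", "B"}
S = {"S0", "S1", "S2"}

# delta : X x S -> S from the bullet list above.
DELTA = {
    ("A", "S0"): "S1",
    ("A", "S1"): "S1",
    ("B", "S1"): "S2",
    ("A", "S2"): "S1",
    ("B", "S2"): "S2",
    ("B", "S0"): "S0",  # unspecified in the post; assumed here so delta is total
}

def delta(x, s):
    """The transition function as a Python function."""
    return DELTA[(x, s)]
```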

There is an equivalent graphical formulation of an automaton. The graphical formulation of the previous definition is as follows:

We can see that the graphical formulation is a graph whose nodes are the states of the automaton, and whose edges, labeled by the input symbols, make up the definition of the transition function. The graphical formulation makes it really easy to determine reachability between states. For example, the state $$S_2$$ is reachable from the state $$S_0$$ using the input word $$AB$$ by the following reasoning:

• $$\delta(B, \delta(A, S_0)) = \delta(B, S_1) = S_2$$

We can confirm this reasoning very easily using the graphical formulation: simply follow the edge labeled $$A$$ to the state $$S_1$$, and then follow the edge labeled $$B$$ to the state $$S_2$$. Instead of transitioning one symbol at a time, is it possible to extend $$\delta$$ so that we can transition directly using groupings of symbols — usually called words?

It turns out that we can, and I know of two different ways this can be done. Perhaps there are more, but I want to give an extension of $$\delta$$ that is somewhat different. First we must define the monoid of all functions between states, that is, functions from $$S$$ to $$S$$. Suppose we have two functions $$f : S \to S$$ and $$g : S \to S$$; then we can define a new function $$f ; g : S \to S$$ by $$(f ; g)(s) = g(f(s))$$ for $$s \in S$$. We also have the identity function $$id : S \to S$$ defined by $$id(s) = s$$, and we can see that $$f ; id = id ; f = f$$ for any function $$f : S \to S$$. This data defines the monoid $$[S,S]$$ of all functions between states; I leave checking the axioms to the reader.
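This monoid is easy to sketch in Python (function names are mine): diagrammatic composition is just "first f, then g", and the identity function is the unit.

```python
# The monoid [S,S] of endofunctions on states, with diagrammatic
# composition f ; g and the identity function as the unit.

def seq(f, g):
    """Diagrammatic composition: (f ; g)(s) = g(f(s))."""
    return lambda s: g(f(s))

def identity(s):
    """The unit of the monoid."""
    return s
```

For example, `seq(lambda s: s + 1, lambda s: s * 2)(3)` first adds one and then doubles, giving `8`, and composing with `identity` on either side leaves a function unchanged pointwise.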

Suppose $$A \in X$$. Then notice that we can define a function $$\delta_A : S \to S$$ by $$\delta_A(s) = \delta(A,s)$$ for any $$s \in S$$. Thus, $$\delta_A \in [S,S]$$. Taking this one step further, we can define the function $$\delta_X : X \to [S,S]$$ by $$\delta_X(A) = \delta_A$$. Based on the definition of an automaton we know that $$X$$ is a set of symbols, but we can think of it as an alphabet, and use this alphabet to generate the free monoid $$X^*$$ of all strings formed by juxtaposition of the elements of $$X$$, where the empty string is the identity of the monoid. Since we can take a set of symbols and generate a free monoid, we should be able to take a function from the set of symbols to a monoid and extend it into a monoid homomorphism from the free monoid generated by the domain to the target monoid. This means we can take the function $$\delta_X : X \to [S,S]$$ and extend it to a function $$\delta^*_X : X^* \to [S,S]$$, where

• $$\delta^*_X(A_1 \cdots A_n) = \delta^*_X(A_1);\cdots ;\delta^*_X(A_n) = \delta_X(A_1);\cdots ;\delta_X(A_n) = \delta_{A_1};\cdots ;\delta_{A_n}$$

The previous equation is the defining characteristic of a monoid homomorphism. The existence and uniqueness of this extension is the universal property of $$X^*$$; we leave the details of the uniqueness proof to the reader. See Proposition 1.6 of the paper I cited above. Finally, we can define the extension of $$\delta$$, called $$\delta^+$$, by $$\delta^+(w,s) = (\delta^*_X(w))(s)$$.
Recalling our example from above, we know

• $$\delta(B, \delta(A, S_0)) = \delta(B, S_1) = S_2$$

Now using the extension we can see that

• $$\begin{array}{ll} \delta^+(AB,S_0) & = (\delta^*_X(AB))(S_0) \\ & = (\delta^*_X(A);\delta^*_X(B))(S_0) \\ & = (\delta_A;\delta_B)(S_0)\\ & = \delta_B(\delta_A(S_0))\\ & = \delta(B, \delta(A, S_0)) \\ & = S_2 \end{array}$$
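The extension $$\delta^+$$ can be sketched as a left fold of $$\delta$$ over the word. The sketch below restates the running example (with the case $$\delta(B,S_0)$$, which the post leaves unspecified, filled in as my own assumption):

```python
from functools import reduce

# Running example transition function.
DELTA = {
    ("A", "S0"): "S1", ("A", "S1"): "S1", ("B", "S1"): "S2",
    ("A", "S2"): "S1", ("B", "S2"): "S2",
    ("B", "S0"): "S0",  # unspecified in the post; assumed to keep delta total
}

def delta(x, s):
    return DELTA[(x, s)]

def delta_plus(word, s):
    """delta^+ : fold delta over the word from left to right,
    mirroring the monoid homomorphism delta*_X out of X^*."""
    return reduce(lambda st, x: delta(x, st), word, s)
```

For example, `delta_plus("AB", "S0")` follows the edge for $$A$$ and then the edge for $$B$$, landing in $$S_2$$ as computed above, and `delta_plus("", s)` is the identity on states, matching the empty word being the unit of $$X^*$$.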

## Automaton Homomorphisms

In this section I will define what I call automaton homomorphisms. This is nonstandard terminology, but I think it fits best. These are simple little mappings from one automaton to another satisfying an additional invariant.

Suppose we have automata $$\mathcal{A} = \langle X_{\mathcal{A}},S_{\mathcal{A}}, \delta_{\mathcal{A}}\rangle$$ and $$\mathcal{B} = \langle X_{\mathcal{B}},S_{\mathcal{B}}, \delta_{\mathcal{B}} \rangle$$. How would we map $$\mathcal{A}$$ into $$\mathcal{B}$$? First, we would have to map $$X_{\mathcal{A}}$$ into $$X_{\mathcal{B}}$$ and $$S_{\mathcal{A}}$$ into $$S_{\mathcal{B}}$$. Then, using these mappings, we would have to map $$\delta_{\mathcal{A}}$$ into $$\delta_{\mathcal{B}}$$. But we have to be careful, because this final mapping should not break transitions when moving from $$\mathcal{A}$$ to $$\mathcal{B}$$: if we can transition from one state to another in $$\mathcal{A}$$, then the states these two map to in $$\mathcal{B}$$ should also have a transition between them.

This intuition guides us to the definition of an automaton homomorphism between $$\mathcal{A}$$ and $$\mathcal{B}$$. These are pairs $$(\phi, \psi)$$ where $$\phi : X_{\mathcal{A}} \to X_{\mathcal{B}}$$ and $$\psi : S_{\mathcal{A}} \to S_{\mathcal{B}}$$. Furthermore, the following diagram must commute:

The definition of $$\phi \times \psi$$ is rather simple, $$(\phi \times \psi)(A,s) = (\phi(A),\psi(s))$$. The previous diagram simply guarantees our invariant holds.
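Concretely, the commuting diagram says $$\psi(\delta_{\mathcal{A}}(x,s)) = \delta_{\mathcal{B}}(\phi(x),\psi(s))$$ for every input and state. Here is a toy check in Python on a pair of two-state "toggle" automata; both automata and the maps are my own example, not from the post:

```python
# Automaton A: a single input T toggles between states a0 and a1.
def delta_A(x, s):
    assert x == "T"
    return "a1" if s == "a0" else "a0"

# Automaton B: the same shape with renamed input and states.
def delta_B(x, s):
    assert x == "U"
    return "b1" if s == "b0" else "b0"

# The homomorphism (phi, psi) as finite maps.
phi = {"T": "U"}
psi = {"a0": "b0", "a1": "b1"}

# The invariant: psi(delta_A(x, s)) == delta_B(phi(x), psi(s))
# for every input and state, i.e. the square commutes.
invariant = all(
    psi[delta_A(x, s)] == delta_B(phi[x], psi[s])
    for x in ["T"] for s in ["a0", "a1"]
)
```

Had `psi` swapped the states while `phi` stayed fixed, the check would fail, which is exactly the "broken transition" the invariant rules out.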

## Automata are Categories

Now I would like to show that automata are in fact categories. This is similar to how we can think of sets as categories. Suppose we have an automaton $$\mathcal{A} = \langle X,S,\delta\rangle$$. Then we define the category $$\mathsf{Tr}_{\mathcal{A}}$$ as follows:

• Objects: $$|\mathsf{Tr}_{\mathcal{A}}|_0 = S$$
• Morphisms: $$|\mathsf{Tr}_{\mathcal{A}}|_1 = \{w : S_1 \to S_2 \mid w \in X^* \text{ and } \delta^+(w,S_1) = S_2 \}$$

Now we have to check the axioms of a category. Clearly, the identity is $$\lambda : S_0 \to S_0$$ for any state $$S_0 \in S$$ where $$\lambda \in X^*$$ is the empty string. Suppose $$w_1 : S_1 \to S_2 \in |\mathsf{Tr}_{\mathcal{A}}|_1$$ and $$w_2 : S_2 \to S_3 \in |\mathsf{Tr}_{\mathcal{A}}|_1$$. Then we know $$w_{1}w_{2} \in X^*$$ and $$\delta^+(w_{1}w_{2},S_1) = S_3$$. Therefore, $$w_{1}w_{2} : S_1 \to S_3 \in |\mathsf{Tr}_{\mathcal{A}}|_1$$. So we can define composition as $$w_1 ; w_2 = w_{1}w_{2} : S_1 \to S_3$$. The remainder of the axioms are easily verified.

## The Category of Deterministic Automata

Finally, we can take everything we have defined so far and put it all together to form the category of deterministic automata, $$\mathsf{DAut}$$. The objects are all deterministic automata. The morphisms are all automaton homomorphisms between deterministic automata. Certainly there is an identity: for any automaton $$\mathcal{A}$$ there is an identity homomorphism from $$\mathcal{A}$$ to $$\mathcal{A}$$ defined by $$(\mathsf{id}_X,\mathsf{id}_S)(B,S_0) = (B,S_0)$$, and the invariant holds:

Now we must define composition. Suppose $$(\phi_1,\psi_1) : \mathcal{A} \to \mathcal{B}$$ and $$(\phi_2,\psi_2) : \mathcal{B} \to \mathcal{C}$$ are automaton homomorphisms. Then their composition is defined as $$(\phi_1,\psi_1) ; (\phi_2,\psi_2) = (\phi_1;\phi_2,\psi_1;\psi_2)$$. We must check that this definition of composition is an automaton homomorphism; hence we must verify the invariant. We know by assumption that the invariant holds for $$(\phi_1,\psi_1)$$ and $$(\phi_2,\psi_2)$$:

Using these diagrams we can show that the following diagram commutes:

The following equational reasoning shows the previous diagram commutes:
$$\begin{array}{lllll} \delta_{\mathcal{A}};\psi_1;\psi_2 & = (\delta_{\mathcal{A}};\psi_1);\psi_2 & \text{(associativity)}\\ & = ((\phi_1 \times \psi_1);\delta_{\mathcal{B}});\psi_2 & \text{(top diagram)}\\ & = (\phi_1 \times \psi_1);(\delta_{\mathcal{B}};\psi_2) & \text{(associativity)}\\ & = (\phi_1 \times \psi_1);((\phi_2 \times \psi_2);\delta_{\mathcal{C}}) & \text{(bottom diagram)}\\ & = (\phi_1 \times \psi_1);(\phi_2 \times \psi_2);\delta_{\mathcal{C}} & \text{(associativity)}\\ \end{array}$$
Thus, composition as we have defined it is indeed an automaton homomorphism. I leave checking the remainder of the axioms of a category as an exercise.

This post talks specifically about deterministic automata, but we can form the category of nondeterministic automata as well. In fact, the definition is exactly the same. Then using these we can form the category $$\mathsf{Aut}$$ of all automata.

## Fun Facts

Here I list some fun facts that can be deduced from the categories we have looked at in this post.

• To check that a word $$w$$ is accepted by an automaton $$\mathcal{A}$$, one only needs to check that there exist two states $$S_0$$ and $$S_1$$ such that $$w \in \mathsf{Tr}_{\mathcal{A}}(S_0,S_1)$$.
• An automaton $$\mathcal{A}$$ is strongly connected if and only if $$\mathsf{Tr}_{\mathcal{A}}(S_0,S_1) \not= \emptyset$$ for all states $$S_0,S_1 \in S$$.
• Every state of an automaton $$\mathcal{A}$$ is reachable from the state $$S_0$$ if and only if $$\mathsf{Tr}_{\mathcal{A}}(S_0,S_1) \not= \emptyset$$ for all states $$S_1 \in S$$.
• The sets $$\mathsf{Tr}_{\mathcal{A}}(S_0,S_1)$$ are the sets of words definable by the automaton $$\mathcal{A}$$. Furthermore, these correspond to subsets — including their finite unions — of $$X^*$$.

## Codebreaker Screening at UIowa!

The year 2012 was the year of Alan Turing. It took place mainly in Europe, but there were a lot of great events all over the world. Check out:

http://www.mathcomp.leeds.ac.uk/turing2012/

for a ton of cool stuff that went on all over the world. One project that I was really excited about is the production of a new film about Alan Turing’s life called Codebreaker. The website for the film is here http://www.turingfilm.com/.

Once the film was produced, the executive producer Patrick Sammon started doing special screenings at various universities around the world. Recently, the film was finally released on DVD. I have always planned on buying a copy of the film, so I decided to pitch the idea of showing the film as a movie night in our department (Computer Science at the University of Iowa). I pitched the idea to Sheryl Semler, who is the academic services coordinator for our department. She thought it was a great idea, but I casually mentioned the fact that Patrick does screenings at universities, and then she thought that we should bring the idea of having an actual screening to our department chair Alberto Segre.

Alberto thought this was a great idea! Every year we have a student-run ACM conference on technology and computer science. He pitched the idea of making the screening the keynote for the conference. So he then asked the ACM chapter if they liked the idea, and they did. The only problem was getting the money together. We were able to get several sponsors for the film: the Department of Cinema and Comparative Studies, the Department of Gender, Women, and Sexuality Studies, and, as you might have guessed, the CS department.

I am very excited to announce that the screening is happening, and it is taking place next week. The screening will take place on Friday, February 28th, 2014 at 6:30pm. It is open to the public, and there will be snacks! The film will be introduced by the executive producer, and after the film he will take questions. I really encourage all who can come to please do! It is going to be a lot of fun. More on the screening can be found here:
https://www.cs.uiowa.edu/resources/features/2014-ui-computing-conference-228-31.

## Dualized Intuitionistic Logic: Introduction

This is the first of a few posts introducing a new line of research I am currently working on. In this post I only introduce the ideas we are considering; each post after will fill in the details. So if something seems interesting, or does not seem to make sense, just wait, it will. I have tried to write this post about three times now. We kept having to change our results so much that what I had written became obsolete and incorrect. This is one of the reasons I have decided to break the post up into multiple posts. We are finally getting into a position in which I can reveal where the research is taking us.

Over the course of a little over a year I have become increasingly interested in classical type theories. A classical type theory is a type theory which, when moving from type theory to logic, corresponds to classical logic, as stated by the three perspectives of computation (also known as the Curry-Howard correspondence, or the proofs-as-programs, propositions-as-types correspondence). I managed to get my advisor, Aaron Stump, interested. Getting Aaron interested is quite a joy. He, like many of us, is very passionate, and has great ideas. In addition we have scooped up a new graduate student, Ryan McCleeary, to work on these ideas with us. One idea I am profoundly excited about is our idea to study duality in constructive logic. To understand what I mean by this, let's consider what duality we have in classical logic.

Logic is a beautiful universe. We can use it to express truth (w.r.t. a certain semantics), and we can use it to express computation. Like I said, logic is awesome! In classical logic there is a beautiful notion of duality. For example, we can show that conjunction is dual to disjunction. That is, we can prove $$A \land B \iff \lnot ( \lnot A \lor \lnot B )$$. Here we can see that conjunction is the same as a negated disjunction whose disjuncts are both negated. We can see this duality in the inference rules for the classical sequent calculus as well:

Here the left rules are the negative rules. So conjunction on the left is actually disjunction, likewise for disjunction on the left, which is negative conjunction. So far we have only seen dualities between conjunction and disjunction, but what about implication? There is a left and right rule for implication:

Again the left rule is negative implication, but is there an operator which is dual to implication? The answer is yes. We know that in classical logic $$A \to B$$ is logically equivalent to $$\lnot A \lor B$$, and hence also to $$\lnot (A \land \lnot B)$$. This tells us that the dual of implication must be logically equivalent to $$A \land \lnot B$$. This looks very much like set difference if we interpret conjunction as intersection and negation as complement. This is the reason we call the dual of implication subtraction, written $$A - B$$; it is indeed logically equivalent to $$A \land \lnot B$$. This operator was first studied by Cecylia Rauszer. It has since been studied by a number of people, like Pierre-Louis Curien and Hugo Herbelin, Gianluigi Bellin, Tristan Crolard, Philip Wadler, Peter Selinger, and many more. We will delay the definition of its inference rules for now; we will introduce them in the next post.
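Since these equivalences are two-valued classical facts, they can be checked exhaustively by truth tables. A small Python check (writing `imp` for implication and `sub` for subtraction; this encoding is mine):

```python
from itertools import product

def imp(a, b):
    return (not a) or b      # A -> B  as  (not A) or B

def sub(a, b):
    return a and (not b)     # A - B   as  A and (not B)

# All four classical truth assignments to (A, B).
pairs = list(product([False, True], repeat=2))

# A -> B is equivalent to not (A and not B) ...
imp_equiv = all(imp(a, b) == (not (a and not b)) for a, b in pairs)

# ... and subtraction is exactly the negation of implication,
# witnessing the duality between the two operators.
sub_dual = all(sub(a, b) == (not imp(a, b)) for a, b in pairs)
```

Of course this only verifies the classical equivalences; constructively, subtraction is a genuinely new connective, which is the point of the posts to come.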

So far we have seen some examples of duality in classical logic, but what are some applications? One application has been the study of the long-standing conjecture that the call-by-value reduction strategy is dual to the call-by-name reduction strategy. This has been answered in the positive by exploiting the very dualities we have been discussing. The first to investigate this line of work was Peter Selinger, who proved the conjecture using a categorical semantics of the $$\lambda\mu$$-calculus. However, the equivalences he uses hold only up to isomorphism. Pierre-Louis Curien and Hugo Herbelin followed suit, but designed a new calculus called the $$\bar\lambda\tilde\mu$$-calculus, which adopted subtraction as the dual of implication. Philip Wadler designed yet another calculus to solve this problem, but instead of taking implication and subtraction as primitive he took conjunction, disjunction, and negation as primitive and defined implication and subtraction from them.
The problem of proving CBV dual to CBN uses classical logic, and indeed so far we have only considered classical logic. The applications of duality we are considering are in the area of constructive logic. Tristan Crolard has studied this as well, but with an emphasis on using subtraction to model constructive co-routines. We are interested in so much more.

What types of applications do we have in mind? Inhabitants of negative types (throughout the sequel we will call types like $$\lnot A$$ negative types) give rise to codata. We are finding programming with codata to be very interesting. Codata allows for the definition of infinite data types using coinduction in a language without lazy evaluation. In addition, we are finding that codata gives rise to first-class patterns! There is no need for a primitive notion of pattern matching; we get it for free! Our main application is to show that induction and coinduction can live harmoniously within the same type theory without any funny restrictions (Agda) or sacrificing type safety (Coq). In addition we think (keep in mind this is all ongoing research) that we might get some very interesting control operators by exploiting the fact that we have a primitive notion of duality.

So this all sounds great, and this is where I will leave this post. In the next post I will give several perspectives on subtraction: I will define it in Heyting algebras, give a categorical account of it, and then define it logically by defining Dualized Intuitionistic Logic (DIL). Following the definition of DIL I will give a few examples. To find out what the third post is going to be about, read the second. Stay tuned!


## Hanging with Kripke

Lately, I have been thinking a lot about Kripke models of intuitionistic propositional logic for my research. During this time I also played around a bit with the models. Here I would like to write down some of the stuff I played with.

Def. 1. A Kripke model is a tuple $$M = \langle W , \preceq , \mathcal{V} \rangle$$, where $$W$$ is a non-empty set of objects called worlds, $$\preceq$$ is a binary relation on $$W$$, and $$\mathcal{V}$$ is a function assigning to each world a set of atomic propositions. The model must satisfy the following conditions:

• $$\preceq$$ is a preorder, and
• (Hereditary Condition) For every atomic formula $$p$$ and worlds $$w$$ and $$w’$$, if $$w \preceq w’$$ and $$p \in \mathcal{V}(w)$$, then $$p \in \mathcal{V}(w’)$$.

We call the tuple $$F = \langle W, \preceq \rangle$$ a Kripke frame.
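As a tiny concrete instance of Def. 1, here is a three-world Kripke model in Python together with an exhaustive check of the hereditary condition (the worlds, order, and valuation are my own toy example):

```python
# A three-world Kripke model: worlds, a preorder, and a valuation.
worlds = [0, 1, 2]

def leq(w1, w2):
    """The preorder on worlds: the usual ordering 0 <= 1 <= 2."""
    return w1 <= w2

# The valuation V: the set of atomic propositions true at each world.
# It only grows along the order, so heredity should hold.
V = {0: {"p"}, 1: {"p", "q"}, 2: {"p", "q", "r"}}

# Hereditary condition: p in V(w) and w <= w' implies p in V(w').
hereditary = all(
    p in V[w2]
    for w1 in worlds for w2 in worlds if leq(w1, w2)
    for p in V[w1]
)
```

Dropping, say, "p" from $$\mathcal{V}(2)$$ would break heredity, since "p" holds at world 0 with $$0 \preceq 2$$.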

# Categorizing Kripke

Kripke frames are simply preordered sets. Thus, they are categories.

Lemma. 1. Suppose $$F = \langle W, \preceq \rangle$$ is a Kripke frame. Then $$F$$ is a category.
Proof. Simply take $$W$$ as the set of objects. An arrow exists between two worlds $$w$$ and $$w’$$ if and only if $$w \preceq w’$$. Then identity arrows exist by reflexivity of $$\preceq$$ and composition is defined using transitivity. It is easy to see that left and right identities behave correctly and composition is associative. $$\Box$$

We denote the category of a Kripke frame $$F$$ by $$Cat(F)$$. A Kripke model is then a Kripke frame together with an interpretation function $$\mathcal{V}$$ from the set of worlds to sets of atomic propositions. We can interpret a model as $$Cat(F)$$ for some Kripke frame $$F$$ together with the following functor $$\mathcal{\subseteq}: Cat(F) \to Set$$:

• $$\mathcal{\subseteq}(w) = A$$, for some set $$A$$ of atomic propositions, and
• $$\mathcal{\subseteq}(f : w \to w’) = g : \mathcal{\subseteq}(w) \to \mathcal{\subseteq}(w’)$$, where $$g$$ is the inclusion $$g(p) = p$$ (if $$p \in \mathcal{\subseteq}(w)$$ then $$p \in \mathcal{\subseteq}(w’)$$).

Therefore, a Kripke model is then the tuple $$\langle Cat(F), \mathcal{\subseteq}\rangle$$.

What about the category of Kripke models?

Def. 2. The category of Kripke models, $$Kat$$, is defined as follows:

• Objects: The set of all Kripke models.
• Arrows: The set of homomorphisms from the Kripke frame of one model to the Kripke frame of another. That is, for models $$M_1 = \langle W_1,\preceq_1,\mathcal{V}_1\rangle$$ and $$M_2 = \langle W_2,\preceq_2,\mathcal{V}_2\rangle$$, an arrow from $$M_1$$ to $$M_2$$ is a homomorphism from $$W_1$$ to $$W_2$$ that preserves the preorder structure.
• Identities: Clearly, identities exist; just take the identity homomorphism.
• Composition: Homomorphisms are composable, and they respect identities and associativity.

So that is the definition using Def. 1; let's try to give one in terms of pure categories. Since a functor is a generalization of a homomorphism between categories, we simply move from the arrows being homomorphisms in $$Set$$ to functors.

Def. 3. The category of Kripke models, $$Kat_{Cat}$$, is defined as follows:

• Objects: The set of all Kripke models $$M = \langle Cat(\langle W, \preceq \rangle),\subseteq\rangle$$.
• Arrows: The set of functors between frames.
• Identities: Take the identity functors $$\mathcal{I}_{M}$$ for any model $$M$$.
• Composition: Composition is well defined for functors. Identities play nice and composition is associative.

# Frame it!

We can eliminate the pesky functor $$\subseteq$$ and the interpretation mapping $$\mathcal{V}$$ altogether. Instead of leaving the worlds unspecified, we can adopt sets of atomic propositions as the worlds themselves. Doing this results in Kripke frames becoming Kripke models. Intuitionistic logic can be interpreted in this configuration in the usual way, except that instead of interpreting atomic propositions as members of the interpretation of a world, they are actual members of the world. Some may have philosophical objections to this style, because the semantics is not completely distinct from the object language, but I do not think this is that far off base from the semantics of type theories, where we interpret types as sets of normal forms, which are members of the object language. This is also exactly how worlds are defined in the work on actualism, where this is called possible world semantics. I am a fan of both styles.