## Hanging with Kripke

Lately, I have been thinking a lot about Kripke models of intuitionistic propositional logic for my research. During this time I also played around a bit with the models. Here I would like to write down some of the stuff I played with.

Def. 1. A Kripke model is a tuple $$M = \langle W , \preceq , \mathcal{V} \rangle$$, where $$W$$ is a non-empty set of objects called worlds, $$\preceq$$ is a binary relation on $$W$$, and $$\mathcal{V}$$ is a valuation assigning to each world a set of atomic propositions, subject to the following conditions:

• $$\preceq$$ is a preorder, and
• (Hereditary Condition) for every atomic proposition $$p$$ and worlds $$w$$ and $$w'$$, if $$w \preceq w'$$ and $$p \in \mathcal{V}(w)$$, then $$p \in \mathcal{V}(w')$$.

We call the tuple $$F = \langle W, \preceq \rangle$$ a Kripke frame.
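To make the definition concrete, here is a small example model (the worlds and valuation are my own illustrative choices, not taken from the text):

```latex
% A two-world Kripke model M = <W, \preceq, V> satisfying Def. 1.
W = \{w_0, w_1\}, \qquad
\preceq \,=\, \{(w_0,w_0),\ (w_1,w_1),\ (w_0,w_1)\}, \qquad
\mathcal{V}(w_0) = \emptyset, \quad \mathcal{V}(w_1) = \{p\}
% \preceq is reflexive and transitive, hence a preorder, and the
% hereditary condition holds: the only atom p is true at w_1, and
% w_1 sees only itself.
```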

# Categorizing Kripke

Kripke frames are simply preordered sets. Thus, they are categories.

Lemma 1. Suppose $$F = \langle W, \preceq \rangle$$ is a Kripke frame. Then $$F$$ is a category.
Proof. Take $$W$$ as the set of objects. An arrow exists between worlds $$w$$ and $$w'$$ if and only if $$w \preceq w'$$. Identity arrows exist by reflexivity of $$\preceq$$, and composition is defined using transitivity. It is easy to see that left and right identities behave correctly and that composition is associative. $$\Box$$

We denote the category of a Kripke frame $$F$$ as $$Cat(F)$$. A Kripke model is then a Kripke frame together with an interpretation function $$\mathcal{V}$$ from worlds to sets of atomic propositions. We can interpret a model as $$Cat(F)$$ for some Kripke frame $$F$$ together with the following functor $$\subseteq : Cat(F) \to Set$$:

• $$\subseteq(w) = A$$, for some set $$A$$ of atomic propositions (the atoms true at $$w$$), and
• $$\subseteq(f : w \to w')$$ is the inclusion $$g : \subseteq(w) \to \subseteq(w')$$ given by $$g(p) = p$$; this is well defined because the hereditary condition guarantees that if $$p \in \subseteq(w)$$ then $$p \in \subseteq(w')$$.

Therefore, a Kripke model is the tuple $$\langle Cat(F), \subseteq\rangle$$.

What about the category of Kripke models?

Def. 2. The category of Kripke models, $$Kat$$, is defined as follows:

• Objects: The set of all Kripke models.
• Arrows: The set of homomorphisms from the Kripke frame of one model to the Kripke frame of another. That is, for models $$M_1 = \langle W_1,\preceq_1,\mathcal{V}_1\rangle$$ and $$M_2 = \langle W_2,\preceq_2,\mathcal{V}_2\rangle$$, an arrow from $$M_1$$ to $$M_2$$ is a homomorphism from $$W_1$$ to $$W_2$$ that preserves the preorder structure.
• Identities: Identities clearly exist; just take the identity homomorphism.
• Composition: Homomorphisms are composable, and they respect identities and associativity.

So that is the definition using Def. 1; let's try to give one in terms of pure categories. Since a functor is a generalization of a homomorphism between categories, we simply move from arrows being homomorphisms in $$Set$$ to functors.

Def. 3. The category of Kripke models, $$Kat_{Cat}$$, is defined as follows:

• Objects: The set of all Kripke models $$M = \langle Cat(\langle W, \preceq \rangle),\subseteq\rangle$$.
• Arrows: The set of functors between frames.
• Identities: Take the identity functors $$\mathcal{I}_{M}$$ for any model $$M$$.
• Composition: Composition is well defined for functors; identities behave correctly and composition is associative.

# Frame it!

We can eliminate the pesky functor $$\subseteq$$ and the interpretation mapping $$\mathcal{V}$$ altogether. Instead of leaving the worlds unspecified, we can adopt sets of atomic propositions as worlds. Doing this turns Kripke frames into Kripke models. Intuitionistic logic can be interpreted in this configuration in the usual way, except that instead of interpreting atomic propositions as members of the interpretation of a world, they are actual members of the world. Some may have philosophical objections to this style, because the semantics is not completely distinct from the object language, but I do not think this is far off base from the semantics of type theories: there we interpret types as sets of normal forms, which are members of the object language. This is also exactly how worlds are defined in the work on actualism, where this is called possible-world semantics. I am a fan of both styles.


## Completeness of a Multi-Succedent Intuitionistic Propositional Logic

Over the course of about four informal lectures I presented how to show completeness for multi-succedent intuitionistic propositional logic.  I used Mints’ book “A short introduction to intuitionistic logic” as my reference.  There is a beautiful proof of completeness done in section 8.2. However, there are many problems with Mints’ presentation and proofs.  Thus, I decided to write my own lecture notes correcting these problems.  I wanted to post them here in case anyone else might find them helpful.

The lecture notes can be found here.

Phil Wadler wrote a nice post on writing, communication, and being successful in research.  You can find it here: You and Your Research and the Elements of Style.

Wadler points to an inspiring video of a lecture by Richard Hamming:

I highly recommend watching the above video.

## Church-Encoded Pairs are not Categorical Products

Recently, John Altidor, a PhD student from the University of Massachusetts, has been visiting UIowa. His current research is concerned with subtyping. While his work can be applied to a wide range of programming languages, he and his advisor have been applying it to Java. Lately, he and I have been talking a lot about his work on subtyping. As I said, he has been applying his work to Java, and one thing I find interesting is thinking about how much of OO can be done in a dependent type theory. His work sparked me to read a few papers on subtyping. One such paper is “An extension of system F with subtyping” by L. Cardelli et al. In this paper the authors claim something I was not aware of. They state that Church-encoded pairs in system F do not correspond to actual categorical constructs. This is not to say that they are unsound, but that they are not products. It is important to note that this claim is with respect to $$\beta\eta$$-equality. I am not sure, but it may be possible to show that Church-encoded pairs can be identified with products using some other notion of equality. The authors claim this is well known and do not give the details. Curiosity has led me to fill in the details myself.

I plan to present the details of this claim in a three-part post. First, I will set the stage by defining system F with $$\beta\eta$$-equality and Church-encoded pairs. In the second post I will present all the categorical notions we will need on the journey. Finally, in the third post I will present the main results. The entire series will be presented and formalized in Coq. I assume the reader is familiar with Coq and the locally nameless binding representation; however, for the uninitiated I give plenty of references.

System F was invented independently by J. Girard and J. Reynolds. It is the extension of the simply typed $$\lambda$$-calculus with second-order universal quantification. For a more in-depth look at system F see the amazing book “Proofs and Types.” I start by defining the syntax. The syntax of types and terms is given by the following inductive definitions. Note that I will not be able to give every detail of the formalization; however, in the final post I will provide a link to the entire formalization.
 

Inductive typ : Set :=
| typ_base
| typ_fvar   (n:nat)
| typ_bvar   (n:nat)
| typ_forall (A:typ)
| typ_arrow  (A1:typ) (A2:typ).

Inductive trm : Set :=
| trm_bvar  (n:nat)
| trm_fvar  (n:nat)
| trm_lam   (A:typ) (b:trm)
| trm_Lam   (b:trm)
| trm_app   (a:trm) (b:trm)
| trm_tapp  (a:trm) (T:typ)
| trm_c.


There are a few things that must be said before moving on. I am using the locally nameless representation for binders; see this for more information on that. The types of system F are: typ_bvar _ (bound variables), typ_fvar _ (free variables), typ_base (a base type), typ_forall _ (the universal type), and typ_arrow _ _ (implication, or the function type). Terms are made up of bound variables trm_bvar _, free variables trm_fvar _, $$\lambda$$-abstractions trm_lam _ _, type abstractions trm_Lam _, applications trm_app _ _, type applications trm_tapp _ _, and a constant trm_c.
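For instance, assuming the syntax above, the identity function at the base type and the polymorphic identity look like this in locally nameless style (my own illustrative definitions, not part of the development):

```coq
(* \x:base. x  --  trm_bvar 0 refers to the nearest enclosing term binder. *)
Definition id_base : trm := trm_lam typ_base (trm_bvar 0).

(* /\X. \x:X. x  --  typ_bvar 0 refers to the type bound by trm_Lam. *)
Definition id_poly : trm := trm_Lam (trm_lam (typ_bvar 0) (trm_bvar 0)).
```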

The syntax for terms and types allows for the definition of non-well-formed types and terms. An example of a non-well-formed type is typ_bvar 0: this type contains an unbound bound variable. Another example is typ_arrow (typ_bvar 0) (typ_fvar 0), which also contains an unbound bound variable. A similar game can be played using terms. A type or a term is well-formed iff every bound variable is actually bound by an enclosing binder. Well-formed types and terms are characterized by the following two predicates.

 

Inductive lc_T : typ -> Prop :=
| lcT_base : lc_T typ_base

| lcT_fvar : forall n, lc_T (typ_fvar n)

| lcT_arrow : forall a b,
lc_T a ->
lc_T b ->
lc_T (typ_arrow a b)

| lcT_forall : forall t L,
(forall x, ~(In x L) -> lc_T (open_T 0 (typ_fvar x) t)) ->
lc_T (typ_forall t).

Inductive lc : trm -> Prop :=
| lc_fvar : forall n, lc (trm_fvar n)

| lc_lam : forall L t A,
lc_T A ->
(forall x, ~(In x L) -> lc (open_t 0 (trm_fvar x) t)) ->
lc (trm_lam A t)

| lc_Lam : forall t,
lc t ->
lc (trm_Lam t)

| lc_app : forall t1 t2,
lc t1 ->
lc t2 ->
lc (trm_app t1 t2)

| lc_tapp : forall T t,
lc_T T ->
lc t ->
lc (trm_tapp t T)

| lc_c : lc trm_c.


A type T or a term t is called locally closed iff lc_T T and lc t hold. Both of these definitions depend on two functions, open_T (n:nat) (t:typ) (t':typ) : typ and open_t (n:nat) (t:trm) (t':trm) : trm. The former replaces all occurrences of the bound variable typ_bvar n with the type t in t'; the latter does the same with respect to terms. There is also a third function we will see, open_tT (n:nat) (t:typ) (t':trm) : trm, which replaces all occurrences of the bound type variable typ_bvar n with the type t in the term t'.
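The formalization's own definitions of these functions are not shown here, but to give the idea, a plausible definition of open_T in the locally nameless style is the following sketch (the actual development may differ in details such as the equality test used):

```coq
(* Sketch: replace occurrences of the bound type variable with index n
   by the type u in t.  The index is shifted when passing under a
   binder, as usual in the locally nameless style. *)
Fixpoint open_T (n:nat) (u:typ) (t:typ) : typ :=
  match t with
  | typ_base        => typ_base
  | typ_fvar x      => typ_fvar x
  | typ_bvar m      => if beq_nat n m then u else typ_bvar m
  | typ_forall A    => typ_forall (open_T (S n) u A)
  | typ_arrow A1 A2 => typ_arrow (open_T n u A1) (open_T n u A2)
  end.
```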

The definition of full $$\beta\eta$$-reduction is next. This is the standard definition so I do not comment on it. The reflexive and transitive closure of this relation is denoted by t1 ~*> t2.

 

Inductive step_t : trm -> trm -> Prop :=
| ST_lam : forall A q q' L,
lc_T A ->
(forall y,
~ In y L ->
exists q'', open_t 0 (trm_fvar y) q ~> q'' /\ q' = close_t 0 y q'') ->
(trm_lam A q) ~> (trm_lam A q')

| ST_Lam : forall t t',
t ~> t' ->
(trm_Lam t) ~> (trm_Lam t')

| ST_beta : forall T t1 t2,
lc (trm_lam T t1) ->
lc t2 ->
(trm_app (trm_lam T t1) t2) ~> (open_t 0 t2 t1)

| ST_eta : forall t T,
lc t ->
(trm_lam T (trm_app t (trm_bvar 0))) ~> t

| ST_type : forall t T,
lc t ->
lc_T T ->
(trm_tapp (trm_Lam t) T) ~> (open_tT 0 T t)

| ST_app1 : forall t1 t1' t2,
lc t2 ->
t1 ~> t1' ->
trm_app t1 t2 ~> trm_app t1' t2

| ST_app2 : forall t1 t2 t2',
lc t1 ->
t2 ~> t2' ->
trm_app t1 t2 ~> trm_app t1 t2'

| ST_tapp : forall t t' T,
lc_T T ->
t ~> t' ->
trm_tapp t T ~> trm_tapp t' T
where "t1 '~>' t2" := (step_t t1 t2).


The standard definition of typing for Church-style system F goes something like the following, where context is defined to be List (nat*typ); in fact, all names are nats. The proposition Ok G, where G:context, states that the context is well-formed, meaning that all variables in the domain of G are distinct.
 

Inductive type_check (G:context) : trm -> typ -> Prop :=
| T_Var : forall T x,
Ok G -> In (x,T) G ->
type_check G (trm_fvar x) T

| T_Lam : forall L t T1 T2,
(forall (x:nat),
~ (In x L) ->
type_check ((x,T1) :: G) (open_t 0 (trm_fvar x) t) T2) ->
type_check G (trm_lam T1 t) (typ_arrow T1 T2)

| T_TLam : forall L t T,
(forall (x:nat),
~ (In x L) ->
type_check G (open_tT 0 (typ_fvar x) t) (open_T 0 (typ_fvar x) T)) ->
type_check G (trm_Lam t) (typ_forall T)

| T_App : forall T1 T2 t1 t2,
type_check G t1 (typ_arrow T1 T2) ->
type_check G t2 T1 ->
type_check G (trm_app t1 t2) T2

| T_AppT : forall t T T' A,
lc_T T' ->
type_check G t (typ_forall T) ->
(open_T 0 T' T) = A ->
type_check G (trm_tapp t T') A

| T_C :
Ok G ->
type_check G trm_c typ_base
where "G :> t ; T" := (type_check G t T).

 

Finally, I define $$\beta\eta$$-equality. This definition is rather long and tedious. If the reader would like a clearer picture, see page 8 of Cardelli's paper cited above, where the definition of $$\beta\eta$$-equality for system F with subtyping is given; the following definition is based on it.

 

Inductive trm_eq (G:context) : trm -> trm -> typ -> Prop :=
| Eq_var :  forall n T,
G :> (trm_fvar n) ; T ->
G :> (trm_fvar n) <~> (trm_fvar n) ; T

| Eq_lam : forall L t1 t2 T1 T2,
(forall (x:nat),
~ (In x L) ->
((x,T1) :: G) :> (open_t 0 (trm_fvar x) t1) <~> (open_t 0 (trm_fvar x) t2) ; T2) ->
G :> (trm_lam T1 t1) <~> (trm_lam T1 t2) ; (typ_arrow T1 T2)

| Eq_tlam : forall L t1 t2 T,
(forall (n:nat),
~ In n L ->
G :> (open_tT 0 (typ_fvar n) t1) <~> (open_tT 0 (typ_fvar n) t2) ; (open_T 0 (typ_fvar n) T)) ->
G :> (trm_Lam t1) <~> (trm_Lam t2) ; (typ_forall T)

| Eq_app : forall a a' b b' A B,
G :> a <~> a' ; typ_arrow A B ->
G :> b <~> b' ; A ->
G :> (trm_app a b) <~> (trm_app a' b') ; B

| Eq_tapp : forall a a' A B C,
lc_T B ->
G :> a <~> a' ; typ_forall A ->
C = (open_T 0 B A) ->
G :> (trm_tapp a B) <~> (trm_tapp a' B) ; C

| Eq_beta : forall L a a' b b' c A B,
(forall (x:nat),
~ (In x L) ->
((x,A) :: G) :> (open_t 0 (trm_fvar x) b) <~> (open_t 0 (trm_fvar x) b') ; B) ->
G :> a <~> a' ; A ->
c = (open_t 0 a' b') ->
G :> (trm_app (trm_lam A b) a) <~>  c ; B

| Eq_tbeta : forall L a b c A B C,
lc_T B ->
(forall (n:nat),
~ In n L ->
G :> (open_tT 0 (typ_fvar n) a) <~> (open_tT 0 (typ_fvar n) b) ; (open_T 0 (typ_fvar n) A)) ->
c = open_tT 0 B b ->
C = open_T 0 B A ->
G :> (trm_tapp (trm_Lam a) B) <~>  c ; C

| Eq_eta : forall b b' A B,
G :> b <~> b' ; typ_arrow A B ->
G :> (trm_lam A (trm_app b (trm_bvar 0))) <~> b' ; typ_arrow A B

| Eq_teta : forall b b' A,
G :> b <~> b' ; typ_forall A ->
G :> (trm_Lam (trm_tapp b (typ_fvar 0))) <~> b' ; typ_forall A

| Eq_sym : forall t1 t2 T,
G :> t2 <~> t1 ; T ->
G :> t1 <~> t2 ; T

| Eq_trans : forall t1 t2 t3 T,
G :> t1 <~> t3 ; T ->
G :> t3 <~> t2 ; T -> G :> t1 <~> t2 ; T
where "G :> t1 <~> t2 ; T" := (trm_eq G t1 t2 T).


This concludes the definition of system F with $$\beta\eta$$-equality. The next step is to define the Church encoding of pairs; following the definitions I state several correctness properties certifying that we have the right ones.

The following defines the constructor for pairs.

 

Definition pair_cons :=
trm_Lam (trm_Lam (trm_lam (typ_bvar 1)
(trm_lam (typ_bvar 0)
(trm_Lam (trm_lam (typ_arrow (typ_bvar 2) (typ_arrow (typ_bvar 1) (typ_bvar 0)))
((trm_app (trm_app (trm_bvar 0) (trm_bvar 2)) (trm_bvar 1)))))))).


This is a little hard to read, so let's take a look at it in mathematical notation.
$\Lambda X.\Lambda Y.\lambda x:X.\lambda y:Y.\Lambda Z.\lambda z:X \rightarrow (Y \rightarrow Z).(z\,x)\,y.$
Here the variables $$x$$ and $$y$$ are the data we are packaging up in the pair. The variable $$z$$ is used to project out the data of the pair. Next we define the type of pair_cons.
 

Definition pair_cons_type :=
typ_forall (typ_forall (typ_arrow (typ_bvar 1) (typ_arrow (typ_bvar 0)
(typ_forall (typ_arrow
(typ_arrow (typ_bvar 2) (typ_arrow (typ_bvar 1) (typ_bvar 0))) (typ_bvar 0)))))).


In mathematical notation this looks like the following:

$\forall X.\forall Y.X \rightarrow (Y \rightarrow \forall Z.((X \rightarrow (Y \rightarrow Z)) \rightarrow Z)).$

The previous definitions are how Church-encoded pairs are defined in system F. Next are two convenience functions which will allow us to easily construct pairs. The following is the definition of a pair given two terms a:A and b:B.

 

Definition pair (A B:typ) (a b:trm) := trm_app (trm_app (trm_tapp (trm_tapp pair_cons A) B) a) b.


The following is the type of pair.

 

Definition pair_type (A B:typ) :=
typ_forall (typ_arrow (typ_arrow A (typ_arrow B (typ_bvar 0))) (typ_bvar 0)).


We need to be sure these are the correct definitions, so first I show that pair_cons has the type pair_cons_type and that pair has the type pair_type. I omit the proofs for brevity.

 

Lemma pair_cons_typing : nil :> pair_cons ; pair_cons_type.

Lemma pair_typing : forall A B a b,
lc_T A ->
lc_T B ->
nil :> a ; A ->
nil :> b ; B ->
nil :> pair A B a b ; pair_type A B.


Using our pair definition we can construct pairs, but we still need ways of getting the data back out of them. Next I define the first and second projections of our pairs. First, I need two very simple functions; they are used for deciding which projection we want. The first function, arg_one, is used to get the first projection, while the second, arg_two, is used to get the second.

 

Definition arg_one A B := trm_lam A (trm_lam B (trm_bvar 1)).
Definition arg_two A B := trm_lam A (trm_lam B (trm_bvar 0)).


Now using arg_one and arg_two I can define the first and second projections as follows:
 

Definition fst (A:typ) (B:typ) (p:trm) := trm_app (trm_tapp p A) (arg_one A B).
Definition snd (A:typ) (B:typ) (p:trm) := trm_app (trm_tapp p B) (arg_two A B).
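As a quick illustration (my own example, assuming the definitions above), here is a concrete pair of two constants at the base type together with its projections; by the fst_eq and snd_eq lemmas below, both projections are $$\beta\eta$$-equal to trm_c:

```coq
(* A pair of two trm_c's, both at typ_base. *)
Definition ex_pair : trm := pair typ_base typ_base trm_c trm_c.

(* Its first and second projections. *)
Definition ex_fst : trm := fst typ_base typ_base ex_pair.
Definition ex_snd : trm := snd typ_base typ_base ex_pair.
```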


Again I must be sure these are the correct definitions. I first show that fst and snd have the expected type.

 

Lemma fst_typing : forall A B p,
lc_T A ->
lc_T B ->
nil :> p ; pair_type A B ->
nil :> fst A B p ; A.

Lemma snd_typing : forall A B p,
lc_T A ->
lc_T B ->
nil :> p ; pair_type A B ->
nil :> snd A B p ; B.


Recall that I defined $$\beta\eta$$-equality above. All equational reasoning will be done using $$\beta\eta$$-equality. Using it, I show that applying fst to a pair is equivalent to the first component of the pair.

 

Lemma fst_eq : forall A B a b,
nil :> a ; A ->
nil :> b ; B ->
nil :> (pair A B a b) ; pair_type A B ->
nil :> fst A B (pair A B a b) <~> a ; A.


Finally, I do the same for snd.

 

Lemma snd_eq : forall A B a b,
nil :> a ; A ->
nil :> b ; B ->
nil :> (pair A B a b) ; pair_type A B ->
nil :> snd A B (pair A B a b) <~> b ; B.


We have set out on a journey to better understand the relationship between Church-encoded pairs in system F and categorical products. So far we have defined system F with $$\beta\eta$$-equality and defined Church-encoded pairs within it. In the next post I will lay out all the category theory we will need for the main result. Stay tuned.

## Coq, Category Theory, and Steve Awodey’s Book.

There is a great book on category theory called, you guessed it, “Category Theory” by Steve Awodey. It is a great book! I really enjoy it. I am interested in category theory, because it provides a possible interpretation of type theory. Actually, I have been thinking of the relationship between category theory and type theory as being different perspectives of the same thing. That thing is computation. As I learn more category theory I thought it would be useful to start thinking about how to formalize results involving category theory in Coq.

To my surprise I found that Adam Megacz has started a nice library called coq-categories. His formalization follows Awodey’s book exactly; in fact, to find a concept in the formalization one only has to know which chapter of Awodey’s book it is in. This is a great formalization. It uses type classes to define categories and functors. Type classes are such a nice fit for categories.

There are two issues with this formalization. First, it does not formalize anything beyond chapter 7 of the book. Second, it has not been worked on in over a year: the last commit was 2011-04-24, and the version of Coq it was developed with was 8.3pl2.

I decided to clone Adam’s git repo and get it working with Coq 8.4. It took only about fifteen minutes. The only problems I ran into were related to unification of implicit arguments.

This post just serves as a place to put up a patch in case anyone is looking for one. I plan to develop this a bit further. I would like to formalize control categories and play around with the semantics of a few different type theories. Here is the patch.

I plan to post more about my extensions in the near future.


## Cartesian Closed Comics

There is a new comic called Cartesian Closed Comics. They are pretty good.

Below is my favorite one:

Find more by clicking the above link.

## Impredicativity and the Human Mind

I have not posted in a while, so I thought I would post something both short and interesting to get things going again. I defended my comprehensive exam this week. It went really well, and I plan to post the report on my website, but before I do I want to make a few corrections. Greg Landini, a philosophy professor here at UIowa, was an external member of my committee for my comprehensive exam. I am also doing a readings course with him on relevant logic. During our weekly meeting this week we got into a really interesting conversation comparing the meanings of impredicativity and predicativity according to Bertrand Russell with the definitions used in type theory. The conversation eventually turned to Greg’s work on understanding human thought and consciousness. In this post I would like to discuss the details of my conversation with Greg. First, I will introduce how I define impredicativity and predicativity. Then I will talk about how Russell defined these terms. Finally, I will discuss Greg’s ideas on animal consciousness.

I define an impredicative formula to be a universally quantified formula (the complexity of the formula does not matter) such that the domain of quantification contains the formula being defined. A logic or type theory is impredicative if all universal formulas are impredicative. An example is the universal type of the Girard–Reynolds system F. A predicative formula is a universal formula such that the domain of quantification does not contain the formula being defined. We can make the universal type of system F predicative by stratifying types using kinds.
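As a concrete illustration (my example, using one common stratification notation), compare the type of the polymorphic identity in system F with its stratified counterpart:

```latex
% Impredicative (system F): X ranges over all types, including this
% very type, \forall X. X -> X.
\forall X.\, X \rightarrow X
% Predicative (stratified system F): quantification carries a level
% annotation, and X ranges only over types of level 0, so the
% quantified type itself, which lives at level 1, is excluded from
% the domain of quantification.
\forall X^{0}.\, X \rightarrow X \quad \text{(a type at level 1)}
```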

Consider the following formula in naive set theory: $$\exists y.\forall x.(x \in y \iff x \not\in x)$$. Greg points out that Russell did not call this an “impredicative formula”, because there is only one domain of objects to quantify over, namely the universe of all sets. I would call this an extreme case of impredicativity; that is, this formula is as impredicative as a formula can be. In fact, it is so impredicative that it is inconsistent. Apart from this case, Russell’s views on impredicativity vs. predicativity are in line with the definitions we use in type theory: types can be impredicative in the same way they are in system F and remain consistent, or, just as in stratified system F, they can be predicative. Indeed, this is the difference between the simple theory of types and the ramified theory of types. From here the conversation drifted over to Greg’s work on consciousness.

Greg believes that there are two types of reasoning going on in the mind of animals. Some use predicative reasoning while the more gifted use impredicative reasoning. He used a great example using a frog.

A frog sits and eats flies. The frog has the innate ability to spot a fly when it is near. If the frog hears a clicking noise that sounds like a fly, it will try to eat the object making the clicking. If the frog sees a flying object, it will also try to eat it. Now, if I use a tape recorder and a small toy object to make the same sound a fly makes, we can easily trick the frog into eating the toy. Thus, a frog cannot first identify the object making the sound and determine whether or not it is actually a fly before deciding to eat it. This kind of thought, Greg believes, is predicative. This suggests that most animals use predicative reasoning. He says that the animals with impredicative reasoning got lucky during the evolutionary process. Now what do we mean by lucky?

The brain is believed to be constructed by layers of mesh which make up synaptic networks. This is depicted by the following figure.

The previous figure shows a stratification of the layers of mesh. This results in predicative reasoning. Impredicative reasoning results when the layers have connections between them. As in the following diagram.

This suggests that the more these layers are collapsed the more powerful reasoning becomes.

This fits with the views of type theory almost exactly. Think of Coq, where the type universe is stratified into levels. This stratification is far less powerful than the full collapse of the type universe, which results in $$\mathsf{Type} : \mathsf{Type}$$. The more we collapse the type universe, the more powerful the theory becomes. Think of the difference between stratified system F and system F. We can gain incremental power by making connections between the levels; one way of doing this is adding universe polymorphism. This is like the connections between the mesh layers discussed above. I found these connections quite surprising, but also not so surprising. The brain is essentially the world’s most powerful computer, so it makes sense that computer science could help in figuring out how it works. Computation is what we do after all.

I think this is all I am going to say about this right now. I found this to be really cool. I thought maybe others might as well.


## On the Dual Calculus with a Twist

Recently, my advisor (Aaron Stump) and I have been trying to unify induction and coinduction using a single operator and an explicit ordering. I know what you are thinking, awesome! While I could talk for hours about our ideas on how to do this I am instead going to introduce a type theory I think everyone should know of.

While reading over the literature covering the duality between induction and coinduction, and between call-by-name and call-by-value, we had the joy of learning about Philip Wadler’s dual calculus (DC) [1,2]. This type theory is the computational view of Gentzen’s classical sequent calculus LK; for more information on LK see [3]. An interesting property of the dual calculus is that its dual can be embedded in itself. This dual embedding can then be used to show that the call-by-value reduction strategy is dual to the call-by-name reduction strategy. Following on the previous point, DC can also be used to show that the call-by-value $\lambda\mu$-calculus is dual to the call-by-name $\lambda\mu$-calculus, by first embedding the call-by-value $\lambda\mu$-calculus into DC, taking the dual, and then translating back out of DC [1,2]. Cool stuff! In this post I would like to simply define DC and give an intuitive meaning to its judgments. However, there is a twist: I have come up with a new syntax for the judgments of DC. While DC is a beautiful theory, I am not a fan of the judgment syntax of Wadler’s original presentation, nor of the version of DC in Kimura’s paper [4]. I find them ugly, confusing, and hard to read. I hope the reader will find my syntax a bit more pleasing.

The syntax of DC is defined by the following set of grammars: The first thing the reader with at least a basic understanding of type theory will notice is that there is no function type or $\lambda$-abstraction! It turns out that these can actually be defined! I hope the reader’s head did not just explode. Here are the definitions:

Using these definitions, the usual typing rules are now derivable, as well as the usual $\beta$-reduction rule. As for the remaining syntax of DC, it is pretty routine, so I only comment on a few aspects of it. I call variables $x$ input variables and variables $\alpha$ output variables or output ports. The output ports will be replaced by the output of (co)terms. Statements $\alpha \bullet k$ can be viewed as commands where $k$ is computed with input $\alpha$. The term $(s).\alpha$ binds the output port $\alpha$ to the return value of the statement $s$. Dually, the coterm $x.(s)$ binds the input variable $x$ in $s$.

DC consists of three judgments. I will first give my new syntax for these judgments, compare it with the original syntax, give an intuitive explanation of the judgments, and finally define them. The syntax of the three judgments is as follows:

Explanation of the syntax. The arrows in the judgments above indicate the flow of input/output. The first left-pointing arrow $\leftarrow$ indicates that each expression $t$, $k$, and $s$ can assign output to an output port in $D$. The right-pointing arrow $\rightarrow$ in the typing judgment indicates that $t$ may return a value of type $A$; dually, in the syntax for the cotyping judgment the second left-pointing arrow indicates that $k$ takes an input of type $A$. Recall from above that across the propositions-as-types correspondence DC corresponds to Gentzen’s classical sequent calculus LK. LK consists of left and right rules, so the second arrow in the typing and cotyping judgments additionally indicates the kind of rules defining the judgment: the rules defining the typing judgment are right rules, hence the second arrow points right, while the second arrow in the cotyping judgment points left, indicating that its rules correspond to the left rules of LK.

The syntax used by Kimura was the following:

This syntax is very similar to the original syntax of Wadler, but instead of $\vdash$ he used an arrow pointing to $\Delta$, which is close to my syntax, but still not as pretty. I find this style of syntax hard to read, though it does make sense once understood. I have this annoying belief that the syntax of a judgment should be sexy enough to make writing out derivations somewhat manageable.

Intuitive meaning of the judgments. Let's first expand the syntax out a bit.
The typing judgment means that when each $x_i$ has a value and $t$ is computed, then $t$ may either return a value of type $A$ or assign a value to one of the output ports $o_i$ of type $B_i$. Dually, the cotyping judgment means that when each $x_i$ has a value, then $k$ takes an input of type $A$ and after computation assigns a value to one of the output ports. Finally, the cut judgment means that when each $x_i$ has a value, then $s$ is computed and assigns a value to one of the output ports. This intuitive explanation of the judgments helped, and continues to help, me understand DC. I do not claim authorship of this reading; I learned it from [4], where the theory is explained very well.

Definition of the judgments of DC. Finally, I move on to defining each judgment. The following is the definition of the typing and cotyping judgments:
Lastly, we have the sole rule of the cut judgment, the cut rule:

A nice feature of this type theory is that it is easy to translate it to a logic equivalent to LK. Let's try this out. We define the following eraser:

Using this eraser we obtain the following logic which is equivalent to LK.

We have the cut rule last:

I might prove this logic equivalent to LK (all truths deducible in this logic are deducible in LK) in a later post. To finish off the definition of DC, let's take a look at the reduction rules. I do not pin down any specific reduction strategy here. The reduction rules of DC are:
Well, that is the dual calculus. It is a really interesting theory of classical logic. There are quite a few papers on the dual calculus; I sprinkled some throughout the post, but if the reader knows of any others I would love to hear about them.


## The 2012 Oregon Programming Languages Summer School

The lecture videos and some slides are up! You can find them here.

I am particularly excited about Steve Awodey’s talk.

Dig in.

## The Seventh ACM SIGPLAN Workshop on Programming Languages meets Program Verification (PLPV 2013)

I am on the program committee for PLPV 2013. The first call for papers is out, so I thought I would share it here just in case anyone reading this doesn’t know.

The Seventh ACM SIGPLAN Workshop on Programming Languages meets Program Verification (PLPV 2013)

http://plpv.tcs.ifi.lmu.de/

22nd January, 2013
Rome, Italy
(Affiliated with POPL 2013)

Call for Papers

Overview

The goal of PLPV is to foster and stimulate research at the
intersection of programming languages and program verification, by
bringing together experts from diverse areas like types, contracts,
interactive theorem proving, model checking and program analysis. Work
in this area typically attempts to reduce the burden of program
verification by taking advantage of particular semantic or structural
properties of the programming language. One example is dependently
typed programming languages, which leverage a language’s type system
to specify and check rich specifications. Another example is
extended static checking systems, which incorporate contracts with
either static or dynamic contract checking.

We invite submissions on all aspects, both theoretical and practical,
of the integration of programming language and program verification
technology. To encourage interaction between different
communities, we seek a broad scope for PLPV. In particular,
submissions may have diverse foundations for verification (based on
types, Hoare-logic, abstract interpretation, etc), target
different kinds of programming languages (functional, imperative,
object-oriented, etc), and apply to diverse kinds of program
properties (data structure invariants, security properties, temporal
protocols, resource constraints, etc).

Important Dates

Submission 8th October, 2012 (Monday)
Final Version 8th November, 2012 (Thursday)
Workshop 22nd January, 2013 (Tuesday)

Submissions

We seek submissions of up to 12 pages related to the above
topics; shorter submissions are also welcome. Submissions may describe
new work, propose new challenge problems for language-based
verification techniques, or present a known idea in an elegant way
(i.e., a pearl).

Submissions should be prepared with SIGPLAN two-column conference
format. Submitted papers must adhere to the SIGPLAN republication
policy. Concurrent submissions to other workshops, conferences,
journals, or similar forums of publication are not allowed.

To submit a paper, access the online submission site at

http://www.easychair.org/conferences/?conf=plpv2013.

Publication

Accepted papers will be published by the ACM and will appear in the
ACM Digital library.

Program Committee

Andreas Abel Ludwig-Maximilians-University Munich (co-chair)
Robert Atkey University of Strathclyde
Harley Eades The University of Iowa
Chung-Kil Hur Max Planck Institute for Software Systems
Brigitte Pientka McGill University
Andrew Pitts University of Cambridge
François Pottier INRIA
Tim Sheard Portland State University (co-chair)
Makoto Takeyama Advanced Industrial Science and Technology