This is the first of a few posts introducing a new line of research I am currently working on. In this post I only introduce the ideas we are considering; each post after will fill in the details. So if something seems interesting, or does not seem to make sense, just wait: it will. I have tried to write this post about three times now. We kept having to change our results so much that what I had written became obsolete and incorrect. This is one of the reasons I have decided to break the post up into multiple posts. We are finally getting into a position in which I can reveal where the research is taking us.
Over the course of a little over a year I have become increasingly interested in classical type theories. A classical type theory is a type theory that, when moving from type theory to logic, corresponds to classical logic under the three perspectives of computation (also known as the Curry-Howard correspondence, or the proofs-as-programs, propositions-as-types correspondence). I managed to get my advisor, Aaron Stump, interested. Getting Aaron interested is quite a joy: he, like many of us, is very passionate, and has great ideas. In addition we have scooped up a new graduate student, Ryan McCleeary, to work on these ideas with us. One idea I am profoundly excited about is our idea to study duality in constructive logic. To understand what I mean by this, let's consider what duality we have in classical logic.
Logic is a beautiful universe. We can use it to express truth (w.r.t. a certain semantics), and we can use it to express computation. Like I said, logic is awesome! In classical logic there is a beautiful notion of duality. For example, we can show that conjunction is dual to disjunction. That is, we can prove \( A \land B \iff \lnot ( \lnot A \lor \lnot B ) \). Here we can see that conjunction is the same as a negated disjunction whose disjuncts are both negated. We can see this duality also in the inference rules for the classical sequent calculus:
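In one standard presentation of the two-sided classical sequent calculus (LK), the rules for conjunction and disjunction can be given as:

\[
\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \land B \vdash \Delta}\;\land_L
\qquad
\frac{\Gamma \vdash A, \Delta \quad\quad \Gamma \vdash B, \Delta}{\Gamma \vdash A \land B, \Delta}\;\land_R
\]

\[
\frac{\Gamma, A \vdash \Delta \quad\quad \Gamma, B \vdash \Delta}{\Gamma, A \lor B \vdash \Delta}\;\lor_L
\qquad
\frac{\Gamma \vdash A, B, \Delta}{\Gamma \vdash A \lor B, \Delta}\;\lor_R
\]

Note how \(\land_L\) has the same shape as \(\lor_R\), and \(\lor_L\) the same shape as \(\land_R\), just with the formulas on the opposite side of the turnstile.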
Here the left rules are the negative rules. So conjunction on the left actually behaves like disjunction, and likewise disjunction on the left behaves like negated conjunction. So far we have only seen dualities between conjunction and disjunction, but what about implication? There is a left and a right rule for implication:
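In the same standard LK presentation, the two rules for implication are:

\[
\frac{\Gamma \vdash A, \Delta \quad\quad \Gamma, B \vdash \Delta}{\Gamma, A \to B \vdash \Delta}\;\to_L
\qquad
\frac{\Gamma, A \vdash B, \Delta}{\Gamma \vdash A \to B, \Delta}\;\to_R
\]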
Again the left rule is negative implication, but is there an operator which is dual to implication? The answer is yes. We know that in classical logic \(A \to B\) is logically equivalent to \(\lnot A \lor B\). Then we also know that \(A \to B\) is logically equivalent to \(\lnot (A \land \lnot B) \). This tells us that the dual to implication must be logically equivalent to \(A \land \lnot B\). This looks very much like set difference if we interpret conjunction as intersection and negation as complement. This is the reason we call the dual to implication subtraction, written \(A - B\), and it is indeed logically equivalent to \(A \land \lnot B\). This operator was first studied by Cecylia Rauszer. It has since been studied by a number of people like Pierre-Louis Curien and Hugo Herbelin, Gianluigi Bellin, Tristan Crolard, Philip Wadler, Peter Selinger, and many more. We will delay the definition of its inference rules for now; we will introduce these in the next post.
So far we have seen some examples of duality in classical logic, but what are some applications? One application has been to study the long-standing conjecture that the call-by-value reduction strategy is dual to the call-by-name reduction strategy. This has been answered in the positive by exploiting the very dualities we have been discussing. The first to investigate this line of work was Peter Selinger. He proved the conjecture using a categorical semantics of the \(\lambda\mu\)-calculus; however, the equivalences he used hold only up to isomorphism. Pierre-Louis Curien and Hugo Herbelin followed suit, but designed a new calculus called the \(\bar\lambda\tilde\mu\)-calculus. This calculus adopted subtraction as the dual to implication. Philip Wadler designed yet another calculus to solve this problem, but instead of taking implication and subtraction as primitive he took conjunction, disjunction, and negation as primitive and defined implication and subtraction from them.
The problem of proving CBV dual to CBN uses classical logic, and indeed so far we have only considered classical logic. The applications of duality we are considering are in the area of constructive logic. Tristan Crolard has studied this as well, but with an emphasis on using subtraction to model constructive coroutines. We are interested in so much more.
What types of applications do we have in mind? Inhabitants of negative types (throughout the sequel we will call a type like \(\lnot A\) a negative type) give rise to codata. We are finding programming with codata to be very interesting. It allows for the definition of infinite data types using coinduction, even in a language without lazy evaluation. In addition, we are finding that codata gives rise to first-class patterns! No need for a primitive notion of pattern matching; we get it for free! Our main application is to show that induction and coinduction can live harmoniously within the same type theory without any funny restrictions (as in Agda) or sacrificing type safety (as in Coq). In addition we think (keep in mind this is all ongoing research) that we might get some very interesting control operators by exploiting the fact that we have a primitive notion of duality.
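To get a feel for the codata idea, here is a minimal sketch in Rust (a strict language, so nothing here depends on lazy evaluation). This is only an illustration of the general technique, not our calculus: a stream is defined by what you can observe of it (a head and a tail), and the tail is a thunk, so the infinite object is never built all at once.

```rust
// Codata sketch: a stream is given by its observations (head, tail),
// not by a finite list of constructors. In a strict language we encode
// the tail as a thunk (a boxed closure) that produces the rest on demand.
struct Stream<T> {
    head: T,
    tail: Box<dyn FnOnce() -> Stream<T>>,
}

// The infinite stream of naturals starting at n, built corecursively.
fn nats(n: u64) -> Stream<u64> {
    Stream {
        head: n,
        tail: Box::new(move || nats(n + 1)),
    }
}

// Observe finitely many elements of an infinite stream.
fn take(mut s: Stream<u64>, k: usize) -> Vec<u64> {
    let mut out = Vec::new();
    for _ in 0..k {
        out.push(s.head);
        s = (s.tail)(); // force the next observation
    }
    out
}

fn main() {
    assert_eq!(take(nats(0), 5), vec![0, 1, 2, 3, 4]);
    println!("{:?}", take(nats(0), 5));
}
```

Only the elements actually observed are ever computed, which is the essence of programming against observations rather than constructors.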
So this all sounds great. This is where I will leave this post. In the next post I will give several perspectives on subtraction: I will define it in Heyting algebras, give a categorical account of it, and then define it logically by defining Dualized Intuitionistic Logic (DIL). Following the definition of DIL I will give a few examples. To know what the third post is going to be about, read the second. Stay tuned!