Saturday, June 4, 2011

Computation is Simplifying Proofs

I've been reading "Lectures on the Curry-Howard Isomorphism" recently. It covers all sorts of awesome things like intuitionistic logic, the lambda calculus, and combinatory logic, as well as things I haven't gotten to yet, like dependent types (which are very neat), the lambda cube, and pure type systems.

This paper has cleared up something I've wondered about for a while now: why is the lambda calculus the term language for a type system? What is it about the lambda calculus that makes it appropriate for determining the truth of propositions in the logic that a type theory corresponds to? Could we come up with other term languages that are just as good? I often have trouble with questions of why things are the way they are in math, as they can seem arbitrary at times, but every once in a while I stumble across an answer and all is well in the world.

In this case, the paper mentions that normalization in the typed lambda calculi is proof simplification, hence the name of this post: if we take normalization in the lambda calculus as our basis for computation, then computation is just simplifying a proof. This is pretty awesome by itself, but it has one other surprising facet: lambda terms are simply a linear representation of proofs in some logic! Amazing! I never saw it this way (perhaps it is obvious? certainly not to me). I had always taken the idea that a term of a type makes the corresponding proposition true loosely, as if the existence of the term merely validated the type by exhibiting a construction for it. But it is a literal statement: the term actually is a proof of the statement of the type! Beta reduction then simplifies the term, which simplifies the proof.
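To make this concrete for myself, here's a little sketch of the correspondence in TypeScript (my own names and examples, not from the lectures): a polymorphic function inhabiting a type is literally a proof of the proposition that type denotes, application is modus ponens, and a beta-redex is a detour that reduction removes.

```typescript
// Propositions as types: a closed, polymorphic value of a type is a
// proof of the corresponding proposition.

// A proof of A ⊃ (B ⊃ A) — this is just the K combinator.
const proofK = <A, B>(x: A) => (_y: B): A => x;

// Modus ponens (⊃-elimination) is nothing but function application:
// from a proof of A ⊃ B and a proof of A, we get a proof of B.
const modusPonens = <A, B>(f: (a: A) => B, x: A): B => f(x);

// A beta-redex is a detour in the proof; beta reduction removes it,
// leaving a simpler proof of the same proposition.
const detour = ((x: number) => x)(42); // beta-reduces to just 42

console.log(modusPonens(proofK(true), "b"), detour); // true 42
```

Of course TypeScript's type system isn't a logic in the strict sense the lectures use (it's unsound in places, and every type is inhabited via non-termination), so this is only an illustration of the shape of the correspondence.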

When I get to dependent types and the lambda cube I will certainly post more about this.
