Monday, March 21, 2011

Learning the Lambda Calculus, SKI

To continue our discussion of the lambda calculus, I want to look at three particularly interesting terms: S = (\fgx.fx(gx)), K = (\xy.x), and I = (\x.x). We can also write them with one lambda per argument: S = (\f.\g.\x.fx(gx)), K = (\x.\y.x), I = (\x.x).
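
To make these concrete, here is a quick sketch of the three terms as ordinary Haskell definitions (Haskell spells \x. ... as \x -> ..., and requires lowercase names, so I write them as s, k, and i):

    -- S, K, and I as plain Haskell functions.
    s f g x = f x (g x)   -- S = (\f.\g.\x.fx(gx))
    k x y   = x           -- K = (\x.\y.x)
    i x     = x           -- I = (\x.x)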

The reason that these terms are important is that they, all by themselves, form a system of universal computation just like the whole of the lambda calculus. This system is called SKI and it is a form of combinatory logic, but that is actually not why I am bringing them up. What I want to show is how we are already familiar with these terms in a different guise, just like the three rules of the lambda calculus are familiar from our experience with functions in math.
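
As a small taste of that universality, here is a sketch (reusing the s and k defined above) showing that function composition, which looks like it ought to need its own definition, can be assembled from S and K alone; the name compose is my own:

    -- Composition built from nothing but S and K: the classic B combinator,
    -- B = S (K S) K. It satisfies compose f g x = f (g x).
    compose = s (k s) k

    -- For example: compose (+1) (*2) 5  ==  (+1) ((*2) 5)  ==  11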

Each of the terms S, K, and I has a very nice property: it contains no free variables. The term (\x.y) has y as a free variable, as it is not bound by any lambda the way x is. This would be like the function f(x) = y in math: we would have to already know the value of y to use this function. If y = 3 then f(x) = 3, but if y = 232 then f(x) = 232, so the only way to use f to get an actual result is with a context. In the lambda calculus we aren't so much interested in getting actual results (yet), but it is still important to understand the difference between free and bound variables. A term with no free variables is called a combinator, so S, K, and I are all good examples of combinators.
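
Here is a minimal sketch of how one might represent terms and compute their free variables; the names Term and freeVars are my own choices for illustration:

    import Data.List (nub)

    -- A bare-bones representation of lambda terms.
    data Term = Var String        -- a variable: x
              | Lam String Term   -- an abstraction: \x. body
              | App Term Term     -- an application: f a
      deriving Show

    -- Collect the variables not bound by any enclosing lambda.
    freeVars :: Term -> [String]
    freeVars (Var x)   = [x]
    freeVars (Lam x b) = filter (/= x) (freeVars b)   -- the lambda binds x
    freeVars (App f a) = nub (freeVars f ++ freeVars a)

    -- freeVars (Lam "x" (Var "y"))           == ["y"]   -- y is free in (\x.y)
    -- freeVars (Lam "x" (Lam "y" (Var "x"))) == []      -- K is a combinator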

We haven't gone over the idea of a type yet, but it is important to think about the types of these terms, so I will use types informally and ignore various details. The term I is the easiest: we write (a -> a) as its type, which should be read as "given something of type a, return that thing." This is the identity function f(x) = x, and in logic it is the proposition "p implies p," or p -> p.
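
In Haskell, repeating the definition from above but now with its inferred type written out:

    i :: a -> a   -- read: "p implies p"
    i x = x       -- the Prelude ships this function as `id`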

K is the next easiest, and it has the type a -> b -> a, which should be read as a function of two arguments that ignores its second argument and returns its first. In logic this would be "p implies that q implies p," which is obviously true: if p is true then anything implies it. If we write this as f(x, y) = x it doesn't seem very useful, but it actually comes up in programming every once in a while in the curried form (\x.\y.x). If you apply an argument, say 3, then you get back the function g(y) = 3. This is why K is called the constant function.
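
Here is k again with its type, plus the partial application (the name three is made up):

    k :: a -> b -> a   -- read: "p implies (q implies p)"
    k x y = x          -- the Prelude ships this as `const`

    -- Applying one argument gives back the constant function g(y) = 3:
    three :: b -> Integer
    three = k 3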

The very last is the most complex. S has the type (a -> b -> c) -> (a -> b) -> a -> c. I found this very confusing at first, and the interpretation in logic is actually very helpful. Remember modus ponens? This is one of the basic rules of argument: from "p implies q" and "p," conclude "q." In other words, if p implies q and we know that p is true, then q must also be true because it is implied by p. S's type is this same step carried out under an extra assumption p: it is much clearer written out as (p -> (q -> r)) -> ((p -> q) -> (p -> r)), which would be read as "if p implies that q implies r, then if p implies q, p implies r." In math, given a function f(x, y) and a function g(x), we can make the function h(x) = f(x, g(x)).
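
And s again with its type, plus a made-up example to show the h(x) = f(x, g(x)) reading:

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)   -- h(x) = f(x, g(x)), in curried form

    -- With f = (,) and g = (*2): pair each number with its double.
    pairWithDouble :: Integer -> (Integer, Integer)
    pairWithDouble = s (,) (*2)   -- pairWithDouble 5 == (5, 10)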

I was very surprised the first time I saw terms of the lambda calculus interpreted as propositions in logic, and even more surprised when I read that these terms are not just propositions: they are axioms. In other words, they are part of the basis of (propositional) logic! The types of K and S are exactly the two axioms of the implicational fragment of intuitionistic logic, and I turns out to be derivable from them.
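
Spelled out (a sketch, with s and k as defined earlier): the two axiom schemes are the types of K and S, the inference rule of modus ponens is function application, and I does not even need to be an axiom, because S K K already has the identity type:

    -- Axiom 1 ("K"):  A -> (B -> A)
    -- Axiom 2 ("S"):  (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    -- Inference rule: modus ponens, i.e. function application.

    -- I is derivable:  s k k x  =  k x (k x)  =  x
    iDerived :: a -> a
    iDerived = s k k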

I want to move on to type systems as an independent topic, continuing the discussion of the lambda calculus from a slightly different perspective so that the previous material is not a prerequisite. I would also like to get to the Y combinator as part of the motivation for type systems.
