Any term in the lambda calculus is one of three things: 1. a variable, 2. an application, or 3. an abstraction. This means that any term can be broken down into pieces that are themselves one of those three cases. The most common way to prove things in this system is by induction on the structure of a term.
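To make this concrete, here is a minimal sketch of that three-case structure as a Haskell data type (the name Term and its constructors are my own choices, just for illustration):

-- A lambda calculus term is exactly one of three things.
data Term
  = Var String        -- a variable
  | App Term Term     -- an application of one term to another
  | Lam String Term   -- an abstraction binding a variable in a body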
In Haskell, we have the hugely useful pattern matching mechanism. Until now I thought of it only as a nice way to take apart things defined by algebraic data types and do case analysis. But of course this is exactly induction on a data structure: the function doing the matching proves a property of the structure by showing it for the base cases and handling the recursive cases by structural induction. Not only are programs proofs, they can have the same form as a proof! The type of the function is a statement of which the function is a proof.
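As a small (hypothetical) example over the Term type above, a function computing free variables reads as a proof by structural induction: the type is the statement "every term has a list of free variables", the Var equation is the base case, and the App and Lam equations are the inductive cases:

import Data.List (union)

freeVars :: Term -> [String]
freeVars (Var x)   = [x]                            -- base case
freeVars (App f a) = freeVars f `union` freeVars a  -- inductive case
freeVars (Lam x b) = filter (/= x) (freeVars b)     -- inductive case: x is bound here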
One of the cool things about this is that it makes it much clearer (to me) that not only are programs proofs, but proofs are programs. In constructivist math all proofs must actually construct the thing they are proving, and are therefore algorithms. This is really inspiring me to read up on constructivist math; for reference, a good blog about this is: http://math.andrej.com/
I've also just realized why there is such a close association between functional programming and lists: lists embody a very simple form of induction. This even hints at the connection between functions and Scott domains. Also, not only can we do induction in Haskell, we can actually do coinduction, because we can create codata. Very cool.
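A quick sketch of both sides of that claim (the function names are my own, for illustration): foldr is essentially the induction principle for lists, and laziness lets us build codata like an infinite stream and consume only as much as we demand:

-- Induction on lists: supply a base case for [] and a step for (:).
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- Codata: an infinite stream of naturals, built lazily.
naturals :: [Integer]
naturals = go 0 where go n = n : go (n + 1)

firstTen :: [Integer]
firstTen = take 10 naturals  -- only ten steps of the stream are ever forced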