In Haskell the whitespace character indicates application, so "f a" is f applied to a, and "f a b" is "f a" applied to b. This works because functions are automatically curried: if f has the type A->B->C with a:A and b:B, then f a b : C. In a concatenative language, whitespace indicates composition, and arguments such as constants are themselves considered functions, so for example the number 1 is a function ()->N, a function of no arguments returning a single number. An expression "a f" is then ()->A composed with A->B, resulting in a function of type ()->B.
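In the continuation-passing reading developed below, a constant is most naturally written in Haskell as a function that hands its value to a continuation. A minimal sketch (the names lit and example are mine, not part of any existing library):

```haskell
-- A constant like 1 becomes a function handing 1 to whatever comes next;
-- the "()->N" view is most naturally typed in Haskell as (N -> r) -> r.
lit :: a -> (a -> r) -> r
lit x k = k x

-- "1 negate" in concatenative style: the constant feeds the next word.
example :: Int
example = lit 1 negate   -- negate receives 1, giving -1
```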
Interestingly, a concatenative expression can be translated to applicative form very easily: by translating it into a series of compositions of functions that take continuations. I imagine this would make it fairly easy to typecheck such programs by translating them to lambda terms and type-checking the resulting term. I'm not sure it works out nicely in all cases or in the presence of more complex expressions, so this post is an investigation of the concept. The reason I'm skeptical is that I've seen people discussing the difficulties in typing such languages, so I doubt that putting them in continuation-passing style is enough to solve these problems.
I'm particularly interested in pure functional concatenative languages, and in simple ones that can easily be used to generate code for, say, a microcontroller. The syntax makes lexical analysis easy, the semantics are particularly easy to implement in a Forth-style runtime, and more complex runtime systems are possible.
A simple example of a program in a concatenative language would be "1 2 +", which evaluates to 3. Translating that to a lambda term (treating numbers and addition as primitives rather than encoding them as lambda terms) we could get "(\f.f 1) o (\f.f 2) o (\f.\y.\x.f (x+y))" with "o" as function composition. This expands to "(\z.(\f.f 1)((\f'.f' 2)((\f''.\y.\x.f''(x+y)) z)))" if we alpha-convert the symbols "f" to f' and f'' in the second and third terms to make them distinct. We could have expanded it differently, but it would have reduced to this. Reducing further requires that we give this new term a continuation. The best term to supply is the one that adds the least new information, and since application of lambda terms has a left identity, the identity function (\x.x), this is the term to apply, giving "(\z.(\f.f 1)((\f'.f' 2)((\f''.\y.\x.f''(x+y)) z)))(\x.x)" => "(\f.f 1)((\f'.f' 2)(\y.\x.(\x.x)(x+y)))" => "(\f.f 1)((\f'.f' 2)(\y.\x.x+y))" => "(\f.f 1)((\y.\x.x+y) 2)" => "(\f.f 1)(\x.x+2)" => "(\x.x+2) 1" => "1+2" => "3", which is the intended result.
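The same reduction can be checked mechanically in Haskell. Here is a sketch; lit, plus, and program are my names for the three composed terms, not established vocabulary:

```haskell
-- "1" and "2" as continuation-taking constants.
lit :: a -> (a -> r) -> r
lit x k = k x

-- "+" pops its two arguments through the continuation's parameters,
-- mirroring (\f.\y.\x.f (x+y)).
plus :: (Int -> r) -> Int -> Int -> r
plus k y x = k (x + y)

-- "1 2 +" is the composition of the three words; the result still
-- awaits a final continuation, the role played by (\x.x) above.
program :: (Int -> r) -> r
program k = lit 1 (lit 2 (plus k))

-- Supplying the identity continuation reads off the answer.
result :: Int
result = program id   -- 3, as in the hand reduction
```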
This translation seems to deal nicely with programs that do not produce a single result. A term that is only partially applied, like "1 +", results in a function requiring an extra argument. Going the other way, a term like "1 2" results in something that requires a further function to reduce it: what is usually implemented as a stack is handled by a function requiring a continuation to consume the extra arguments. This only works out so nicely if we curry all functions, which we can do without loss of generality.
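Both situations show up directly in the types, under the same hypothetical lit/plus encoding as in the worked example:

```haskell
lit :: a -> (a -> r) -> r
lit x k = k x

plus :: (Int -> r) -> Int -> Int -> r
plus k y x = k (x + y)

-- "1 +": after taking its continuation, the word still wants one more
-- stack item before it can produce a result.
onePlus :: (Int -> r) -> Int -> r
onePlus k = lit 1 (plus k)

-- "1 2": two pushes and no operator; the continuation itself must
-- accept both leftover values, standing in for the stack.
oneTwo :: (Int -> Int -> r) -> r
oneTwo k = lit 1 (lit 2 k)
```

Note that the continuation in oneTwo receives the most recently pushed value (the top of the stack, 2) first.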
To flesh this out, let's define some of the usual terms in these kinds of languages: dup, drop, swap, tuck, nip, and rot. In each case the continuation f comes first, and the remaining lambdas bind stack items starting from the top. Okay: drop is (\f.\x.f), or K; dup is (\f.\x.(f x) x); swap is (\f.\x.\y.(f y) x); tuck is (\f.\x.\y.((f x) y) x); nip is (\f.\x.\y.f x); and rot is (\f.\x.\y.\z.((f z) x) y).
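Rendered as Haskell (primed names are mine, to avoid clashing with the Prelude), with the convention that the continuation comes first and the remaining arguments bind stack items top-first, and Forth-style stack effects in the comments:

```haskell
drop' :: r -> a -> r
drop' f _ = f                      -- ( a -- )

dup' :: (a -> a -> r) -> a -> r
dup' f x = f x x                   -- ( a -- a a )

swap' :: (b -> a -> r) -> a -> b -> r
swap' f x y = f y x                -- ( a b -- b a )

tuck' :: (a -> b -> a -> r) -> a -> b -> r
tuck' f x y = f x y x              -- ( a b -- b a b )

nip' :: (a -> r) -> a -> b -> r
nip' f x _ = f x                   -- ( a b -- b )

rot' :: (c -> a -> b -> r) -> a -> b -> c -> r
rot' f x y z = f z x y             -- ( a b c -- b c a )
```

The types alone are informative: each word's signature records how many items it consumes and in what order its continuation receives the results.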
I'm not sure that this works for some complex terms, but it does seem to deal correctly with things like "quoted" terms, usually written as "'+", indicating the function + as an object to be placed on the stack. Such terms are just like other constants; in this example, (\f.f(\g.\y.\x.g(x+y))). This is exactly the "return", or unit, of the continuation monad, which takes an arbitrary term and turns it into a continuation; it has type a->(a->r)->r for a result type r.
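Under the same encoding, quoting is just this unit applied to a word. Below is a sketch; plusWord is the CPS addition word from before, and applyW is a hypothetical word (playing the role of Forth's EXECUTE or Joy's i) that pops a quotation and runs it:

```haskell
-- The unit/return of the continuation monad: wrap any value as a pusher.
quote :: a -> (a -> r) -> r
quote x k = k x

-- The "+" word itself, as a value to be quoted.
plusWord :: (Int -> r) -> Int -> Int -> r
plusWord k y x = k (x + y)

-- Pop a quoted word and invoke it on the current continuation.
applyW :: k -> (k -> r) -> r
applyW f q = q f

lit :: a -> (a -> r) -> r
lit x k = k x

-- "1 2 '+ apply": push 1, push 2, push the quoted word, then apply it.
quotedProgram :: (Int -> r) -> r
quotedProgram k = lit 1 (lit 2 (quote plusWord (applyW k)))
```

Here "'+" is literally quote plusWord, and quotedProgram id reduces to 3 just as the unquoted program does.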
Maybe more on this later, especially if it turns out not to work in some situations. The application I have in mind is either to translate such terms into lambda terms and let Haskell type check them, or to use them as an exercise in type inference in the nicely familiar setting of the (polymorphic) lambda calculus.