Tuesday, February 21, 2012

Denotational Programming and Real Life

I just watched a pretty interesting lecture called "Inventing on Principle", http://vimeo.com/36579366. The lecturer presents the idea that inventors should have a principle behind their work, something like what artists may develop for theirs. The speaker's principle is that ideas are important, and that the creative process requires immediate interaction with the work, without which ideas often die. As examples he shows some nicely interactive programs where things normally worked on in the static medium of text are given something live to play with: an image in one program, a game in another, and the values of the variables in a binary search. One of the cool ones was investigating a circuit and seeing its properties change in "real-time".

One thing that strikes me about this particular implementation is that it seems to rely on state. Certainly there is a notion of state in animation and games, there is a notion of state in imperative programming, and there is even the physical experience of the computer (its changing state in time). This brings up a possible weakness of the denotational/declarative programming that I love so much, similar to the one in my post "A Case for Imperative Reasoning". There the objection was that denotational programming gives up the sometimes vital ability to specify certain aspects of the execution of a program (see "Lazy Functional State Threads", however). Here we have a different problem, which is that declarative programs are in a certain sense static or finished. This is a problem because we can't interact with them or easily see the steps between the starting program and the result.
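As a concrete version of that parenthetical: the ST monad from "Lazy Functional State Threads" lets us write a genuinely sequenced, imperative-looking computation behind a pure interface. A minimal sketch (the function and its name are mine, just for illustration):

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef, modifySTRef')

    -- An imperative, explicitly sequenced loop over a mutable reference;
    -- runST guarantees the state cannot escape, so callers see a pure value.
    sumTo :: Int -> Int
    sumTo n = runST $ do
      acc <- newSTRef 0
      mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
      readSTRef acc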

It is interesting that the fundamental problem here is one of reasoning. The whole point of declarative programming is that we have well-understood principles with which to reason, so we can understand our programs in a way that is impossible in certain other languages. One way to look at this problem is that we want to provide at least the tools for reasoning about our programs that we expect in imperative languages, without giving up the awesomeness of the declarative tradition.

One thing I want to mention is that in a language like Haskell we can definitely interact with the environment, create stateful programs, and create reactive programs. However, lazy evaluation means expressions are evaluated only on demand, and it is not easy to predict (for me at least) what will be evaluated and when. The type checker is similarly opaque. There has been work on removing this difficulty (see "Compositional Explanations of Types and Algorithm Debugging of Type Errors") and on adding debugging capabilities to Haskell, or at least post-mortem traces. In some sense this isn't as much of a problem as it is in other languages: in a pure language a piece of functional code never acts strangely or misbehaves, so if it works in isolation then it works everywhere, for all time. Also, the values of algebraic data types are normally easy to visualize, and there are programs for drawing them.
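A tiny example of the unpredictability I mean, using Debug.Trace to watch when things actually get evaluated (a sketch; the variable names are arbitrary):

    import Debug.Trace (trace)

    -- Lazy evaluation is demand-driven: the traces fire when a value is
    -- first forced, not where it appears in the source. Printing y forces
    -- y and then x, so "evaluating y" appears before "evaluating x"; the
    -- later print of x fires no trace at all, since x is already a value.
    main :: IO ()
    main = do
      let x = trace "evaluating x" (2 + 2)
          y = trace "evaluating y" (x * 10)
      print y
      print x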

Despite the existence of these tools in particular, and what seems to be a generally improving ecosystem of tools for functional languages in general, this lecturer's ideas may be applicable in some way to pure functional languages, and I expect they would manifest differently in Haskell than in, for example, JavaScript. Perhaps a term reducer could show a reduction sequence as a tree, or a heap visualizer (these already exist, btw), or something like what the speaker showed but with the values of bound variables instead of the state of assigned variables, or maybe something much more exotic? I like visualizations of complex things like programs, but at large scale they are often just pretty pictures. Maybe there is a form for something like a stack trace (in languages where you actually get such a thing) or the heap that is meaningful and could be inspected on a sample input, to get feedback on, for example, performance and memory usage?
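To make the term-reducer idea slightly more concrete, here is a toy sketch for a made-up arithmetic language (the types and names are my own invention, not any existing tool): a small-step reducer that keeps every intermediate term, which is exactly the data such a visualization would need.

    -- A toy expression language with a small-step reducer that records
    -- every intermediate term, so the whole reduction sequence can be
    -- displayed, or rendered as a tree.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
      deriving Show

    -- One reduction step, leftmost-innermost; Nothing when fully reduced.
    step :: Expr -> Maybe Expr
    step (Lit _)               = Nothing
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
    step (Mul (Lit a) (Lit b)) = Just (Lit (a * b))
    step (Add a b) = case step a of
      Just a' -> Just (Add a' b)
      Nothing -> fmap (Add a) (step b)
    step (Mul a b) = case step a of
      Just a' -> Just (Mul a' b)
      Nothing -> fmap (Mul a) (step b)

    -- The full trace: the starting term and every term after it.
    reductions :: Expr -> [Expr]
    reductions e = e : maybe [] reductions (step e)

Running mapM_ print (reductions (Add (Mul (Lit 2) (Lit 3)) (Lit 4))) prints the whole sequence, one term per step.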

Wednesday, February 15, 2012

Spaces in Genetic Algorithms

There are many spaces involved in a Genetic Algorithm: the space of solutions we want to search, the space of representations that are our individuals, the space of values that make up the individuals (the space {0, 1} for bit-vector representations), the population as a space (normally a set), the space of populations (the set of all possible populations), and the space of fitnesses (usually R+). Some of these spaces can be ordered in different ways, and often given different distance metrics. Their shapes, and the mappings between them, have complex effects on the run of a GA.
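One way to make this inventory concrete is to write the spaces down as types; a sketch with names of my own choosing:

    -- A type-level inventory of the spaces in a GA (names are mine, for
    -- illustration). The mappings between spaces are the interesting part:
    -- decode goes from representations to solutions, fitness from
    -- solutions to non-negative reals.
    type Gene       = Bool            -- the value space {0, 1}
    type Individual = [Gene]          -- the representation space
    type Population = [Individual]    -- one point in the space of populations
    type Fitness    = Double          -- the fitness space, usually R+

    data GA solution = GA
      { decode  :: Individual -> solution   -- representation -> solution space
      , fitness :: solution -> Fitness      -- solution -> fitness space
      }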

I've been trying to describe Genetic Algorithms in terms of these spaces, so that a GA is a collection of movements within them. Since a GA is also stochastic, and some movements must be more likely than others in some situations, it seems like movement plus sampling from (usually discrete) probability distributions is all there is to a GA. If we can describe GAs this way we may get two nice things: a way of describing and unifying our treatment of different variations (although some may be easier to describe this way than others), and possibly some obvious generalizations that suggest modifications to GAs. There may also be properties of GAs that aren't obvious otherwise, although I don't know much about the literature on their formal study, except vaguely that Markov chains have been used to represent them, so I don't really know how to proceed there.
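A sketch of what "movement plus sampling" might look like as code, under my own naming (not from any GA library): selection is a draw from the discrete distribution that fitness induces on the population, and mutation is a stochastic move in the representation space.

    import System.Random (randomRIO)

    -- Fitness-proportional (roulette-wheel) selection: sample one
    -- individual from the discrete distribution given by its weight.
    select :: [(a, Double)] -> IO a
    select scored = do
      let total = sum (map snd scored)
      r <- randomRIO (0, total)
      pure (pick r scored)
      where
        pick r ((x, w) : rest)
          | r <= w || null rest = x
          | otherwise           = pick (r - w) rest
        pick _ [] = error "select: empty population"

    -- Point mutation as a move in the representation space:
    -- flip each bit independently with probability p.
    mutate :: Double -> [Bool] -> IO [Bool]
    mutate p = mapM $ \b -> do
      r <- randomRIO (0, 1 :: Double)
      pure (if r < p then not b else b)

Crossover would fit the same mold: a sample from a distribution over pairs of individuals, followed by a movement that combines them.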

If nothing else, I like the idea of visualizing GAs, and this suggests some ways to make such visualizations approachable through intuitive descriptions. It's common to talk about the solution space as a landscape with mountains and valleys, but this doesn't really give the whole story at all. That is especially true when the solutions and the individuals are not the same, which is actually very common.

Books Have Arrived

I just got some books in the mail: "Categories of Types", "Category Theory", "Types and Programming Languages", and "Advanced Topics in Types and Programming Languages". Each one seems pretty amazing, and together they cover types from practical implementation, to advanced type systems with lots of nice features, to their category-theoretic interpretations, to pure category theory.

I have a lot to read.