One thing that strikes me about this particular implementation is that it seems to rely on state. Certainly, there is a notion of state in animation and games, there is a notion of state in imperative programming, and even in the physical experience of the computer (its changing state in time). This brings up a possible weakness of the denotational/declarative programming that I love so much. This is similar to my post "A Case for Imperative Reasoning". There, the objection was that denotational programming gives up the sometimes vital ability to specify certain aspects of the execution of a program (see "Lazy Functional State Threads", however). Here we have a different problem, which is that declarative programs are in a certain sense static or finished. This is a problem because we can't interact with such a program, or easily see the steps between the starting program and the result.
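To make the "Lazy Functional State Threads" point concrete, here is a minimal sketch of the ST-monad idea from that paper: a computation written with explicitly sequenced mutable state, wrapped up so that the result is still an ordinary pure function (the function name `sumST` is mine, just for illustration):

```haskell
import Control.Monad.ST
import Data.STRef

-- Sum a list using an explicitly sequenced mutable accumulator.
-- runST guarantees the mutation cannot leak out: from the caller's
-- point of view, sumST is a pure function.
sumST :: Num a => [a] -> a
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10 :: Int])
```

So we get back some control over execution order and in-place update, without giving up purity at the boundary.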
It is interesting that the fundamental problem here is one of reasoning. The whole point of declarative programming is that we have well-understood principles with which to reason, so we can understand our programs in a way that is impossible in certain other languages. One way to look at this problem is that we want to provide at least the tools for reasoning about our programs that we expect in imperative languages, without giving up the awesomeness of the declarative tradition.
One thing I want to mention is that in a language like Haskell, we can definitely interact with the environment, we can create stateful programs, and we can create reactive programs. However, the language may reorder evaluation in many ways, and it is not easy to predict (for me at least) which expressions will be evaluated and when. This is also true of something like the type checker, which is similarly opaque. There has been work on removing this difficulty (see "Compositional Explanations of Types and Algorithm Debugging of Type Errors") and on adding debugging capabilities to Haskell, or at least post-mortem traces. In some sense this isn't as much of a problem as it is in other languages: in a pure language, a piece of functional code never acts strangely or misbehaves; if it works in isolation, then it works everywhere, for all time. Also, algebraic data types are normally easy to visualize, and there are programs for drawing them.
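The point about evaluation order being hard to predict can be made concrete with `Debug.Trace` (a sketch; exactly when the traces fire depends on demand, and can shift with compiler and optimization settings):

```haskell
import Debug.Trace (trace)

-- Under lazy evaluation, bindings are forced by demand, not by their
-- textual order. Neither 'a' nor 'b' is evaluated here until 'print'
-- demands the final result.
example :: Int
example =
  let a = trace "evaluating a" (1 + 1)
      b = trace "evaluating b" (2 + 2)
  in b + a  -- the demand pattern, not the definition order, drives the traces

main :: IO ()
main = print example
```

Running this prints the two trace messages in an order determined by how `(+)` demands its arguments, which is precisely the kind of operational detail that is invisible in the program text.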
Despite the existence of these tools in particular, and what seems to be a generally improving ecosystem of tools for functional languages in general, this lecturer's ideas may be applicable in some way to pure functional languages, and I expect that they would manifest differently in Haskell than in, for example, JavaScript. Perhaps a term reducer could show a reduction sequence as a tree, or a heap visualizer (these already exist, btw), or something like what the speaker showed, but with the values of bound variables instead of the state of assigned variables, or maybe something much more exotic? I like visualizations of complex things like programs, but at large scale they are often just pretty pictures. Maybe there is a form for something like a stack trace (in languages where you actually get such a thing) or the heap that is meaningful and could be inspected on a sample input to get feedback on, for example, performance and memory usage?
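As a toy sketch of the term-reducer idea, here is a minimal expression language with a small-step reducer that records every intermediate term, i.e. the reduction sequence a visualizer could render (the `Expr` type and all names here are invented for illustration, not from any existing tool):

```haskell
-- A toy expression language.
data Expr = Lit Int | Add Expr Expr
  deriving (Eq, Show)

-- One leftmost-outermost reduction step, if any redex remains.
step :: Expr -> Maybe Expr
step (Lit _)               = Nothing
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add l r)             = case step l of
  Just l' -> Just (Add l' r)
  Nothing -> fmap (Add l) (step r)

-- Every intermediate term from the start term down to normal form.
reductionSequence :: Expr -> [Expr]
reductionSequence e = e : maybe [] reductionSequence (step e)

main :: IO ()
main = mapM_ print (reductionSequence (Add (Add (Lit 1) (Lit 2)) (Lit 3)))
```

A real tool would of course handle a full language and draw the terms as trees, but even this list-of-steps form gives the "in-between states" that a finished declarative program normally hides.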