The fitness in this case can just be how many test cases the function/multiplexer works correctly for. In John Koza's work he used how many it misclassified instead, and then applied a scaling function to normalize the fitness values and make sure they could be used in roulette wheel selection. I'm going the simpler route and just counting how many cases each solution gets correct (and yes, you do try every possible case for every individual every generation). I am free to minimize or maximize with RGEP, and scaling would have absolutely no effect since the scaling function will be either isotone or antitone; either way, tournament selection only compares ordering, not magnitude.
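To make the fitness measure concrete, here is a minimal sketch of exhaustive evaluation for an n-line multiplexer. The function names (`multiplexer`, `fitness`) and the bit ordering (address bits first, then data bits) are my own assumptions for illustration, not part of any particular RGEP implementation:

```python
from itertools import product

def multiplexer(bits, addr_len):
    # Reference semantics: the first addr_len bits select one of the
    # 2^addr_len data bits that follow them.
    addr = bits[:addr_len]
    data = bits[addr_len:]
    index = int("".join(str(b) for b in addr), 2)
    return data[index]

def fitness(candidate, addr_len):
    # Score a candidate boolean function by how many of the 2^n
    # possible input cases it gets right (higher is better).
    n = addr_len + 2 ** addr_len
    return sum(
        candidate(bits) == multiplexer(bits, addr_len)
        for bits in product((0, 1), repeat=n)
    )
```

For the 3-multiplexer (one address bit, two data bits) there are only 8 cases, so a perfect candidate such as `lambda b: b[1] if b[0] == 0 else b[2]` scores 8, while a constant function scores 4.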
I'm going to try this problem as a test for RGEP to show that it can evolve more than just functions, which is what most of my tests are doing. I plan on trying different size multiplexers, probably the 3-, 6-, and 11-line ones, because that is what Koza tried in one of his works on GP. I also want to try decision tree evolution in RGEP for some variety, and it would be pretty simple to evolve a decision tree for this problem. I think it might even be easier than the Boolean function approach, as a completely correct tree can be very short.
I will try to post about decision trees when I get that far.
So I barely understand what you said. The idea of programmable multiplexing is not something I have ever heard of, but that's cool (if I understood it right). Could that apply to PICs?