QuickCheck, the Haskell testing library, is a pretty amazing thing. The idea is that you don't specify test cases when testing your code, you specify what properties your code has. The QuickCheck library (which I'm using through the Tasty testing framework) will generate random data to check that your specified properties hold. If a test fails, it will try to reduce the generated input to a simple form and present it to you as a counterexample of the property.
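To give a flavor of what this looks like, here is the classic first property most people write: reversing a list twice gives back the original list.

```haskell
import Test.QuickCheck

-- Property: reversing a list twice is the identity.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
```

QuickCheck will generate random lists of `Int`, check the property on each, and report `OK, passed 100 tests.` if they all hold.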
There is a lot to say about this, so I'll just go over some things I liked:
1) Your test cases become a specification. Each test case declares a property of your code, and all together they give a description of what your intentions were.
2) Splitting pure and impure code is good practice, and QuickCheck certainly encourages this. You can test impure code with it too, but pure code is easier to test: it's easier to generate repeatable cases for, and failing examples should be easier to shrink.
I've seen a lot of Genetic Algorithm libraries mix the generation of random data and the application of that data to modify a population. There is no need to mix these two concerns, and splitting them out has definitely helped me with writing these tests.
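As a sketch of what that split might look like (the types and names here are hypothetical, not taken from any particular library): the random flip decisions are produced in one place, and a pure function applies them to a genome. The pure half can then be tested with ordinary properties.

```haskell
-- Hypothetical sketch: random choices are generated separately
-- from the pure function that applies them.
type Genome = [Bool]

-- Pure: apply a list of flip decisions to a genome, flipping each
-- locus whose decision is True.
applyMutations :: [Bool] -> Genome -> Genome
applyMutations = zipWith (\f g -> if f then not g else g)

-- Property: applying the same flips twice restores the genome.
-- The decisions are padded/truncated to the genome's length so the
-- two inputs line up.
prop_mutateTwiceIdentity :: [Bool] -> Genome -> Bool
prop_mutateTwiceIdentity flips genome =
  let fs = take (length genome) (flips ++ repeat False)
  in applyMutations fs (applyMutations fs genome) == genome
```

Because `applyMutations` never touches a random number generator, the property is trivially repeatable.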
3) You have to put some thought into what you meant when you wrote your code. I had a complex function at the heart of a Genetic Algorithm library I've been writing, and when I started writing test cases, I realized that it was not easy for me to say what properties it had. This indicates to me that I need to break it into smaller pieces that can be understood and tested separately.
This means that QuickCheck didn't just give me counterexamples, it is helping me improve the overall structure of my code!
4) Once you get used to creating Arbitrary instances and using them to come up with test cases, it is pretty easy to specify a lot of properties. This requires a different way of thinking about testing, but it seems very effective and quite manageable.
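A minimal `Arbitrary` instance for a hypothetical fixed-length bit-string individual might look like this (the `Individual` type is an illustration, not from my library):

```haskell
import Test.QuickCheck

-- Hypothetical individual: a fixed-length bit string.
newtype Individual = Individual [Bool] deriving (Show, Eq)

instance Arbitrary Individual where
  -- Generate a random 10-bit individual.
  arbitrary = Individual <$> vectorOf 10 arbitrary
  -- Shrink by shrinking the underlying bit list, so counterexamples
  -- are reported in a reduced form.
  shrink (Individual bs) = map Individual (shrink bs)
```

With the instance in place, any property taking an `Individual` argument can be handed straight to `quickCheck`.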
5) Testing a complex, efficient algorithm against a simple specification can give you confidence in your code. Being able to generate hundreds of random cases and compare the results seems like the perfect test.
I'm using an implementation of Point Mutation that is much more efficient than the usual implementation seen in nearly all Genetic Algorithm libraries, and I'm going to be very interested to test its correctness against the naive definition.
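The general shape of such a test, sketched with hypothetical stand-ins for both implementations (the "efficient" version below is only a placeholder that flips the chosen loci directly, not the real algorithm): given the same set of flip positions, both must produce the same genome.

```haskell
import Data.List (nub)

-- Naive specification: walk every locus and flip the marked ones.
naiveMutate :: [Int] -> [Bool] -> [Bool]
naiveMutate positions = zipWith step [0..]
  where step i b = if i `elem` positions then not b else b

-- Stand-in for an "efficient" version: flip the chosen loci directly
-- instead of inspecting every locus.
fastMutate :: [Int] -> [Bool] -> [Bool]
fastMutate positions genome = foldr flipAt genome positions
  where flipAt i g = case splitAt i g of
          (pre, x:post) -> pre ++ (not x : post)
          (pre, [])     -> pre

-- Property: both implementations agree on any valid set of positions.
prop_agree :: [Int] -> [Bool] -> Bool
prop_agree ps genome =
  let valid = filter (\i -> i >= 0 && i < length genome) (nub ps)
  in naiveMutate valid genome == fastMutate valid genome
```

The slow version acts as the specification; hundreds of random cases comparing it to the fast version is exactly the kind of test described above.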
I would bet there is even some way to take performance data and cause a test to fail if the "efficient" implementation is slower than the simple one (within a certain margin). This would be one nice way to regression-test your efficiency concerns.
6) For randomized algorithms, there are stories of incorrect implementations producing bad research data. There are a lot of properties of a Genetic Algorithm that could be specified in this way to ensure that the implementation does not contain certain types of mistakes.
For example: mutation (usually) increases diversity, selection reduces diversity, elitism maintains the best k individuals, a Genetic Algorithm out-performs another algorithm (say, a hill-climber) on a particular problem, a mutation rate of 0.5 (50%) on a bit-string individual mutates approximately 50% of its bits, etc. Individual runs of these tests may occasionally fail, but the properties must hold if enough random tests are performed.
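The mutation-rate property, for instance, could be sketched as a statistical test with a generous margin, so that it should virtually never fail over a large sample (the margin and sample size here are arbitrary choices, not tuned values):

```haskell
import Test.QuickCheck

-- Pure helper: fraction of draws that fall below the 0.5 threshold,
-- i.e. the fraction of loci that would be flipped.
flipFraction :: [Double] -> Double
flipFraction rs =
  fromIntegral (length (filter (< 0.5) rs)) / fromIntegral (length rs)

-- Statistical property: over 10,000 uniform draws, roughly half
-- should fall below 0.5. The wide 0.45-0.55 margin makes a spurious
-- failure astronomically unlikely.
prop_halfRate :: Property
prop_halfRate =
  forAll (vectorOf 10000 (choose (0, 1 :: Double))) $ \rs ->
    let r = flipFraction rs
    in r > 0.45 && r < 0.55
```

The same pattern works for the diversity and elitism properties: compute a statistic over randomly generated populations and assert that it lands within a tolerant bound.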
I can see why the Haskell community loves this technique so much. I've not looked into its cousins (SmallCheck and SmartCheck) but I fully expect that they are just as effective.
I highly recommend this technique. There are ports of this concept to quite a few languages, so there are no excuses.