Wednesday, June 6, 2012

Condensed Table Method for Sampling Discrete Probability Distributions

My brother once told me that when faced with a problem in computer science, the answer is to use a hash table. It turns out that a mere lookup table, along with some cleverness, can be just as good. In this case, a lookup table gives a pretty nice constant-time algorithm for sampling discrete probability distributions. In this post I will describe the Condensed Table method and its implementation in the Haskell library "statistics". I was just about to implement the Vose alias algorithm, but seeing this constant-time sampling method already implemented may convince me otherwise (depending on its space cost for the distributions I'm interested in, which I think will be easily justifiable). The original paper (called "Fast Generation of Discrete Random Variables" and found at http://ideas.repec.org/a/jss/jstsof/11i03.html) describes this algorithm nicely, and I recommend it more than my explanation.

This algorithm requires the creation of a series of tables, one for each digit of the probabilities we are interested in. This may seem like a lot of tables if we want to resolve down to many digits of each probability, but we are free to choose any base. The implementation in the statistics library uses base 256 and 32-bit numbers, so it needs only 4 tables (each digit is one byte of the 32-bit number). The implementation first multiplies the probabilities by 2^32 and takes their integer parts, but we can imagine that we are inspecting the probabilities, expressed in the desired base, one digit at a time.
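
To make the digit splitting concrete, here is a small sketch (my own code, not the statistics library's) of scaling a probability to a 32-bit integer and reading it off one byte at a time:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word32)

-- Sketch only: scale a probability into a 32-bit integer.
-- (A probability of exactly 1 would overflow a Word32; a real
-- implementation has to deal with that edge case.)
scale :: Double -> Word32
scale p = truncate (p * 4294967296)   -- p * 2^32

-- Read the scaled value one base-256 digit (byte) at a time, most
-- significant byte first; one table is built per digit position.
byteDigits :: Word32 -> [Word32]
byteDigits w = [ (w `shiftR` s) .&. 0xff | s <- [24, 16, 8, 0] ]
```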

Start with a set of items paired with weights normalized so that they are between 0 and 1 and sum to 1 (a discrete probability distribution with finite support). Notice that if we multiply each item's probability by, say, 10000 (assuming base 10) and truncate after the decimal point, we are left with integers between 0 and 10000. If we allocate slots in some huge array so that each item i, whose scaled probability is the integer n, occupies n slots, then sampling this array with a uniform distribution gets us item i with probability equal to the original probability truncated after a certain number of digits (4 in this case). In applications where speed is more important than correctness, this cost in accuracy is certainly worthwhile, especially as we can set it up so we get as much resolution as we could responsibly expect without resorting to much more complex and correct algorithms. The main problem is that we need (in this case) O(10000*m) space, where m is the number of items in the original distribution. The more digits we use and the more items in the distribution, the more space we need, and the growth in space is fairly quick.
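
Concretely, the naive version might look like this sketch (my own code, using base 10 and 4 digits rather than the library's base 256):

```haskell
import System.Random (randomRIO)

-- Naive "huge array" approach: an item with truncated scaled probability n
-- simply occupies n slots. (Truncation can drop a little probability mass;
-- that is the loss of accuracy discussed above.)
naiveTable :: [(a, Double)] -> [a]
naiveTable dist = concat [ replicate (truncate (p * 10000)) x | (x, p) <- dist ]

-- Uniform sampling over the slots. A real implementation would use a flat
-- array rather than a list, so the lookup itself would be constant time.
sampleNaive :: [a] -> IO a
sampleNaive table = do
  i <- randomRIO (0, length table - 1)
  return (table !! i)
```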

The trick in this algorithm can be seen by reorganizing the items in the large array. For an item that started with probability 0.252352 we multiply by 10000 and truncate, getting 2523. Instead of putting all 2523 copies of that item together, put just 2000 of them. Do this for each item, so we are left with an array ending with groups of fewer than 1000 (all the groups of 1000 or more sit together at the front of the array). We repeat this process, so the remaining 523 of our example item is split into 500 of that item and 23 at the end of the array. Doing this once more splits out the 20, and the last 3 land in the final group of groups (which are all smaller than 10).
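
For instance, the regrouping of the example count can be written as a tiny (hypothetical) helper that splits a count into its base-10 place-value groups:

```haskell
-- Split a truncated count into its base-10 place-value groups, matching the
-- example above: digitGroups 2523 == [2000, 500, 20, 3].
digitGroups :: Int -> [Int]
digitGroups n = [ (n `div` 10 ^ k `mod` 10) * 10 ^ k | k <- [3, 2, 1, 0] ]
```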

Having grouped these items by order of magnitude, we can reduce the 2000 to 2, the 500 to 5, the 20 to 2, and the 3 to 3. We have to keep these in separate tables so we know which groups have been reduced by what factor. We can then sample the distribution much as before, but we take care to look in the table corresponding to where in the original large array our draw would have fallen, even though we never store that array. In other words, we don't store the huge array at all, but rather a series of small tables, where the number of entries an item gets in each table is the corresponding digit of its probability.
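
Putting the pieces together, here is a sketch of the whole condensed table (again my own code, in base 10 with 4 digits so it matches the running example; the statistics library does the same thing in base 256 with 32-bit scaled probabilities, and none of these names are the library's):

```haskell
import qualified Data.Vector as V
import System.Random (randomRIO)

-- One table per digit position: the table for place value 10^k holds d
-- copies of an item whose 10^k digit (of its scaled probability) is d.
-- Build from items paired with their truncated scaled probabilities, e.g.
--   buildTables [ (x, truncate (p * 10000)) | (x, p) <- dist ]
buildTables :: [(a, Int)] -> [(V.Vector a, Int)]
buildTables scaled =
  [ (V.fromList (concat [ replicate (digit k n) x | (x, n) <- scaled ]), 10 ^ k)
  | k <- [3, 2, 1, 0] ]
  where digit k n = n `div` 10 ^ k `mod` 10

-- Draw r uniformly from [0, 10000) and walk the tables: each entry of the
-- table with place value w stands for w slots of the huge array we never built.
sampleCondensed :: [(V.Vector a, Int)] -> IO a
sampleCondensed tables = do
  r <- randomRIO (0, 9999)
  return (go r tables)
  where
    go r ((t, w) : rest)
      | r < V.length t * w = t V.! (r `div` w)
      | otherwise          = go (r - V.length t * w) rest
    go _ [] = error "scaled probabilities summed to less than 10000"
```

(Because of truncation the scaled counts can sum to slightly less than 10000, so a draw could fall off the end; the sketch just reports that as an error, whereas a full implementation would correct the rounding so every draw lands in some table.)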

These arrays will still be somewhat large, but by my calculations, they will be on the order of kilobytes or maybe a couple of megabytes for the applications I have in mind. I have no problem paying in space for an advantage in time, especially since I expect to sample a single distribution many tens of thousands of times.
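
To put a rough number on that (a back-of-the-envelope bound, not a measurement): with base 256 and 32-bit scaled probabilities, each item contributes at most 255 entries to each of the 4 tables, so the tables hold at most about 1000 entries per item, and roughly half that on average. At a few bytes per entry that works out to around a kilobyte or two per item, so a distribution with a few hundred to a few thousand items lands in that kilobytes-to-megabytes range.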

Again, I recommend the original paper: it walks through an example in much more depth than I bothered to.
