Robert said:
> The question makes me think you haven't thought about the problem
> long enough yet to have moved into the "so how do I implement this?"
> stage. What you're asking about is algorithmic analysis, not anything
> language-specific.
> That said, I'd suggest you start thinking about whether you really
> need this massive set. With 133,784,560 different elements in the
> set, you're talking about a nontrivial amount of data.
But you don't necessarily need to keep it all. Depending on what you're
doing, you might only need to look at each hand once and then throw it
away. On today's PCs, 133 million hands can be analysed pretty quickly.
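As a minimal sketch of that (in C, since no language has been mentioned
yet), here's one way to visit every 7-card hand in turn without ever
storing the set. analyse() is a hypothetical placeholder for whatever
per-hand work you actually need:

#include <stdio.h>

/* Placeholder for the real per-hand analysis; the hand is examined
   and then simply discarded. */
static void analyse(const int hand[7])
{
    (void)hand;
}

int main(void)
{
    int c[7];               /* card indices 0..51, kept in ascending order */
    unsigned long count = 0;

    for (c[0] = 0;        c[0] < 46; c[0]++)
    for (c[1] = c[0] + 1; c[1] < 47; c[1]++)
    for (c[2] = c[1] + 1; c[2] < 48; c[2]++)
    for (c[3] = c[2] + 1; c[3] < 49; c[3]++)
    for (c[4] = c[3] + 1; c[4] < 50; c[4]++)
    for (c[5] = c[4] + 1; c[5] < 51; c[5]++)
    for (c[6] = c[5] + 1; c[6] < 52; c[6]++) {
        analyse(c);
        count++;
    }

    printf("%lu hands\n", count);   /* prints 133784560 */
    return 0;
}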
> If your ultimate goal is to search the set for the frequency with
> which certain hands come up, you're going to be far better served
> solving your problem via math and statistics instead of exhaustive
> analysis.
I don't think so. Just for fun I once computed the probabilities of
various poker hands using maths alone, but it's very easy to make
mistakes that way, and how do you prove that you didn't make a mistake?
Some time after writing them all out in a nice table, I discovered that
the probability I'd calculated for getting two pairs when dealt five
cards was out by a factor of 2: I had 1/10.5 where the true value is
about 1/21. On the other hand, it's hard to go wrong if you simply deal
every possible hand and count the two-pairs among them. An additional
advantage is that you can make the criteria for a matching hand
anything you like without having to redo the combinatorics each time.
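As a sketch of that approach (same caveats as above; is_two_pair() is
just an illustrative predicate of my own naming), dealing all
C(52,5) = 2,598,960 five-card hands and counting gives the two-pair
figure directly: 123,552 hands, or about 1 in 21.

#include <stdio.h>

/* Cards are 0..51; rank = card % 13. With five cards, exactly two
   ranks appearing exactly twice means two pairs: the fifth card
   can't match either pair, or that rank would count 3, not 2. */
static int is_two_pair(const int h[5])
{
    int rank_count[13] = {0};
    int pairs = 0;

    for (int i = 0; i < 5; i++)
        rank_count[h[i] % 13]++;
    for (int r = 0; r < 13; r++)
        if (rank_count[r] == 2)
            pairs++;
    return pairs == 2;
}

int main(void)
{
    int h[5];               /* card indices 0..51, kept in ascending order */
    unsigned long total = 0, hits = 0;

    for (h[0] = 0;        h[0] < 48; h[0]++)
    for (h[1] = h[0] + 1; h[1] < 49; h[1]++)
    for (h[2] = h[1] + 1; h[2] < 50; h[2]++)
    for (h[3] = h[2] + 1; h[3] < 51; h[3]++)
    for (h[4] = h[3] + 1; h[4] < 52; h[4]++) {
        total++;
        hits += is_two_pair(h);
    }

    /* Expect 123552 / 2598960, i.e. about 1 in 21. */
    printf("%lu / %lu = 1 in %.2f\n", hits, total,
           (double)total / hits);
    return 0;
}

Counting any other kind of hand is then just a matter of swapping in a
different predicate.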
DW