Andreas said:
Except, if the machine is already RAM-stuffed to its limits...
Even if the machine wasn't yet fully RAM'ed, buying more RAM *and*
using the arrays kludge (yes, that's what it is, after all) would
allow even larger galaxies to be simulated.
The RAM-is-cheaper-than-programmer-time argument is useful to salt
the tail of the newbie who seeks to dive down every micro-optimization
rabbit hole he comes across on the path to the problems that truly
deserve such intense consideration. You have to admire the moxie of
the newbie who wants to catenate last name first as fast as possible,
but you explain to them that there are plenty of dragons to slay
further down the road.
It is not a good argument for someone who brings a problem that is truly
limited by available memory. Memory management is an appropriate
consideration for the problem. Memory management is the problem.
Memory procurement is the non-programmer solution. Throw money at it.
Scale up rather than out, because we can scale up with cash, but
scaling out requires programmers who understand algorithms.
You're right that scaling up hits a foreseeable limit. I like the
limitations of my program to be unforeseeable. That is, if I'm going
to read something into memory, say, every person in the world who
would loan money to me personally without asking questions, I'd like
to know that hitting the limits of the finite resource employed on a
contemporary computer system corresponds to a situation in reality
that is unimaginable.
Moore's Law does not excuse brute force.
Which is why I am similarly taken aback to hear RAM prices quoted for
something that has obvious solutions in plain old Java.
On some deeper level, a relational DB seems to actually use the
"separate arrays" approach, too. Otherwise I cannot explain the
relatively low cost of adding another column to a table that already
has 100 million rows in it.
On some deeper level, a relational database through an object relational
mapping layer will be paging information in and out of memory, on and
off of disk, as you need it. That is the feature you need to address
your memory problem.
Lately, I've been mucking about with `MappedByteBuffer`, so I imagine
that for your (hypothetical) problem of modeling the Galaxy, you would
keep the primitives you describe in the `MappedByteBuffer` and create
objects from them as needed. This is not `Flyweight` to my mind. In a
flyweight, you keep objects that map to a finite set of values, and
those values are assembled into a larger structure in an endless
number of permutations; the atomic components exist within the larger
structure, but they are reused. An interned `String` is a flyweight
to my mind.
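For the curious, a minimal sketch of setting up such a mapping. The
file name `galaxy.dat`, the star count, and the four-doubles-per-star
record layout are all assumptions of mine, not anything from the
original problem:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class GalaxyBuffer {
    public static void main(String[] args) throws Exception {
        // Assumed layout: four doubles (x, y, z, mass) per star.
        long starCount = 10_000_000L;
        long recordSize = 4L * Double.BYTES;    // 32 bytes per star

        try (RandomAccessFile file = new RandomAccessFile("galaxy.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // The OS pages this region in and out of physical memory
            // as it is touched, so the heap never holds the whole
            // galaxy at once. Note that a single MappedByteBuffer tops
            // out at 2 GB, so a truly large galaxy would need several
            // mappings.
            MappedByteBuffer stars = channel.map(
                    FileChannel.MapMode.READ_WRITE, 0,
                    starCount * recordSize);
            double x0 = stars.getDouble(0);     // x of star number 0
        }
    }
}
```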
I'm not sure what the pattern is for the short-term objectification
of a record, but that is a lot of what Hibernate is about: making
objecty that which is stringy, just long enough for you to do your
CRUD in the security of your type-safe world.
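Something like this hypothetical entity class, just to illustrate the
objectification (the `Person` class and its fields are my invention):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

// A hypothetical JPA/Hibernate entity: the "stringy" database row
// becomes "objecty" for exactly as long as the session needs it.
@Entity
public class Person {
    @Id
    private Long id;
    private String firstName;
    private String lastName;

    // Getters and setters omitted for brevity.
}
```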
100% agree with these points.
You create a `Star` object that can read its information from a
`MappedByteBuffer` at a particular index, and you simply change the
`read` and `write` methods of the star.
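A sketch of what that might look like, under my assumed record layout
of four doubles per star (the field names are made up):

```java
import java.nio.MappedByteBuffer;

// A reusable view onto one star record inside the mapped buffer.
class Star {
    static final int RECORD_SIZE = 4 * Double.BYTES;   // x, y, z, mass

    private final MappedByteBuffer buffer;
    double x, y, z, mass;

    Star(MappedByteBuffer buffer) {
        this.buffer = buffer;
    }

    // Load the fields of star number `index` out of the buffer.
    void read(int index) {
        int offset = index * RECORD_SIZE;
        x    = buffer.getDouble(offset);
        y    = buffer.getDouble(offset + Double.BYTES);
        z    = buffer.getDouble(offset + 2 * Double.BYTES);
        mass = buffer.getDouble(offset + 3 * Double.BYTES);
    }

    // Write the fields back into the record at `index`.
    void write(int index) {
        int offset = index * RECORD_SIZE;
        buffer.putDouble(offset, x);
        buffer.putDouble(offset + Double.BYTES, y);
        buffer.putDouble(offset + 2 * Double.BYTES, z);
        buffer.putDouble(offset + 3 * Double.BYTES, mass);
    }
}
```

One `Star` instance can be reused across the entire simulation loop,
record after record, so the live object count stays constant no
matter how large the galaxy grows.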
You've reached down to the deeper level of the ORM+RDBMS stack and
extracted the only design pattern you need to address the problem of
reading the Universe into memory.