Why the preference for powers of two?
Systems that live close to the architecture often find performance
benefits, if not hard requirements, in aligning things on boundaries.
It's quite possible that you'll find low-level disk seeks that can't go
to an arbitrary address, but instead deal in offsets from some block
boundary -- and that block size will almost invariably be a power of
two.
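To make that concrete, here's a minimal sketch of the arithmetic involved
(the 4096-byte block size and the class and method names are mine, purely
for illustration): when the block size is a power of two, rounding an
offset down to its containing block, or finding the offset within that
block, is a single bit-mask instead of a divide.

```java
// Sketch of the block-boundary arithmetic. The 4096-byte block size is an
// assumption for illustration; nothing in the question specifies it.
public class BlockAlignment {
    static final long BLOCK_SIZE = 4096;           // power of two
    static final long BLOCK_MASK = BLOCK_SIZE - 1; // 0xFFF

    // With a power-of-two block size these are single AND operations;
    // an arbitrary block size would need a divide and a remainder.
    static long blockStart(long offset)    { return offset & ~BLOCK_MASK; }
    static long offsetInBlock(long offset) { return offset &  BLOCK_MASK; }

    public static void main(String[] args) {
        long offset = 10_000;
        System.out.println(blockStart(offset));    // 8192
        System.out.println(offsetInBlock(offset)); // 1808
    }
}
```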
But in this case it's not at all clear (if it's even defined) whether
alignment matters, whether there's any performance implication at all,
whether the compiler or bytecode machine aligns things for you anyway,
whether it would actually be more efficient to use a prime number
instead of a power of two, or anything else about it. It's not as
though we commonly divide a buffer by two, or arrange buffers for a
best fit inside some larger power-of-two block, or deal separately with
"high and low half-buffers", or anything of that nature.
It appears this is a historical idiom, not of the language, but of the
programmers. But it's hardly coincidental. Everything digital is
organized in finite quantities, every resource being bounded by some
power of two.
Maybe the next generation will revisit the merits of this whole "binary"
thing, and something better will emerge. When it does, do you think we
will have to throw away everything we know about discrete math?
In the meantime, I'll bet a dollar it does not matter whether you make
your buffers 2000, 2047, 2048, or 2049 bytes. (And I'll gladly pay up
if someone can show me metrics that show otherwise!)
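If anyone wants to go collect those metrics, something along these
lines would be a starting point. The file name is a placeholder, and a
serious test would also have to account for OS caching, JIT warm-up,
and run-to-run noise; this is only a sketch of the measurement, not a
verdict.

```java
import java.io.FileInputStream;
import java.io.IOException;

// Rough sketch, not a rigorous benchmark: read the same file with buffers
// of 2000, 2047, 2048, and 2049 bytes and time each pass. "test.dat" is a
// placeholder; pass a real file path as the first argument.
public class BufferSizeBench {
    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "test.dat";
        int[] sizes = {2000, 2047, 2048, 2049};

        for (int size : sizes) {
            byte[] buf = new byte[size];
            long total = 0;
            long start = System.nanoTime();
            try (FileInputStream in = new FileInputStream(path)) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("buffer %4d bytes: %d bytes read in %d ms%n",
                    size, total, elapsedMs);
        }
    }
}
```

Run it a few times against a reasonably large file; if buffer size
really mattered, you'd expect a consistent gap between the 2048 row and
its neighbors rather than numbers lost in the noise.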