optimization of static data initialization


wkaras

I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?
 

Thomas Tutone

I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition, initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint. If your problem is code bloat (because, e.g.,
you are dealing with an embedded system and every extra byte counts),
you can find a way to hardcode the initialization values if you really,
really have to (e.g., by doing some casting - which you would want to
avoid otherwise, of course).
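
For instance, here is a minimal sketch of that kind of hardcoding (the
*_INIT macro names are just ones I invented for the example). If I
remember the rules correctly, the x array can go straight into the load
image because its initializers are integral constant expressions, while
the initializers of y and z are not constant expressions, so the
compilers fell back to runtime stores (the standard permits doing that
initialization statically anyway, but it does not require it). Spelling
the values out as literals makes every initializer a constant
expression again:

struct Y
{
int i;
double d;
};

// Invented helper macros so each value is only written in one place.
#define Y0_INIT { 1, 1.0 }
#define Y1_INIT { 2, 2.0 }
#define Y2_INIT { 3, 3.0 }

const Y y0 = Y0_INIT;
const Y y1 = Y1_INIT;
const Y y2 = Y2_INIT;

// All initializers are now literal constant expressions, so a typical
// compiler can place y and z in the initialized-data segment with no
// runtime copies from y0..y2.
Y y[] = { Y1_INIT, Y0_INIT, Y2_INIT };

int z[] = { 2, 1, 3 }; // literals instead of y1.i, y0.i, y2.i

It is ugly, since the values end up written twice (or hidden behind
macros), but nothing has to run before main() to set up y and z.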

But that also leads to the answer to your question. Virtually all
optimizations are implementation-defined. The answer to your question
is "Because you haven't chosen a compiler that supports the
optimization that you seek." Nothing stops you from finding a
different compiler - or, if necessary, paying someone to create a
compiler - that supports your desired optimization. Why doesn't your
average compiler support such an optimization? Well, a compiler-writer
generally is going to devote his or her time to writing optimizations
that lead to the most bang for the buck. The optimization you're
asking for doesn't help anyone except in extraordinarily contrived
situations. Given a choice between (a) efforts that will help make a
loop run more efficiently, or (b) a way of initializing const arrays of
structs that at best will save less than a microsecond each time a
program is run, which do you think the compiler writer will devote time
to? Or, put another way, which optimization do you think consumers
will pay more for?

Best regards,

Tom
 

wkaras

Thomas said:
I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
....

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope that
leads to the conclusion that any sort of compiler optimization
is not really of much value.
 

Kai-Uwe Bux

Thomas said:
I've compiled this code:

const int x0 = 10;
const int x1 = 20;
const int x2 = 30;

int x[] = { x2, x0, x1 };

struct Y
{
int i;
double d;
};

const Y y0 = {1, 1.0};
const Y y1 = {2, 2.0};
const Y y2 = {3, 3.0};

Y y[] = { y1, y0, y2 };

int z[] = { y1.i, y0.i, y2.i };

with a couple of compilers, with the highest possible optimization,
and looked at the disassembled object code. With both compilers,
only the x array is initialized from the load image. Move instructions
are generated to initialize both y and z. Why is it hard for the
compiler to initialize all of this from the load image, without having
to execute any init code at run time?

Before I give a possible answer, let me ask you this: Why do you care?
By definition initialization of a const array of structs will be
executed exactly once each time a program is run. On any modern
processor, that initialization will take, quite literally, less than a
microsecond, which is probably less than the margin of error for timing
the program. So the benefit of the optimization is essentially zero
from a speed standpoint.
...

You may very well be right. On the other hand, I think we all know
at least one person who has serious money problems, yet indulges
in many small luxuries, giving the argument that "it's only X Y's" (X
being some small number and Y being the name of your local
currency). Maybe that's not why they have money problems,
but it sure doesn't help. So I think it would be valuable for
researchers to try to quantify the value (or lack of value) of
"microoptimizations", either when done by the compiler or
habitually done by hand.

Hm, why would that be valuable for researchers? In case their study just
confirms common wisdom (i.e., preconception) that micro-optimization does
not pay off, they probably would not even have a publishable paper. I would
bet you that most researchers estimate that the other outcome is too
unlikely to justify the effort of a study.

Also, I work on a high-availability application, so optimization of
initialization is of special concern to me.

To some degree, I think your argument is on a slippery slope that
leads to the conclusion that any sort of compiler optimization
is not really of much value.

You snipped the other half of his argument: there are many other
optimizations that yield higher gains for a comparable amount of effort
on the compiler writer's part. Thus, the particular optimization that you
are interested in is assigned low priority. I cannot see anything
unreasonable here or any kind of slippery slope. The conclusion that all
optimization is close to useless is nowhere near any sensible
interpretation of the given rationale.


Best

Kai-Uwe Bux
 

wkaras

Kai-Uwe Bux said:
Hm, why would that be valuable for researchers? In case their study just
confirms common wisdom (i.e., preconception) that micro-optimization does
not pay off, they probably would not even have a publishable paper. I would
bet you that most researchers estimate that the other outcome is too
unlikely to justify the effort of a study.

At one time it was common wisdom that old wet rags kept
in the dark would turn into frogs (or something along those lines).
Common belief, if not backed up by quantitative analysis
and data, should only be relied upon as a last resort. Any
researcher who thinks they shouldn't publish (and you can always
publish anything now, on the internet if nowhere else) the
results of a study because of the outcome (even if the outcome
just confirms common belief) should not be a researcher.
I wouldn't know enough to rate the relative importance of
studying micro-optimizations, but I continue to think it's
worth studying.
You snipped the other half of his argument: there are many other
optimizations that yield higher gains for a comparable amount of effort
on the compiler writer's part. Thus, the particular optimization that you
are interested in is assigned low priority. I cannot see anything
unreasonable here or any kind of slippery slope. The conclusion that all
optimization is close to useless is nowhere near any sensible
interpretation of the given rationale.

If you reject optimizations with no verifiable quantitative analysis and
data, you'll tend to reject all optimizations, because it is always easier
to do nothing than something. So there is the slippery slope.
 

Kai-Uwe Bux

At one time it was common wisdom that old wet rags kept
in the dark would turn into frogs (or something along those lines).
Common belief, if not backed up by quantitative analysis
and data, should only be relied upon as a last resort.

This is not just any old common belief; it is the opinion of those who work
on compiler design and have invented and tried a wide variety of
optimization strategies. Chances are that their gut feelings are not far
off.

Any researcher who thinks they shouldn't publish (and you can always
publish anything now, on the internet if nowhere else) the
results of a study because of the outcome (even if the outcome
just confirms common belief) should not be a researcher.
I wouldn't know enough to rate the relative importance of
studying micro-optimizations, but I continue to think it's
worth studying.

As a researcher, you get recognition for publishing *interesting* results in
*respected* journals. Just confirming what all your fellows know already
(although maybe with a little less detail and justification) is not going
to be very interesting and won't make it into a peer-reviewed journal. A
researcher is better off spending his time on writing a different paper.

If you reject optimizations with no verifiable quantitative analysis and
data, you'll tend to reject all optimizations, because it is always easier
to do nothing than something. So there is the slippery slope.

Obviously, no one is going down that alleged slippery slope: Market forces
drive compiler vendors to include optimizations. Market forces also prevent
compiler vendors from investing too many resources in optimizations that
will benefit only a fringe group of customers.

Talking about market forces, ask yourself how much you would be willing to
pay: you could hire someone to hack that kind of optimization into g++.


Best

Kai-Uwe Bux
 
