Faster for() loops?


Keith Thompson

Joe Butler said:
prick.

Default User said:
Joe said:
OK, point taken.
[snip]

Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.


You need to get the other point, the one about not top-posting.

I'm going to make one and only one attempt to make this clear to you.
It's arguably more than you deserve, but if you improve your posting
habits it will benefit all of us. If not, we can achieve the same
benefit by killfiling you -- which I'm sure many people already have.

We don't discourage top-posting because we like to enforce arbitrary
rules. We do so because top-posting makes articles more difficult to
read, especially in an environment like this one where bottom-posting
is a long-standing tradition. (In other words, top-posting in an
environment where it's commonly accepted is ok; top-posting where it's
not commonly accepted makes *your* article more difficult to read
because we have to make an extra effort to read a different format.)

Usenet is an asynchronous medium. Parent articles can arrive after
followups, or not arrive at all. It's important for each individual
article to be readable by itself, from top to bottom, providing as
much context as necessary from previous articles. It's not reasonable
to expect your readers to jump around within your article to
understand what you're talking about.

Quoting one of CBFalconer's sig quotes:

A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

See also <http://www.caliburn.nl/topposting.html>.

I'm giving you good advice. You can believe me and follow it, or you
can ignore it and never be taken seriously here again. It's your
call.
 

Flash Gordon

Joe said:

<snip information on correct posting which has been ignored>

Well, one thing you seem to have learned is how to avoid getting help
when you want it. The convention of bottom posting has been established
for a long time, and the reasons for it have been discussed here and
elsewhere on many occasions, so I'm not going to debate them. Had you
taken note of the practices of this group you would have known this.
 

Flash Gordon

Joe said:
OK, point taken. Although, when working with very small memories, it can
make all the difference if a byte can be saved here and there. Afterall, 50
such 'optimisations' could amount to 10% of the total memory available. I'm
not necessarily suggesting this should be done from day 1, but have found it
useful just to get a feel for what the compiler works best with.

If saving 50 or even 100 machine code instructions saves you 10% of
memory then you only have space for 1000 instructions and would, in my
opinion, be better off programming in assembler where you actually have
control over what happens. Otherwise you might well find changes in the
compiler and library from version to version are more significant.

<snip>

I would refer you to the above, but based on your response to Brian I'm
guessing that you have no consideration for other users of this group.
Don't be surprised if this leaves you with only the ignorant to talk to.
 

Walter Roberson

OK, point taken. Although, when working with very small memories, it can
make all the difference if a byte can be saved here and there. Afterall, 50
such 'optimisations' could amount to 10% of the total memory available. I'm
not necessarily suggesting this should be done from day 1, but have found it
useful just to get a feel for what the compiler works best with.

If you are working with just 512 bytes of program memory, then you
probably should not be writing in C. C as a programming language makes
no attempt to minimize code space. And though it is a matter outside of the
standards, most compilers prefer to trade off space for increased speed.
 

Dan Henry

Where is the context? You've got me all twisted up.
Depends what you're doing.

I'm reading Usenet articles at the moment. "Depends what..." That's
what I am doing. What...?
If you're accessing a large chunk of memory on a system with
cache, you want to go through incrementing addresses to maximize the use of cache.

OK, I am not doing that at this particular moment. Why are you
advising me on my disposition to go for large chunks? I do sometimes
"swing the other way" after all. You may hate me, but don't judge me!
Decrementing through memory is generally pessimal.

I do it all the time. I'm bad, I know, but had to be told. Shame on
me.
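The quoted point about incrementing addresses can be sketched in C; the buffer and function names here are illustrative, not from the thread. On hardware with a simple next-line prefetcher, the ascending walk lets cache lines be fetched ahead of the loop, while the descending walk computes the same result against the prefetcher's grain.

```c
#include <stddef.h>
#include <stdint.h>

/* Ascending walk: addresses increase, which is the pattern simple
 * hardware prefetchers are built to anticipate. */
uint32_t sum_ascending(const uint8_t *buf, size_t n)
{
    uint32_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += buf[i];
    return total;
}

/* Descending walk: identical result, but the access pattern defeats
 * many prefetchers -- the "pessimal" case described above. */
uint32_t sum_descending(const uint8_t *buf, size_t n)
{
    uint32_t total = 0;
    while (n--)
        total += buf[n];
    return total;
}
```

Both functions are functionally interchangeable; only the memory access order differs, which is exactly why the choice matters on cached systems and not at all on the C level.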
 

Joe Butler

To Flash and Walter,

Maybe 10% saved for a 1k memory was a bit of an exaggeration on my part.

I'm currently working with some inherited AVR GNU code that was incomplete, by
quite some margin, yet close to the preferred memory limit with decent debug
info, and too big to feel comfortable without the debug. Checking out the
size differences resulting from alternative ways of writing the same code
has resulted in a worthwhile amount of memory being recovered (I believe) -
i.e. I'm more relaxed about the situation now. I'm telling the compiler to
optimise for size, since speed is not a problem. Of course, if things won't
fit at the end, then I'll have to look for other optimisations, such as the
assembler suggested. It's unlikely that I'll be forced to switch to a
different version of the compiler or libs. Changing the code's architecture
and data structures results in bigger savings - I'm almost through doing that
(for other reasons) and it looks like I'll recover over 1k of mem (out of
8k), and the resulting code is cleaner and easier to understand too.

You both probably know this through experience, but one trick I've found -
simply making a local copy of a global variable that is used a fair bit in
a function, and then copying it back to the global afterwards - saves a
reasonable enough amount of code size over the more obvious code that it
makes this particular trick worth knowing/trying when things do get tight.
It's obviously (to me now) a bigger saving if the global happens to be
volatile as well.
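Joe's trick can be sketched in portable C (the names are hypothetical; his real code is AVR-specific). The "obvious" version touches the global on every pass, which on a target with expensive absolute addressing can cost a load/store pair per iteration; the cached version brackets the loop with one load and one store.

```c
#include <stdint.h>

uint16_t checksum;   /* hypothetical global, used heavily by one function */

/* Obvious version: every update goes through the global, so a compiler
 * that doesn't promote it to a register may emit an absolute load and
 * store on each pass. */
void accumulate_direct(const uint8_t *p, uint8_t n)
{
    while (n--)
        checksum += *p++;
}

/* The local-copy trick: copy to a local, work there, copy back once.
 * Same observable result, but only one load and one store of the
 * global. */
void accumulate_cached(const uint8_t *p, uint8_t n)
{
    uint16_t sum = checksum;
    while (n--)
        sum += *p++;
    checksum = sum;
}
```

Whether this saves anything depends entirely on the compiler and target; an optimiser that keeps the global in a register makes the two versions compile identically.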

I have no personal problem trying these things out if it helps me to
understand what the compiler is likely to do with new code that I author
(after all, it takes about 6 minutes to try a little trick out, which means
that it'll take about an hour to know 10 new things about the behaviour of
the compiler and to recognise opportunities, etc. in the future.) Perhaps at
the end of the project, I would have had plenty of room anyway, but, as I
said, things were tight, I was feeling uneasy, and every time I wanted more
debug info, it meant choosing something else for temporary culling, which
was beginning to make things thoroughly difficult.

Things seem to be going 'swimmingly' now - I hope that holds up to the end.
 

Keith Thompson

Al Borowski said:

Snipping attributions is considered rude. The quoted material
starting with "I'm going to make one and only one attempt" is mine.
I don't remember who wrote the line above that.

There is one valid point in the referenced web page (titled "In
Defence of Top Posting"):

Bottom posting without snipping irrelevant parts is at least as
annoying as a top-post.

Which is why nobody recommends bottom-posting without snipping
irrelevant parts.

Apart from that, the argument seems to be based on assumptions about
the software people use to read Usenet, and the environment in which
it runs.

Once again: Usenet is an asynchronous medium. The parent article may
not have arrived yet. It may have expired. It may never arrive at
all. I may have read it a week ago and forgotten it, and I very likely
have my newsreader configured not to display articles I've already
read.

A simple command to jump to the parent article is a useful feature in
a newsreader, and one that exists in the one I use. I have no idea
which other newsreaders have such a command. That's why I try to make
each article readable by itself.

Perhaps the most telling point in the web page is:

A Threaded newsreader. These days, pretty much every PC has one.

Not everyone reads Usenet on a PC.

I realize this is cross-posted to comp.lang.c and comp.arch.embedded.
Perhaps top-posting is tolerated in comp.arch.embedded. In
comp.lang.c, we've reached a general consensus that top-posting is
discouraged, for perfectly valid reasons. And even if I *liked*
top-posting, I wouldn't do it in comp.lang.c; consistency is even more
important than the arguments in favor of a given style.
 

Al Borowski

Hi,

Peter said:
You might like to google for "fallacious arguments".


"...embarked on a crusade...
"...top-posting isn't the spawn of Satan...
"...Some...mantras...
"And to those of you who act like this is a religious issue,
get a life."

The only person bringing religion into this is yourself.


I wrote that page some time ago in response to an argument on a
different newsgroup. The language I used was very tame compared to some
of the abuse heaped on top-posters at the time.

I'm not getting into an online debate on top-posting. That page has my
views and I won't repeat them here.

thanks,

Al
 

David Brown

Keith Thompson wrote:
I realize this is cross-posted to comp.lang.c and comp.arch.embedded.
Perhaps top-posting is tolerated in comp.arch.embedded. In
comp.lang.c, we've reached a general consensus that top-posting is
discouraged, for perfectly valid reasons. And even if I *liked*
top-posting, I wouldn't do it in comp.lang.c; consistency is even more
important than the arguments in favor of a given style.

Top-posting is strongly discouraged in comp.arch.embedded as well. It's
not as bad as google groups posts that fail to include any context,
however, which seems to be considered the worst sin at the moment (I'm
not sure where posting in html ranks these days - I haven't seen any
html posts here for a while).

David.
 

Flash Gordon

Joe said:
To Flash and Walter,

Maybe 10% saved for a 1k memory was a bit of an exaggeration on my part.

If it's less, then it is even less worthwhile.

different version of the compiler or libs. Changing the code's architecture
and data structures results in bigger savings - I'm almost through doing that
(for other reasons) and it looks like I'll recover over 1k of mem (out of
8k), and the resulting code is cleaner and easier to understand too.

Which just goes to show that you should concentrate on high level
optimisations not micro-optimisations.
You both probably know this through experience, but one trick I've found of
simply making a local copy of a global variable, that is used a fair bit in
a function, and then copying it back to the global afterwards saves a
reasonable amount of code size, over the more obvious code,

I've used processors where it would be one instruction to set up the
offset then another to read/write the value for a local as compared to
in the worst case a single instruction the same size as that pair of
instructions for a global. Of course, the compiler optimisations make
this irrelevant with most compilers since they will keep it in a
register if it is used much.
that it makes
this particular trick worth knowing/trying when things do get tight - it's
obviously (to me now) a bigger saving if the global happens to be volatile
as well.

If it is volatile you are significantly changing the semantics, so
either the original code was wrong or your "optimised" code is wrong.
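Flash's point about volatile can be made concrete with a minimal sketch (the ISR is simulated by an ordinary function call, and all the names are invented). Volatile requires every access to re-read the object, so hoisting it into a plain local makes later uses see a stale value:

```c
#include <stdint.h>

volatile uint8_t status;          /* hypothetical ISR-written flag byte */

/* Stand-in for the interrupt firing; in real code hardware does this. */
static void simulate_isr(void) { status++; }

/* Volatile semantics honoured: each use of 'status' is a fresh read,
 * so the second read observes the ISR's update. */
uint8_t delta_correct(void)
{
    uint8_t first = status;
    simulate_isr();
    uint8_t second = status;
    return (uint8_t)(second - first);   /* the update is seen */
}

/* The local-copy "optimisation" applied blindly: the second use comes
 * from the cached copy, so the ISR's update is silently missed --
 * the program's semantics have changed. */
uint8_t delta_cached(void)
{
    uint8_t cached = status;            /* the only real read */
    uint8_t first = cached;
    simulate_isr();
    uint8_t second = cached;            /* stale value */
    return (uint8_t)(second - first);
}
```

This is why caching a volatile is only valid when something outside the language (such as interrupt masking, or being the sole writer) guarantees the object cannot change during the cached interval.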
I have no personal problem trying these things out if it helps me to
understand what the compiler is likely to do with new code that I author
(afterall, it takes about 6 minutes to try a little trick out, which means
that it'll take about an hour to know 10 new things about the behaviour of
the compiler and to recognise oportunities, etc. in the future.)

All that you learn about micro-optimisation on one system you have to
forget on the next where your "optimisations" actually make it worse.
Also, if your optimisations make the code harder to read you have just
increased the cost to the company of maintenance, and since maintenance
is often a major part of the cost of SW over its lifetime this is not good.
Perhaps at
the end of the project, I would have had plenty of room anyway, but, as I
said things were tight, I was feeling uneasy, and every time I wanted more
debug info, it meant choosing something else for temporary culling which was
beginning to make things thoroughly difficult.

It would almost certainly have saved you time and effort to restructure
the code and optimise the algorithms first (which you say you are doing)
since then you would not have found the need for "temporary culling"
because you have already admitted that it is saving you a significant
amount.
Things seem to be going 'swimmingly' now - I hope that holds up to the end.

Not really. You included in your message that restructuring is saving
you a vast amount of space which is further evidence that you should
always start at the algorithm and work down, not start with
micro-optimisation.

<snip>

For continual refusal to post properly despite being informed that top
posting is not accepted here...

*PLONK*
 

Joe Butler

Flash Gordon said:
If it's less then it is even less worth while.

If you say so. It seems to me that I've just given myself a bit more memory
to play with in a tight situation.
Which just goes to show that you should concentrate on high level
optimisations not micro-optimisations.

Yes, that's correct. It does not mean that so-called micro-optimisations
should not be attempted.
I've used processors where it would be one instruction to set up the
offset then another to read/write the value for a local as compared to
in the worst case a single instruction the same size as that pair of
instructions for a global. Of course, the compiler optimisations make
this irrelevant with most compilers since they will keep it in a
register if it is used much.

Well, in my case, GNU didn't seem to choose this optimisation - so the
trick saved some bytes - that's indisputable - and as far as I can see it's
odd that you'd want to dismiss the idea.
If it is volatile you are significantly changing the semantics, so
either the original code was wrong or your "optimised" code is wrong.

I don't think so. The code is in an ISR. While the ISR is running, the
global won't be changed externally. The little trick saved something like
28 bytes and yet the global was only touched twice, if I remember correctly.

However, if someone has a counter example, I would be interested in knowing.
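Joe's ISR case can be sketched like this (all names hypothetical). When the handler is the only writer and interrupts do not nest, the global cannot change between the handler's entry and exit, so working in a local and storing back once preserves the observable behaviour; that single-writer assumption is exactly what makes the transformation legitimate here where it was wrong in general:

```c
#include <stdint.h>

volatile uint8_t rx_count;   /* hypothetical counter shared with main loop */

/* Inside the only writer (a non-nesting ISR), rx_count cannot change
 * underneath us, so one volatile load and one volatile store bracket
 * the work done on a plain local.  The main loop still sees a single
 * coherent update when the final store lands. */
void uart_rx_isr(uint8_t bytes_received)
{
    uint8_t n = rx_count;    /* one volatile load */
    while (bytes_received--)
        n++;                 /* all intermediate work on the local */
    rx_count = n;            /* one volatile store */
}
```

If the main loop could also modify rx_count, or if a higher-priority interrupt could touch it mid-handler, the cached copy would be exactly the broken case Flash described.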
All that you learn about micro-optimisation on one system you have to
forget on the next where your "optimisations" actually make it worse.

Perhaps. Yet, for very little effort, I have given myself more room to play
with in the current system, I've got a better feeling for the compiler's
quirks, and have saved (with just a few micro-optimisations) about 250 bytes
(out of 8k) - that's plenty more to extend an existing parameter look-up
table built into the system in both directions (thus increasing the
versatility of the system), or to include more human-readable debug output,
and even to use more sophisticated code for some other operations that are
yet to be written. So, as far as I can see, I've gained by this exercise.
I've given myself an advantage, and you are still telling me it was next to
worthless because it comes under the label 'micro-optimisation' or
'premature optimisation' - to me, it's just an achieved gain.
Also, if your optimisations make the code harder to read you have just
increased the cost to the company of maintenance, and since maintenance
is often a major part of the cost of SW over its lifetime this is not
good.

Errm, in this case, I don't think the code is harder to read - it's
straightforward C code - nothing particularly difficult. There are ample
comments documenting reasons for some of the slightly non-obvious ways of
doing something, along with the 'obvious' code snippet for ready reference.
The code will go into a mass-produced product that is to be thoroughly tested
before release - maintenance is not an option.
It would almost certainly have saved you time and effort to restructure
the code and optimise the algorithms first (which you say you are doing)
since then you would not have found the need for "temporary culling"
because you have already admitted that it is saving you a significant
amount.

I can't see this. I've positively gained through this exercise. I have not
lost anything.
Things seem to be going 'swimmingly' now - I hope that holds up to the end.

Not really. You included in your message that restructuring is saving
you a vast amount of space which is further evidence that you should
always start at the algorithm and work down, not start with
micro-optimisation.

It looks like I will save about 1k with the restructure (but I cannot tell
for sure until I get the full restructure into the embedded compiler - I'm
currently writing and testing the restructured part under Windows). So, the
'micro-optimisations' would amount to another 25% on top of that - hardly a
worthless effort, wouldn't you say?
 

Robert Scott

Top-posting is strongly discouraged in comp.arch.embedded as well...

It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.


-Robert Scott
Ypsilanti, Michigan
 

pete

Robert said:
It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.

I don't like top posting.
 

Steve at fivetrees

Robert Scott said:
It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.

For the record, I simply don't care. I'm happy to make allowances for all
kinds. Life's too short.

Steve
http://www.fivetrees.com
 
