Which is faster: a dynamically linked executable or a statically linked executable? How do you decide?


pratap

Could someone clarify how one could reduce the size of executable
code at compile time?
Should one use specific compile-time flags in the makefile, or is it
advisable to go for dynamic linking?
The idea here is that the smaller the code size, the faster the code.
Is a dynamically linked executable really faster than a single
executable file which is not linked dynamically?

Are there any performance-measuring metrics for gcc version 3.2.2?
 

santosh

pratap said:
Could someone clarify how one could reduce the size of executable
code at compile time?
Should one use specific compile-time flags in the makefile, or is it
advisable to go for dynamic linking?
The idea here is that the smaller the code size, the faster the code.
Is a dynamically linked executable really faster than a single
executable file which is not linked dynamically?

Are there any performance-measuring metrics for gcc version 3.2.2?

This is off-topic for this group. Post to a system specific group or
comp.programming.
 

Flash Gordon

santosh wrote, On 05/03/07 06:27:
This is off-topic for this group. Post to a system specific group or
comp.programming.

There is one on-topic answer to some of it, and when you seriously need
to reduce the size of the executable because you are out of space it is
often the best answer: delete some of your code. Deleting code generally
(though not always) leads to faster execution, which is another of the OP's
goals. After all, it generally takes less time to execute 0 statements
than >0 statements.

:)
 

William Ahern

Could someone clarify how one could reduce the size of executable
code at compile time?

Write less code? I'm being serious. 1) It's the topical answer here; and
2) it may be the best answer you'll ever get. Sometimes it's the last thing
holding you back when you begin to over-design a solution or project.
Should one use specific compile-time flags in the makefile, or is it
advisable to go for dynamic linking?

Probably both. If you're using a Unix you might want to ask over at
comp.unix.programmer.
The idea here is that the smaller the code size, the faster the code.

Interesting idea. You won't know whether it's true or not unless you test
it with your existing platform, compiler, code and usage environment.
Is a dynamically linked executable really faster than a single
executable file which is not linked dynamically?

Try comp.unix.programmer or another relevant group.
Are there any performance-measuring metrics for gcc version 3.2.2?

Google might know, or someone on a GCC-specific group. Actually, there are
lots of knowledgeable GCC folk on comp.unix.programmer.
 

santosh

Flash said:
santosh wrote, On 05/03/07 06:27:

There is one on-topic answer to some of it, and when you seriously need
to reduce the size of the executable because you are out of space it is
often the best answer: delete some of your code. Deleting code generally
(though not always) leads to faster execution, which is another of the OP's
goals. After all, it generally takes less time to execute 0 statements
than >0 statements.

:)

I've often found deleting code to require more care than throwing in
more code! Maybe that's a sign of poor design.
 

Flash Gordon

santosh wrote, On 05/03/07 08:15:
Flash Gordon wrote:


I've often found deleting code to require more care than throwing in
more code! Maybe that's a sign of poor design.

Both deleting code and adding in more code require care with any major
project. Deleting code, however, generally makes subsequent maintenance
easier for a lot of reasons, so any extra effort gets paid back.
 

Richard Bos

santosh said:
I've often found deleting code to require more care than throwing in
more code! Maybe that's a sign of poor design.

Il semble que la perfection soit atteinte non quand il n'y a plus
rien à ajouter, mais quand il n'y a plus rien à retrancher.
(It seems that perfection is attained not when there is nothing more
to add, but when there is nothing left to take away.)
-- Antoine de Saint-Exupéry

Richard
 

santosh

Richard said:
Il semble que la perfection soit atteinte non quand il n'y a plus
rien à ajouter, mais quand il n'y a plus rien à retrancher.
-- Antoine de Saint-Exupéry

Le langage est source de malentendus.
(Language is the source of misunderstandings.)
-- Antoine de Saint-Exupéry

Fortunately, not in this case! I'm off to borrow a copy of The Little
Prince.
 

Jean-Marc Bourguet

santosh said:
Le langage est source de malentendus.
-- Antoine de Saint-Exupéry

Fortunately, not in this case! I'm off to borrow a copy of The Little
Prince.

The second quotation is from Le Petit Prince (The Little Prince). The first
is from Terre des Hommes (Wind, Sand and Stars).

Yours,
 

Daniel Rudy

At about the time of 3/4/2007 10:21 PM, pratap stated the following:
Could someone clarify how one could reduce the size of executable
code at compile time?
Should one use specific compile-time flags in the makefile, or is it
advisable to go for dynamic linking?
The idea here is that the smaller the code size, the faster the code.
Is a dynamically linked executable really faster than a single
executable file which is not linked dynamically?

Are there any performance-measuring metrics for gcc version 3.2.2?

If you want faster code, then use assembler... that's about as fast as
you can get without upgrading hardware.

As for the C language, the best that I can suggest is dynamic linking.
It saves space, it's handled by the kernel's exec loader, and it's
transparent to you. The type of linking only buys you a very, very
marginal gain at program start. Once the CPU is running in your code,
linking has nothing to do with it.

--
Daniel Rudy

Email address has been base64 encoded to reduce spam
Decode email address using b64decode or uudecode -m

Why geeks like computers: look chat date touch grep make unzip
strip view finger mount fcsk more fcsk yes spray umount sleep
 

Stephen Sprunk

pratap said:
Could someone clarify how one could reduce the size of executable
code at compile time?
Should one use specific compile-time flags in the makefile, or is it
advisable to go for dynamic linking?
The idea here is that the smaller the code size, the faster the code.
Is a dynamically linked executable really faster than a single
executable file which is not linked dynamically?

Your whole premise here is flawed; smaller does not mean faster. In fact,
most compilers have options to either compile for speed (e.g. gcc's -O3) or
for size (e.g. gcc's -Os), and the two are mutually exclusive. There are
lots of tricks modern compilers use to speed up code that actually make the
executable (sometimes significantly) bigger. Likewise, the tricks they use
to make code small often make executables significantly slower.
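
As a purely illustrative sketch, assuming gcc (the exact effect depends on
the compiler version and target): the same C source below can come out quite
different under -O3 and -Os. With -O3 the compiler may unroll the loop and
inline the function into callers (bigger, often faster); with -Os it will
usually keep the compact loop (smaller, sometimes slower). Measuring is the
only way to know which wins.

/* Same source, different size/speed trade-offs depending on flags. */
#include <stddef.h>

long sum_array(const long *a, size_t n)
{
    long total = 0;
    size_t i;

    for (i = 0; i < n; i++)   /* -O3 may unroll/vectorize this loop   */
        total += a[i];        /* -Os usually keeps it much as written */

    return total;
}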

Dynamic linking may make your code faster or slower; it will almost always
make it smaller. The potential speed boost comes not from the size, though;
it comes from multiple copies of the same code being shared in memory
instead of duplicated, which reduces swapping and cache misses.

However, it all boils down to this: if you have to care about optimizations
at this level, you're most likely doing something seriously wrong.

S
 

Keith Thompson

Daniel Rudy said:
If you want faster code, then use assembler....that's about as fast as
you can get without upgrading hardware.
[...]

That's not *necessarily* true, and it's likely to be false if you're
not an experienced and/or talented assembly language programmer.

A good optimizing compiler isn't going to be as smart as a good
assembly language programmer, but it is more patient and persistent.
Optimization is often about tradeoffs; one technique might be better
or worse than another depending on some seemingly minor detail. To
pick a hypothetical example out of the air, algorithm X might be best
if an array size happens to be a power of 2, but algorithm Y might be
better than X if it isn't. A good optimizing compiler can make this
decision every time you compile your program; changing that single
array declaration and recompiling might result in substantially
different generated code for a 1% performance improvement. And there
can be many such decision points in your program. If you're manually
coding the whole thing in assembly language, you'll probably have to
make each such decision at design time.
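
As a made-up illustration of that kind of decision point (not from Keith's
post): when a table size is a compile-time power of two, a compiler is free
to turn a modulo into a cheap bit mask; change the size to one that is not a
power of two and that transformation is no longer valid, so different code
has to be emitted.

#define TABLE_SIZE 1024u   /* a power of two */

unsigned bucket_index(unsigned long h)
{
    /* With TABLE_SIZE a power of two, a compiler may compile this as
       h & (TABLE_SIZE - 1).  Change TABLE_SIZE to, say, 1000u and
       recompile: the mask trick no longer applies and the generated
       code changes.  Hand-written assembly would have to be revisited
       by hand after such an edit. */
    return (unsigned)(h % TABLE_SIZE);
}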

And of course you completely lose any semblance of portability.

Disclaimer: I worked on optimizing compilers in the distant past, but
I've done very little assembly language programming.
 

user923005

Keith Thompson replied:
That's not *necessarily* true, and it's likely to be false if you're
not an experienced and/or talented assembly language programmer.
[...]

I would like to address the original claim:
If you want faster code, then use assembler... that's about as fast as
you can get without upgrading hardware.

from a slightly different stance.

Of course, that's mostly nonsense. Assembly gurus can write patches
of assembly that are faster than equivalent patches of C. However,
that is a terrible way to try to make things faster and should be used
as a last resort (ONLY after everything else fails).

The first step is to determine what the slow bit is. This is done by
profiling. Only once this information is in hand should any attempt at
a speedup be launched.

After finding the slow spots, the sensible way to improve the speed is
to improve the algorithm. For instance, for equality searches, changing
from a skiplist to a hash table will take average-case performance from
O(log(n)) to O(1). After researching all possible algorithmic
improvements, choose the most promising one and try it. If there are
no known algorithmic improvements, then try to come up with your own
algorithmic improvement.
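
A minimal, purely hypothetical sketch of that kind of algorithmic win (not
from the original post): an O(n) equality search against an O(1) lookup.
The keys are assumed to be small non-negative integers so that a
direct-indexed table can stand in for the hash table mentioned above; a
real program would use a proper hash function and collision handling.

#include <stdio.h>
#include <string.h>

#define KEY_LIMIT 100000

/* O(n): scan the whole array on every query. */
static int linear_contains(const int *a, size_t n, int key)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (a[i] == key)
            return 1;
    return 0;
}

/* O(1): one flag per possible key, filled once up front. */
static unsigned char present[KEY_LIMIT];

static void build_table(const int *a, size_t n)
{
    size_t i;

    memset(present, 0, sizeof present);
    for (i = 0; i < n; i++)
        present[a[i]] = 1;
}

static int table_contains(int key)
{
    return present[key];
}

int main(void)
{
    int data[] = { 3, 14, 159, 26, 535 };
    size_t n = sizeof data / sizeof data[0];

    build_table(data, n);
    printf("%d %d\n", linear_contains(data, n, 26), table_contains(26));
    printf("%d %d\n", linear_contains(data, n, 42), table_contains(42));
    return 0;
}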

If we have identified the problem and no known algorithmic
improvements exist, and we are unable to create any, then it is time
to look for linear speedups. These can be achieved by a number of
things, including:
1. Cache-conscious algorithms (see the sketch after this list)
2. Profile-guided optimization
3. Assembly language for the slow bits
4. Hardware improvements
{probably some others that I can't think of right now}
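
To illustrate item 1 with a made-up example (again, not from the original
post): the two functions below do the same O(ROWS*COLS) work, but the
row-major walk touches memory sequentially and is usually far kinder to the
cache than the column-major walk. The asymptotic complexity is identical;
only the constant factor changes.

#include <stdio.h>
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

static long m[ROWS][COLS];   /* static: too large for most stacks */

long sum_row_major(void)
{
    long total = 0;
    size_t r, c;

    for (r = 0; r < ROWS; r++)        /* sequential accesses */
        for (c = 0; c < COLS; c++)
            total += m[r][c];
    return total;
}

long sum_col_major(void)
{
    long total = 0;
    size_t r, c;

    for (c = 0; c < COLS; c++)        /* strided, cache-hostile accesses */
        for (r = 0; r < ROWS; r++)
            total += m[r][c];
    return total;
}

int main(void)
{
    size_t r, c;

    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++)
            m[r][c] = (long)(r + c);

    /* Same result from both; time them to see the cache effect. */
    printf("%ld %ld\n", sum_row_major(), sum_col_major());
    return 0;
}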

All of these linear improvements suffer from a terrible defect: they
do not scale with the problem. That is to say, we can make the
program run (perhaps) 4x faster. But when the size of the input data
set scales up by the equivalent of the savings, we are back to
square one. And if the fundamental algorithm in question is not O(1),
we will rapidly lose ground as the problem expands. That is why an
algorithmic improvement is the very best way to fix speed problems.
This is especially true because the problem ALWAYS scales up. Over
time, more and more data is going to accumulate, and this enlarged
data set is going to be fed into the program.

Problems with assembly:
1. It is tedious (look at how many lines of assembly you need to match
the equivalent code in a high-level language).
2. It is not portable. Even across the same chip family, problems
develop over time (imagine a program using an old assembly routine
which returns an answer in AX instead of EAX, for example). The
semantics of inline assembly change from compiler to compiler.
3. You have to be really good at it to be able to outsmart a current
optimizing C compiler.
4. There are not as many good assembly programmers as there are good
C programmers. For that reason it may be harder to maintain
(depending on the resources of the organization receiving the
solution, of course).

IMO-YMMV
 

Flash Gordon

Keith Thompson wrote, On 05/03/07 20:24:
Daniel Rudy said:
If you want faster code, then use assembler....that's about as fast as
you can get without upgrading hardware.
[...]

That's not *necessarily* true, and it's likely to be false if you're
not an experienced and/or talented assembly language programmer.

A good optimizing compiler isn't going to be as smart as a good
assembly language programmer, but it is more patient and persistent.
Optimization is often about tradeoffs; one technique might be better

Disclaimer: I worked on optimizing compilers in the distant past, but
I've done very little assembly language programming.

Having in the past been the smart assembly language programmer (but not
been involved in writing compilers), and having in that role reviewed lots
of code written by others, my experience is:

1) Most hand-written assembler of significant size from 10 or more years
back is not as efficient as compiled code
2) 10 years back I could easily beet some compiler on certain *small*
tasks (by a factor of more than 2 in the worst case)
3) The effort involved in trying to beet the compiler is normally not
worth it, and I certainly would not try to beet a more modern compiler
until I had proved it was essential.

Optimisers have moved on, so I'm sure the good assembly language
programmers find it harder to beet the compiler today than it was in my
days as an assembly language programmer.
 

santosh

Flash said:
Keith Thompson wrote, On 05/03/07 20:24:
Daniel Rudy said:
If you want faster code, then use assembler....that's about as fast as
you can get without upgrading hardware.
[...]

That's not *necessarily* true, and it's likely to be false if you're
not an experienced and/or talented assembly language programmer.

A good optimizing compiler isn't going to be as smart as a good
assembly language programmer, but it is more patient and persistent.
Optimization is often about tradeoffs; one technique might be better

Disclaimer: I worked on optimizing compilers in the distant past, but
I've done very little assembly language programming.

Having in the past been the smart assembly language programmer (but not
been involved in writing compilers), and having in that role reviewed lots
of code written by others, my experience is:

1) Most hand-written assembler of significant size from 10 or more years
back is not as efficient as compiled code
2) 10 years back I could easily beet some compiler on certain *small*
tasks (by a factor of more than 2 in the worst case)
3) The effort involved in trying to beet the compiler is normally not
worth it, and I certainly would not try to beet a more modern compiler
until I had proved it was essential.

Optimisers have moved on, so I'm sure the good assembly language
programmers find it harder to beet the compiler today than it was in my
days as an assembly language programmer.

Sometime back I was interested in this enough to write x86 assembler
replacements for strlen and atoi. Beyond ten thousand calls, I found
my replacements to edge in front of glibc's routines, and the
difference became human visible at around a few million calls. The
interesting thing was that my assembler was straightforward and
totally unoptimised. I can dredge up the code if anyone is interested.

It seems hand crafted assembly *is* faster than C code, but the
difference is trivial in almost all situations. On the other hand, the
portability advantage of C triumphs over the whole issue, except in
the cases of known hot-spots and other critical sections, which are
usually tiny to non-existent in most applications.
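
For anyone who wants to try something similar without touching assembler,
here is a hypothetical timing sketch in plain C (not the code mentioned
above): it compares a naive strlen against the library strlen over many
calls. clock() is coarse, and an aggressive optimizer may hoist the length
computation out of the loops, so treat the numbers as a rough indication
only.

#include <stdio.h>
#include <string.h>
#include <time.h>

static size_t my_strlen(const char *s)
{
    const char *p = s;

    while (*p)
        p++;
    return (size_t)(p - s);
}

int main(void)
{
    static char buf[4096];
    volatile size_t sink = 0;     /* keeps the results from being discarded */
    long i, reps = 1000000L;      /* adjust for your machine */
    clock_t t0, t1, t2;

    memset(buf, 'x', sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    t0 = clock();
    for (i = 0; i < reps; i++)
        sink += my_strlen(buf);
    t1 = clock();
    for (i = 0; i < reps; i++)
        sink += strlen(buf);
    t2 = clock();

    printf("naive  : %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("library: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return sink == 0;   /* use sink so the loops cannot be discarded entirely */
}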
 

santosh

Flash said:
Keith Thompson wrote, On 05/03/07 20:24:
Daniel Rudy said:
If you want faster code, then use assembler....that's about as fast as
you can get without upgrading hardware.
[...]

That's not *necessarily* true, and it's likely to be false if you're
not an experienced and/or talented assembly language programmer.

A good optimizing compiler isn't going to be as smart as a good
assembly language programmer, but it is more patient and persistent.
Optimization is often about tradeoffs; one technique might be better

Disclaimer: I worked on optimizing compilers in the distant past, but
I've done very little assembly language programming.

Having in the past been the smart assembly language programmer (but not
been involved in writing compilers), and having in that role reviewed lots
of code written by others, my experience is:

1) Most hand-written assembler of significant size from 10 or more years
back is not as efficient as compiled code
2) 10 years back I could easily beet some compiler on certain *small*
tasks (by a factor of more than 2 in the worst case)
3) The effort involved in trying to beet the compiler is normally not
worth it, and I certainly would not try to beet a more modern compiler
until I had proved it was essential.

Optimisers have moved on, so I'm sure the good assembly language
programmers find it harder to beet the compiler today than it was in my
days as an assembly language programmer.

I'm not a native English speaker, so I may be missing something, but
shouldn't all instances of the word 'beet', in your reply above,
actually be 'beat'?
 

CBFalconer

santosh said:
Flash Gordon wrote:
.... snip ...

I'm not a native English speaker, so I may be missing something,
but shouldn't all instances of the word 'beet', in your reply
above, actually be 'beat'?

Yup. Beets are nounlike, edible, red and sugary, while beat is a
verb, which goes with wars, games, rugs, and generations. :) Then
there are beetles, which are not small beets, have wings, and make
considerably more noise than beets, but may be red. I am not sure
about the (vitamin) C content of either beets or beetles.
Thus maintaining rigid topicality.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>

"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discover of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
 

Flash Gordon

santosh wrote, On 06/03/07 04:22:
Flash Gordon wrote:


I'm not a native English speaker, so I may be missing something, but
shouldn't all instances of the word 'beet', in your reply above,
actually be 'beat'?

I'm a dyslexic native English speaker, so this probably just throws up
the limitations of spelling checkers. Without actually looking it up in
a dictionary I've no idea.
 

Flash Gordon

santosh wrote, On 06/03/07 04:19:
Flash Gordon wrote:


Sometime back I was interested in this enough to write x86 assembler
replacements for strlen and atoi. Beyond ten thousand calls, I found
my replacements to edge in front of glibc's routines, and the
difference became human visible at around a few million calls. The
interesting thing was that my assembler was straightforward and
totally unoptimised.

A sufficiently good assembly language programmer can always at least
equal any compiler (anything the compiler can write, it is *possible*,
however hard, for a human to produce). Sometimes there is even something
blindingly obvious to the assembly programmer that, for whatever reason,
the compiler does not see, allowing such results as you report.
> I can dredge up the code if anyone is interested.

No thanks. Not exactly topical here, is it ;-)
It seems hand crafted assembly *is* faster than C code,

Only given a sufficiently good assembly programmer. I've seen far too
much badly hand crafted assembler to believe it as a general rule.
> but the
difference is trivial in almost all situations. On the other hand, the
portability advantage of C triumphs over the whole issue, except in
the cases of known hot-spots and other critical sections, which are
usually tiny to non-existent in most applications.

Agreed. I would also say that people good enough to beat modern
compilers are probably far less common than in my days as an assembler
programmer, and my experience back then was that less than 50% of assembler
programmers would stand a chance of beating the compiler on anything
more complex than implementing strlen.

This makes you better than average based on my experience, since atoi is
more complex than strlen. ;-)
 

Dave Vandervies

Keith Thompson wrote, On 05/03/07 20:24:

It also has a much larger working memory, so it can make bigger-picture
optimizations without getting lost in details.

Optimisers have moved on, so I'm sure the good assembly language
programmers find it harder to beet the compiler today than it was in my
days as an assembly language programmer.

The good assembly language programmers still have the advantage of
being able to start with what the compiler does and improve on that.
No competent assembly programmer will be beaten by an optimizing compiler
they have access to, since the worst case is to compare the compiler's
output to their code and decide to use the compiler's version.
They can also get away with leaving the little things to the compiler
and focus on the parts of the program that *really* *matter*, where
saving one or two CPU cycles actually adds up to something noticeable.


dave
 
