Asking for a book on efficient coding in C


Dann Corbit

txr@x- said:
websnarf said:
The important thing: Measure it. If you are not willing to spend the
time to measure the speed of your code, that on its own proves that
you shouldn't waste your time optimising it.

I agree with this wholeheartedly.

There is one thing though, and it has to do with initial creation of the
code.

If you know that a particular function is asymptotically superior and
you know that it is possible for the data set to grow, choosing the more
efficient algorithm is usually the safer choice. Asymptotically
superior algorithms can definitely be inferior with smaller data sets,
but they won't go into the crapper when the data blows up into a giant
set that you were not expecting.

So (IMO) the initial design phase is a good place for some thought in
this regard.

After the code is written, never try to optimize without measuring
first.

I have two main comments on this:

1) You can *measure* performance before you even write code. [snip]

It's not possible to measure the performance of anything if the thing
to be measured doesn't exist. You can analyze what you expect the
performance might be, or you can measure the performance of something
else, but you can't measure the performance (or anything else) of a
piece of code until the code has been written.

O(f(n)) is a measure.
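
To make the point above about asymptotically superior algorithms concrete, here is a minimal C sketch (illustrative only, not code from the thread): a linear scan is O(n) and a binary search is O(log n). On a handful of elements the scan's simplicity can win, but only the binary search stays usable when the data grows.

#include <stdio.h>
#include <stdlib.h>

/* O(n): trivially simple, often fastest for very small n. */
static int linear_find(const int *a, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return 1;
    return 0;
}

/* O(log n): more bookkeeping per step, but scales to huge sorted arrays. */
static int binary_find(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return 1;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return 0;
}

int main(void)
{
    int a[1000];
    for (int i = 0; i < 1000; i++)
        a[i] = 2 * i;                    /* sorted data */

    /* For a dozen elements the linear scan may well beat the binary
       search; for a million it will not.  Only timing real data decides. */
    printf("%d %d\n", linear_find(a, 1000, 998), binary_find(a, 1000, 998));
    return 0;
}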
 

Phil Carmody

Dann Corbit said:
[snip]

O(f(n)) is a measure.

Without a scale.

Phil
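
Phil's quip is worth spelling out: O() notation deliberately discards constant factors and units, so two routines of the same order can differ enormously in wall-clock time. An illustrative C sketch (not from the thread), with two functions that are both O(n):

#include <stddef.h>
#include <math.h>

/* Both functions are O(n), but their constant factors differ wildly,
   which is exactly what the O() notation does not tell you. */

double sum_cheap(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                           /* one add per element */
    return s;
}

double sum_expensive(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += exp(log(fabs(a[i]) + 1.0));     /* several libm calls per element */
    return s;
}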
 

Tim Rentsch

Dann Corbit said:
[snip]

O(f(n)) is a measure.

A bound on the worst-case order of algorithm run-time is a
measure, at least in one sense of the word measure. However,
this result isn't really _measured_, it comes out of analyzing
the algorithm mathematically. And in any case we can't
determine that an algorithm is O(f(n)) before the algorithm
exists.
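
(A standard worked example, not from the thread: the bound for binary search comes from the recurrence T(n) = T(n/2) + c, which unwinds to T(n) = c * log2(n) + T(1), i.e. O(log n). That result is obtained by analysis alone, before any code is run.)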
 

Dann Corbit

txr@x- said:
Dann Corbit said:
[snip]

O(f(n)) is a measure.

A bound on the worst-case order of algorithm run-time is a
measure, at least in one sense of the word measure. However,
this result isn't really _measured_, it comes out of analyzing
the algorithm mathematically. And in any case we can't
determine that an algorithm is O(f(n)) before the algorithm exists.

True, but we can apply algorithms of known complexity so that we know
how the program will behave when the problem scales.

It also matters how well I understand the input data domain.

When I refer to measuring performance before I write code, I am
describing the selection of known algorithms using worst case and
average case complexity to put a crude bound on performance so that when
the problem scales larger the tool set will not go into the crapper.

Another way of saying the same thing is that we write better code when
we start with a good design. One common reason that we end up profiling
code is that a poor choice was made in the design phase.
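
An illustrative C sketch of that kind of design-time choice (the threshold and helper names are hypothetical, not from the thread): the complexities are known up front, and the simple O(n^2) routine is used only where the input is guaranteed to stay tiny.

#include <stdlib.h>

/* Insertion sort is O(n^2) but very cheap for tiny arrays; the library
   qsort is typically O(n log n) on average and is the safe default when
   n may grow beyond what we expected at design time. */

static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

void sort_ints(int *a, size_t n)
{
    if (n < 32)                 /* crossover threshold is a guess; measure to tune */
        insertion_sort(a, n);
    else
        qsort(a, n, sizeof a[0], cmp_int);
}

The crossover threshold stays a guess until it is measured on real data, which is consistent with the advice in the thread: design with complexity in mind, then measure before tuning.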
 

Ben Pfaff

Branimir Maksimovic said:
Well, you can't measure performance until you know what hardware it
will run on. For example, a fast JPEG algorithm can be an order of
magnitude slower than a slower but more compact one, say if you have
64 KB of fast memory and 32 MB of external memory.
Measuring performance is meaningless if it is not applied to
particular hardware.

In the case of cache size and design this is not necessarily
true. Consider cache-oblivious algorithms, for example (see
http://en.wikipedia.org/wiki/Cache-oblivious_algorithm for an
introduction).
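
For readers unfamiliar with the term: a cache-oblivious algorithm subdivides the problem recursively until the working set fits in whatever cache exists, without ever naming the cache size. A minimal C sketch (illustrative only, not Ben's code) of an out-of-place matrix transpose in that style:

#include <stddef.h>

/* Cache-oblivious transpose sketch: B = A^T, with A stored row-major as
   r x c and B as c x r.  The recursion splits the larger dimension until
   the tile is small, so no cache parameters appear anywhere in the code. */
static void transpose_rec(const double *A, double *B,
                          size_t r0, size_t r1,
                          size_t c0, size_t c1,
                          size_t lda, size_t ldb)
{
    size_t dr = r1 - r0, dc = c1 - c0;

    if (dr <= 16 && dc <= 16) {                  /* small tile: do it directly */
        for (size_t i = r0; i < r1; i++)
            for (size_t j = c0; j < c1; j++)
                B[j * ldb + i] = A[i * lda + j];
    } else if (dr >= dc) {                       /* split the row range */
        transpose_rec(A, B, r0, r0 + dr / 2, c0, c1, lda, ldb);
        transpose_rec(A, B, r0 + dr / 2, r1, c0, c1, lda, ldb);
    } else {                                     /* split the column range */
        transpose_rec(A, B, r0, r1, c0, c0 + dc / 2, lda, ldb);
        transpose_rec(A, B, r0, r1, c0 + dc / 2, c1, lda, ldb);
    }
}

void transpose(const double *A, double *B, size_t rows, size_t cols)
{
    transpose_rec(A, B, 0, rows, 0, cols, cols, rows);
}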
 

Tim Rentsch

Dann Corbit said:
[snip]

True, but we can apply algorithms of known complexity so that we know
how the program will behave when the problem scales.

It also matters how well I understand the input data domain.

When I refer to measuring performance before I write code, I am
describing the selection of known algorithms using worst case and
average case complexity to put a crude bound on performance so that when
the problem scales larger the tool set will not go into the crapper.

Another way of saying the same thing is that we write better code when
we start with a good design. One common reason that we end up profiling
code is that a poor choice was made in the design phase.

I understand your point, and I agree that the end result is
almost always better if we do good design up front. But I think
it's dishonest to call that sort of up-front design effort
"measurement", and certainly it is _not_ measuring performance.
 

Tim Rentsch

Branimir Maksimovic said:
[snip]

Well, you can't measure performance until you know what hardware it
will run on. [snip example]

Perhaps not run-time performance, but there are other
performance metrics, and some of these certainly can
be measured. How many function calls are executed,
as one example.

Measuring performance is meaningless if it is not applied to
particular hardware. [snip elaboration]

Perhaps not as meaningful as we would like, but not completely
meaningless either.
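
One concrete, machine-independent measurement of that kind (an illustrative sketch, not from the thread) is to count how many times a comparison function is invoked:

#include <stdio.h>
#include <stdlib.h>

/* Hardware-independent metric: the number of comparisons qsort performs.
   The count depends only on the input and the library's algorithm,
   not on clock speed or cache sizes. */
static unsigned long ncompares;

static int counting_cmp(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    ncompares++;
    return (x > y) - (x < y);
}

int main(void)
{
    enum { N = 1000 };
    int a[N];

    srand(12345);
    for (int i = 0; i < N; i++)
        a[i] = rand();

    ncompares = 0;
    qsort(a, N, sizeof a[0], counting_cmp);
    printf("qsort made %lu comparisons for %d elements\n", ncompares, N);
    return 0;
}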
 
