Swapping?


Jerry Coffin

[ ... ]
I'd love to give a great example of cryptic-looking code that I'd
actually use... but I can't really think of any. I'd bitshift instead of
multiplying or dividing by a power of two... but that's not really
cryptic.

This depends (heavily) upon what you mean by "cryptic". Just for
example, consider changing:

if (x) y();

to:

x && y();

Is this cryptic or just strange? It does the same thing, and anybody who
knows C or C++ at all well should be able to figure out what it does
pretty easily, but I doubt anybody would claim it makes the code any
easier to read. Certainly few people code that way on a regular basis.
Are things considered cryptic if they're hidden away in a macro?

That depends. It's pretty typical (especially in C) for macros to be
written in a fashion that would otherwise be rather cryptic. For
example, if you want to put more than one statement in a macro, you
typically write it like:

do { statement1; statement2; } while (0);

In normal code that would be pointless and probably cryptic. In a macro,
it's widely recognized, though some frown upon it.
Is the following cryptic?

SWAP_INTS(a,b);

As it stands, no, it's fairly readable. OTOH, somebody might be likely
to wonder why you didn't just use std::swap.
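
For illustration, a sketch of how such a macro is usually defined, alongside the
library call; the body of SWAP_INTS here is a guess at what was meant, not a
definition taken from the thread:

#include <algorithm>  // std::swap lived here before C++11 moved it to <utility>

// Hypothetical definition. The do { ... } while (0) wrapper makes the
// invocation SWAP_INTS(a, b); behave as a single statement (the call site
// supplies the final semicolon), so it nests safely in an unbraced if/else.
#define SWAP_INTS(x, y) \
    do { int swap_tmp_ = (x); (x) = (y); (y) = swap_tmp_; } while (0)

void demo(int& a, int& b)
{
    if (a > b)
        SWAP_INTS(a, b);   // expands to exactly one statement
    std::swap(a, b);       // the library already says the same thing by name
}
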
The only thing I am against is the phobia of using efficient constructs
that are a little more complicated than their less efficient
counterparts. Of course, there are times I have gone for elegance
instead of efficiency (especially when it comes to designing abstract
base classes as interfaces), but if I'm writing an algorithm then you
can be sure I won't be programming in the manner in which old women
drive.

I don't think there's any such phobia. There is, however, a perfectly
natural and well-founded dislike of using constructs that are odd for
the sake of being odd. Most code gains little or nothing from such
things as the XOR-based swap mentioned previously in this thread.
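
For readers who missed the earlier part of the thread, the XOR-based swap in
question is, roughly, the classic trick sketched below (shown purely for
reference):

// The classic XOR swap -- not recommended.
void xor_swap(unsigned& a, unsigned& b)
{
    // If a and b refer to the same object, the first XOR zeroes it and the
    // original value is lost, which is one reason it is not universally
    // correct, quite apart from it rarely being faster than a plain swap.
    a ^= b;
    b ^= a;
    a ^= b;
}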

For most code that's really as efficient as its author can make it,
there's also something about 95% as efficient that's MUCH more readable.
The latter is _usually_ preferable.
Take an operating system like Windows XP... how much faster do you think
it would run if the programmers didn't go for the easiest solution every
time? Actually that's a bad example; Windows XP wouldn't run at all if
they went for the easiest solution every time because Microsoft
programmers are mostly incompetent. But pretending for the moment that
they can program competently, how much faster do you think it would run?
I'd say maybe 15% or 20% faster.

I think if you want to make such claims, you should back them up with
some sort of evidence -- and you should do so somewhere that such a post
is topical, of course. As it stands, your opinions appear to have little
basis in reality and marginal topicality (at best) to the newsgroup.
 

Ron Natalie

Erik said:
In general no, if there are several ways to do something there is
nothing wrong with using the most efficient one, *unless* the most
efficient one also obscures the intent of the code. Code should be
written with clarity and correctness as the primary goals; when you have
clear and correct working code you can, if necessary, start using less
obvious constructs to make it more efficient.
And the "more efficient" way isn't necessarily the more efficient
way in all cases. XOR is neither universally correct nor efficient.
std::swap is correct and MOST LIKELY the most efficient.
The assumption that a limited-scope local variable causes inefficiency
is inane.
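
As a rough illustration of that last point (a sketch, not code from the
thread): the temporary in the obvious swap is exactly the kind of
limited-scope local being talked about.

void plain_swap(int& a, int& b)
{
    int const tmp = a;  // the limited-scope local in question; a decent
    a = b;              // compiler keeps it in a register, so it costs
    b = tmp;            // nothing compared with "clever" alternatives
}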
 

Tomás Ó hÉilidhe

Pete Becker:

No. It's not ambiguous. It's simply wrong.


This isn't hard: you can't swap the values of types, so claiming that
that's what the code does is wrong.


I say "take the value of an integer type" instead of "take the value of
an integer type variable" in the same fashion that I say "give food to the
poor" rather than "give food to the poor people".

(Strictly speaking I should probably have a hyphen in it, i.e. "integer-
type", because I'm using it in similar fashion to an adjective).

I strip out redundancies, but of course there's no point debating it here.
 

Tomás Ó hÉilidhe

Jerry Coffin:
This depends (heavily) upon what you mean by "cryptic". Just for
example, consider changing:

if (x) y();

to:

x && y();


In reading the latter, I'd sort of presume that the author of the code
is trying to impress.

The "if" method is far more usual, but more importantly it was designed
exactly for this purpose.

I don't think there's any such phobia. There is, however, a perfectly
natural and well-founded dislike of using constructs that are odd for
the sake of being odd. Most code gains little or nothing from such
things as the XOR-based swap mentioned previously in this thread.


I'm doing an embedded systems project in college at the moment. I
have to flash a display a good few times a second... and if my
algorithmic code is too slow then I won't get back around to re-flashing
the display quickly enough. If an XOR swap would remedy this by virtue of
it being faster, then you can bet I'd use it.

I think if you want to make such claims, you should back them up with
some sort of evidence -- and you should do so somewhere that such a
post is topical, of course. As it stands, your opinions appear to have
little basis in reality and marginal topicality (at best) to the
newsgroup.


OK, I see what you mean. Instead, I'll give some specific examples:

* How many times do you think Windows uses dynamic memory allocation
when it could have used the stack?
* How many times does it perform redundant checks? (e.g. checking that a
parameter is valid before passing it to a function, and then checking it
again within the function).

In the case of Microsoft, however, I wouldn't suggest they remove the
redundant checks... but for competent programmers there's no need for
them to be there.
 

Tomás Ó hÉilidhe

Ron Natalie:
And the "more efficient" way isn't necessarily the more efficient
way in all cases. XOR is neither universally correct nor efficient.
std::swap is correct and MOST LIKELY the most efficient.
The assumption that a limited-scope local variable causes inefficiency
is inane.


You're exactly right in this case, std::swap should be magnificently fast.

Forgetting the XOR method for a moment though, I was referring to the phobia
that people have of writing efficient code as they're writing it.
 

Erik Wikström

Pete Becker:




I say "take the value of an integer type" instead of "take the value of
an integer type variable" in the same fashion that I say "give food to the
poor" rather than "give food to the poor people".

(Strictly speaking I should probably have a hyphen in it, i.e. "integer-
type", because I'm using it in similar fashion to an adjective).

I strip out redundancies, but of course there's no point debating it here.

If that is your goal, you should say "take the value of an integer".
 

Alf P. Steinbach

* Tomás Ó hÉilidhe:
Jerry Coffin:



In reading the latter, I'd sort of presume that the author of the code
is trying to impress.

The "if" method is far more usual, but more importantly it was designed
exactly for this purpose.

No, it depends. Writing instead, equivalently,

!x() || y();

you have a construction that is idiomatic in many languages, especially

whatever() || die();

In C++ it is very useful for visually separating (1) normal case code,
(2) failure checking and (3) exception throwing, where the construction
above then contains (2) and (3), laid out in a visually easy-to-identify
way,

// Normal case code:
int const someResult = someApiFunction();

// "Assertion" Action on assertion failure
(someResult > 0) || throwX( "someApiFunction failed" );

which without the comments would look like

int const someResult = someApiFunction();
(someResult > 0) || throwX( "someApiFunction failed" );

which I find to be more clear than an ordinary if, which indicates
normal case code.
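
For completeness, a sketch of what such a throwX helper might look like; the
name is taken from the example above, but the signature and exception type are
assumptions:

#include <stdexcept>

// Returns bool only so that it can appear on the right-hand side of ||;
// it never actually returns normally.
inline bool throwX(char const* msg)
{
    throw std::runtime_error(msg);
}

With a helper along those lines, the (someResult > 0) || throwX(...) line
compiles as written.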

Cheers, & hth.,

- Alf
 

Erik Wikström

Jerry Coffin:


In reading the latter, I'd sort of presume that the author of the code
is trying to impress.

The "if" method is far more usual, but more importantly it was designed
exactly for this purpose.

The same can be said about a lot of facilities that people try to find
clever alternatives for, such as std::swap().
I'm doing an embedded systems project in college at the moment. I
have to flash a display a good few times a second... and if my
algorithmic code is too slow then I won't get back around to re-flashing
the display quickly enough. If an XOR swap would remedy this by virtue of
it being faster, then you can bet I'd use it.

But you measured first to see that the swap was the part where you get
the most gain in optimising, right? There is nothing wrong with
optimisation, just premature optimisation. Why use a more complex (and
often more error-prone) construct unless it gives you a significant
speed-up in return? Unless you have been working with the same or
similar projects for some time and done a lot of measurements you
usually do not know shit about what needs to be optimised.
OK, I see what you mean. Instead, I'll give some specific examples:

* How many times do you think Windows uses dynamic memory allocation
when it could have used the stack?

What exactly do you mean by Windows in this case? And furthermore,
what investigations have you made that tell you they use dynamic
allocation where they did not have to? We would still like some kind of evidence.

I am not a kernel hacker myself but I do know this: in kernel mode you
do not use the stack if you can help it, because the stack is usually tiny.
* How many times does it perform redundant checks? (e.g. checking that a
parameter is valid before passing it to a function, and then checking it
again within the function).

Examples? I have absolutely no idea how many checks they do,
but you seem to know.
 

Tomás Ó hÉilidhe

Erik Wikström:
There is nothing wrong with
optimisation, just premature optimisation.


I argue that there's no such thing as premature optimisation. While I'm
writing code, I'll think there and then "hmmm maybe I should do it this
way; it'd be pretty fast".
What exactly do you mean by Windows in this case? And furthermore,
what investigations have you made that tell you they use dynamic
allocation where they did not have to? We would still like some kind of
evidence.


If you use std::strings where you could have used char arrays on the
stack, then the code will be slower. Most times, the difference in speed
will not be noticeable. However, there was one time I wrote an algorithm to
convert "8732" to "Eight thousand, seven hundred and thirty-two" in three
different languages. I ran it against a database of fifty-hundred different
numbers, and the char array form was a hell of a lot faster than the
std::string form.
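
A rough sketch of the kind of difference being described (the function names
are made up for illustration): appending into a fixed stack buffer involves no
heap traffic, while a naively grown std::string may reallocate as it grows,
though calling reserve() up front usually narrows the gap.

#include <cstring>
#include <string>

// Fixed stack buffer: no heap allocation as long as the result fits.
void append_word(char* buf, std::size_t& len, char const* word)
{
    std::size_t const n = std::strlen(word);
    std::memcpy(buf + len, word, n);
    len += n;
    buf[len] = '\0';
}

// std::string: may reallocate while growing unless capacity was reserved.
void append_word(std::string& out, char const* word)
{
    out += word;
}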
 

Erik Wikström

Ron Natalie:



You're exactly right in this case, std::swap should be magnificently fast.

Forgetting the XOR method for a moment though, I was referring to the phobia
that people have of writing efficient code as they're writing it.

One problem that software engineers face is the fact that, unlike other
engineering professions, we do not have as many hard facts about best
practices as they do, but one of those we do have is that premature
optimisation is the root of much evil. More efficient constructs are
often more error-prone, harder to understand, and less generic (which
makes future changes harder). And, unfortunately, all too often they do
not buy you very much efficiency; usually clean code with a better
algorithm will give a more noticeable difference.

Notice here that the important word is premature. Most of these
efficiency hacks are done without any prior measurement of which parts
of the program need to improve, and often in parts where no noticeable
gain can be had.

Writing efficient code as they are writing it usually means that no
measurements have been done, and the optimisation is therefore usually
premature.
 

Erik Wikström

Erik Wikström:


I argue that there's no such thing as premature optimisation. While I'm
writing code, I'll think there and then "hmmm maybe I should do it this
way; it'd be pretty fast".

And that piece of code will now be one hundredth of a second faster and
will run once every hour; who will notice? Was the increased maintenance
cost worth it? Of course there is such a thing as premature optimisation;
if you work in the industry for some time you are bound to come across it.
If you use std::strings where you could have used char arrays on the
stack, then the code will be slower. Most times, the difference in speed
will not be noticeable. However, there was one time I wrote an algorithm to
convert "8732" to "Eight thousand, seven hundred and thirty-two" in three
different languages. I ran it against a database of fifty-hundred different
numbers, and the char array form was a hell of a lot faster than the
std::string form.

And what does that have to do with Windows code?
 

Tomás Ó hÉilidhe

Erik Wikström:
And that piece of code will now be one hundredth of a second faster and
will run once every hour; who will notice? Was the increased
maintenance cost worth it? Of course there is such a thing as premature
optimisation; if you work in the industry for some time you are bound to
come across it.


You're exactly right, my friend.

But if it's run 50 thousand times in a loop per second, then you'll notice.
 

Ron Natalie

Tomás Ó hÉilidhe said:
Ron Natalie:



You're exactly right in this case, std::swap should be magnificently fast.

Forgetting the XOR method for a moment though, I was referring to the phobia
that people have of writing efficient code as they're writing it.
Who the hell said anything about phobia?
What I was railing about was people who have delusional ideas of what is
fast, ideas that fly in the face of over fifty years of software design.
I've been programming since 1975 and writing in C since '77. I've written
several compilers, and written portable code for everything from an
archaic microprocessor up to $25 million supercomputers.
The problem is that people come up with economies in lines of C code
which have no freaking bearing on reality, or, if they can actually
measure some improvement in some tiny piece of the world, they can't
understand why it fails in the larger environment.
 

Pete Becker

Pete Becker:




I say "take the value of an integer type" instead of "take the value of
an integer type variable" in the same fashion that I say "give food to the
poor" rather than "give food to the poor people".

plonk.
 

Jerry Coffin

[ ... ]
The "if" method is far more usual, but more importantly it was designed
exactly for this purpose.

Two basic points. First, substitute 'std::swap' for 'if' and your
statement remains true. Second, I do know of one (ancient) compiler for
which the && version was marginally smaller and faster...

[ ... ]
I'm doing an embedded systems project in college at the moment. I
have to flash a display a good few times a second... and if my
algorithmic code is too slow then I won't get back around to re-flashing
the display quickly enough. If an XOR swap would remedy this by virtue of
it being faster, then you can bet I'd use it.

You seem to be starting with the precept that under some circumstances,
the xor version (or something else that's obscure) is likely to be
faster. So far, you've failed to show any evidence of any such thing.

Way back when, I wrote a fair amount of code on 8-bit processors running
at 1 MHz and occasionally even slightly slower. You used every trick you
could come up with to get things done, save tiny bits of code here and
there, and so on. When you have no choice, you do what you've got to do.

There are two elements to that though: first, be sure you really need
the tricks. Second, be sure they're really effective. Unless both apply,
code that's clear and readable is obviously better. I'd also note that
every improvement in compilers and optimizers reduces the usefulness of
obscure code -- to the point that even when it theoretically saves you a
variable or cycle, it can easily compile to something less efficient.

[ ... ]
* How many times do you think Windows uses dynamic memory allocation
when it could have used the stack?

"Could have" is hard to pin down. In theory, there's nothing to prevent
you from statically allocating everything if you want to badly enough.
Back in the FORTRAN days, we all did exactly that.

There's no question that the NT kernel uses dynamic allocation in quite
a few situations, and depending on your viewpoint, some of those could
be allocated statically or on a stack -- but in doing so, you'd lose
some capabilities currently present, such as being able to expand an
allocation, even if it's only rarely needed. This is part of how the
same kernel, with only trivial tuning, works reasonably well on machines
ranging from 486's with 32 megabytes of RAM to 32-processor boxes with
64 gigabytes of RAM.
* How many times does it perform redundant checks? (e.g. checking that a
parameter is valid before passing it to a function, and then checking it
again within the function).

Very few -- if you knew the kernel well at all, you'd know this. It's
specifically segregated into routines that check parameters and routines
that don't. I'll repeat the advice to pick up a book about it if you
really care. While you're at it, spend a bit of time hanging out some
place where things like this are topical.
 

James Kanze

[...]
I'd love to give a great example of cryptic-looking code that
I'd actually use... but I can't really think of any. I'd
bitshift instead of multiplying or dividing by a power of
two... but that's not really cryptic.

If the goal is to multiply or divide by a power of two, it's
obfuscation. It's the sort of thing amateurs love, but a
professional eschews. Also, of course, it's not even really
optimization---I've used machines where multiplication was
faster than shifting, and of course, today, the compiler will
choose whichever is faster.

(I've actually benchmarked 127 * x as opposed to (x << 7) - x,
on a Sun Sparc, which doesn't---or didn't back then---have
hardware multiplication. The compiler converted the
multiplication into a shift and subtract, and managed to do so
with one less instruction than when I wrote it out explicitly,
doubtless because it knew the end result that was desired. I
did use something like ((a << 2) + a) << 1 to multiply by 10
once. But only after the profiler indicated a problem. With
conditional compilation so it didn't slow things down on our
other platform, which had fast multiplication. And of course,
modern compilers do this all by themselves---generally better
than the programmer can.)
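
For reference, a sketch of the construct being described; it is only worth
writing by hand after profiling, and only on a target without fast
multiplication, since modern compilers perform this strength reduction
themselves:

// (x*4 + x) * 2 == x*10; unsigned arithmetic, so overflow simply wraps.
inline unsigned times_ten(unsigned x)
{
    return ((x << 2) + x) << 1;
}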

The whole thread is becoming ridiculous. You're proposing
obfuscated code for presumed performance improvements, when 1)
the code doesn't actually work, 2) there is no indication that
performance is a problem to begin with, and 3) it's far from
sure that the obfuscated code will actually be faster. You
can't get much more amateurish.
Take an operating system like Windows XP... how much faster do
you think it would run if the programmers didn't go for the
easiest solution every time?

Have you actually seen the source code to Windows XP? Have you
benchmarked it to know where the speed is lost? Or are you just
spouting off about something you know nothing about?
Actually that's a bad example; Windows XP wouldn't run at all
if they went for the easiest solution every time because
Microsoft programmers are mostly incompetent.

Do you actually know any Microsoft programmers? I've met
several, and all were very far from incompetent.
But pretending for the moment that they can program
competently, how much faster do you think it would run? I'd
say maybe 15% or 20% faster.

I'd say that you don't know what you're talking about. Just a
guess, but I'd imagine that most of the performance in Windows
goes to 1) protecting the code from idiot users, and 2) making
things flashier---fancy visual effects can eat up enormous
amounts of CPU.
 

James Kanze

Erik Wikström:
I argue that there's no such thing as premature optimisation.

That's because you're an amateur, and not a professional. Me,
I'm paid to write code which fulfills the requirements
specifications and is easy to maintain, not to waste my time
(and the customer's money) on unnecessary optimizations which
aren't, and which render the code less maintainable.
While I'm writing code, I'll think there and then "hmmm maybe
I should do it this way; it'd be pretty fast".

When I'm writing code, I'll think there and then "hmmm maybe I
should do it this way---it'll be pretty easy to understand and
modify in the future."
 

Tomás Ó hÉilidhe

James Kanze:
That's because you're an amateur, and not a professional. Me,
I'm paid to write code which fulfills the requirements
specifications and is easy to maintain, not to waste my time
(and the customer's money) on unnecessary optimizations which
aren't, and which render the code less maintainable.


Indeed you're right. I'd hate to be a professional programmer. I program
just for the fun of it and I don't have to reshape the way I program to
satisfy customers and employers and the like.

Of course you make valid points when it comes to doing programming for
business reasons... but if someone's doing it as a hobby then they'll be
doing it exactly the way they want to.
 

James Kanze

James Kanze:
Indeed you're right. I'd hate to be a professional programmer. I
program just for the fun of it and I don't have to reshape the
way I program to satisfy customers and employers and the like.

In that case, of course: if such analysis turns you on, then by
all means, do it. Just realize that that isn't the case for
most people programming in C++ (or just programming, for that
matter---but C++'s virtues really come to the fore in big
projects, which of course, an amateur can't do).
Of course you make valid points when it comes to doing
programming for business reasons... but if someone's doing it
as a hobby then they'll be doing it exactly the way they
want to.

Quite. It wouldn't be fun otherwise, would it?
 
