Writing Scalable Software in C++


Rudy Velthuis

Stephen said:
"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason -
including blind stupidity." - W.A. Wulf

Skybuck is very good in the "blind stupidity" department, though. <g>
 

Stephen Sprunk

Skybuck Flying said:
For addition and subtraction probably.

For multiplication and division, performance could be reduced somewhat for 32
bits, but it would still be faster than simulating it.

Not likely. On common 64-bit machines, all operations take the same amount
of time regardless of whether they're 32- or 64-bit, so there's no potential
speedup. So, you're only talking about potential benefits of not using
emulation on older 32-bit machines. The performance of detecting the 32-bit
case and then branching to either the 32- or 64-bit code paths (or, in the
16/32-bit equivalent, setting a bit in the segment descriptor) will usually
outweigh the savings you'll get from not needing to emulate 64-bit
operations. Even if it's not a certain victory, the programmer cost of the
code complexity will likely decide things in such a case -- especially since
it only benefits people with outdated machines.
Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear! ;) :)

Adding that detection logic into the CPU will just change where the overhead
is paid for; that cost has to be paid _somewhere_.

You seem to think that counting instructions is how to measure speed. That
hasn't been true on x86 since the days of the 486, or possibly even earlier.
Memory latency, cache (both instruction and data) hit rates, BPU and BHT
misses, utilization of varying types of functional units, parallelism, OOE,
and various other things mean the _only_ way to determine what's fastest is
to actually write the code and test it -- and the answers may be different
depending on the chips being used.

You are postulating chips that do not exist (this mythical BitMode) and that
the makers have shown no interest in making. You also ignore the cost of
figuring out what to set the BitMode to, as if that were free. You further
ignore how width-independent instructions are supposed to know how much data
to load/store, or how the compiler is supposed to efficiently reserve space
for such when the data types are not known at compile time.

S
 

Ron Natalie

MooseFET said:
You claim the above and then go on to say the below:


The "typedef" declares a new type.

No it does not. It makes an alias for an existing type. You can't
distinguish between the typedef and the original type either through
overloading or typeid or anything else.
 

MooseFET

No it does not.

Yes it does, in all the ways that matter to the argument with Skybuck.
It causes a new name to be associated with a type. This makes it a
declaration of a type. Just because C doesn't do type checking as
strictly as some other languages doesn't make it any less a
declaration of a type. After the typedef has been done, there is a
new symbol that is a type.
 

Frithiof Andreas Jensen

"David Brown" <[email protected]> skrev i en meddelelse

If you learned to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.

It's this particular troll's mode of operation. News2020 was more fun.
 

Frithiof Andreas Jensen

"Frederick Williams" <"Frederick Williams"@antispamhotmail.co.uk.invalid>
skrev i en meddelelse
I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?

....because he knows that there are always a few people in
'sci.electronics.design' that will take the bait!
 

Frithiof Andreas Jensen

Most people here stick to a wisdom:
1st collect your thoughts

"SkyBuck troll tard vs 1.0": Fatal Exception: Brain dropped out on floor at
birth ... continuing.
 

Bo Persson

MooseFET wrote:
::: MooseFET wrote:
::::: MooseFET wrote:
:::
:::::: This statement is incorrect. C, C++, Borland Pascal and its
:::::: descendants, and just about every other language I can think of
:::::: allow you to declare a new type to be the same as a simple
:::::: type, allow conditional compiles, and allow include files.
:::::: You don't need to have two copies of the source code.
::::: Incorrect. C and C++ certainly do not.
:::
:::: You claim the above and then go on to say the below:
:::
::::: You can #define or typedef
::::: something that appears to be a type but they aren't distinct
::::: types.
:::
:::: The "typedef" declares a new type.
:::
::: No it does not.
::
:: Yes it does, in all the ways that matter to the argument with
:: Skybuck. It causes a new name to be associated with a type. This
:: makes it a declaration of a type. Just because C doesn't do type
:: checking as strictly as some other languages doesn't make it any
:: less a declaration of a type. After the typedef has been done,
:: there is a new symbol that is a type.

We don't care much about how Skybuck defines a "new type".

After the typedef there is a new symbol that is the name of the type.
The type, however, is exactly the same as it was before the typedef.
It has just got a "nickname", or an alias.


typedef Skybuck Bucky;

doesn't create a new person!



Bo Persson
 

JosephKK

Skybuck Flying (e-mail address removed) posted to sci.electronics.design:
Yes, a very simple statement.

Achieving this in a scalable way is what this thread is all about.

Re-writing code, or writing double code, or even using multiple
libraries is not really what this is about.

It's nearly impossible to achieve without hurting performance. The only
solutions might be C++ templates or generics; I'm not even sure how easy
it would be to switch between two generated classes at runtime.

Bye,
Skybuck.

Can't speak for other libraries, but the integer and floating point
routines in the GCC C++ libraries are written as templates. Just don't
expect to change the CPU ALU mode on the fly (at least not on the
x86_64, PPC, MIPS or SPARC architectures).
 

JosephKK

Skybuck Flying (e-mail address removed) posted to sci.electronics.design:
For addition and subtraction probably.

For multiplication and division, performance could be reduced somewhat
for 32 bits, but it would still be faster than simulating it.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear! ;) :)

Bye,
Skybuck.

Precisely why we use typing and let the compiler do it. It moves the
overhead of the detection clear out of the running program.
 

Skybuck Flying

You have valid points.

Just because an if statement/switch to 32-bit code proved faster on my
system with a simple test program doesn't mean it will always be faster,
or faster on other chips.

I am pretty much done investigating this issue.

I am now convinced extending the code with int64's is ok.

And I will do it only at those places where it's absolutely necessary, the
rest will remain 32 bits.

So the current code base is being converted/upgraded to a mix of 32-bit
and 64-bit numbers.

Haven't looked at the compare and copy statements for 64 bits yet;
there's some more overhead there.

Some new algorithm parts are needed to cope with 64 bits as well, so the
program will probably be a bit slower anyway.

It's the price to pay lol ;)

Bye,
Skybuck.
 

Miguel Guedes

ChairmanOfTheBored said:
Gee... Look! The SkyTard is back, and he even answered his own post
seven times!

What are you on? He wouldn't be the SkyTard otherwise!
 
