Comparison of C Sharp and C performance


Moi

Get back to you when I have more time to study this, but...

I don't know why you calculate what seems to be an invariant in a for
loop

IMHO there is no invariant.
The 'var' variable is redundant and not used, but that is harmless.
(the 'slot' variable can also be eliminated; at the expense of clarity)
I don't know why it's apparently necessary to calculate the size of the
array

Nothing is calculated. Maybe you are confused by the sizeof operator ?

HTH,
AvK
 

spinoza1111

spinoza1111 wrote:
The fact is, you can learn a lot more sometimes from understanding why
something is wrong than you would from understanding why it is right.
I've seen dozens of implementations, recursive and iterative alike, of
the factorial function.  Spinny's was the first O(N^2) I've ever seen,
Again, if you'd studied computer science in the university environment,
in which practitioners are not being infantilized by being made
perpetual candidates, you'd realize that order(n^2) is not an evaluative
or condemnatory term. It was pointed out to you that the purpose of the
code was to execute instructions repeatedly, and discover whether C
Sharp had an interpretive overhead. It was pointed out to you that had
it such an overhead, the time taken by the C Sharp would itself have been
order(n^2) with respect to the C time.
It was pointed out to you that the reason for starting order n^2 was to
factor out random effects from a too-fast program, to let things run.
In university classes in computer science that you did not attend, a
qualified professor would have shown you that not only is the language
of computational complexity not evaluative, but also that we have to
write such algorithms to learn more.
The reason for the sub-optimal hashtable now is clear to me.
BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
There still is a lot of place for improvement. Feel free to improve it.
*****/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000
struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot =  val % COUNTOF(nilgesmeuk);
            for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}
/*****
AvK
Get back to you when I have more time to study this, but...
I don't know why you calculate what seems to be an invariant in a for
loop

If you mean the COUNTOF thing, it's a compilation-time calculation.
There is no run-time cost, since the compiler can work out the division
at compilation time.

I am relieved to realize that thx to u. Of course, a modern language
would not so conflate and confuse operations meant to be executed at
compile time with runtime operations.

However, you assume that all C compilers will do the constant division
of the two sizeofs. Even if "standard" compilers do, the functionality
of doing what we can at compile time can't always be trusted in actual
compilers, I believe. In fact, doing constant operations inside
preprocessor macros neatly makes nonsense of any simple explanation of
preprocessing as a straightforward word-processing transformation of
text.

Dijkstra's "making a mess of it" includes hacks that invalidate
natural language claims about what software does.
So that he can change the size of the array during editing, and have
that change automatically reflected in the loop control code without his
having to remember to update it.

That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
you don't evaluate an invariant in a for loop IF the preprocessor
divides sizeofs. If this was made a Standard, that, in my view, is a
gross mistake.
He could just as easily have used ARRAYSIZE, but doing so would
introduce into the code an assumption that his array has (at least)
ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
regard to later editing of the definition of the array.

I think, with all due respect, that it's a mess. It assumes that the
preprocessor will execute constant operations in violation of the
understanding of intelligent people that the preprocessor doesn't do
anything but process text, and that constant time evaluation is the
job of the optimizer.
 

spinoza1111

IMHO there is no invariant.
The 'var' variable is redundant and not used, but that is harmless.
(the 'slot' variable can also be eliminated; at the expense of clarity)




Nothing is calculated. Maybe you are confused by the sizeof operator ?

At this point, dear boy, I think you're confused. I know that the
sizeof operator is constant time. And my C compiler won't let me
#define FOO as (1/0), which means that constant evaluation is done
inside the preprocessor.

Well, I'll be dipped in shit.

As my grandpa said when he found a headless statue of Venus in a
thrift shop: who busted this?

Who was the moron who added that unnecessary feature to the standard
when I wasn't coding in C? Clive? Was that your idea?

It's dumb, since like sequence points, it invalidates English
discourse about C.

And it was your responsibility, Moi, not to use such a stupid bug-
feature.
 

Ike Naar

[...]
#define ARRAY_SIZE 1000
struct meuk nilgesmeuk[ARRAY_SIZE];
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
int ii;
for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
[...]

If you mean the COUNTOF thing, it's a compilation-time calculation.
There is no run-time cost, since the compiler can work out the division
at compilation time.
[...]
So that he can change the size of the array during editing, and have
that change automatically reflected in the loop control code without his
having to remember to update it.

He could just as easily have used ARRAYSIZE, but doing so would
introduce into the code an assumption that his array has (at least)
ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
regard to later editing of the definition of the array.

Caveat: if you ever decide to change the array from a
statically allocated array to a dynamically allocated array:

struct meuk *nilgesmeuk = malloc(ARRAY_SIZE * sizeof *nilgesmeuk);

then the COUNTOF macro silently does the wrong thing.
It would be prudent to add a compile-time assertion to make sure that
nilgesmeuk is a real array. Something like:

#define IS_ARRAY(a) ((void*)&(a) == &(a)[0])
COMPILE_TIME_ASSERT(IS_ARRAY(nilgesmeuk));
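
COMPILE_TIME_ASSERT is not spelled out above; one common pre-C11 definition
is sketched below (the macro body is an assumption, not part of the post;
C11 and later could simply use _Static_assert). One caveat: the pointer
comparison in IS_ARRAY is not an integer constant expression in strictly
conforming C, so some compilers may only accept the check in a run-time
assert().

/* One possible definition: an array with a negative size is a
   constraint violation, so the typedef is rejected whenever cond
   is false. */
#define COMPILE_TIME_ASSERT(cond) \
        typedef char compile_time_assert_failed[(cond) ? 1 : -1]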
 

Ike Naar

I know that the
sizeof operator is constant time. And my C compiler won't let me
#define FOO as (1/0), which means that constant evaluation is done
inside the preprocessor.

Then your C compiler is broken. Or perhaps you're mistaken.
Can you post the code that you used to check this?
 

Moi

spinoza1111 wrote:
The fact is, you can learn a lot more sometimes from understanding
why something is wrong than you would from understanding why it is
right. I've seen dozens of implementations, recursive and
iterative alike, of the factorial function.  Spinny's was the
first O(N^2) I've ever seen,
Again, if you'd studied computer science in the university
environment, in which practitioners are not being infantilized by
being made perpetual candidates, you'd realize that order(n^2) is
not an evaluative or condemnatory term. It was pointed out to you
that the purpose of the code was to execute instructions
repeatedly, and discover whether C Sharp had an interpretive
overhead. It was pointed out to you that had it such an overhead,
the time taken by the C Sharp would itself have been order(n^2) with
respect to the C time. It was pointed out to you that the reason
for starting order n^2 was to factor out random effects from a
too-fast program, to let things run. In university classes in
computer science that you did not attend, a qualified professor
would have shown you that not only is the language of computational
complexity not evaluative, but also that we have to write such
algorithms to learn more.
The reason for the sub-optimal hashtable now is clear to me.
BTW, I fixed your hashtable for you. It is even shorter, and runs
faster. There still is a lot of place for improvement. Feel free to
improve it.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000
struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot =  val % COUNTOF(nilgesmeuk);
            for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}

Get back to you when I have more time to study this, but...
I don't know why you calculate what seems to be an invariant in a for
loop

If you mean the COUNTOF thing, it's a compilation-time calculation.
There is no run-time cost, since the compiler can work out the division
at compilation time.

I am relieved to realize that thx to u. Of course, a modern language
would not so conflate and confuse operations meant to be executed at
compile time with runtime operations.

However, you assume that all C compilers will do the constant division
of the two sizeofs. Even if "standard" compilers do, the functionality
of doing what we can at compile time can't always be trusted in actual
compilers, I believe. In fact, doing constant operations inside
preprocessor macros neatly makes nonsense of any simple explanation of
preprocessing as a straightforward word-processing transformation of
text.

Dijkstra's "making a mess of it" includes hacks that invalidate natural
language claims about what software does.
So that he can change the size of the array during editing, and have
that change automatically reflected in the loop control code without
his having to remember to update it.

That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
you don't evaluate an invariant in a for loop IF the preprocessor
divides sizeofs. If this was made a Standard, that, in my view, is a
gross mistake.
He could just as easily have used ARRAYSIZE, but doing so would
introduce into the code an assumption that his array has (at least)
ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust
with regard to later editing of the definition of the array.

I think, with all due respect, that it's a mess. It assumes that the
preprocessor will execute constant operations in violation of the
understanding of intelligent people that the preprocessor doesn't do
anything but process text, and that constant time evaluation is the job
of the optimizer.


Well, it depends.
Suppose I want to decouple the heads and nextpointers:


*****************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define ARRAY2_SIZE (3*ARRAY_SIZE)
#define SESSIONS 100000

struct meuk {
        struct meuk* next;
        int val;
} nilgesmeuk[ARRAY_SIZE];

struct meuk *nilgesheads[ARRAY2_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val ;
    struct meuk **hnd;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesheads); ii++) nilgesheads[ii] = NULL;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = NULL;
            slot =  val % COUNTOF(nilgesheads);
            for( hnd =  &nilgesheads[slot] ; *hnd ; hnd = &(*hnd)->next ) {hops++;}
            *hnd = &nilgesmeuk[ii];
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}

/*****************

Then I would not have to care which ARRAY_SIZE to use as an upper bound to my indexing operations.
The preprocessor and the compiler take care for me, for I don't fear them.

HTH,
AvK
 

Moi

Richard Heathfield said:
[...]
#define ARRAY_SIZE 1000
struct meuk nilgesmeuk[ARRAY_SIZE];
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
int ii;
for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
[...]

If you mean the COUNTOF thing, it's a compilation-time calculation.
There is no run-time cost, since the compiler can work out the division
at compilation time.
[...]
So that he can change the size of the array during editing, and have
that change automatically reflected in the loop control code without his
having to remember to update it.

He could just as easily have used ARRAYSIZE, but doing so would
introduce into the code an assumption that his array has (at least)
ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
regard to later editing of the definition of the array.

Caveat: if you ever decide to change the array from a statically
allocated array to a dynamically allocated array:

struct meuk *nilgesmeuk = malloc(ARRAY_SIZE * sizeof *nilgesmeuk);

then the COUNTOF macro silently does the wrong thing. It would be
prudent to add a compile-time assertion to make sure that nilgesmeuk is
a real array. Something like:

#define IS_ARRAY(a) ((void*)&(a) == &(a)[0])
COMPILE_TIME_ASSERT(IS_ARRAY(nilgesmeuk));

Yes, of course this is true. (nice macro)
But, changing static allocation ( := constant size)
into dynamic allocation would always need an extra variable
somewhere to store the current size (or number of elements).
Plus a minor rework on the index bounds. IMHO, Renaming the
(array-->>pointer) variable would in that case be a sane thing
to catch all the dangling COUNTOF()s.


AvK
 

Dennis (Icarus)

Walter Banks said:
Don't confuse two separate translate operations. Macros are pure text
transformations. Most compilers separately may choose to evaluate
constant operations at compile or code generation time to save code
and execution time.

If he knew anything about compilers or code generation........

Dennis
 

Seebs

Why is it that whenever C does something that you did not know about you
blame the language?

I am beginning to suspect that all internet trolls have NPD. Consider how
much more sense his behavior makes if you assume that. He's never at
fault; it's always someone or something else.
Now there is the issue about compile versus interpreted as if this were
a binary choice. As those who have actually listened to their lecturers
in CS courses know there is a continuum between heavily optimised
compilation and completely unoptimised (one word at a time) interpretation.

Yes. Compilation to bytecode is an intermediate state between purely
interpreted language (such as most Unix shells) and languages like C,
which compile directly to machine code. There are a number of interesting
special cases to be found. From what little I know of C#, it sounds like
C# and Java are in the same basic class -- they compile once into an
intermediate form other than processor instructions, which is then
"interpreted" by a bytecode interpreter of some sort. That's a pretty
reasonable compromise.
One hallmark of professionals is that they are aware of the level of
their ignorance. You seem to assume that everyone else is ignorant. You
just rant and rarely argue.

Dunning-Kruger effect.

-s
 

spinoza1111

If he knew anything about compilers or code generation........

You know, I'm reading Habermas (A Theory of Communicative Action,
etc). And he would find this constant (and off-topic) raising of the
issue of personal competence very strange, because to him, coherent
language starts in basic decency, including the basic decency of not
going offtopic, and making absurd claims about people's competence.

These claims are especially absurd given the fact that the moderator
of clcm, Peter Seebach, is without any qualifications whatsoever in
computer science, having majored in psychology.

Whereas I took all the academic compsci I could manage with a straight-
A average, debugged a Fortran compiler in machine language at the age
of 22, wrote a compiler in 1K of storage, developed compilers at Bell
Northern Research, and authored the book and the software for Build
Your Own .Net Language and Compiler. But I'm tired of having to
identify these facts to people who crawl in here wounded by what
corporations do, which is constantly discount and downgrade real
knowledge in favor of a money-mad mysticism in which "results" as
predefined by the suits "are all that matters", and take their anger
out on their fellow human beings.

Stop wasting my time with these endless canards. They are more
reflective of a basic personal insecurity of people who were hired by
corporations because they seemed pliable, and assembled into teams
where their mistakes, it's hoped by the suits, can be factored out as
they learn on the job.


Asshole.
 

spinoza1111

spinoza1111 wrote:

Don't confuse two separate translate operations. Macros are pure text
transformations. Most compilers separately may choose to evaluate
constant operations at compile or code generation time to save code
and execution time.


I think you're confused, Walter, but it's understandable, since C is a
badly-designed language that creates confusion. While it is said that
"macros are pure text operations", on a bit more thought I realized
that this statement has never been true.

You see, to evaluate #if statements in C, we need to evaluate constant
expressions. Might as well do this all the time, or perhaps some of
the time (evaluate constant expressions only in the #if statement). My
guess is that compilers vary, and this is one more reason not to use
C.
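
For instance, #if directives force the preprocessor itself to do integer
arithmetic on constant expressions; a minimal sketch (BIG_TABLE is an
invented name, ARRAY_SIZE is the macro from the code earlier in the thread):

/* The preprocessor has to evaluate this constant expression while
   deciding which branch of the #if to keep; no optimizer is involved. */
#if ARRAY_SIZE * 4 > 2000
#define BIG_TABLE 1
#else
#define BIG_TABLE 0
#endif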

C Sharp got rid of all but a very simple form of preprocessing for this
reason.

In my example, I wanted to define 1/0 as a pure text operation. I
would want to do so in reality in order let us say to test an error
handler by generating an error. The compiler wouldn't let me.

This means I have to retract my charge that the Standard invalidated
perfectly reasonable English prose about C which was illuminating. It
looks like it was never true that the preprocessor was "a pure text
operation" after all.

*Quel dommage*!
 

spinoza1111

Then your C compiler is broken. Or perhaps you're mistaken.
Can you post the code that you used to check this?

#define FOO (1/0)

int main(void)
{
    int i; i = FOO;
    return 0;
}

Output at compile and link time:

1>------ Build started: Project: hashing, Configuration: Release Win32
------
1>Compiling...
1>hashing.c
1>..\..\..\hashing.c(5) : error C2124: divide or mod by zero
1>Build log was saved at "file://c:\egnsf\C and C sharp comparisions
\Hashing\C\hashing\hashing\Release\BuildLog.htm"
1>hashing - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped
==========


But: maybe it's not the preprocessor's "fault". Lets try this:



int main(void)
{
    int i; i = 1/0;
    return 0;
}

Same diagnostic, of course. So the preprocessor can be said to make a
straight textual substitution, after all.

However, I can conceive of any number of scenarios in which I'd want
to divide by zero, as in my example of an error handler. Given the
permissive philosophy of C, I don't see why I can't do this.

"Constant folding" should be for optimization. The fatal error should
be a warning, IMO.
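
If the real goal is to make the division happen at run time, say to exercise
an error or signal handler, a minimal sketch (my construction, not anything
posted above) is to hide the zero behind a volatile object so the compiler
cannot fold the expression; the run-time division itself is still undefined
behaviour:

#include <stdio.h>

int main(void)
{
    volatile int zero = 0;  /* volatile: the compiler cannot fold 1/zero */
    int i;

    i = 1 / zero;           /* compiles cleanly; traps (often SIGFPE) or
                               otherwise misbehaves at run time, since
                               division by zero is undefined behaviour   */
    printf("%d\n", i);
    return 0;
}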
 

spinoza1111

spinoza1111 wrote:
The fact is, you can learn a lot more sometimes from understanding
why something is wrong than you would from understanding why it is
right. I've seen dozens of implementations, recursive and
iterative alike, of the factorial function.  Spinny's was the
first O(N^2) I've ever seen,
Again, if you'd studied computer science in the university
environment, in which practitioners are not being infantilized by
being made perpetual candidates, you'd realize that order(n^2) is
not an evaluative or condemnatory term. It was pointed out to you
that the purpose of the code was to execute instructions
repeatedly, and discover whether C Sharp had an interpretive
overhead. It was pointed out to you that had it such an overhead,
the time taken by the C Sharp would itself have been order(n^2) with
respect to the C time. It was pointed out to you that the reason
for starting order n^2 was to factor out random effects from a
too-fast program, to let things run. In university classes in
computer science that you did not attend, a qualified professor
would have shown you that not only is the language of computational
complexity not evaluative, but also that we have to write such
algorithms to learn more.
The reason for the sub-optimal hashtable now is clear to me.
BTW, I fixed your hashtable for you. It is even shorter, and runs
faster. There still is a lot of place for improvement. Feel free to
improve it.
*****/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000
struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot =  val % COUNTOF(nilgesmeuk);
            for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}
/*****
AvK
Get back to you when I have more time to study this, but...
I don't know why you calculate what seems to be an invariant in a for
loop
If you mean the COUNTOF thing, it's a compilation-time calculation.
There is no run-time cost, since the compiler can work out the division
at compilation time.
I am relieved to realize that thx to u. Of course, a modern language
would not so conflate and confuse operations meant to be executed at
compile time with runtime operations.
However, you assume that all C compilers will do the constant division
of the two sizeofs. Even if "standard" compilers do, the functionality
of doing what we can at compile time can't always be trusted in actual
compilers, I believe. In fact, doing constant operations inside
preprocessor macros neatly makes nonsense of any simple explanation of
preprocessing as a straightforward word-processing transformation of
text.
Dijkstra's "making a mess of it" includes hacks that invalidate natural
language claims about what software does.
That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
you don't evaluate an invariant in a for loop IF the preprocessor
divides sizeofs. If this was made a Standard, that, in my view, is a
gross mistake.
I think, with all due respect, that it's a mess. It assumes that the
preprocessor will execute constant operations in violation of the
understanding of intelligent people that the preprocessor doesn't do
anything but process text, and that constant time evaluation is the job
of the optimizer.

Well, it depends.
Suppose I want to decouple the heads and nextpointers:

*****************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define ARRAY2_SIZE (3*ARRAY_SIZE)
#define SESSIONS 100000

struct meuk {
        struct meuk* next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];

struct meuk *nilgesheads[ARRAY2_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val ;
    struct meuk **hnd;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesheads); ii++) nilgesheads[ii] = NULL;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = NULL;
            slot =  val % COUNTOF(nilgesheads);
            for( hnd =  &nilgesheads[slot] ; *hnd ; hnd = &(*hnd)->next ) {hops++;}
            *hnd = &nilgesmeuk[ii];
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;

}

/*****************

Then I would not have to care which ARRAY_SIZE to use as an upper bound to my indexing operations.
The preprocessor and the compiler take care for me, for I don't fear them.

OK, this makes sense now. I can see where it might be a useful
technique.

The problem remains: sizeof as a pure compile time operation isn't
textually distinguished from a run time operation.

The preprocessor is widely considered to be a mistake because what it
wants to do is better done by OO language features. It sounded cool in
the early days to be able to write "conditional macros", and
assemblers (such as IBM mainframe BAL) had Turing-complete facilities,
which I used to generate tables for the first cellphone OS. But
statistically speaking, you're using a programmable monkey at a
typewriter when you use conditional macro instructions, and the C
preprocessor (perhaps fortunately) isn't Turing-complete, whereas in
BAL you could construct loops with an assembly-time goto.

Also, isn't there an alignment/padding problem on some machines in the
way you calculate the size of the total array?
 

spinoza1111

On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
[snip]
Um, the stack of the threads is where you typically put cheap per-
thread data. Otherwise you allocate it off the heap. In the case of
the *_r() GNU libc functions they store any transient data in the
structure you pass it. That's how they achieve thread safety.
It's a clumsy and old-fashioned method, not universally used. It also
has bugola potential.
You see, the called routine is telling the caller to supply him with
"scratch paper". This technique is an old dodge. It was a requirement
in IBM 360 BAL (Basic Assembler Language) that the caller provide the
callee with a place to save the 16 "general purpose registers" of the
machine.
The problem then and now is what happens if the caller is called
recursively by the callee as it might be in exception handling, and
the callee uses the same structure. It's not supposed to but it can
happen.
He's not talking about the technique used in BAL etc. The
transient data is contained within a structure that is passed by
the caller to the callee. The space for the structure is on the
stack. Recursion is permitted.
If control "comes back" to the caller who has stacked the struct and
the caller recalls the routine in question with the same struct, this
will break.

This isn't right; I dare say the fault is mine for being unclear.
C uses call by value; arguments are copied onto the stack.  The
upshot is that callee operates on copies of the original
variables.  This is true both of elemental values, i.e., ints,
floats, etc, and composite values, i.e., structs.

So, when the calling sequence contains a struct a copy of the
struct is placed on the stack.  The callee does not have access
to the caller's struct.  To illustrate suppose that foo calls bar
and bar calls foo, and that foo passes a struct to bar which in
turn passes the struct it received to foo.  There will be two
copies of the struct on the stack, one created when foo called
bar, and one created when bar called foo.

Correct. And foo sees bar's scratchpad memory, which is a security
exposure. It creates opportunities for fun and games.

I'm foo. I have to pass bar this struct:

struct { int i; char * workarea; }

i is data. workarea is an area which bar needs to do its work. bar
puts customer passwords in workarea. Control returns to foo in an
error handler which is passed the struct. foo can now see the
passwords.

Because you violate encapsulation, you have a security hole, right?
Better to use an OO language in which each invocation of the stateful
foo object gets as much memory as it needs.

Let me know if I am missing anything.
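
A minimal sketch of the situation being argued about (struct ctx, bar and
the "hunter2" stand-in for a password are all invented for illustration):
the struct itself is copied on each call, but a pointer member still
designates one shared buffer, so whatever the callee writes there is
visible to the caller afterwards.

#include <stdio.h>
#include <string.h>

struct ctx { int i; char *workarea; };

static void bar(struct ctx c)        /* c is a copy of the caller's struct */
{
    strcpy(c.workarea, "hunter2");   /* ...but the pointer member still
                                        refers to the caller's buffer      */
}

int main(void)
{
    char buf[32] = "";
    struct ctx c = { 0, buf };

    bar(c);                          /* the struct is copied; buf is not   */
    printf("caller sees: %s\n", buf);
    return 0;
}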
 

spinoza1111

spinoza1111 wrote:



Right. In theory, it isn't guaranteed by the Standard, and there's
nothing to stop a compiler performing the division at runtime. In
practice, I don't know of any current mainstream compiler that does
this, and it is unlikely that any future mainstream implementation will
pessimise in that way.

 > Even if "standard" compilers do, the functionality


Do you have a specific compiler in mind?




No, it's not absurd, in the general case. If you're only interested in
this specific example, you can just alter ARRAY_SIZE, as you say. But if
you have two or more arrays that originally use the same #define,
ARRAY_SIZE say, for their size, and then a requirements change makes it
necessary to modify the size of one of the arrays, i <
COUNTOF(arrayname) will not need to be modified, whereas i < ARRAY_SIZE
would need to be changed (and could easily be missed).
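
A small sketch of that scenario (first, second and clear_second are invented
names): once the second array stops sharing ARRAY_SIZE, the COUNTOF loop
stays correct without being touched, while a loop bounded by ARRAY_SIZE
would quietly overrun.

#define ARRAY_SIZE 1000
#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int first[ARRAY_SIZE];
int second[ARRAY_SIZE / 2];   /* a later requirements change shrank it */

void clear_second(void)
{
    unsigned i;

    for (i = 0; i < COUNTOF(second); i++)   /* adapts to the new size   */
        second[i] = 0;

    /* The original form would now overrun 'second' by 500 elements:
       for (i = 0; i < ARRAY_SIZE; i++) second[i] = 0;                  */
}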

You and moi have shown me that this could be a useful coding
technique. However, using fixed array sizes, whether they are the same
or different for two arrays, always tends to allocate, not, strangely,
too much (leading to the dreaded "code bloat" which is not in itself a
bug) but too little in production. Using objects hides the choice. Of
course, at some point inside a low-level object, an array may have to
be allocated, but it happens in one place. Or, linked lists of classes
and structures can be used, replacing malloc with object creation.

The trick seems a shibboleth which relies on a non-orthogonal fact
about sizeof: that it happens to be a compile time operation.
 

spinoza1111

spinoza1111 wrote:


And that particular macro is an idiom that most experienced C
programmers will recognise at once.

Well hoo ray.
Why is it that whenever C does something that you did not know about you
blame the language? Computer Science does not specify a 'one true way'.
In addition every time you write that something is not the way that a
modern language would do it you are being unreasonable (even when it is
true) because C is an ancient language (in computer terms).

Which means it should be sent to the knacker's yard.
Whether you like it or not, there is an immense amount of software
written in C and quite a bit of it is in safety critical areas. Changing
the language so that such source code is broken is not an exercise that
any standards authority would undertake lightly. 'Modern' languages
start with an advantage in that they do not have to consider the needs
of code written more than a decade ago.

The "needs of code" is a strange concept. I prefer human needs.
Some industries (such as the automotive industry) are less than happy
with the changes that were made by C99 to the extent that they did not
want to use C99 compilers.

You mean, in fact, those geniuses in the AMERICAN auto industry: the
folks who need government handouts to survive because they preferred
in the fat years to build gas guzzlers for Yuppie scum.
Whether you agree or not, standards are concerned with commercial usage
and not academic purity. But, of course, you consider that to be some
form of conspiracy.

No, I think it's quite open and blatant. An unqualified upper
management doesn't understand its own data systems (this was clear
when Ken Lay said he didn't know what was going on at Enron). It
preserves out of date approaches through FUD (fear, uncertainty and
doubt).
And while I have my fingers on my keyboard, note that any attempt to
compare C with C# is pretty pointless. Compare C# with Java or C++ or
even Smalltalk if you must.

Why? I can solve the same problems using C Sharp without having to
rote-memorize non-orthogonal nonsense and use techniques that, although
valid and somewhat elegant in a twisted way, insult the intelligence.
Now there is the issue about compile versus interpreted as if this were
a binary choice. As those who have actually listened to their lecturers
in CS courses know there is a continuum between heavily optimised
compilation and completely unoptimised (one word at a time) interpretation.

Incorrect. Pure interpretation replaces the computer you have with one
that is K slower where K is the time needed to "understand" individual
bytecodes. In some interpreters, K can vary (as in the case of
interpretation of a bytecode that says "scan for this character", for
example). The result is a dramatic slowdown.

Whereas an interpreter that translates the bytecodes on first
encountering them is only marginally slower than direct machine
language.
One hallmark of professionals is that they are aware of the level of
their ignorance. You seem to assume that everyone else is ignorant. You
just rant and rarely argue.

No, I think that many people here buy into a paradigm that's out of
date and it controls their imagination.
 

spinoza1111

I am beginning to suspect that all internet trolls have NPD.  Consider how
much more sense his behavior makes if you assume that.  He's never at
fault; it's always someone or something else.

How dare you brag about your own ADHD and implicitly ask for sympathy
for it, and use a fantasized narcissistic personality disorder in
another as an argument in a technical discussion?

Yes.  Compilation to bytecode is an intermediate state between purely

Wrong. It's not halfway. It's more like 1..10% slower. And for this,
you get protection against incompetence and fraud, and the ability to
port without having to hire a Fat Bastard of a C "expert".
interpreted language (such as most Unix shells) and languages like C,
which compile directly to machine code.  

Supported in fact by a runtime in all cases which performs stack and
heap management, as described by Herb Schildt, by example.
There are a number of interesting
special cases to be found.  From what little I know of C#, it sounds like

...you're going to pontificate about C# based on ignorance while
decrying this with respect to C. You can open your yap about C# but
nobody can speak about C without repeating shibboleths.

What a jerk!
C# and Java are in the same basic class -- they compile once into an
intermediate form other than processor instructions, which is then
"interpreted" by a bytecode interpreter of some sort.  That's a pretty
reasonable compromise.

WRONG. The compiler translates to bytecode. That bytecode is then
transformed into threaded, and directly executable, machine
instructions the first time the runtime software (which is not an
interpreter, but is properly known as a just-in-time compiler)
encounters it. Bytecodes that are not executed remain in bytecode
form. After this ONE TIME operation the code is MACHINE LANGUAGE. The
one time operation also performs the invaluable service of detecting
incompetence and malfeasance.

Do your homework before posting on matters on which you are without
standing. Withdraw the Schildt post, because without standing you
seriously damaged his reputation because as here, you did not do your
homework ("the 'heap' is a DOS term"). Stop calling people "morons"
and "crazy" based on your knowing some detail of C that they don't, or
based on their willingness to speak truth to power.

"You CHILD. You COMPANY MAN. You stupid ****ing ****. You, Williamson,
I'm talking to you, ****head. You just cost me $6,000. Six thousand
dollars, and one Cadillac. That's right. What are you going to do
about it? What are you going to do about it, asshole? You're ****ing
****. Where did you learn your trade, you stupid ****ing ****, you
idiot? Who ever told you that you could work with men? Oh, I'm gonna
have your job, ****head. " - David Mamet, Glengarry Glen Ross
 

Dennis (Icarus)

Yes. Compilation to bytecode is an intermediate state between purely
interpreted language (such as most Unix shells) and languages like C,
which compile directly to machine code. There are a number of interesting
special cases to be found. From what little I know of C#, it sounds like
C# and Java are in the same basic class -- they compile once into an
intermediate form other than processor instructions, which is then
"interpreted" by a bytecode interpreter of some sort. That's a pretty
reasonable compromise.

You can also compile as a native executable. This is useful when you have a
C# executable that needs to work with a 32-bit DLL on a 64-bit system.
If left as MSIL, the executable runs as a 64-bit process, which cannot work
with the 32-bit DLL.


Dennis
 

Moi

The preprocessor is widely considered to be a mistake because what it

In the rest of the universe, the use of the word 'widely' is considered
useless.
to generate tables for the first cellphone OS. But statistically
speaking, you're using a programmable monkey at a typewriter when you

You cannot speak 'statistically'. You can speak nonsense, but that would
easily be recognized.

use conditional macro instructions, and the C preprocessor (perhaps
fortunately) isn't Turing-complete whereas in BAL you could construct

Stop calling Turing (or other names). BTW did I mention that I am related
to Dijkstra? *NO I did not.* I only post code here.
(and too many comments to trolls and/or idiots)

You are not worthy. Post code instead, please.
Also, isn't there an alignment/padding problem on some machines in the
way you calculate the size of the total array?

No, there is not (IMHO).
sizeof yields a size_t (introduced by C89/C90, IIRC)

so: dividing two of them yields a ... _drumroll_ ... size_t.

Which (in this case) seems adequate for me
(there is a subtle issue lurking here, but that
is _presumably_ beyond your grasp)
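
On the padding question itself: sizeof of a struct already includes whatever
alignment padding the implementation inserts, and an array's size is exactly
the element count times that padded element size, so the division in COUNTOF
is always exact. A small sketch (struct padded is an invented example):

#include <stdio.h>

struct padded {
    char c;   /* 1 byte                                      */
    int  n;   /* on many ABIs preceded by 3 bytes of padding */
};

struct padded arr[10];

int main(void)
{
    /* sizeof arr is exactly 10 * sizeof(struct padded), padding and
       all, so the quotient is 10 on every conforming implementation. */
    printf("%lu elements\n", (unsigned long)(sizeof arr / sizeof arr[0]));
    return 0;
}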


HTH,
AvK
 

spinoza1111

In the rest of the universe, the use of the word 'widely' is considered
useless.

Perhaps inside black holes into which light goes and from which it
returneth, not.
You cannot speak 'statistically'. You can speak nonsense, but that would
easily be recognized.

Oh? I just did.
Stop calling Turing (or other names). BTW did I mention that I am related
to Dijkstra? *NO I did not.* I only post code here.

Yeah, but you just did.

Didn't you.

And...actually helping Nash find a bug is a real qualification,
whereas being related to Dijkstra...isn't.
(and too many comments to trolls and/or idiots)

You are not worthy. Post code instead, please.

I'd rather not code in C, because it's a poor language.
No, there is not (IMHO)
sizeof yields a size_t (introduced by C89/C90, IIRC)

so: dividing two of them yields a ... _drumroll_ ... size_t.

Which (in this case) seems adequate for me
(there is a subtle issue lurking here, but that
is _presumably_ beyond your grasp)

**** off, turd blossom. This space is for technical discussion, not
personalities. In fact, the less I know of your personality, the
better, it seems.
 
