Comparison of C Sharp and C performance


spinoza1111

A C and a C Sharp program were written to calculate the 64-bit value of
19 factorial one million times, using both the iterative and recursive
methods and comparing the results.

Here is the C code.

#include <stdio.h>
#include <time.h>

long long factorial(long long N)
{
    long long nFactorialRecursive;
    long long nFactorialIterative;
    long long Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N;
          Nwork > 1;
          Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%I64d! is %I64d iteratively but %I64d recursively wtf!\n",
              N,
              nFactorialIterative,
              nFactorialRecursive);
    return nFactorialRecursive;
}

int main(void)
{
    long long N;
    long long Nfactorial;
    double dif;
    long long i;
    long long K;
    time_t start;
    time_t end;
    N = 19;
    K = 1000000;
    time (&start);
    for (i = 0; i < K; i++)
        Nfactorial = factorial(N);
    time (&end);
    dif = difftime (end,start);
    printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
           N, Nfactorial, dif, K);
    return 0; // Gee is that right?
}

Here is the C Sharp code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace N_factorial
{
    class Program
    {
        static void Main(string[] args)
        {
            long N;
            long Nfactorial = 0;
            TimeSpan dif;
            long i;
            long K;
            DateTime start;
            DateTime end;
            N = 19;
            K = 1000000;
            start = DateTime.Now;
            for (i = 0; i < K; i++)
                Nfactorial = factorial(N);
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("The factorial of " +
                 N.ToString() + " is " +
                 Nfactorial.ToString() + ": " +
                 dif.ToString() + " " +
                 "seconds to calculate " +
                 K.ToString() + " times");
            return;
        }

        static long factorial(long N)
        {
            long nFactorialRecursive;
            long nFactorialIterative;
            long Nwork;
            if (N <= 2) return N;
            for ( nFactorialIterative = 1, Nwork = N;
                  Nwork > 1;
                  Nwork-- )
                nFactorialIterative *= Nwork;
            nFactorialRecursive = N * factorial(N-1);
            if (nFactorialRecursive != nFactorialIterative)
                Console.WriteLine
                ("The iterative factorial of " +
                 N.ToString() + " " +
                 "is " +
                 nFactorialIterative.ToString() + " " +
                 "but its recursive factorial is " +
                 nFactorialRecursive.ToString());
            return nFactorialRecursive;
        }
    }
}

The C Sharp code runs at 110% of the speed of the C code, which may
seem to "prove" the half-literate Urban Legend that "C is more
efficient than C Sharp or VM/bytecode languages in general, d'oh".

As I take pains to point out in my book, "Build Your Own .Net Language
and Compiler" (Apress 2004) (buy it now buy it now), it's not even
grammatical to say that one programming language is more "efficient"
than another.

But far more significantly: the ten percent "overhead" would be
several orders of magnitude were C Sharp to be an "inefficient,
interpreted language" which many C programmers claim it is. That is
because a true interpreter parses and/or unpacks each instruction when
it is executed, and both of the above examples execute their
instructions millions of times.

Were C Sharp to be interpreted, the above C Sharp code would run very,
very slowly, but C Sharp isn't interpreted.

Instead, a one-time modification is made to the byte code upon loading
to thread the codes together. This explains part of the ten percent
"overhead". For the remainder of execution, a sort of switch statement
is operating in which the code for individual byte codes using go to
to transfer control. This means that C and C Sharp execute at the same
effective rate of speed, and the ONLY efficiency-based reason for
choosing C is avoiding the initial overhead of setting up the .Net
virtual machine.
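
(For illustration, here is a minimal sketch in C of the switch-plus-goto
dispatch style just described. The opcodes and the toy program are
invented for this example; as later replies point out, the real CLR
JIT-compiles MSIL to native code rather than dispatching byte codes one
at a time.)

#include <stdio.h>

/* Invented byte codes for a toy stack machine. */
enum { OP_PUSH, OP_MUL, OP_PRINT, OP_HALT };

int main(void)
{
    /* A tiny program: push 6, push 7, multiply, print. */
    int code[] = { OP_PUSH, 6, OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
    int stack[16], *sp = stack, *pc = code;

    for (;;) {                     /* the "sort of switch statement" */
        switch (*pc++) {
        case OP_PUSH:  *sp++ = *pc++;          break;
        case OP_MUL:   sp--; sp[-1] *= sp[0];  break;
        case OP_PRINT: printf("%d\n", sp[-1]); break;
        case OP_HALT:  return 0;
        }
    }
}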

But what does this virtual machine provide? Almost 100 percent safety
against memory leaks and many other bad things.

Indeed, C is like the (unwritten) British constitution. In that
arrangement, Parliament cannot "entrench" an act that would bind all
subsequent Parliaments, because Parliamentary supremacy (like the
putative power of C) must at all costs be preserved: this was the
innovation of 1688/9, when Parliament hired King William and his
Better Half as Kingie and Queenie on condition that they be nice and
obey Parliament. This means that in fact the British constitution
contains no protection against a runaway, tyrannical, "long"
Parliament. It promised not to do so in 1911 and confirmed that it
would be nice in 1949, but there is nothing in the British
constitution to prevent Parliament from enacting a new bill, as long
as it could get enough Peers in the House of Lords to wake up and
permit it to do so (Lords approval being required, unlike for money
bills), and HM the Queen to give Royal Assent.

When Kurt Godel was studying the booklet given him in Princeton to
pass the US Citizenship test, he claimed to find a bug that would
allow America to be a dictatorship. I think he'd be even more
terrified of the British constitution, for like his self-reflexive
paradoxical statement in his incompleteness/inconsistency result, the
very power of Parliament renders it impotent to write a Constitution!

Whereas .Net and Java provide "Constitutional" safeguards against code
doing nasty things even as the American constitution was intended to
be, and to some practical extent is, "a machine that runs of itself".

Both constitutions can fail, but the British constitution is more
likely to. It enabled Margaret Thatcher to rule by decree and override
even her own Cabinet, and ramrod through a medieval "poll tax" in 1990
that produced civil disturbances. Britons enjoy human rights mostly
through the EU. Whereas misuse of the American constitution during
Bush's administration was more vigorously resisted especially in its
courts, where the judges are truly independent.

It is true that a massive "bug" in the American constitution developed
in 1860 with the outbreak of civil war, but this was extra-
Constitutional. It resulted from a deliberate misinterpretation of
states' rights under the Tenth Amendment in which the states retained
a "nullifying" level of sovereignty, but their assent to the
Constitution in 1789 had itself nullified this strong interpretation
of "states' rights".

Since 1689, no such "bug" has occurred in the British constitution.
However, the British constitution existed before 1689, and its bug was
just as serious, for it produced the English civil war. This was
because there is no provision in the British constitution for a pig-
headed king, and King Charles II could conceivably in the future
refuse Royal Assent to needed legislation, or use the British Army
(which is NOT under the control of Parliament, but of the Monarch to
whom officers swear fealty) against his own people.

C Sharp programs can fail as can the American Constitution. But the
idiotic equation of the reliability of C and C Sharp in fact resembles
the political passivity of Britons who talk darkly of the EU being a
"new world order" destroying their "rights as Englishmen" when in fact
it's the best thing that ever happened to them. And, I've not
addressed how the rights of Irishmen have been abused under the
British constitution.

I for one am tired of the Urban Legends of the lower middle class,
whether in programming or politics.
 

scattered

A C and a C Sharp program were written to calculate the 64-bit value of
19 factorial one million times, using both the iterative and recursive
methods and comparing the results.
[snip]

OT, but *much* more interesting than your rambles about "C: The
Complete Nonsense", etc.
What about the memory footprint of C vs C#? Sure, .Net code is
compiled rather than interpreted so it doesn't have a huge time
penalty - but it is compiled to an intermediate language designed to
run in a *huge* virtual machine. Bloatware which is only slightly
slower than nonbloated code is still bloated and still slower. Also, a
much more interesting question (than simple factorial calculations) is
how something like Microsoft Excel would work if its code base was
rewritten from C to C#. I suspect that the cumulative slowdowns would
result in a spreadsheet which was annoyingly slower.

Out of curiosity, what rules of grammar does the sentence "C is more
efficient than C#." violate?
 

bartc

spinoza1111 said:
A C and a C Sharp program were written to calculate the 64-bit value of
19 factorial one million times, using both the iterative and recursive
methods and comparing the results.

Here is the C code.
Here is the C Sharp code.
The C Sharp code runs at 110% of the speed of the C code, which may
seem to "prove" the half-literate Urban Legend that "C is more
efficient than C Sharp or VM/bytecode languages in general, d'oh".

C# was 10% faster? On my 32-bit machine, the C code was generally faster,
depending on compilers and options.

But changing both to use 32-bit arithmetic, the C was nearly double the
speed of C#, which is what is to be expected.

That's not bad, but C# does need a very complex environment to run, so
cannot be used as a replacement for many uses of C, and it is still slower.

(BTW your timing routines in the C code seem to round to the nearest second;
not very useful for that purpose.)
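
(For what it's worth, standard C can time at sub-second resolution with
clock(), which measures processor time; this is a sketch, not a claim
about what the original poster's toolchain provides.)

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t0 = clock();
    /* ... the work being timed goes here ... */
    clock_t t1 = clock();
    printf("%.3f CPU seconds\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
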
Were C Sharp to be interpreted, the above C Sharp code would run very,
very slowly, but C Sharp isn't interpreted.

There are interpreters, and interpreters. You could be looking at
slow-downs, compared with C, of 3x to 300x, but usually there will be some
benefits that make up for performance deficiencies in these kinds of
benchmarks.

C#, I believe, is executed as native code, after translation from MSIL or
CIL or some such acronym.
 

Jens Stuckelberger

The C Sharp code runs at 110% of the speed of the C code, which may seem
to "prove" the half-literate Urban Legend that "C is more efficient than
C Sharp or VM/bytecode languages in general, d'oh".

Really? I would have thought that, at best, it proves that a
particular C# implementation of a particular algorithm, when using a
particular C# virtual machine, on a particular platform, has beaten a
particular C implementation of a particular algorithm, with some
particular C compiler and some particular compile-time settings, on a
particular platform. This, assuming that one can believe you - and why
should we?

If anything, you seem to have proved that you know nothing much
about rigor, and that on this occasion you indulged in forcing facts
to fit your preconceived notions. Way to go.

As for your ranting on the American and British constitutions -
what kind of mental condition affects you?
 

jacob navia

I transformed that program a bit. It now calculates the factorial of 20,
and it does that 10 million times.

Note that the difftime function is not accurate. I used the utility "timethis".
Machine: Intel i7 (8 cores) with 12GB RAM

The results are:
-------------------------------------------------------------------------------------
D:\temp>csc /o tfact.cs C# Optimizations ON
Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.1
for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
D:\temp>timethis tfact
TimeThis : Command Line : tfact
TimeThis : Start Time : Sun Dec 27 18:33:53 2009

The factorial of 20 is 2432902008176640000: 00:00:03.7460000 seconds to calculate 10000000 times

TimeThis : Command Line : tfact
TimeThis : Start Time : Sun Dec 27 18:33:53 2009
TimeThis : End Time : Sun Dec 27 18:33:57 2009
TimeThis : Elapsed Time : 00:00:03.804
---------------------------------------------------------------------------------------
D:\temp>cl -Ox tfact.c C optimizations ON
Microsoft (R) C/C++ Optimizing Compiler Version 15.00.21022.08 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
tfact.c
Microsoft (R) Incremental Linker Version 9.00.21022.08
Copyright (C) Microsoft Corporation. All rights reserved.
/out:tfact.exe
tfact.obj
D:\temp>timethis tfact
TimeThis : Command Line : tfact
TimeThis : Start Time : Sun Dec 27 18:34:10 2009

The factorial of 20 is 2432902008176640000: 3.00 seconds to calculate 10000000 times

TimeThis : Command Line : tfact
TimeThis : Start Time : Sun Dec 27 18:34:10 2009
TimeThis : End Time : Sun Dec 27 18:34:13 2009
TimeThis : Elapsed Time : 00:00:02.666
D:\temp>

---------------------------------------------------------------------------------------

The result is clear: C takes 2.666 seconds, C# takes 3.804 seconds
 

Seebs

Here is the C code.
long long factorial(long long N)
{
    long long nFactorialRecursive;
    long long nFactorialIterative;
    long long Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N;
          Nwork > 1;
          Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%I64d! is %I64d iteratively but %I64d recursively wtf!\n",
              N,
              nFactorialIterative,
              nFactorialRecursive);
    return nFactorialRecursive;
}

I'm impressed. Very few people could define an N^2 algorithm for calculating
factorials.

Hint: When you call factorial(19), you calculate factorial(19) iteratively,
and then you calculate 19 * factorial(18). You then calculate factorial(18)
iteratively, then calculate 18 * factorial(17). Etcetera.

In short, for factorial(19), instead of performing 38 multiplications and
19 calls, you perform 19 multiplications and 19 calls for the recursive
calculation, plus 164 multiplications for the iterative calculations.

This is not a reasonable way to go about things. This is a pretty
impressive screwup on your part, and you can't blame the language design;
this is purely at the algorithm-design level, not any kind of mysterious
quirk of C.

Again, the problem isn't with C's design; it's that you are too muddled
to design even a basic test using two algorithms, as you embedded one
in another.

Here's two test programs. One's yours, but I switched to 'long double'
and used 24! instead of 19! as the test case, and multiplied the number
of trials by 10.

The main loop is unchanged except for the change in N and the switch to
%Lf.

Yours:
long double factorial(long double N)
{
    long double nFactorialRecursive;
    long double nFactorialIterative;
    long double Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%Lf! is %Lf iteratively but %Lf recursively wtf!\n",
              N,
              nFactorialIterative,
              nFactorialRecursive);
    return nFactorialRecursive;
}

Mine:

long double ifactorial(long double N)
{
    long double nFactorialIterative;
    long double Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
        nFactorialIterative *= Nwork;
    return nFactorialIterative;
}

long double rfactorial(long double N)
{
    long double nFactorialRecursive;
    if (N <= 2) return N;
    nFactorialRecursive = N * rfactorial(N-1);
    return nFactorialRecursive;
}

long double factorial(long double N)
{
    long double nFactorialRecursive;
    long double nFactorialIterative;
    nFactorialIterative = ifactorial(N);
    nFactorialRecursive = rfactorial(N);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%Lf! is %Lf iteratively but %Lf recursively wtf!\n",
              N,
              nFactorialIterative,
              nFactorialRecursive);
    return nFactorialRecursive;
}

Output from the main loops:

24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times

... Which is to say, no one cares whether C# is faster or slower than C
by a few percent, when non-idiotic code is faster than idiotic code by
nearly a factor of three.

-s
 

bartc

Seebs said:
I'm impressed. Very few people could define an N^2 algorithm for
calculating
factorials.
Here's two test programs. One's yours, but I switched to 'long double'
and used 24! instead of 19! as the test case, and multiplied the number
of trials by 10.

The main loop is unchanged except for the change in N and the switch to
%Lf.
Output from the main loops:

24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times

... Which is to say, no one cares whether C# is faster or slower than C
by a few percent, when non-idiotic code is faster than idiotic code by
nearly a factor of three.

It doesn't matter too much that the code was idiotic, provided both
languages were executing the same idiotic algorithm.

The original code wasn't a bad test of long long multiplication in each
language.
 

BGB / cr88192

bartc said:
C# was 10% faster? On my 32-bit machine, the C code was generally faster,
depending on compilers and options.

yeah, much more significant may well be what the code actually does...

micro-benchmarks of this sort are rarely telling of overall performance.

something a little larger, such as a ray-tracer, would likely be a better
example.

But changing both to use 32-bit arithmetic, the C was nearly double the
speed of C#, which is what is to be expected.

yes, but to be fair, C# 'long' == C 'long long'...

(since C# followed Java in making long always 64-bits).
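
(A quick way to check this on any given C implementation; the sizes
below vary by platform, unlike C#, where long is always 64 bits:)

#include <stdio.h>

int main(void)
{
    /* On 32-bit Windows this prints 4 and 8; C#'s long is always 8 bytes. */
    printf("long: %u bytes, long long: %u bytes\n",
           (unsigned)sizeof(long), (unsigned)sizeof(long long));
    return 0;
}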

That's not bad, but C# does need a very complex environment to run, so
cannot be used as a replacement for many uses of C, and it is still
slower.

(BTW your timing routines in the C code seem to round to the nearest
second; not very useful for that purpose.)


There are interpreters, and interpreters. You could be looking at
slow-downs, compared with C, of 3x to 300x, but usually there will be some
benefits that make up for performance deficiencies in these kinds of
benchmarks.

C#, I believe, is executed as native code, after translation from MSIL or
CIL or some such acronym.

yeah.
MSIL is the older / original term.
the IL was later redubbed CIL.

it is much the same as the difference between x86-64, x64, and AMD64 or
EM64T...


C# is first compiled to MSIL / CIL, and then the JIT stage does further
compilation and optimization.

MSIL is not, in itself, inherently slow.

rather, what usually adds some slowdown is the OO facilities, array
handling, ... this is because the OO is hard to fine-tune to the same extent
as in, say, C++, and because of the use of bounds-checking of arrays
(although there are many cases where bounds checking can be eliminated by
"proving" that it will never fail).

like Java, it uses a particular approach to GC which tends to make GC
inevitable (although they may do what many JVMs do, and essentially
pay a slight up-front cost to free known-garbage objects up-front, or
possibly use ref-counting as a culling method).


actually, simple stack machines I have found are underrated.
granted, SSA is a little easier to optimize, but I have gradually learned
from experience what was my original intuition:
a stack machine is generally a much nicer IL stage than trying to connect
the frontend and backend directly with SSA. SSA is nasty, IMO, and it is
much better IME to unwind these internal stack-machines to a pseudo-SSA form
during low-level compilation.

it also leads to the "nicety" that one has a much better idea where the
temporaries and phi's are located, since these tend to emerge "naturally"
from the operations on the virtual stack.

actually, AFAICT, it is much easier to unwind a stack into SSA form than it
is to try to coerce ASTs into SSA. I am not sure then why most other
compilers seem to take the direct AST -> SSA route, since there would not
seem to be a whole lot of difference in terms of the quality of the final
code produced, even though there is a notable difference WRT how difficult
it is to work on the compiler.
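
(a rough illustration of that unwinding, with an invented two-character
stack IL and a drastically simplified picture in which every pushed
value and every operation result simply gets a fresh temporary:)

#include <stdio.h>

int main(void)
{
    /* Toy stack IL: "Px" pushes variable x; "+" and "*" pop two
       values and push the result. Unwinding names each stack slot
       with a fresh temporary, SSA-style. */
    const char *prog[] = { "Pa", "Pb", "+", "Pc", "*" };
    char stack[16][8];      /* names of the values on the virtual stack */
    int sp = 0, t = 0, i;

    for (i = 0; i < 5; i++) {
        if (prog[i][0] == 'P') {
            sprintf(stack[sp++], "t%d", t);
            printf("t%d = %c\n", t++, prog[i][1]);
        } else {            /* binary op: pop two operands, name the result */
            sp -= 2;
            printf("t%d = %s %c %s\n", t, stack[sp], prog[i][0], stack[sp + 1]);
            sprintf(stack[sp++], "t%d", t++);
        }
    }
    return 0;               /* prints t0 = a, t1 = b, t2 = t0 + t1, ... */
}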


nevermind that I may at some point want to redesign RPNIL some (my personal
RPN-based IL), probably essentially "flipping" the stack (IOW: putting
everything in left-to-right ordering, rather than x86-style right-to-left).
this would make it more in-line with pretty much every other RPN.

originally, I chose an essentially backwards ordering, as I had assumed more
or less 1:1 mapping with the underlying x86, and so everything was laid out
to allow a "naive" translator (my original lower-end was rather naive, but
as a cost it has left me with a fairly ugly IL with extra funky rules and a
"backwards" stack ordering).

FWIW, this is no longer with good reason, so I may consider eventual
redesign (and probably also for a lot of the syntax, such as making it
properly token-based, ...).

 

Seebs

It doesn't matter too much that the code was idiotic, provided both
languages were executing the same idiotic algorithm.
True.

The original code wasn't a bad test of long long multiplication in each
language.

Fair enough. It's not exactly what I'd call an interesting test case for
real-world code. I'd be a lot more interested in performance of, say,
large lists or hash tables.

-s
 

spinoza1111

A C and a C Sharp program were written to calculate the 64-bit value of
19 factorial one million times, using both the iterative and recursive
methods and comparing the results.
[snip]

OT, but *much* more interesting than your rambles about "C: The
Complete Nonsense", etc.
What about the memory footprint of C vs C#? Sure, .Net code is
compiled rather than interpreted so it doesn't have a huge time
penalty - but it is compiled to an intermediate language designed to
run in a *huge* virtual machine. Bloatware which is only slightly
slower than nonbloated code is still bloated and still slower. Also, a
much more interesting question (than simple factorial calculations) is
how something like Microsoft Excel would work if its code base was
rewritten from C to C#. I suspect that the cumulative slowdowns would
result in a spreadsheet which was annoyingly slower.

Newer spreadsheets such as Google's are faster but they're mostly
written in C, probably (by highly competent people) owing to
inertia...not in Java and not in C Sharp.

In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
7680 bytes. Again, not even an order of magnitude, and you get greater
safety for the so-called "bloat".

It's barbaric, given Moore's law, to so overfocus in such a Puritan
way on "bloat" when the number one issue, as Dijkstra said in 2000, is
"not making a mess of it".
 

spinoza1111

C# was 10% faster? On my 32-bit machine, the C code was generally faster,
depending on compilers and options.

No, dear boy. 110% means that C Sharp is ten percent SLOWER, and it
means I don't care that it is.
But changing both to use 32-bit arithmetic, the C was nearly double the
speed of C#, which is what is to be expected.

OK, let's try that at home...

...of course, you do know that the value overflows for C Sharp int and
C long arithmetic. The maximum value we can compute is 12 factorial...
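
(a quick check of that limit; this sketch uses 64-bit math to show
where 32-bit factorials stop fitting. Note %lld is the C99 spelling;
older Microsoft compilers want %I64d:)

#include <stdio.h>

int main(void)
{
    /* 12! = 479001600 fits in a signed 32-bit int;
       13! = 6227020800 exceeds INT_MAX (2147483647). */
    long long f = 1;
    int n;
    for (n = 1; n <= 13; n++) {
        f *= n;
        printf("%d! = %lld%s\n", n, f,
               f > 2147483647LL ? "  <-- overflows 32 bits" : "");
    }
    return 0;
}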

...to maintain realistic execution times, we must change both versions
to execute ten million times.

Oops. Check this out! Here, C Sharp is actually faster!

C Sharp result: The factorial of 12 is 479001600: 00:00:06.8750000
seconds to calculate 10000000 times
C result: 12! is 479001600: 8.00 seconds to calculate 10000000 times

In other words, you changed your version of the code to calculate the
wrong answer "twice as fast" as C Sharp. In other words, you made the
code significantly less "powerful" to get the wrong result, and when
we fix it, C runs slower. This is probably because cache usage, which
is easily implemented safely in a hosted VM, becomes more and more
important the more the same code is executed on similar data.

That's not bad, but C# does need a very complex environment to run, so
cannot be used as a replacement for many uses of C, and it is still slower.

Excuse me, but that's a canard. C, to run efficiently, needs a
complete optimizer. A VM is less complex. Architectures have in fact
been tuned in very complex ways to run C adequately in ways that have
caused errors, perhaps including the Intel arithmetic bug.

Whereas in OO designs such as are effectively supported by VM-hosted
languages like C Sharp and Java, data can be retained in stateful
objects, making caching at all levels a natural gesture. For example,
a stateful factorial class can save each factorial calculated over its
execution lifetime. A C program cannot do this without saving material
at a global level that's overly visible to other modules.
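
(For concreteness, here is a sketch of such a result cache in C; a
hypothetical example, not the benchmark code. The saved results live
in function-local static storage:)

#include <stdio.h>

/* Memoized factorial: each result is saved across calls.
   The cache persists in function-local static storage. */
long long factorial_cached(int n)
{
    static long long cache[21];  /* 20! is the largest factorial fitting in 64 bits */
    if (n <= 1) return 1;
    if (cache[n] == 0)
        cache[n] = n * factorial_cached(n - 1);
    return cache[n];
}

int main(void)
{
    printf("%lld\n", factorial_cached(19));  /* computed once */
    printf("%lld\n", factorial_cached(19));  /* served from the cache */
    return 0;
}
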
(BTW your timing routines in the C code seem to round to the nearest second;
not very useful for that purpose.)

Yes...the timing routines standard in C SHARP EXPRESS, shipped free to
script kiddies and even girl programmers worldwide, are superior to
the stuff in C.
There are interpreters, and interpreters. You could be looking at
slow-downs, compared with C, of 3x to 300x, but usually there will be some
benefits that make up for performance deficiencies in these kinds of
benchmarks.

No. An interpreter has a fixed overhead K for each instruction, and in
programs like this, K is multiplied many, many times. C Sharp is not
interpreted save in the male programmer subconscious which, because of
the very real emasculation he faces in the corporation, fears a
fantasy emasculation implicit in not using his father's language.

C sharp doesn't have this.
C#, I believe, is executed as native code, after translation from MSIL or
CIL or some such acronym.

That is more or less correct.
 

Kenny McCormack

spinoza1111 said:
Newer spreadsheets such as Google's are faster but they're mostly
written in C, probably (by highly competent people) owing to
inertia...not in Java and not in C Sharp.

In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
7680 bytes. Again, not even an order of magnitude, and you get greater
safety for the so-called "bloat".

It's barbaric, given Moore's law, to so overfocus in such a Puritan
way on "bloat" when the number one issue, as Dijkstra said in 2000, is
"not making a mess of it".

Obviously, neither you nor Dijkstra understand the true economic
underpinnings of the software biz. The whole point is to be hot, sexy,
and broken. Repeat that a few times to get the point.

Because, as I've said so many times, if they ever actually solve the
problem in the software world, the business is dead. And this has
serious economic consequences for everyone. Think job-loss, which is
generally considered the worst thing imaginable.

There are many examples of situations where programmers did actually
solve their problem and ended up on unemployment as a result. At the
bigger, company level, think Windows XP. MS basically solved the OS
problem with Windows XP. They were thus forced to tear it up and start
over with Vista/Windows 7/etc.
 

spinoza1111

        Really? I would have thought that, at best, it proves that a
particular C# implementation of a particular algorithm, when using a
particular C# virtual machine, on a particular platform, has beaten a
particular C implementation of a particular algorithm, with some
particular C compiler and some particular compile-time settings, on a
particular platform. This, assuming that one can believe you - and why
should we?

        If anything, you seem to have proved that you know nothing much
about rigor, and that on this occasion you indulged in forcing facts
to fit your preconceived notions. Way to go.

Blow me, dear chap. In fact, we can generalize from what we see, as
long as we take the results cum grano salis.
        As for your ranting on the American and British constitutions -
what kind of mental condition affects you?

I'd call it literacy, dear boy. You see, to help a student, I read AV
Dicey's Introduction to the Study of the Law of the Constitution and
Ian Loveland's Constitutional Law, Administrative Law, and Human
Rights cover to cover over a couple of weeks. Like Godel, I noticed
some quasi-mathematical paradoxes, in my case in the British
constitution. I thought them amusing.
 

spinoza1111

I transformed that program a bit. It now calculates the factorial of 20,
and it does that 10 million times.

[snip]

The result is clear: C takes 2.666 seconds, C# takes 3.804 seconds

Previous chap had a glimmer of insight with which I partly agreed. We
do have to take individual results cum grano salis. We need to think
here only in terms of orders of magnitude, and you haven't shown, Mr.
Navia, and with all due respect, that C Sharp runs an order of
magnitude or more slower, which it would be if *le canard* that "C
Sharp is for girls and script kiddies because it is interpreted" were
true, which we have falsified.

Thanks for the better timer. However, part of the C problem is the
fact that suboptimal library routines appear first in documentation
whereas in C Sharp, best of breed is at the top.
 

spinoza1111

I'm impressed.  Very few people could define an N^2 algorithm for calculating
factorials.

Hold on to your hats, it's Scripto Boy, who's never taken a computer
science class...
[snip]

You misunderstood the solution, Scripto boy, just as you misunderstood
Schildt; and as in the case of Schildt, you use your own mental
confusion to attack the credibility of your elders and betters. I
don't care that within each recursion I recalculate the next value of
N both ways, recursively and iteratively. Indeed, it was my goal to
get meaningful performance numbers by executing many, many
instructions repeatedly. Since I can look up factorials in seconds, I
did not need a new factorial program, as you so very foolishly seem to
believe.

My point was the absence of "interpretive overhead" in the C sharp
caused it to run at (far) less than 1 order of magnitude slower than
the C code, and that by migrating to C sharp one avoids the legacy
unsafety and stupidity of C, as well as its fragmentation into
mutually warring tribes, with the attendant bad feeling and politics
of personalities.

I suggest you return to school and acquire academic qualifications and
the ability to behave like a professional before ruining people's
reputations with falsehoods, Scripto boy.
 

spinoza1111

I'm impressed.  Very few people could define an N^2 algorithm for calculating
factorials.

[snip]

Also, in terms of stylistic communication, it's a mistake to use "long
double" for a mathematical function that is defined only for
integers...although it is not likely here that double precision
errors would play a role. Long double doesn't even make a difference
in Microsoft C++ as far as I know.

So, "we" are not impressed with your stunt. You optimized code not
meant to be optimized, removing "our" ability to detect, at some
sufficiently long execution time to confirm that "C is an order of
magnitude faster than C sharp", at which point, and only at which
point, would "we" be interested in converting back to C. You
calculated the exact value of 24 factorial faster for no reason,
whereas I ran code side by side to test whether C provides
sufficiently dramatic and repeatable performance savings to warrant
even considering its use. I proved it does not.

As to lists and hash tables: C Sharp provides these precoded using
state of the art and best of breed algorithms written by really cool
guys, perhaps Ray Chen and Adam Barr. Whereas in C, one's always being
advised to visit some really creepy site to download some creep's
code, bypassing exhortations to come to Jesus and advertisements for
ammo. As King Lear said, it would be a "delicate stratagem", not to
shoe a troop of horse with felt, but to write lists and hash tables,
but us grownups have better things to do at this time.

Peter Seebach, we are not amused.
 

spinoza1111

Obviously, neither you nor Dijkstra understand the true economic
underpinnings of the software biz.
[snip]

ROTFLMAO. Hey is that why I'm a teacher in Asia? Izzit cuz I was such
a great programmer, actually providing Bell Northern Research and
Princeton with stuff that worked, instead of posting nasty notes
about Herb Schildt? I like to think I prefer to be a teacher, but
perhaps I'm Deludo Boy.

Thanks as always for another five star and amusing remark. Have one on
me.

She's hot sexy and broke
And that ain't no joke
 

spinoza1111

But what does the .Net virtual machine, the one real overhead of C
Sharp, provide? Almost 100 percent safety against memory leaks and
many other bad things.
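
A minimal sketch of that safety claim. The numbers are illustrative, but the mechanism is real: allocations that become unreachable are reclaimed by the garbage collector, where the equivalent malloc() loop without free() would leak.

using System;

class GcDemo
{
    static void Main()
    {
        // Allocate a million 1 KB buffers and drop every reference.
        // The equivalent C loop with malloc() and no free() would leak
        // about a gigabyte; here the collector reclaims the garbage
        // and the process's memory stays bounded.
        for (int i = 0; i < 1000000; i++)
        {
            byte[] buffer = new byte[1024];
            buffer[0] = (byte)i;   // touch it so it isn't optimized away
        }
        Console.WriteLine("Done; no leak, no free() required.");
    }
}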
Indeed, C is like the (unwritten) British constitution. In that
arrangement, Parliament cannot "entrench" an act that would bind all
subsequent Parliaments, because Parliamentary supremacy (like the
putative power of C) must at all costs be preserved: this was the
innovation of 1688/9, when Parliament hired King William and his
Better Half as Kingie and Queenie on condition that they be nice and
obey Parliament. This means that in fact the British constitution
contains no protection against a runaway, tyrannical, "long"
Parliament. Parliament promised to be nice in the Parliament Act of
1911 and confirmed the promise in 1949, but there is nothing in the
British constitution to prevent it from enacting a new bill, so long
as it can get enough Peers in the House of Lords to wake up and permit
it to do so (the Lords' approval being required, unlike for money
bills), and HM the Queen to give Royal Assent.
When Kurt Gödel was studying the booklet given him in Princeton to
prepare for the US citizenship test, he claimed to find a bug that
would allow America to become a dictatorship. I think he'd be even
more terrified of the British constitution, for like the
self-referential paradoxical statement in his
incompleteness/inconsistency results, the very power of Parliament
renders it impotent to write a Constitution!
Whereas .Net and Java provide "Constitutional" safeguards against code
doing nasty things, even as the American constitution was intended to
be, and to some practical extent is, "a machine that would go of
itself". Both constitutions can fail, but the British constitution is
more likely to. It enabled Margaret Thatcher to rule by decree and
override even her own Cabinet, and to ram through a medieval "poll
tax" in 1990 that produced civil disturbances. Britons enjoy human
rights mostly through the EU, whereas misuse of the American
constitution during Bush's administration was more vigorously
resisted, especially in the courts, where the judges are truly
independent.
It is true that a massive "bug" in the American constitution developed
in 1860 with the outbreak of civil war, but this was extra-
Constitutional. It resulted from a deliberate misinterpretation of
state's rights under the Tenth Amendment in which the states retained
a "nullifying" level of sovereignity, but their assent to the
Constitution in 1789 had itself nullified this strong interpretation
of "state's rights".
Since 1689, no such "bug" has occured in the British constitution.
However, the British constitution existed before 1689, and its bug was
just as serious, for it produced the English civil war. This was
because there is no provision in the British constitution for a pig-
headed king, and King Charles II could conceivably in the future
refuse Royal Assent to needed legislation, or use the British Army
(which is NOT under the control of Parliament, but of the Monarch to
whom officers swear fealty) against his own people.
C Sharp programs can fail, as can the American Constitution. But the
idiotic equation of the reliability of C and C Sharp in fact resembles
the political passivity of Britons who talk darkly of the EU as a "new
world order" destroying their "rights as Englishmen", when in fact
it's the best thing that ever happened to them. And I've not even
addressed how the rights of Irishmen have been abused under the
British constitution.
I, for one, am tired of the Urban Legends of the lower middle class,
whether in programming or politics.

OT, but *much* more interesting than your rambles about "C: The
Complete Nonsense", etc.
What about the memory footprint of C vs C#? Sure, .Net code is
compiled rather than interpreted, so it doesn't pay a huge time
penalty - but it is compiled to an intermediate language designed to
run in a *huge* virtual machine. Bloatware which is only slightly
slower than non-bloated code is still bloated and still slower. Also,
a much more interesting question (than simple factorial calculations)
is how something like Microsoft Excel would perform if its code base
were rewritten from C to C#. I suspect that the cumulative slowdowns
would result in a spreadsheet which was annoyingly slower.
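
The footprint is easy to probe crudely from managed code itself; a minimal sketch (Environment.WorkingSet reports the process working set in bytes, the CLR's baseline included, which is the "bloat" at issue):

using System;

class Footprint
{
    static void Main()
    {
        // Even a do-nothing managed process pays the CLR's baseline cost.
        Console.WriteLine("Working set: {0} KB", Environment.WorkingSet / 1024);
    }
}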

Out of curiosity, what rules of grammar does the sentence "C is more
efficient than C#." violate?

Glad you asked. It violates a semantic rule. A LANGUAGE cannot be
"efficient"; at most an implementation can be, and only a strict
interpreter (one that, unlike a Java or .Net virtual machine, does
some constant amount of extra work each time each instruction is
executed) is condemned to be slow. But, of course, C Sharp is
compiled. Sure, extra instructions are compiled in to check array
bounds, but these are very nice things if one takes responsibility
for one's code after some clown has changed it.
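
Those bounds checks are easy to observe; a minimal sketch:

using System;

class BoundsCheck
{
    static void Main()
    {
        int[] a = new int[3];
        try
        {
            a[3] = 42;   // one past the end: C would silently scribble here
        }
        catch (IndexOutOfRangeException)
        {
            // The compiled-in check turns the error into an exception
            // instead of memory corruption.
            Console.WriteLine("Index 3 is out of range for a 3-element array.");
        }
    }
}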

Bjarne Stroustrup writes somewhere about programmers who claim that it
is "inefficient" to keep runtime tests and assert()s in code after it
is released to "production". He compares them to sailors who leave
port without lifeboats or life jackets.

Most programmers have only a crude and overly normative understanding
of what "efficiency" is. It's not raw and blinding speed. For example,
Peter Seebach thinks it was more "efficient" to remove loops that
needed to stay in code meant to be a simple benchmark.

Indeed, many times, "inefficient" means "I don't want to think".

"Far from perceiving such prohibitions on thought as something
hostile, the candidates – and all scientists are candidates – feel
relieved. Because thinking burdens them with a subjective
responsibility, which their objective position in the production-
process prevents them from fulfilling, they renounce it, shake a bit
and run over to the other side. The displeasure of thinking soon turns
into the incapacity to think at all: people who effortlessly invent
the most refined statistical objections, when it is a question of
sabotaging a cognition, are not capable of making the simplest
predictions of content ex cathedra [Latin: from the chair, e.g. Papal
decision]. They lash out at the speculation and in it kill common
sense. The more intelligent of them have an inkling of what ails their
mental faculties, because the symptoms are not universal, but appear
in the organs, whose service they sell. Many still wait in fear and
shame, at being caught with their defect. All however find it raised
publicly to a moral service and see themselves being recognized for a
scientific asceticism, which is nothing of the sort, but the secret
contour of their weakness. Their resentment is socially rationalized
under the formula: thinking is unscientific. Their intellectual energy
is thereby amplified in many dimensions to the utmost by the mechanism
of control. The collective stupidity of research technicians is not
simply the absence or regression of intellectual capacities, but an
overgrowth of the capacity of thought itself, which eats away at the
latter with its own energy. The masochistic malice [Bosheit] of young
intellectuals derives from the malevolence [Bösartigkeit] of their
illness."

TW Adorno, Minima Moralia (Reflections on Damaged Life) 1948
 
S

stan

Kenny said:
Obviously, neither you nor Dijkstra understand the true economic
underpinnings of the software biz. The whole point is to be hot, sexy,
and broken. Repeat that a few times to get the point.

It seems that you consider yourself quite knowledgeable and expert,
even superior to Dijkstra, yet you offer so little actual news or
information about C.

In fact I have to wonder, do you ever actually contribute anything
about the C programming language?

Kenny said:
Because, as I've said so many times, if they ever actually solve the
problem in the software world, the business is dead. And this has
serious economic consequences for everyone. Think job-loss, which is
generally considered the worst thing imaginable.

There are many examples of situations where programmers did actually
solve their problem and ended up on unemployment as a result. At the
bigger, company level, think Windows XP. MS basically solved the OS
problem with Windows XP. They were thus forced to tear it up and start
over with Vista/Windows 7/etc.

This sort of sarcasm, petty whining, attempted humor, ... can get
boorish when applied, in your words, "many times". Just a thought, but
the phrase "you can be part of the problem, or you can be part of the
solution" springs to mind often when trying to follow threads.

As for the topic/rant in question: if I ever set out to develop a
project for an MS-proprietary platform only, I'll probably care about
C# vs the options, but since my normal development world is not MS,
the question isn't really important for me personally. My world also
slants more towards systems stuff than the latest shiny user app - in
other words, I'm not fascinated by eye candy. I seem to be in a
minority, admittedly. But in my world the fact that C is used to
build C tools intuitively feels like a win. No swipe intended, but
does a C# compiler written in C# exist? With source for study?
 
B

bartc

spinoza1111 said:
No, dear boy. 110% means that C Sharp is ten percent SLOWER, and it
means I don't care that it is.

So about 90% of the speed of C.
OK, let's try that at home...

spinoza1111 said:
...of course, you do know that the value overflows for C Sharp int and
C long arithmetic. The maximum value we can compute is 12 factorial...

I didn't care; both versions overflowed to give the same wrong result.
I just didn't want the results dominated by attempting 64-bit
multiplies on my 32-bit machine.
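
(For the record, a minimal C# sketch of that 32-bit ceiling: 12! = 479001600 still fits in an int, and checked arithmetic turns the silent wraparound at 13! into an exception.)

using System;

class ThirteenFactorial
{
    static void Main()
    {
        int f = 1;
        for (int i = 2; i <= 13; i++)
        {
            try
            {
                checked { f *= i; }   // trap the wraparound instead of hiding it
            }
            catch (OverflowException)
            {
                Console.WriteLine("{0}! overflows a 32-bit int", i);
                return;
            }
            Console.WriteLine("{0}! = {1}", i, f);
        }
    }
}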
spinoza1111 said:
...to maintain realistic execution times, we must change both versions
to execute ten million times.

Oops. Check this out! Here, C Sharp is actually faster!

C Sharp result: The factorial of 12 is 479001600: 00:00:06.8750000
seconds to calculate 10000000 times
C result: 12! is 479001600: 8.00 seconds to calculate 10000000 times

Check again. I got 5.4 seconds for C# and 3.0 seconds for C. Are you
using 32 bits in C, where it's not always obvious?

spinoza1111 said:
In other words, you changed your version of the code to calculate the
wrong answer "twice as fast" as C Sharp. In other words, you made the
code significantly less "powerful" to get the wrong result, and when
we fix it, C runs slower. This is probably because cache usage, which
is easily implemented safely in a hosted VM, becomes more and more
important the more the same code is executed on similar data.

In this tiny program everything should fit into the cache.
spinoza1111 said:
Excuse me, but that's a canard. C, to run efficiently, needs a
complete optimizer. A VM is less complex. Architectures have in fact
been tuned in very complex ways to run C adequately, in ways that
have caused errors, perhaps including the Intel arithmetic bug.

An optimising C compiler is a few MB, and is not needed to run the
result. To compile your C# code, I needed a 63MB download. Then I
needed .NET, which probably was already in my OS, but appears to
'take up to 500MB of hard disk space'. The C version would have
required a few tens of KB in total to run.

spinoza1111 said:
Yes...the timing routines standard in C SHARP EXPRESS, shipped free to
script kiddies and even girl programmers world wide, are superior to
the stuff in C.

C's clock() function gives elapsed times in msec.
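
For what it's worth, the higher-resolution tool on the C# side is System.Diagnostics.Stopwatch rather than DateTime.Now; a minimal sketch:

using System;
using System.Diagnostics;

class Timing
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        long x = 0;
        for (int i = 0; i < 10000000; i++)
            x += i;                    // stand-in for the benchmarked work
        sw.Stop();
        Console.WriteLine("{0} ms (x = {1})", sw.ElapsedMilliseconds, x);
    }
}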
 
