Comparison of C Sharp and C performance


spinoza1111

You are correct in saying that the C version becomes very slow as the
number of table entries approaches the size of the table. I've already
noted that this is a problem, and how to solve it by using a linked
list originating at each entry.

The solution is not to use linked lists [at least not that way].
Either use a larger table or a tree.  Generally for unbounded inputs a
tree is a better idea as it has better properties on the whole (as the
symbol set size grows...).
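The linked-list fix mentioned above (a chain originating at each
bucket) is short to sketch in C. The bucket count and the int
key/value types below are illustrative choices, not taken from the
posted programs:

```c
#include <stdlib.h>

#define NBUCKETS 1024          /* illustrative bucket count */

struct node {
    int key;
    int value;
    struct node *next;
};

static struct node *buckets[NBUCKETS];

static unsigned bucket_of(int key)
{
    return (unsigned)key % NBUCKETS;
}

/* Insert or update; collisions just extend the chain, so the table
   never "fills up" the way an open-addressed array does. */
int chain_put(int key, int value)
{
    struct node *n;
    for (n = buckets[bucket_of(key)]; n != NULL; n = n->next)
        if (n->key == key) { n->value = value; return 0; }
    n = malloc(sizeof *n);
    if (n == NULL) return -1;
    n->key = key;
    n->value = value;
    n->next = buckets[bucket_of(key)];
    buckets[bucket_of(key)] = n;
    return 0;
}

/* Returns 1 and stores the value if key is present, else 0. */
int chain_get(int key, int *value)
{
    const struct node *n;
    for (n = buckets[bucket_of(key)]; n != NULL; n = n->next)
        if (n->key == key) { *value = n->value; return 1; }
    return 0;
}
```

With chaining there is no hard capacity; performance just degrades
gradually as the average chain length (the load factor) grows, which
is the trade-off against the tree discussed next.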

Fair enough. The linked list might have long searches whereas the tree
search can be bounded as long as you can keep it balanced. See (you
probably have) Knuth. I have to buy a new copy if this nonsense
continues, I gave away my old copy to DeVry in a Prospero moment:

Now my Charmes are all ore-throwne,
And what strength I haue's mine owne.

However, the two programs demonstrate my point. C more or less forces
the programmer to decide on the maximum table size in advance,
therefore the test, which filled up the table, was realistic, since
tables often fill up in production.

No, you could grow the table at some threshold, reimport existing
symbols into the new hash.  A naive approach would just add another X
symbol spots every time you get more than [say] 80% full.
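The grow-at-80% suggestion above can be sketched as follows. Only the
doubling policy and the 80% threshold come from the post; the
open-addressed table of nonzero ints and all names are illustrative:

```c
#include <stdlib.h>

/* Open-addressed table of nonzero ints; 0 marks an empty slot.
   Once an add would push it past 80% full, it doubles the array and
   reinserts every existing entry into the new hash. */
struct itab {
    int *slot;
    size_t cap;
    size_t used;
};

int itab_init(struct itab *t, size_t cap)
{
    t->slot = calloc(cap, sizeof *t->slot);
    t->cap = cap;
    t->used = 0;
    return t->slot != NULL ? 0 : -1;
}

static void insert_raw(int *slot, size_t cap, int key)
{
    size_t j = (size_t)(unsigned)key % cap;
    while (slot[j] != 0 && slot[j] != key)
        j = (j + 1) % cap;           /* linear probing with wraparound */
    slot[j] = key;
}

/* key must be nonzero; duplicate keys are stored once but counted
   once per call here, which is sloppy but keeps the sketch short. */
int itab_add(struct itab *t, int key)
{
    if ((t->used + 1) * 10 > t->cap * 8) {   /* would exceed 80% load */
        size_t ncap = t->cap * 2;
        int *nslot = calloc(ncap, sizeof *nslot);
        size_t i;
        if (nslot == NULL) return -1;
        for (i = 0; i < t->cap; i++)         /* reinsert existing keys */
            if (t->slot[i] != 0)
                insert_raw(nslot, ncap, t->slot[i]);
        free(t->slot);
        t->slot = nslot;
        t->cap = ncap;
    }
    insert_raw(t->slot, t->cap, key);
    t->used++;
    return 0;
}
```

Because a rehash fires before the array is full, the probe loop always
finds an empty slot and the "Table is full" case disappears.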

Been there, did that...in PL/I at Illinois Bell. It's a good practice.
Most of the time when a hash is preferable over a tree [or other
structure] is when the symbol set is of known dimensions.  E.g. you're
writing a lexer for a parser for a language.

This is discussed in my book (buy my book or die Earthlings...ooops
commercial promo), "Build Your Own .Net Language and Compiler", Apress
2004.
Whereas in a completely encapsulated way the C Sharp program either
preallocated more than enough storage or automatically expanded the
table. We don't know what it did because we do not need to know. In
production, the only responsibility of the user of HashSet is to put
additions in try..catch error handling.

And you couldn't have a hash_add(), hash_find() function in C that
does all of this hash table manipulations [like done in C#] in C
because....?

....because IT'S BEEN DONE, and is more easily accessible in C Sharp
and Java. C'mon, guy. I need to program more examples in Wolfram and
use my computer to search for intelligent life on other planets (sure
ain't found much here ha ha).
The same algorithm(s) that C# uses could be implemented in a C library
[I'll bet they exist all over the net].

Dammit, I addressed this. Yes, it's possible that the C Sharp
libraries are in C. It's also possible that they were written in MSIL
directly, or C Sharp itself. And before there was C Sharp, C was used
to produce better things than C, just like the old gods built the
world only to be overthrown by cooler gods.
Or doing a 2 second google search

http://www.gnu.org/s/libc/manual/html_node/Hash-Search-Function.html

Wow.  That was hard.

You're neglecting the forensic problem. Not only "should" I not use
this code in commercial products, I have gnow way of gnowing that gnu
will ingneroperate.
hash_delete()

You really need to learn what functions are.

I think I do. Do u? And I have said before that this constant, lower
middle class, questioning of credentials by people who have every
reason to worry about their own is boring and disgusting.
So because you suck at software development and computer science in
general, C sucks.

No, C sucks because I started programming before it existed and saw
University of Chicago programmers do a better job, only to see C
become widely used merely because of the prestige of a campus with no
particular distinction in comp sci at the time but an upper class
reputation. I was hired by Princeton in the 1980s in part because it
was behind the curve technically.

You see, public relations machinery worked on behalf of Princeton on
the right coast and later Apple on the left coast to re-present men
who were at best tokens of a type as real inventors. Because
industrial relations had imposed the same overall contours on
technology world-wide, men were simultaneously discovering the same
things all over the world but world-wide, American military power
(established by slaughtering the people of Hiroshima and Nagasaki)
made it seem as if prestige Americans invented the computer whereas it
was being simultaneously invented in places as diverse as Nazi
Germany, Pennsylvania, and Iowa.

The result is a perpetual childishness and anxiety in which certain
inventors are celebrated as gods by public relations machinery and the
"rest of us" are encouraged to fight for scraps of status.

When you're comparing equivalent algorithms maybe you might have a
point.   Until then this is just further demonstration that there are
people in the world stupider than I.

Well, if that were true, that would make you happy. But if your goal
is to discover a vanishingly small value for a number, I suggest you
scram.
 

spinoza1111

You've been in software for 40 years (1971 was 38 years ago btw...)
and it didn't occur to you to use a binary tree?


man hsearch
man bsearch
man tsearch

Sure those are not part of C99, but they're part of GNU libc, which a
lot of people have access to.  There are standalone data management
libraries out there.
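Of those, tsearch() is the tree-based one. A minimal sketch of
inserting into and probing a tsearch() tree (POSIX, provided by
glibc); the helper name tree_demo is mine:

```c
#include <search.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Inserts the keys into a binary tree via tsearch() and returns
   whether `probe` is then found with tfind(). */
int tree_demo(const int *keys, size_t n, int probe)
{
    void *root = NULL;
    size_t i;
    int hit;

    for (i = 0; i < n; i++)
        tsearch(&keys[i], &root, cmp_int);   /* inserts if absent */

    hit = tfind(&probe, &root, cmp_int) != NULL;

    /* tdestroy() is a GNU extension; omitted here for portability,
       so this sketch deliberately leaks the tree nodes. */
    return hit;
}
```

The tree stores pointers into the caller's key array, so the keys must
outlive the tree; that is the documented tsearch() contract.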

You're being obtuse to think that people haven't worked on these
problems and that ALL developers must code their own solutions from
scratch.


rand() is useful for simple non-sequential numbers.  If you need a
statistically meaningful PRNG use one.  I'd hazard a guess C# is no
different in its use of an LCG anyway.  So I wouldn't be so apt to
praise it.
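For context on the LCG remark: the C standard's sample rand()
implementation is the textbook linear congruential generator. A sketch
using those sample constants follows; real libraries, including
glibc's rand(), are more elaborate than this:

```c
/* Textbook LCG with the example constants from the C standard's
   sample rand(); shown for illustration only. The state wraps in
   unsigned long arithmetic, so exact sequences vary by word size. */
static unsigned long lcg_state = 1;

void lcg_srand(unsigned long seed)
{
    lcg_state = seed;
}

/* Returns a pseudo-random value in [0, 32767]. */
int lcg_rand(void)
{
    lcg_state = lcg_state * 1103515245UL + 12345UL;
    return (int)((lcg_state / 65536UL) % 32768UL);
}
```

The low bits of an LCG are notoriously weak, which is why `r %
ARRAY_SIZE` in the hashing benchmarks below is a questionable way to
derive bucket indices from rand().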

You could easily write an API that had functions like

hash_add(table, key, value);
value = hash_search(table, key);
hash_remove(table, key);

Why would that be so much harder than your class based methods from
C#?

Because I didn't have to write them. I just used hashkey.

And even if I use the gnu products (which should not be ethically used
in commercial products) the forensic problems of C remain. Because it
provides zero protection against memory leaks, aliasing, and the
simultaneous usage of certain non-reentrant library functions, I would
have to test the code doing the hashing thoroughly IN ADDITION to the
code written to solve the actual problem. Whereas forensically I am
within my rights to assume hashkey works.
 

Oliver Jackson

The solution is not to use linked lists [at least not that way].
Either use a larger table or a tree.  Generally for unbounded inputs a
tree is a better idea as it has better properties on the whole (as the
symbol set size grows...).

Fair enough. The linked list might have long searches whereas the tree
search can be bounded as long as you can keep it balanced. See (you
probably have) Knuth. I have to buy a new copy if this nonsense
continues, I gave away my old copy to DeVry in a Prospero moment:

Now my Charmes are all ore-throwne,
And what strength I haue's mine owne.
Oh jeeze well then you'd better get that book back post haste.
 

Michael Foukarakis

A C and a C Sharp program were written to calculate the 64-bit value
of 19 factorial one million times, using both the iterative and
recursive methods, and to compare the results.

Here is the C code.

#include <stdio.h>
#include <time.h>

long long factorial(long long N)
{
    long long nFactorialRecursive;
    long long nFactorialIterative;
    long long Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N;
          Nwork > 1;
          Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
              N,
              nFactorialRecursive,
              nFactorialIterative);
    return nFactorialRecursive;

}

int main(void)
{
    long long N;
    long long Nfactorial;
    double dif;
    long long i;
    long long K;
    time_t start;
    time_t end;
    N = 19;
    K = 1000000;
    time (&start);
    for (i = 0; i < K; i++)
        Nfactorial = factorial(N);
    time (&end);
    dif = difftime (end,start);
    printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
           N, Nfactorial, dif, K);
    return 0; // Gee is that right?

}

Here is the C Sharp code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace N_factorial
{
    class Program
    {
        static void Main(string[] args)
        {
            long N;
            long Nfactorial = 0;
            TimeSpan dif;
            long i;
            long K;
            DateTime start;
            DateTime end;
            N = 19;
            K = 1000000;
            start = DateTime.Now;
            for (i = 0; i < K; i++)
                Nfactorial = factorial(N);
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("The factorial of " +
                 N.ToString() + " is " +
                 Nfactorial.ToString() + ": " +
                 dif.ToString() + " " +
                 "seconds to calculate " +
                 K.ToString() + " times");
            return;
        }

        static long factorial(long N)
        {
            long nFactorialRecursive;
            long nFactorialIterative;
            long Nwork;
            if (N <= 2) return N;
            for ( nFactorialIterative = 1, Nwork = N;
                  Nwork > 1;
                  Nwork-- )
                nFactorialIterative *= Nwork;
            nFactorialRecursive = N * factorial(N-1);
            if (nFactorialRecursive != nFactorialIterative)
                Console.WriteLine
                ("The iterative factorial of " +
                 N.ToString() + " " +
                 "is " +
                 nFactorialIterative.ToString() + " " +
                 "but its recursive factorial is " +
                 nFactorialRecursive.ToString());
            return nFactorialRecursive;
        }
    }

}

The C Sharp code runs in 110% of the time of the C code, which may
seem to "prove" the half-literate Urban Legend that "C is more
efficient than C Sharp or VM/bytecode languages in general, d'oh".

You don't know how to benchmark programs. D'oh (sic).
But far more significantly: the ten percent "overhead" would be
several orders of magnitude were C Sharp to be an "inefficient,
interpreted language" which many C programmers claim it is.

Show them "many C programmers".
I'm for one tired of the Urban Legends of the lower middle class,
whether in programming or politics.

I long for the day when you'll grow tired of (your own) incompetence
as well.
 

Boris S.

Fair enough.  It's not exactly what I'd call an interesting test case for
real-world code.  I'd be a lot more interested in performance of, say,
large lists or hash tables.

C code to hash several numbers, iterated to get somewhat better
performance numbers.

#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000

int main(void)
{
    int hash[ARRAY_SIZE];
    int i;
    int r;
    int j;
    int k;
    int collisions;
    time_t start;
    time_t end;
    double dif;
    int tests;
    int sessions;
    time (&start);
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (i = 0; i < ARRAY_SIZE; i++) hash[i] = 0;
        collisions = 0;
        tests = ARRAY_SIZE;
        for (i = 0; i < tests; i++)
        {
            r = rand();
            j = r % ARRAY_SIZE;
            k = j;
            if (hash[j] != 0) collisions++;
            while (hash[j] != r && hash[j] != 0)
            {
                if (j >= ARRAY_SIZE - 1) j = 0; else j++;
                if (j == k)
                {
                    printf("Table is full\n");
                    break;
                }
            }
            if (hash[j] == 0) hash[j] = r;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d collisions, %d times\n",
           dif, tests, collisions, sessions);
    return 0; // Gee is that right?

}

C Sharp code to do the same:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            HashSet<int> hash = null;
            hash = new HashSet<int>();
            Random R = new Random(1);
            int i;
            int j;
            TimeSpan dif;
            DateTime start;
            DateTime end;
            start = DateTime.Now;
            int sessions = 100000;
            int tests = 1000;
            for (i = 0; i < sessions; i++)
            {
                hash.Clear();
                for (j = 0; j < tests; j++)
                    hash.Add(R.Next(32767));
            }
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("It took C Sharp " +
                 dif.ToString() + " " +
                 "seconds to hash " +
                 tests.ToString() + " numbers " +
                 sessions.ToString() + " times");
            return; // Ha ha I don't have to worry about the shibboleth
        }
    }

}

The C Sharp code is not only smaller above, it runs dramatically
faster: 24 secs on my machine as contrasted with 35 secs for the C
code.

This is because the HashSet (also available in Java) can be written as
fast as you like in that it's a part of the extended "os". It may
itself be written in C, and please note that this does NOT mean that
YOU should write in C after all, because the HashSet was written state
of the art by studly guys and gals at Microsoft, and painstakingly
reviewed by other studly guys and gals.

And no, I don't want to visit some creepy site to get best C practice
for hashing. The fact is that the research it takes at creepy come-to-
Jesus and ammo sites to find "good" C, quite apart from its being a
waste of spirit in an expense of shame, provides no "forensic"
assurance that the creepy guy who gave you the code didn't screw up or
insert a time bomb. HashSet is available, shrink-wrapped and out of
the box, and IT RUNS TWICE AS FAST.

HashSet can even safely run as managed code but be developed as Java
bytecode or .Net MSIL as a form of safe assembler language. When it is
maintained by the vendor, you get the benefit. Whereas el Creepo's
code is all you get, period, unless you call him a year later, only to
find that his ex-wife has thrown him out the house.

C is NOT more "efficient" than C Sharp. That is not even a coherent
thing to say.

Furthermore, even the best of us screw up (as I have screwn up) when
implementing the basics. Donald E. Knuth has said that he always gets
binary search wrong the first time he recodes it. It is a mark of the
smart person to have trouble with low-level math and code; Einstein
(cf Walter Kaufman's bio) had in fact a great deal of difficulty
working out the details of relativistic mathematics and required help
from other mathematicians and physicists.

Therefore we class acts prefer C Sharp.


With a little modification here is my result:

root@varyag-laptop:~# time ./x
It took C 6.00 seconds to hash 1000 numbers with 82 collisions, 100000
times

real 0m5.468s
user 0m5.456s
sys 0m0.000s
root@varyag-laptop:~# time mono xc.exe
It took C Sharp 00:00:10.1764660 seconds to hash 1000 numbers 100000
times

real 0m10.260s
user 0m10.241s
sys 0m0.004s

This is where this trash talking ends.. ;)
 

spinoza1111

With a little modification here is my result:

root@varyag-laptop:~# time ./x
It took C 6.00 seconds to hash 1000 numbers with 82 collisions, 100000
times

real    0m5.468s
user    0m5.456s
sys     0m0.000s
root@varyag-laptop:~# time mono xc.exe
It took C Sharp 00:00:10.1764660 seconds to hash 1000 numbers 100000
times

real    0m10.260s
user    0m10.241s
sys     0m0.004s

This is where this trash talking ends.. ;)


The trash talking in my case is in all instances a response to the
annoying lower middle class habit of always transforming technical
issues into issues of personal standing, a habit that is based on
rather well-deserved feelings of inadequacy.

You forgot to disclose the modification you made, and the performance
improvement of C simply doesn't warrant using it, because C as such
presents the forensic problem that any C program can do nasty things
that a managed C Sharp program cannot.

Besides, we know that the C program performs poorly because the table
fills up, resulting in longer and longer searches for holes. I was the
first, in fact, to point this out, because I'm not here to destroy
others by concealing information unfavorable to my case.

Here are the comparative numbers when the C program, only, is
restricted to using 75% of the table (which trades bloat for speed).

It took C 13.00 seconds to hash 750 numbers in a table with available
size 1000 with 101913509 collisions, 100000 times
Total probes: 176913509: Average number of collisions: 0.576064

It took C Sharp 00:00:24.4531250 seconds to hash 1000 numbers 100000
times

Yes, folks, C sharp took twice as long.

Shame on it. Bad, Commie, Terrorist, Evil C Sharp, from the Dark Side
of the Force.

However, only an order of magnitude would be significant, because an
order of magnitude would indicate that the C Sharp code was
interpreted, which it is not, despite urban legends. With the slower
speed (and the "larger" executable and putative "code bloat", the C
Sharp executable being 5120 bytes and the C executable being...hey
wait a minute...7680 bytes, because MSIL is not prolix) one gets
freedom from The Complete Nonsense of C, which stunts the mind and
rots the spirit down to the level of the Troglodyte.
 

Moi

On Dec 30, 12:53 am, Tom St Denis <[email protected]> wrote:
And even if I use the gnu products (which should not be ethically used
in commercial products) the forensic problems of C remain. Because it

They are probably licensed under the LGPL.
provides zero protection against memory leaks, aliasing, and the
simultaneous usage of certain non-reentrant library functions, I would
have to test the code doing the hashing thoroughly IN ADDITION to the
code written to solve the actual problem. Whereas forensically I am
within my rights to assume hashkey works.

You are wrong. C offers 100% protection against all evil scenarios.
Remember: it is your code, if it is wrong you can fix it. Its behavior is
dictated in the standard.

OTOH the dotnet runtime offers no such guarantee. It does what it does.
It may or may not leak, it may or may not be reentrant, it may suddenly start
the garbage collector, without you knowing it. It may even call home to invoke
real programmers. And its internals may change at any time, without you
knowing it.

Most clc regulars will code a trivial hashtable like this in about
half an hour (and probably another to get it right); and most of us
have some existing code base (or knowledge) to borrow from.

Your time savings were in the second part: getting it right. Instead you relied
on Bill G. and Steve B. to get it right for you.

AvK
 

Tom St Denis

Fair enough. The linked list might have long searches whereas the tree

No, "fair enough" doesn't cut it: you don't get to post an
inefficient algorithm, compare it to optimized C#, draw conclusions,
and then, when told, basically say "fair enough." Admit you were
wrong.
...because IT'S BEEN DONE, and is more easily accessible in C Sharp
and Java. C'mon, guy. I need to program more examples in Wolfram and
use my computer to search for intelligent life on other planets (sure
ain't found much here ha ha).

It's been done in C too though. Did you miss the part where I pointed
out that there are libraries for C out there that do trees, heaps,
queues, etc?

You seem to assume that all C developers start with no external APIs
and write everything from scratch themselves. That's both naive and
ignorant.
Dammit, I addressed this. Yes, it's possible that the C Sharp
libraries are in C. It's also possible that they were written in MSIL
directly, or C Sharp itself. And before there was C Sharp, C was used
to produce better things than C, just like the old gods built the
world only to be overthrown by cooler gods.

Ok, but what I'm trying to say is your argument that C sucks because
it doesn't /come with/ a hash library is dishonest. Finding a hash
library for C is not hard, and they're not hard to use. You were
being dishonest when you claimed that only C# provides such
functionality.
You're neglecting the forensic problem. Not only "should" I not use
this code in commercial products, I have gnow way of gnowing that gnu
will ingneroperate.

Ok, but there are other libraries out there. Point is I found that
with a 2 second google search. So for you to claim that there are no
suitably licensed data management libraries out there is lazy and
dishonest.
I think I do. Do u? And I have said before that this constant, lower
middle class, questioning of credentials by people who have every
reason to worry about their own is boring and disgusting.

You keep claiming that people have to embed 100s of messy C lines in
all of their code to get anything done in C. First it was messy
string functions, now it's hashes, what next malloc and free? My
point was you're missing the part where you put those algorithms in
functions that users can then call without embedding 100s of lines of
messy C all over their programs.
No, C sucks because I started programming before it existed and saw
University of Chicago programmers do a better job, only to see C
become widely used merely because of the prestige of a campus with no
particular distinction in comp sci at the time but an upper class
reputation. I was hired by Princeton in the 1980s in part because it
was behind the curve technically.

You keep claiming that you're this "old timer" programmer from back in
the day, like that matters. Even if it were true, that doesn't
preclude the possibility that even with all that time to get
experience you STILL have no idea what you're talking about.

If you want to impress me with your credentials you'd stop spouting
obvious lies, better yet, you'd stop trolling usenet. Better yet,
you'd post with your real name...

Tom
 

Tom St Denis

Because I didn't have to write them. I just used hashkey.

And even if I use the gnu products (which should not be ethically used
in commercial products) the forensic problems of C remain. Because it
provides zero protection against memory leaks, aliasing, and the
simultaneous usage of certain non-reentrant library functions, I would
have to test the code doing the hashing thoroughly IN ADDITION to the
code written to solve the actual problem. Whereas forensically I am
within my rights to assume hashkey works.

All functions in libc which aren't thread-safe are marked as such;
it turns out the *_r() variants ARE thread-safe. And as "Moi" pointed
out, GNU LIBC is licensed under the LGPL, which doesn't attach a
license to your linked image.
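The *_r() convention is easiest to see with strtok() versus
strtok_r(): the reentrant variant keeps its parse position in a
caller-supplied pointer instead of hidden static state, so two
tokenizations can be in flight at once. A small sketch (the helper
name count_tokens is mine):

```c
#include <string.h>

/* Counts whitespace-separated tokens using strtok_r() (POSIX).
   The cursor lives in `save`, not in static storage, so this
   function is safe to call from concurrent or nested contexts,
   unlike plain strtok(). Note it modifies the input buffer. */
int count_tokens(char *s)
{
    char *save = NULL;
    char *tok;
    int n = 0;

    for (tok = strtok_r(s, " \t", &save);
         tok != NULL;
         tok = strtok_r(NULL, " \t", &save))
        n++;
    return n;
}
```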

And where you get this idea that C# or Java even don't have memory
management issues... Have you never seen a tomcat server taking 5GB of
ram to handle a few hundred connections? Just because there IS a GC in
Java doesn't mean it's used effectively. Point is, if you really did
have "40 years of experience" (impressive, since you stated this began
38 years ago; how did you make up the two years?) you'd be comfortable
with pointers, heaps, array indices, etc... If not from C, from any
of the dozens of languages that came out around/before it.

With all your experience though you don't seem to get testing and
verification. In an ideal model you assume libc is working and reduce
things to that. Then you prove your libraries are working, and reduce
things to them, and so on. Under your line of thinking you'd have to
go to the customer's site and prove that electrons are moving on
mains, that the computer fans spin clockwise, and so on. NO
ASSUMPTIONS!

Tom
 

Tom St Denis

C# comes WITH a tuned hash library whereas the libraries that "come
with" C in the sense of being visible are a joke: snprintf being an
example.

I don't get what your point is. There is a ton of 3rd party Java/C#/
PHP/etc code out there. If developers only stayed with what the core
C# provided it'd be no better [in that regard].
Many arrays in fact fill up in production. And don't waste my time: I
raised this issue right after I posted the original code, pointing out
that the average probe time would go to shit and describing how to fix
it...the point being that none of this was necessary with C Sharp.

No, all this proves is after analyzing a problem that YOU invented,
YOU came up with an inappropriately inferior solution. It proves that
you don't really know what you're talking about, or at least are not
honest enough to make a point properly.
Talk about bloatware: on the one hand one guy complains that C Sharp
executables are too large, here a C fan suggests wasting memory to
make a crude algorithm perform. It might be a good idea in certain
circumstances, but it fails to demonstrate that C doesn't suck. The
linked list that I suggested (but haven't implemented in this thread)
is better.

That's how you improve hashing though. More memory. If you have
unbounded inputs use a tree and learn how to splay. OH MY GOD,
COMPUTER SCIENCE!!!

Tom
 

Tom St Denis

The trash talking in my case is in all instances a response to the
annoying lower middle class habit of always transforming technical
issues into issues of personal standing, a habit that is based on
rather well-deserved feelings of inadequacy.

It'd help if you stopped trying to explain away your ignorance in
terms of other people's failures.
You forgot to disclose the modification you made, and the performance
improvement of C simply doesn't warrant using it, because C as such
presents the forensic problem that any C program can do nasty things
that a managed C Sharp program cannot.

Dude, learn how to design testable and verifyable software.
Besides, we know that the C program performs poorly because the table
fills up, resulting in longer and longer searches for holes. I was the
first, in fact, to point this out, because I'm not here to destroy
others by concealing information unfavorable to my case.

No, *YOUR* program performed poorly because you designed an
inappropriate solution. If you wrote the same algorithm in C# it'd be
just as inefficient.
However, only an order of magnitude would be significant, because an
order of magnitude would indicate that the C Sharp code was
interpreted, which it is not despite urban legends. With the slower
speed (and the "larger" executable and putative "code bloat", the C
Sharp executable being 5120 bytes and the C executable being...hey
wait a minute...7680 bytes (because MSIL is not prolix) one gets
freedom from The Complete Nonsense of C which stunts the mind and rots
the spirit down to the level of the Troglodyte.

Nobody here is saying that C# is not executed as bytecode. And if
they are they're wrong. I also wouldn't compare executable sizes
unless you also consider the C# DLL baggage [on top of the C runtime
ironically..]

Here's a helpful hint: If you're trying to make a point, consider it
from all angles first. If three seconds of thinking can shoot down
every point you're trying to make you're in the wrong business. Maybe
trolling is not for you, have you considered travel instead?

Tom
 
S

spinoza1111

It'd help if you stopped trying to explain away your ignorance in
terms of other people's failures.

The process has consistently started with "regs" here questioning my
bona fides, since I write markedly better (more consistently excellent
grammar and spelling, wider vocabulary, more knowledge) than the regs.
Their defensive response has been to start attacks, and newbies and
fools read the attacks only. This causes them to join what constitutes
a cybernetic mob.

In fact, people here are mostly failures whose programme is to show
that other people are liars and failures. For example, recently,
Richard Heathfield, the editor of an unsuccessful book on C from a
publisher with a bad reputation, claimed I'd never posted to the well-
regarded and tightly moderated group comp.risks. The charge was a
malicious lie and libel under UK law and refutable with laughable
ease, and it proved that Heathfield is both stupid and malicious.

Whether you like it or not, and whether or not you've been
oversocialized to not defend yourself, I defend myself, and this
creates the illusion that I'm making trouble.

Do your homework.
Dude, learn how to design testable and verifiable software.

Dude, learn how to spell. The first step is to select a quality tool.
C is not a quality tool since it exists in incompatible forms and was
never properly designed.
No, *YOUR* program performed poorly because you designed an
inappropriate solution.  If you wrote the same algorithm in C# it'd be
just as inefficient.

The Lower Middle Class Parent inhabits too many posters here, and he
transforms all speech into the minatory register, and nothing gets
learnt.

But that's a good idea. I shall indeed do so ASAP. We need to see how
C Sharp performs for the ignorant programmer who doesn't know HashSet.
My prediction is poorly, for as I was the first to say in this thread,
the algorithm used in the C program slows down as the table fills.
However, only an order of magnitude would be significant, because an
order of magnitude would indicate that the C Sharp code was
interpreted, which it is not, despite urban legends. With the slower
speed (and the "larger" executable and putative "code bloat": the C
Sharp executable is 5120 bytes and the C executable is...hey wait a
minute...7680 bytes, because MSIL is not prolix) one gets freedom
from The Complete Nonsense of C, which stunts the mind and rots the
spirit down to the level of the Troglodyte.

Nobody here is saying that C# is not executed as bytecode.  And if
they are they're wrong.  I also wouldn't compare executable sizes
unless you also consider the C# DLL baggage [on top of the C runtime
ironically..]

We don't know whether the .Net runtime is in C, since the particular
implementation I use is closed source. But there is no necessary
connection between C and the .Net runtime, or Windows and .Net.
The .Net runtime can be written in anything you like, such as
unmanaged C Sharp.
Here's a helpful hint:  If you're trying to make a point, consider it
from all angles first.  If three seconds of thinking can shoot down
every point you're trying to make you're in the wrong business.  Maybe
trolling is not for you, have you considered travel instead?

The next step, from the minatory register, for the Lower Middle Class
Parent, is the abstract recommendation that one shape up. I have in
this thread considered things from different points of view, for I was
the first to note that the C algorithm slows down as the table fills
up (and to find a solution).

Here's a helpful hint: shove your lectures up your ass, and confine
yourself to on-topic technical remarks. Use Ben Bacarisse's posts as
an example. He's no friend of mine, but he focuses on technical points
almost exclusively and is very intelligent as a techie.
 
S

spinoza1111

They are probably licensed under the LGPL.


You are wrong. C offers 100% protection against all evil scenarios.
Poppycock.

Remember: it is your code, if it is wrong you can fix it. Its behavior is

No, it belongs to the organization paying your salary.

"Those Bolshevists are trying to take our factories?"
"Your factories? You don't even own the smoke!"

- Industrial Workers of the World cartoon ca. 1915

The fact is that in many environments, the suits don't give
programmers enough time to do quality assurance. This means that using
such an unconstrained language as C is professional malpractice.

The suits encourage a form of self-defeating programmer machismo such
that no programmer ever admits to not having enough time. Instead,
based on the macho culture, he'll destroy his health working extra,
unpaid hours to "prove" he's a "man", and the suits laugh all the way
to the bank...since he's lowered his salary.


dictated in the standard.

OTOH the dotnet runtime offers no such guarantee. It does what it does.
It may or may not leak, it may or may not be reentrant, it may suddenly start
the garbage collector, without you knowing it. It may even call home to invoke
real programmers. And its internals may change at any time, without you
knowing it.

This is all true, but highly unlikely.
Most clc regulars will code a trivial hashtable like this in about half an hour
(and probably another to get it right); and most of us have some existing code
base (or knowledge) to draw from.

In fact, the regs you so admire almost never post new code, for
they're afraid of making errors. Keith Thompson, Seebs and Heathfield
specialize in enabling campaigns of personal destruction against
people who actually accomplish anything, and this started with Seebs'
adolescent mockery of Schildt.

Whereas I've posted C code despite the fact that I think C sucks, and
last used it at the time I was asked at Princeton to assist a Nobel
prize winner with C.

[Damn straight I'll repeat myself about Nash. I find the same lies
about Schildt and myself year in and year out, and this will continue
until it ends and/or Heathfield loses his house in a libel suit.]
Your time savings were in the second part: getting it right. Instead you relied
on Bill G. and Steve B. to get it right for you.

You're lying. I was the first in this thread to show how the C code
slows down as the table fills up, since I first read of the algorithm
in 1972 and first implemented it in 1976 in PL/I. I wrote the code at
the start of the week before boarding the ferry to work. On the ferry
I realized that I would have to explain a fact about performance that
isn't obvious, so I plugged in my Vodafone thingie and got back on
air. Nobody else had commented on the issue at the time. You're lying,
punk.
 
S

spinoza1111

No, "fair enough" doesn't cut it: you don't get to post an inefficient
algorithm, compare it to optimized C#, and draw conclusions, then when
called on it just say "fair enough."  Admit you were wrong.


It's been done in C too though.  Did you miss the part where I pointed
out that there are libraries for C out there that do trees, heaps,
queues, etc?

I addressed this point before you mentioned it. I said that I don't
want to use virtual slave labor (which is what open source is) at GNU
nor do I want to go to some creepy site.
You seem to assume that all C developers start with no external APIs
and write everything from scratch themselves.  That's both naive and
ignorant.

False. I posted the example to show how a real problem is typically
solved. In C, the default is to hack new code. In C Sharp the default
is to use a tool. The fact is most C programmers are deficient at
reuse.

Ok, but what I'm trying to say is your argument that C sucks because
it doesn't /come with/ a hash library is dishonest.  Finding a hash
library for C is not hard, and they're not hard to use.  You were
being dishonest when you claimed that only C# provides such
functionality.

The problem isn't that C doesn't come with a hash library. The problem
is that it comes with too many.

There's no way (except perhaps consulting some Fat Bastard at your
little shop, or one of the regs here, such as the pathological liar
Heathfield) of telling which library actually works, and this is a
serious matter, because statistically, C programs are less likely to
work than C Sharp independent of programmer skill: this is a
mathematical result of the ability to alias and the fact that other
people change "your" code.

Ok, but there are other libraries out there.  Point is I found that
with a 2 second google search.  So for you to claim that there are no
suitably licensed data management libraries out there is lazy and
dishonest.

You're searching toxic waste.
You keep claiming that people have to embed 100s of messy C lines in
all of their code to get anything done in C.  First it was messy
string functions, now it's hashes, what next malloc and free?  My
point was you're missing the part where you put those algorithms in
functions that users can then call without embedding 100s of lines of
messy C all over their programs.

...only to find, for example, that simultaneous calls fail because
global data cannot be hidden properly in C. There's no static nesting
whatsoever, have you noticed? Even PL/I had this!

The result? If the library function has a state it cannot be called by
a handler handling its failure. This is well known for malloc() but
unknown and unpredictable for any arbitrary "solution" recommended by
some Fat Bastard, recommended by some pathological liar, or found in a
Google search.

You keep claiming that you're this "old timer" programmer from back in
the day, like that matters.  Even if it were true, that doesn't
preclude the possibility that even with all that time to get
experience you STILL have no idea what you're talking about.

It is true that corporate life is an eternal childhood. However, I
also worked independent of the corporation, for example as the author
of Build Your Own .Net Compiler (buy it now or I will kill this dog)
and the programmer of its (26000 line) exemplary compiler.
If you want to impress me with your credentials you'd stop spouting
obvious lies, better yet, you'd stop trolling usenet.  Better yet,
you'd post with your real name...

It is well known that I nyah ha ha am Bnarg, the Ruler of the Galaxy,
posting from my Lair on the Planet Gazumbo.

Seriously, it is well known to the regs here that I am Edward G.
Nilges.
 
T

Tom St Denis

The process has consistently started with "regs" here questioning my
bona fides, since I write markedly better (more consistently excellent
grammar and spelling, wider vocabulary, more knowledge) than the regs.
Their defensive response has been to start attacks, and newbies and
fools read the attacks only. This causes them to join what constitutes
a cybernetic mob.

I don't care about the "regs," in this thread I only care about what
YOU are trying to pass off as knowledge.
Whether you like it or not, and whether or not you've been
oversocialized to not defend yourself, I defend myself, and this
creates the illusion that I'm making trouble.

Defend yourself by being right then. If you're trying to make
arguments actually make sure they're sound and well reasoned instead
of just shotgunning stupidity and seeing what sticks.
Dude, learn how to spell. The first step is to select a quality tool.
C is not a quality tool since it exists in incompatible forms and was
never properly designed.

Which is ironic given how much of your day-to-day life is probably the
result of C programs...

So far it seems to be you saying C sucks, and nobody caring. I don't
care if you program in C or not. I'm only replying here because you
posted some nonsense comparison and are trying to pass it off as
science. You're a fraud.
But that's a good idea. I shall indeed do so ASAP. We need to see how
C Sharp performs for the ignorant programmer who doesn't know HashSet.
My prediction is poorly, for as I was the first to say in this thread,
the algorithm used in the C program slows down as the table fills.

So why bother the comparison? If you knew your algorithm in your C
program was not comparable why bother?

That'd be like comparing bubble sort in C# to qsort in C ...
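
Tom's analogy can be made concrete. Both of the following sort correctly in C; only the algorithm differs, which is the variable the original comparison failed to control for. (A hypothetical sketch; `cmp_int` and `bubble_sort` are names invented here, not anything from the thread's programs.)

```c
#include <stdlib.h>

/* Comparator for qsort: total order on ints without overflow. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n^2) bubble sort, for contrast with the O(n log n) qsort call.
 * Same results; wildly different running time on large inputs. */
static void bubble_sort(int *v, size_t n)
{
    size_t i, j;
    for (i = 0; i + 1 < n; i++)
        for (j = 0; j + 1 < n - i; j++)
            if (v[j] > v[j + 1]) {
                int t = v[j];
                v[j] = v[j + 1];
                v[j + 1] = t;
            }
}
```

Timing either against an optimized routine in another language tells you about the algorithm chosen, not the language.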
We don't know whether the .Net runtime  is in C, since the particular
implementation I use is closed source. But there is no necessary
connection between C and the .Net runtime, or Windows and .Net.
The .Net runtime can be written in anything you like, such as
unmanaged C Sharp.

Well the Windows kernel has a C interface for the syscalls. So at
some point something has to call that. So chances are good that the
C# runtime is based on top of the C runtime.

Tom
 
T

Tom St Denis

I addressed this point before you mentioned it. I said that I don't
want to use virtual slave labor (which is what open source is) at GNU
nor do I want to go to some creepy site.

OSS is hardly slave labour. Many people in that scene get paid for
their work. It's just a community effort. Of course if this is how
you play ostrich then so be it.
False. I posted the example to show how a real problem is typically
solved. In C, the default is to hack new code. In C Sharp the default
is to use a tool. The fact is most C programmers are deficient at
reuse.

Citation needed.
The problem isn't that C doesn't come with a hash library. The problem
is that it comes with too many.

So first it's that there are no solutions in C to the problem. Now
there are too many?
There's no way (except perhaps consulting some Fat Bastard at your
little shop, or one of the regs here, such as the pathological liar
Heathfield) of telling which library actually works, and this is a
serious matter, because statistically, C programs are less likely to
work than C Sharp independent of programmer skill: this is a
mathematical result of the ability to alias and the fact that other
people change "your" code.

I don't get what your rant is. By your logic we shouldn't trust the
code you produced since we didn't write it.

Also, if I import version X of an OSS library then NOBODY is changing
it on me...
...only to find for example that simultaneous calls fail because
global data cannot be hidden properly in C. There's no static nesting
whatsoever, have you noticed? Even PL/I had this!

Um, the stack of the threads is where you typically put cheap per-
thread data. Otherwise you allocate it off the heap. In the case of
the *_r() GNU libc functions they store any transient data in the
structure you pass it. That's how they achieve thread safety.

It's like in OOP where you have a fresh instance of a class per
thread. The class has public/private data members that are unique to
the instance. OMG C++ HAS NO THREAD SAFETY!!!
The result? If the library function has a state it cannot be called by
a handler handling its failure. This is well known for malloc() but
unknown and unpredictable for any arbitrary "solution" recommended by
some Fat Bastard, recommended by some pathological liar, or found in a
Google search.

malloc() is thread safe in GNU libc. It can fail/succeed in multiple
threads simultaneously. What's your point?
Seriously, it is well known to the regs here that I am Edward G.
Nilges.

I didn't know that (nor do I assume to know that, being the victim of
a joe-job myself I don't trust anything most people say w.r.t.
identities unless they prove it through other means). That being
said, why not reserve your posts for more positive or constructive
contributions instead?

Tom
 
T

Tom St Denis

In fact, the regs you so admire almost never post new code, for
they're afraid of making errors.

You say that like it's a bad thing. Real developers SHOULD be afraid
of "winging" it. Software [no matter the language] is hard to get
right all the time. A smart developer would re-use as much as
possible, such that if you had asked me to write, say a routine to
store GZIP'ed data, I'd call libz, I wouldn't re-invent a deflate/gzip
codec on the fly just to boast of what a wicked C coder I am...

In this case, I would have googled for a hash API and written a program
based on it. The fact that GNU libc provides one as part of the
standard library means I would have used it.

The fact that these concepts are foreign to you just serves to prove
you know nothing of software development.

Tom
 
S

spinoza1111

In fact, the regs you so admire almost never post new code, for
they're afraid of making errors.

You say that like it's a bad thing.  Real developers SHOULD be afraid
of "winging" it.  Software [no matter the language] is hard to get
right all the time.  A smart developer would re-use as much as
possible, such that if you had asked me to write, say a routine to
store GZIP'ed data, I'd call libz, I wouldn't re-invent a deflate/gzip
codec on the fly just to boast of what a wicked C coder I am...

That's true. The exception: teachers, who have to "reinvent the wheel"
to explain how hash functions work or in Schildt's case, how C works.
And of course they're going to get it wrong. The best use the getting-
it-wrong to help students learn, as in the case where the teacher
allows the student to correct him, or find something he missed. The
most famous example being Knuth's checks to people who find bugs in
his books...hmm, maybe Seebach is mad at Herb for not cutting him a
check.

In my case I needed to illustrate an important fact about C Sharp:
that it's not interpreted. If it were, its execution time would be an
order of magnitude (ten times) higher. It wasn't.

I also intended to show that C Sharp avoids reinventing the wheel by
providing a canonical set of libraries that function in a truly
encapsulated way. In C Sharp, you just don't have to worry about re-
entrance UNLESS (and this is the only exception that comes to mind)
the routine you call uses disk memory for some silly reason, say
writing and reading to the infamous "Windows Registry". Whereas in C,
malloc() and other routines aren't re-entrant.

I've called these "forensic" concerns (my term of art) because they
have to do with things that you have to worry about, for which there
is no definable deliverable: you can deliver C code but you cannot
provide a forensic assurance that it's completely exposure free.

This is true even if you're as competent as the regs here fantasize
themselves to be, because truly great programmers in many cases make
more mistakes as part of being more productive, and more creative.
Knuth famously claimed that every time he does binary search he makes
the same mistake, and John von Neumann bragged about his errors (after
fixing them). Whereas here, technicians wait in Adorno's "fear and
shame" for their errors to be detected.

In this toxic environment, anyone who tries to teach or learn is
exposed to thuggish conduct from people who cannot teach and learn by
rote.
In this case, I would have googled for a hash API and written a program
based on it.  The fact that GNU libc provides one as part of the
standard library means I would have used it.

The fact that these concepts are foreign to you just serves to prove
you know nothing of software development.

Guess not, Tommy boy. In fact, I've seen software development "mature"
to the point where nothing gets done because everybody equates
rationality with the ability to dominate conversations, if necessary
by trashing other people. I want very little to do with this type of
software development, which is why I left the field for English
teaching after a successful career that at thirty years was about
three times longer than the average.
 
T

Tom St Denis

That's true. The exception: teachers, who have to "reinvent the wheel"
to explain how hash functions work or in Schildt's case, how C works.
And of course they're going to get it wrong. The best use the getting-
it-wrong to help students learn, as in the case where the teacher
allows the student to correct him, or find something he missed. The
most famous example being Knuth's checks to people who find bugs in
his books...hmm, maybe Seebach is mad at Herb for not cutting him a
check.

USENET postings != published book. I'm not going to write from memory
the AES encryption algorithm every time a student asks about it in
sci.crypt. I'll answer questions about parts of it, and point them to
source if they need a reference.

Why would anyone here repeatedly post complicated and time consuming
code for every single newcomer who comes by to ask? If you want to
learn about hashing and data management, buy a book on the subject and/
or google.
In my case I needed to illustrate an important fact about C Sharp:
that it's not interpreted. If it were, its execution time would be an
order of magnitude (ten times) higher. It wasn't.

Depends on what you're doing. If your application is highly syscall
dependent [e.g. reading large files from disk] an interpreted program
might not be much slower than compiled. In this case though, all you
showed is that if you use the wrong algorithm in C you get slower
results than the right algorithm in C#.
I also intended to show that C Sharp avoids reinventing the wheel by
providing a canonical set of libraries that function in a truly
encapsulated way. In C Sharp, you just don't have to worry about re-
entrance UNLESS (and this is the only exception that comes to mind)
the routine you call uses disk memory for some silly reason, say
writing and reading to the infamous "Windows Registry". Whereas in C,
malloc() and other routines aren't re-entrant.

C# does ***NOT*** include every possible routine a developer would
ever need. If it did there wouldn't be third party C# libraries out
there. So your point is not only wrong and based on false pretenses,
but ignorant and naive.

Also as I posted elsewhere malloc() *is* thread safe in GNU libc.
It's even thread safe in the windows C runtime libraries.
I've called these "forensic" concerns (my term of art) because they
have to do with things that you have to worry about, for which there
is no definable deliverable: you can deliver C code but you cannot
provide a forensic assurance that it's completely exposure free.

You need to learn how to write testable and verifiable software.
This is true even if you're as competent as the regs here fantasize
themselves to be, because truly great programmers in many cases make
more mistakes as part of being more productive, and more creative.

Which is why they don't wing it in USENET posts, instead they reduce
things to proven quantities. If I spend the time and energy to test
and verify a hash library, I can then reduce my correctness to the
fact that I've already checked the library. I don't need to re-verify
everything.
In this toxic environment, anyone who tries to teach or learn is
exposed to thuggish conduct from people who cannot teach and learn by
rote.

You can't teach because you lack the humility to admit when you're
wrong.
Guess not, Tommy boy. In fact, I've seen software development "mature"
to the point where nothing gets done because everybody equates
rationality with the ability to dominate conversations, if necessary
by trashing other people. I want very little to do with this type of
software development, which is why I left the field for English
teaching after a successful career that at thirty years was about
three times longer than the average.

Resorting to insults won't win you any points. All you've "proven" in
this thread is you fundamentally don't understand computer science,
that you don't know C that well, that you can't teach, that you're not
that familiar with software engineering, and that you're genuinely not
a nice person.

Tom
 
K

Kenny McCormack

Tom St Denis said:
Resorting to insults won't win you any points. All you've "proven" in
this thread is you fundamentally don't understand computer science,
that you don't know C that well, that you can't teach, that you're not
that familiar with software engineering, and that you're genuinely not
a nice person.

It's so funny - all of those things are true to a tee about you.

Projecting much?
 
