Why C Is Not My Favourite Programming Language

  • Thread starter evolnet.regular

Francis Glassborow

CBFalconer said:
No? If your C system makes an executable file somewhere in the
25,000 to 250,000 byte range this means a bloat factor of between
1000 and 10,000 is involved.
No, not a factor (or at least not one that applies to any other
program), but largely a fixed overhead paid by any C program, imposed
by the implementation for that environment.
 

tomstdenis

CBFalconer said:
No? If your C system makes an executable file somewhere in the
25,000 to 250,000 byte range this means a bloat factor of between
1000 and 10,000 is involved. This shows why a Z80 with 64k of
memory could keep up for so long. Today the hardware is something
in that 1000 to 10,000 range faster than my 2 MHz Z80 was, and
memory is about 1000 times larger, with virtual memory about 10,000
times larger. However the bloat and tail-chasing have kept
performance in the same general ball park.

Um...

With or without debug symbols?
With or without relocation tables?
With or without optimizations?
With or without proper code factorization?

If you compile with "-g3 -O3 -funroll-all-loops" and bitch about
"bloat", then you've got your head up your ass.

A stub empty main function on ELF-i386 with the latest GNU tools is
3.1 KiB. Just the object is 729 bytes and the actual code amounts to 35
bytes. [This is with no optimizations or other flags.]
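
For reference, the stub under discussion presumably looks something
like this (the post shows only the disassembly, not the source):

/* A guess at the empty-main stub being measured -- not shown in the
   original post. */
int main(void)
{
    return 0;
}

Something along the lines of "gcc -c test.c" followed by
"objcopy -O binary -j .text test.o test.bin" would give a raw test.bin
for ndisasm to chew on, though the post doesn't say exactly how the 35
bytes were extracted.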

The 35 bytes are

laptop ~ # ndisasm -b 32 test.bin
00000000  55                push ebp
00000001  89E5              mov ebp,esp
00000003  83EC08            sub esp,byte +0x8
00000006  83E4F0            and esp,byte -0x10
00000009  B800000000        mov eax,0x0
0000000E  83C00F            add eax,byte +0xf
00000011  83C00F            add eax,byte +0xf
00000014  C1E804            shr eax,0x4
00000017  C1E004            shl eax,0x4
0000001A  29C4              sub esp,eax
0000001C  B800000000        mov eax,0x0
00000021  C9                leave
00000022  C3                ret

Which is a long, drawn-out process to make a stack frame. With -O3 and
no stack frame we get a 16-byte routine:

00000000  55                push ebp
00000001  31C0              xor eax,eax
00000003  89E5              mov ebp,esp
00000005  83EC08            sub esp,byte +0x8
00000008  83E4F0            and esp,byte -0x10
0000000B  83EC10            sub esp,byte +0x10
0000000E  C9                leave
0000000F  C3                ret

Which, despite saying -fomit-frame-pointer, still actually makes a
frame... I guess GCC can't optimize an empty function too well ;-)

My point is that the images you are bitching about contain a lot more
than just code, and they're not hand-tuned to a given task. Libraries
have to be fairly robust to get widespread use. So the little "tricks"
that embedded developers use [and I know, I used to be one] to shave a
byte here and there aren't very practical when you're writing code that
a thousand different people will call upon.

Tom
 

tomstdenis

I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here are my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.

Python and Perl are not as fast as compiled C. ... and they never will
be.
No string type
--------------

You find that a problem? Personally I don't have that many problems
working with strings. When I need some sort of functionality, I write a
function and use it. Not exactly challenging.
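
As a minimal sketch of the kind of write-once helper Tom means (this
function is mine for illustration, not from the original post):

#include <stdlib.h>
#include <string.h>

/* Hypothetical example: duplicate a C string on the heap. Write it
   once, reuse it everywhere. */
static char *dup_string(const char *s)
{
    size_t len = strlen(s) + 1;      /* include the terminating '\0' */
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;                     /* caller releases with free() */
}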
Functions for insignificant operations

Oh no, you have to include a header. END OF THE WORLD!

And Perl doesn't have "require"???
The encouragement of buffer overflows
-------------------------------------

This is exactly like saying kitchen knives encourage stabbings.
Functions which encourage buffer overflows
------------------------------------------

They're unsafe, but they don't encourage jack squat. Any developer
worth their beans wouldn't use these functions, so they won't get
linked into your application [at compile time or at runtime]. The only
downside is that they are available at all.
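
For illustration only (not from the original post), this is what
sticking to the bounded alternatives looks like; fgets() and snprintf()
both take the destination size, so they cannot run past the buffer:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[32];
    char greeting[64];

    /* fgets() instead of gets(): reads at most sizeof name - 1 bytes. */
    if (fgets(name, sizeof name, stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';   /* drop the trailing newline */

        /* snprintf() instead of sprintf(): output is truncated, never
           written past the end of greeting. */
        snprintf(greeting, sizeof greeting, "Hello, %s!", name);
        puts(greeting);
    }
    return 0;
}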
You see, even if you're not writing to any memory, you can still access
memory you're not supposed to. C can't be bothered to keep track of the
ends of strings; the end of a string is indicated by a null '\0'
character. All fine, right? Well, some functions in your C library,
such as strlen(), perhaps, will just run off the end of a 'string' if
it doesn't have a null in it. What if you're using a binary string?
Careless programming this may be, but we all make mistakes and so the
language authors have to take some responsibility for being so
intolerant.

It gives you control over the machine. Unorthodox things like that
were used for VGA programming back in the 90s [negative addressing
anyone?]. Just because you're too lazy to actually bounds check your
code doesn't mean others are.
No builtin boolean type
-----------------------
If you don't believe me, just watch:

$ cat > test.c
int main(void)
{
    bool b;
    return 0;
}

$ gcc -ansi -pedantic -Wall -W test.c
test.c: In function 'main':
test.c:3: 'bool' undeclared (first use in this function)

Not until the 1999 ISO C standard were we finally able to use 'bool' as
a data type. But guess what? It's implemented as a macro and one
actually has to include a header file to be able to use it!

So what? int == register size. It makes sense to use an int for
flags. If you're tight on memory, use a bitfield, but that will usually
end up costing more code than you save in RAM.
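
For what it's worth, the test.c above compiles cleanly once
<stdbool.h> is pulled in and the compiler is told to use C99 -- a quick
sketch, not part of the original exchange:

/* Compile with: gcc -std=c99 -pedantic -Wall -W test.c */
#include <stdbool.h>

int main(void)
{
    bool b = true;      /* bool, true and false come from <stdbool.h> */
    return b ? 0 : 1;
}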
High-level or low-level?
------------------------

On the one hand, we have the fact that there is no string type and
little automatic memory management, implying a low-level language. On
the other hand, we have a mass of library functions, a preprocessor and
a plethora of other things which imply a high-level language. C tries
to be both, and as a result spreads itself too thinly.

Generally, any significant program requires support libraries: a
networking lib, a threading lib, a crypto lib, ... The standard C
library only provides the core essentials to get off the ground [I/O,
string ops, heap ops, and a few things like bsearch and qsort].
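
Those core essentials go further than they might sound; as a quick
sketch (mine, not Tom's), sorting an array with the standard library's
qsort() needs nothing beyond <stdlib.h>:

#include <stdio.h>
#include <stdlib.h>

/* Comparison callback for qsort(): must return <0, 0 or >0. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);    /* avoids the overflow risk of x - y */
}

int main(void)
{
    int v[] = { 42, 7, 19, 3, 88 };
    size_t i, n = sizeof v / sizeof v[0];

    qsort(v, n, sizeof v[0], cmp_int);
    for (i = 0; i < n; i++)
        printf("%d ", v[i]);
    putchar('\n');
    return 0;
}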
The great thing about this is that when C is lacking a genuinely useful
feature, such as reasonably strong data typing, the excuse "C's a
low-level language" can always be used, functioning as a perfect
'reason' for C to remain unhelpfully and fatally sparse.

No, the excuse "just write one and use it" comes to mind. I mean, I
don't write an AES routine every time I need a cipher. I wrote
[well... ported] one long ago and I just use it whenever I want. Why is
that so hard?
The original intention for C was for it to be a portable assembly
language for writing UNIX. Unfortunately, from its very inception C
has

Wrong. The original use for C was as a portable language that was
low-level enough to map fairly directly to CPU instructions. That's why
you don't see string ops, classes, OOP, etc.
Integer overflow without warning
--------------------------------

Self-explanatory. One minute you have a fifteen-digit number, then try
to double or triple it and - boom - its value is suddenly
-234891234890892 or something similar. Stupid, stupid, stupid. How hard
would it have been to give a warning or overflow error or even just
reset the variable to zero?

It's possible to do this at runtime; the resulting code will just be
much slower. If you know you're going to need big numbers, just use a
bignum lib like GMP or LibTomMath.
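
As a hedged sketch of the bignum route (my example, not Tom's),
doubling the OP's fifteen-digit number with GMP instead of letting it
wrap:

#include <stdio.h>
#include <gmp.h>     /* GNU multiple-precision arithmetic library */

int main(void)
{
    mpz_t n;

    mpz_init_set_str(n, "123456789012345", 10);  /* a fifteen-digit number */
    mpz_mul_ui(n, n, 2);                         /* n = n * 2, exactly */
    gmp_printf("%Zd\n", n);                      /* prints 246913578024690 */
    mpz_clear(n);
    return 0;
}

Link with -lgmp; the price is that every operation is now a library
call rather than a single machine instruction, which is exactly the
slowdown Tom is talking about.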
This is widely known as bad practice. Most competent developers
acknowledge that silently ignoring an error is a bad attitude to have;
this is especially true for such a commonly used language as C.

Overflows aren't always errors, e.g. a rotate:

x = ((x << 3) | (x >> 29)) & 0xFFFFFFFF;

Again, this is about control. If I couldn't do this I would have to write

x = (((x & something) << 3) | (x >> 29));

but if the rotate count is dynamic [say RC5] then the "something" has
to be calculated on the fly [read: slow].
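
A minimal sketch of the dynamic-count rotate Tom alludes to (my
wording, not his), relying on the same wrap-around behaviour of
unsigned 32-bit arithmetic:

#include <stdint.h>

/* Rotate x left by count bits; count may be computed at runtime, as in
   RC5. */
static uint32_t rotl32(uint32_t x, unsigned count)
{
    count &= 31;                /* keep the shift amount in range */
    if (count == 0)             /* avoid the undefined x >> 32 */
        return x;
    return (x << count) | (x >> (32 - count));
}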
Portability?!

Shut up. I write code that compiles under dozens of compilers on many
dozens of platforms. At most I have small issues from time to time,
but they're rare. I personally maintain over 100k lines of code,
documentation, demos, etc. in my LibTom series, and most of this code
is used in production environments ranging from routers to desktops to
banking to console gaming, on processors ranging from ARM, PPC, SPARC
and MIPS to x86, with compilers such as MSVC, Borland, Metrowerks and
flavours of GCC.

It's about taking the "middle" road. I avoid C99 features where I can
and I don't use platform specifics [e.g. pragmas]. The resulting
code "just works".
C is unable to adapt to new conditions for the sake of "backward
compatibility", throwing away the opportunity to get rid of stupid,
utterly useless and downright dangerous functions for a nonexistent
goal. And yet C is growing new tentacles and unnecessary features
because of idiots who think adding seven new functions to their C
library will make life easier. It does not.

You don't have to call gets(), you know that, right?

<snip>

This post is just way too long. You're whining about things that just
sound like "oh I can't develop proper software and everyone else is to
blame".

C isn't the answer for everything. Anyone who says otherwise is a
liar. However, C does have very REAL uses, and they're not just
"low-level drivers" or such.

Tom
 

Richard Tobin

Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python.

And yet Python itself is written in C. Was that a mistake?

-- Richard
 

Kenny McCormack

Python and Perl are not as fast as compiled C. ... and they never will
be.

When you make this statement, you are ignoring the human factor.
For a programmer of middling ability, it is quite likely that their Perl
programs will run as fast or faster than their C programs. (You do the
math...)
 

Michael Mair

Kenny said:
When you make this statement, you are ignoring the human factor.
For a programmer of middling ability, it is quite likely that their Perl
programs will run as fast or faster than their C programs. (You do the
math...)

Yep. I remember that for some early Perl 5 version, perl beat grep...
don't know whether this still holds.
Apart from that, when dealing with the problems Perl and Python were
created for, you usually save enough development time to pay for very
many runs of the program.
Nearly on-topic: "Linkers and Loaders" lets you, AFAIR, write your
own linker in Perl as an exercise -- even if done very well, it
definitely cannot beat a good C version for speed.


Cheers
Michael
 

Antoine Leca

In (e-mail address removed), CBFalconer wrote:
mov dx,msg
mov al,??
int ??

OK so far (should be ah).
mov al,??
int ??

Not needed. ret does the job. The OS pushes a zero on the top of the stack,
and returning to it is a way to exit.

Yet I fail to see the point of your post.


Antoine
 

evolnet.regular

Michael said:
Yep. I remember that for some early perl 5 version, perl beat grep...
don't know whether this still holds.
Apart from that, when dealing with problems perl and Python have been
created for, you usually save enough development time for very many
runs of the program.
Nearly on-topic: "Linkers and Loaders" lets you, AFAIR, write your
own linker in perl as an exercise -- even if done very well, it
definitely cannot beat a good C version for speed.

The initialisation time of a Perl or Python "interpreter" does mean
that, say, a Hello World written in C will run faster. However, for any
kind of enterprise-level application, the greatly reduced development
time in Perl or Python far outweighs the marginal once-only cost of
interpretation and bytecode compilation.
 

Michael Mair

CBFalconer said:
No? If your C system makes an executable file somewhere in the
25,000 to 250,000 byte range this means a bloat factor of between
1000 and 10,000 is involved. This shows why a Z80 with 64k of
memory could keep up for so long. Today the hardware is something
in that 1000 to 10,000 range faster than my 2 MHz Z80 was, and
memory is about 1000 times larger, with virtual memory about 10,000
times larger. However the bloat and tail-chasing have kept
performance in the same general ball park.

Another comparison - disk storage has gone from 400k per floppy to
40 GB per hard disk. That's a factor of 100,000, and is the reason
we don't have storage space problems. The disks are staying ahead
of the bloat.

</rant>

Even though your rant is in and of itself fully justified, you still
entirely miss my point ;-)
See Tom's post...

And yes, I have seen 0.25 MB of overhead for "Hello world!", too.


Cheers
Michael
 

Michael Mair

The initialisation time of a Perl or Python "interpreter" does mean
that, say, a Hello World written in C will run faster. However, for any
kind of enterprise-level application, the greatly reduced development
time in Perl or Python far outweighs the marginal once-only cost of
interpretation and bytecode compilation.

So what essentially new thing did you say, apart from drawing in
c.l.p.m?
It is bad style at best to crosspost to a new group in mid-discussion
and not say so.

@comp.lang.perl.misc: "(e-mail address removed)" is playing the troll
in c.l.c; please do not react to his posts.

F'up2: c.l.c


Cheers
Michael
 

lawrence.jones

evolnet.regular said:
In that case, why is it that there are so many buffer overflows in so
many C programs written by presumably experienced coders and yet so few
in programs written in *any other language*?

Because there are so many commonly used programs written in C and so few
(relatively speaking) written in any other language.

-Larry Jones

All girls should be shipped to Pluto--that's what I say. -- Calvin
 

Keith Thompson

In that case, why is it that there are so many buffer overflows in so
many C programs written by presumably experienced coders and yet so few
in programs written in *any other language*?

First, you posted about 600 lines of stuff to comp.lang.c. That's not
necessarily a bad thing in itself, but as somebody else pointed out,
it appears to be essentially identical to an article posted by James A
C Joyce to Kuro5hin at

<http://www.kuro5hin.org/story/2004/2/7/144019/8872>

I see no mention of James A C Joyce's name in what you posted. Did
you plagiarize someone else's work, or are "James A C Joyce" and
"(e-mail address removed)" the same person, or is something else
going on? (I note that James A C Joyce has posted an e-mail address
on Kuro5hin, and it's not "(e-mail address removed)".)

Second, you introduced a gratuitous and unacknowledged cross-post to
comp.std.c, which did not see the original article. This is, at best,
rude.
 

Big K

Richard said:
And yet Python itself is written in C. Was that a mistake?

-- Richard

Computer languages are not written in other computer languages. A
computer language is a set of rules laid out by standards.

And if by "Python" you meant "Python Interpreter," then your statement
still wouldn't make sense because there are several Python
Interpreters, and you can create one yourself (in C, or another
language.)

It's even possible to bootstrap a compiler for a totally new language,
which would rely on nothing more than assembly.

I love C, but the claim that some other language was written in C still
sounds ridiculous to me.
 

Joona I Palaste

CBFalconer <[email protected]> scribbled the following
mov dx,msg
mov al,??
int ??
mov al,??
int ??
msg db 'Hello World$'
(I have forgotten a lot - but CP/M is almost exactly the same)

$C000 LDX #$0A
$C002 LDA $C00E,X
$C005 STA $0400,X
$C008 DEX
$C009 BNE $C002
$C00B RTS
$C00C msg db "Hello World"

There you go, 22 bytes. I don't have a real Commodore 64 any more to
try it out, but it should print "Hello World" at the top left corner
of the screen. Damn, someone beat me by 1 byte, and with a more
complicated machine, no less. =)
 

Keith Thompson

Big K said:
Computer languages are not written in other computer languages. A
computer language is a set of rules laid out by standards.

And if by "Python" you meant "Python Interpreter," then your statement
still wouldn't make sense because there are several Python
Interpreters, and you can create one yourself (in C, or another
language.)

It's even possible to bootstrap a compiler for a totally new language,
which would rely on nothing more than assembly.

I love C, but the claim that some other language was written in C still
sounds ridiculous to me.

Sure, strictly speaking that's true. But *as far as I know* there's
essentially only one implementation of Python -- or more precisely,
all existing implementations of Python are ports of the same code
base, which happens to be written in C. This can (but needn't) make
the dividing line between the language and the implementation a bit
vague. This isn't true for C; there are multiple independently
developed C implementations.

Note the emphasis on "as far as I know" above. There could well be
independent Python implementations that I don't know about.
 

Steven

CBFalconer said:
mov dx,msg
mov al,??
int ??
mov al,??
int ??
msg db 'Hello World$'

(I have forgotten a lot - but CP/M is almost exactly the same)

Pretty much:

mov dx, OFFSET msg
mov ah, 09h
int 21h
ret ;I use ret to save 4 bytes
msg db "Hello World!$"
 

Richard Tobin

Big K said:
Computer languages are not written in other computer languages. A
computer language is a set of rules layed out by standards.

A few programming languages are defined in advance by standards. Far
more are defined by implementations, some being standardized formally
or informally (e.g. by books) later. Python is still essentially
defined by a single implementation. Try asking in comp.lang.python
(as I did a couple of years ago) whether some obscure feature is
guaranteed to work, and see what you get. Try complaining about an
incompatibility between Python and Jython, and see whether you get a
reference to a standard.

But suppose you're right. It's still irrelevant to the point at hand:
It's even possible to bootstrap a compiler for a totally new language,
which would rely on nothing more than assembly.

Did you even *try* to understand what I was saying? The author of
Python could indeed have written it entirely in Python, and the fact
that he didn't, but instead used C, suggests that there are some
things that C is good for, contrary to the claims of the original
poster.
I love C, but the claim that some other language was written in C still
sounds ridiculous to me.

And yet, Python was. Perl too.

-- Richard
 

Richard Tobin

And yet Python itself is written in C. Was that a mistake?
Any language can be implemented in any Turing-complete language.

You have missed what I was saying. See my other post for a longer
rant, but the point is that although it *could* have been written in
any language, it was *in fact* written in C. Which suggests that even
the author of Python thinks that C is good for something.

-- Richard
 
