Why C Is Not My Favourite Programming Language

  • Thread starter evolnet.regular

evolnet.regular

I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here's my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.

No string type
--------------

C has no string type. Huh? Most sane programming languages have a
string type which allows one to just say "this is a string" and let the
compiler take care of the rest. Not so with C. It's so stubborn and
dumb that it only has three types of variable; everything is either a
number, a bigger number, a pointer or a combination of those three.
Thus, we don't have proper strings but "arrays of small integers".
"char" is basically just a really small number. And now we have to
start using unsigned ints to represent multibyte characters.

What. A. Crock. An ugly hack.
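
For the avoidance of doubt, here's a minimal sketch of what passes for a
"string" in C (assuming an ASCII machine for the printed character code):

#include <stdio.h>

int main(void)
{
    /* A "string" is just an array of char ending in a zero byte. */
    char greeting[] = { 'h', 'i', '\0' };

    /* The string literal form is exactly equivalent sugar. */
    char same[] = "hi";

    /* And a char is just a small number: 'h' prints as 104. */
    printf("%s %s %d\n", greeting, same, greeting[0]);
    return 0;
}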

Functions for insignificant operations
--------------------------------------

Copying one string from another requires including <string.h> in your
source code, and there are two functions for copying a string. One
could even conceivably copy strings using other functions (if one
wanted to, though I can't imagine why). Why does any normal language
need two functions just for copying a string? Why can't we use the
assignment operator ('=') like for the other types? Oh, I forgot.
There's no such thing as strings in C; just a big continuous stick of
memory. Great! Better still, there's no syntax for:

* string concatenation
* string comparison
* substrings

Ditto for converting numbers to strings, or vice versa. You have to use
something like atol(), or strtod(), or a variant on printf(). Three
families of functions for variable type conversion. Hello? Flexible
casting? Hello?

And don't even get me started on the lack of an exponentiation
operator.
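
To make the point concrete, here's a minimal sketch of those
"insignificant operations"; note that every single one is a library
call, and pow(), standing in for the missing exponentiation operator,
even needs -lm at link time:

#include <stdio.h>
#include <string.h>   /* strcpy, strcat, strcmp */
#include <stdlib.h>   /* atol, strtod */
#include <math.h>     /* pow */

int main(void)
{
    char buf[32];
    long n;
    double d;

    strcpy(buf, "foo");                  /* buf = "foo"       */
    strcat(buf, "bar");                  /* buf = buf + "bar" */

    if (strcmp(buf, "foobar") == 0)      /* buf == "foobar"   */
        puts("equal");

    n = atol("42");                      /* string -> long    */
    d = strtod("3.14", NULL);            /* string -> double  */

    printf("%ld %g %g\n", n, d, pow(2.0, 10.0));   /* 2 ** 10 */
    return 0;
}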

No string type: the redux
-------------------------

Because there's no real string type, we have two options: arrays or
pointers. Array sizes can only be compile-time constants (C99's
variable-length arrays aside). This means we run the risk of buffer
overflow, since we have to try (in vain) to guess in advance how many
characters we need. Pathetic. The only alternative is to use
malloc(), which is just filled with pitfalls. The whole concept of
pointers is an accident waiting to happen. You can't free the same
pointer twice. You have to always check the return value of malloc()
and you mustn't cast it. There's no builtin way of telling if a spot of
memory is in use, or if a pointer's been freed, and so on and so forth.
Having to resort to low-level memory operations just to be able to
store a line of text is asking for...

The encouragement of buffer overflows
-------------------------------------

Buffer overflows abound in virtually any substantial piece of C code.
This is caused by programmers accidentally putting too much data in one
space or leaving a pointer pointing somewhere because a returning
function ballsed up somewhere along the line. C includes no way of
telling when the end of an array or allocated block of memory is
overrun. The only way of telling is to run, test, and wait for a
segfault. Or a spectacular crash. Or a slow, steady leakage of memory
from a program, agonisingly 'bleeding' it to death.
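
Here's how little it takes (a minimal sketch):

#include <string.h>

int main(void)
{
    char buf[8];

    /* 27 bytes into an 8-byte buffer: no compiler error, no runtime
       bounds check. It scribbles over whatever lives next to buf,
       and you find out later -- via segfault, crash, or worse. */
    strcpy(buf, "this is far too much input");
    return 0;
}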

Functions which encourage buffer overflows
------------------------------------------

* gets()
* strcat()
* strcpy()
* sprintf()
* vsprintf()
* bcopy()
* scanf()
* fscanf()
* sscanf()
* getwd()
* getopt()
* realpath()
* getpass()

The list goes on and on and on. Need I say more? Well, yes I do.

You see, even if you're not writing to any memory, you can still read
memory you're not supposed to. C can't be bothered to keep track of the
ends of strings; the end of a string is indicated by a null '\0'
character. All fine, right? Well, some functions in your C library,
such as strlen(), perhaps, will just run off the end of a 'string' if
it doesn't have a null in it. What if you're using a binary string?
Careless programming this may be, but we all make mistakes and so the
language authors have to take some responsibility for being so
intolerant.
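
For instance (a sketch; being undefined behaviour, the "result" varies):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char not_a_string[4] = { 'a', 'b', 'c', 'd' };  /* no '\0' anywhere */

    /* strlen() walks memory until it happens upon a zero byte.
       With no terminator in sight, this is undefined behaviour:
       it may return nonsense, read out of bounds, or crash outright. */
    printf("%lu\n", (unsigned long) strlen(not_a_string));
    return 0;
}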

No builtin boolean type
-----------------------

If you don't believe me, just watch:

$ cat > test.c
int main(void)
{
    bool b;
    return 0;
}

$ gcc -ansi -pedantic -Wall -W test.c
test.c: In function 'main':
test.c:3: 'bool' undeclared (first use in this function)

Not until the 1999 ISO C standard were we finally able to use 'bool' as
a data type. But guess what? It's implemented as a macro and one
actually has to include a header file to be able to use it!
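
Here's the eventual C99 "solution", for the record (a sketch; gcc
-std=c99 now obliges):

#include <stdbool.h>   /* a header, just to get a boolean type */
#include <stdio.h>

int main(void)
{
    bool b = true;     /* bool, true and false are all macros   */

    if (b)
        puts("a boolean at last, only thirty years in");
    return 0;
}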

High-level or low-level?
------------------------

On the one hand, we have the fact that there is no string type and
little automatic memory management, implying a low-level language. On
the other hand, we have a mass of library functions, a preprocessor and
a plethora of other things which imply a high-level language. C tries
to be both, and as a result spreads itself too thinly.

The great thing about this is that when C is lacking a genuinely useful
feature, such as reasonably strong data typing, the excuse "C's a
low-level language" can always be used, functioning as a perfect
'reason' for C to remain unhelpfully and fatally sparse.

The original intention for C was for it to be a portable assembly
language for writing UNIX. Unfortunately, from its very inception C has
had extra things packed into it which make it fail as an assembly
language. Its kludgy strings are a good example. If it were at least
portable these failings might be forgivable, but C is not portable.

Integer overflow without warning
--------------------------------

Self-explanatory. One minute you have a fifteen-digit number, then try
to double or triple it and - boom - its value is suddenly
-234891234890892 or something similar. Stupid, stupid, stupid. How hard
would it have been to give a warning or overflow error or even just
reset the variable to zero?

This is widely known as bad practice. Most competent developers
acknowledge that silently ignoring an error is a bad attitude to have;
this is especially true for such a commonly used language as C.
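
Watch it happen (a sketch; strictly speaking, signed overflow is
undefined behaviour, so your machine may differ):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int n = INT_MAX;           /* commonly 2147483647 */

    /* No warning, no exception, no reset to zero: the value
       silently wraps around to something large and negative
       on typical two's-complement hardware. */
    printf("%d\n", n + 1);     /* commonly prints -2147483648 */
    return 0;
}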

Portability?!
-------------

Please. There are at least four official specifications of C I could
name off the top of my head, and no compiler has properly implemented
all of them. They conflict, and they grow and grow. The problem isn't
subsiding; it's increasing each day. New compilers and libraries are
developed and proprietary extensions are being developed. GNU C isn't
the same as ANSI C isn't the same as K&R C isn't the same as Microsoft
C isn't the same as POSIX C. C isn't portable; all kinds of machine
architectures are totally different, and C can't properly adapt because
it's so muttonheaded. It's trapped in The Unix Paradigm.

If it weren't for the C preprocessor, then it would be virtually
impossible to get C to run on multiple families of processor hardware,
or even just slightly differing operating systems. A programming
language should not require a preprocessor just so that it can run on
FreeBSD, Linux and Windows without failing to compile.

C is unable to adapt to new conditions for the sake of "backward
compatibility", throwing away the opportunity to get rid of stupid,
utterly useless and downright dangerous functions for a nonexistent
goal. And yet C is growing new tentacles and unnecessary features
because of idiots who think adding seven new functions to their C
library will make life easier. It does not.

Even the C89 and C99 standards conflict with each other in ridiculous
ways. Can you use the long long type or can't you? Is a certain
constant defined by a preprocessor macro hidden deep, deep inside my C
library? Is using a function in this particular way going to be
undefined, or acceptable? What do you mean, getch() isn't a proper
function but getc() and getchar() are?

The implications of this false 'portability'
--------------------------------------------

Because C pretends to be portable, even professional C programmers can
be caught out by hardware and an unforgiving programming language;
almost anything like comparisons, character assignments, arithmetic, or
string output can blow up spectacularly for no apparent reason because
of endianness or because your particular processor treats all chars as
unsigned or silly, subtle, deadly traps like that.

Archaic, unexplained conventions
--------------------------------

In addition to the aforementioned problems, C also has various
idiosyncrasies (invariably unreported) which even some teachers of
C are unaware of:

* "Don't use fflush(stdin)."
* "gets() is evil."
* "main() must return an integer."
* "main() can only take one of three sets of arguments."
* "main() can only return either EXIT_SUCCESS or EXIT_FAILURE."
* "You musn't cast the return value of malloc()."
* "fileno() isn't an ANSI compliant function."
* "A preprocessor macro oughtn't use any of its arguments more than
once."

...all these unnecessary and unmentioned quirks mean buggy code. Death
by a thousand cuts. Ironic when you consider that Kernighan thinks of
Pascal in the same way when C has just as many little gotchas that
bleed you to death gradually and painfully.

Blaming The Programmer
---------------------

Due to the fact that C is pretty difficult to learn and even harder to
actually use without breaking something in a subtle yet horrific way,
it's assumed that anything which goes wrong is the programmer's fault.
If your program segfaults, it's your fault. If it crashes, mysteriously
returning 184 with no error message, it's your fault. When a single
condition you happened to forget about whilst coding screws up, it's
your fault.

Obviously the programmer has to shoulder most of the responsibility for
a broken program. But as we've already seen, C positively tries to make
the programmer fail. This increases the failure rate and yet for some
reason we don't blame the language when yet another buffer overflow is
discovered. C programmers try to cover up C's inconsistencies and
inadequacies by creating a culture of 'tua culpa'; if something's
wrong, it's your fault, not that of the compiler, linker, assembler,
specification, documentation, or hardware.

Compilers have to take some of the blame. Two reasons. The first is
that most compilers have proprietary extensions built into them. Let me
remind you that half of the point of using C is that it should be
portable and compile anywhere. Adding extensions violates the original
spirit of C and removes one of its advantages (albeit an already
diminished advantage).

The other (and perhaps more pressing) reason is the lack of anything
beyond minimal error checking which C compilers do. For every ten types
of errors your compiler catches, another fifty will slip through.
Beyond variable type and syntax checking the compiler does not look for
anything else. All it can do is give warnings on unusual behaviour,
though these warnings are often spurious. On the other hand, a single
error can cause a ridiculous cascade, or make the compiler fall over
and die because of a misplaced semicolon, or, more accurately and
incriminatingly, a badly constructed parser and grammar. And yet,
despite this, it's your fault.

To quote The Unix Haters' Handbook:

"If you make even a small omission, like a single semicolon, a C
compiler tends to get so confused and annoyed that it bursts into tears
and complains that it just can't compile the rest of the file since one
missing semicolon has thrown it off so much."

So C compilers may well give literally hundreds of errors stating that
half of your code is wrong if you miss out a single semicolon. Can it
get worse? Of course it can! This is C!

You see, a compiler will often not deluge you with error information
when compiling. Sometimes it will give you no warning whatsoever even
if you write totally foolish code like this:

#include <stdio.h>

int main(void)
{
    char *p;      /* p is never initialised...                 */
    puts(p);      /* ...so this passes a garbage pointer: pure */
    return 0;     /* undefined behaviour                       */
}

When we compile this with our 'trusty' compiler gcc, we get no errors
or warnings at all. Even when using the '-W' and '-Wall' flags to make
it watch out for dangerous code it says nothing.

$ gcc -W -Wall stupid.c
$

In fact, no warning is ever given unless you try to optimise the
program with a '-O' flag. But what if you never optimise your program?
Well, you now have a dangerous program. And unless you check the code
again you may well never notice that error.

What this section (and entire document) is really about is the sheer
unfriendliness of C and how it is as if it takes great pains to be as
difficult to use as possible. It is flexible in the wrong way; it can
do many, many different things, but this makes it impossible to do any
single thing with it.

Trapped in the 1970s
--------------------

C is over thirty years old, and it shows. It lacks features that modern
languages have such as exception handling, many useful data types,
function overloading, optional function arguments and garbage
collection. This is hardly surprising considering that it was
constructed from an assembler language with just one data type on a
computer from 1970.

C was designed for the computer and programmer of the 1970s,
sacrificing stability and programmer time for the sake of memory.
Despite the fact that the most recent standard is just half a decade
old, C has not been updated to take advantage of increased memory and
processor power to implement such things as automatic memory
management. What for? The illusion of backward compatibility and
portability.

Yet more missing data types
---------------------------

Hash tables. Why was this so difficult to implement? C is intended for
the programming of things like kernels and system utilities, which
frequently use hash tables. And yet it didn't occur to C's creators
that maybe including hash tables as a type of array might be a good
idea when writing UNIX? Perl has them. PHP has them. With C you have to
fake hash tables, and even then it doesn't really work at all.

Multidimensional arrays. Before you tell me that you can do stuff like
int multiarray[50][50][50] I think that I should point out that that's
an array of arrays of arrays. Different thing. Especially when you
consider that you can also use it as a bunch of pointers. C programmers
call this "flexibility". Others call it "redundancy", or, more
accurately, "mess".

Complex numbers. They may be in C99, but how many compilers support
that? It's not exactly difficult to get your head round the concept of
complex numbers, so why weren't they included in the first place? Were
complex numbers not discovered back in 1989?
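
For what it's worth, the C99 incantation looks like this, on the rare
compiler that obliges (a sketch):

#include <stdio.h>
#include <complex.h>   /* C99 only */

int main(void)
{
    double complex z = 1.0 + 2.0 * I;

    printf("%g%+gi\n", creal(z), cimag(z));   /* prints 1+2i */
    return 0;
}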

Binary strings. It wouldn't have been that hard just to make a
compulsory struct with a mere two members: a char * for the string of
bytes and a size_t for the length of the string. Binary strings have
always been around on Unix, so why wasn't C more accommodating?
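
The struct in question would have taken all of four lines; a sketch
(the name "bstring" is mine, not anything standard):

#include <stddef.h>

/* A counted byte string: the type C never shipped. */
struct bstring {
    char   *data;   /* the bytes, which may freely include '\0' */
    size_t  len;    /* how many there are; no terminator needed */
};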

Library size
------------

The actual core of C is admirably small, even if some of the syntax
isn't the most efficient or readable (case in point: the '? :'
conditional operator). One thing that is bloated is the C library.
functions in a full C library which complies with all significant
standards runs into four digit figures. There's a great deal of
redundancy, and code which really shouldn't be there.

This has knock-on effects, such as the large number of configuration
constants which are defined by the preprocessor (which shouldn't be
necessary), the size of libraries (the GNU C library almost fills a
floppy disk, and its documentation fills three) and inconsistently named
groups of functions in addition to duplication.

For example, a function for converting a string to a long integer is
atol(). One can also use strtol() for exactly the same thing. Boom -
instant redundancy. Worse still, both functions are included in the
C99, POSIX and SUSv3 standards!

Can it get worse? Of course it can! This is C!

As a result it's only logical that there's an equivalent pair of atod()
and strtod() functions for converting a string to a double. As you've
probably guessed, this isn't true. They are called atof() and strtod().
This is very foolish. There are yet more examples scattered through the
standard C library like a dog's smelly surprises in a park.
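
A sketch of the mismatched family side by side:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long   a = atol("123");               /* no error reporting at all  */
    long   b = strtol("123", NULL, 10);   /* same job, with error hooks */
    double c = atof("2.5");               /* atof, note, not atod       */
    double d = strtod("2.5", NULL);

    printf("%ld %ld %g %g\n", a, b, c, d);
    return 0;
}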

The Single Unix Specification version three specifies 1,123 functions
which must be available to the C programmer of the compliant system. We
already know about the redundancies and unnecessary functions, but
across how many header files are these 1,123 functions spread out? 62.
That's right, on average a C library header will define approximately
eighteen functions. Even if you only need to use maybe one function
from each of, say, five libraries (a common occurrence) you may well
wind up including 90, 100 or even 150 function declarations you will
never need. Bloat, bloat, bloat. Python has the right idea; its import
statement allows you to pull in exactly the functions (and global
variables!) you need from each module if you prefer. But C? Oh, no.

Specifying structure members
----------------------------

Why does this need two operators? Why do I have to pick between '.' and
'->' for a ridiculous, arbitrary reason? Oh, I forgot; it's just yet
another of C's gotchas.

Limited syntax
--------------

A couple of examples should illustrate what I mean quite nicely. If
you've ever programmed in PHP for a substantial period of time, you're
probably aware of the 'break' keyword. You can use it to break out from
nested loops of arbitrary depth by using an integer, like so:

for ($i = 0; $i < 10; $i++) {

    for ($j = 0; $j < 10; $j++) {

        for ($k = 0; $k < 10; $k++) {
            break 2;
        }
    }

    /* breaks out to here */

}

There is no way of doing this in C. If you want to break out from a
series of nested for or while loops then you have to use a goto. This
is what is known as a crude hack.
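
The goto version, since you asked (a sketch; the label name "out" is
arbitrary):

int main(void)
{
    int i, j, k;

    for (i = 0; i < 10; i++) {
        for (j = 0; j < 10; j++) {
            for (k = 0; k < 10; k++) {
                goto out;          /* PHP's "break 2", C style */
            }
        }
out:    ;                          /* breaks out to here       */
    }
    return 0;
}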

In addition to this, there is no way to compare any non-numerical data
type using a switch statement. Not even strings. In the programming
language D, one can do:

char s[];

switch (s) {

case "hello":
    /* something */
    break;

case "goodbye":
    /* something else */
    break;

case "maybe":
    /* another action */
    break;

default:
    /* something */
    break;

}

C does not allow you to use switch and case statements for strings. One
must use several variables to iterate through an array of case strings
and compare them to the given string with strcmp(). This reduces
performance and is just yet another hack.
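
Here's the workaround in all its glory (a sketch; the dispatch() helper
is my own invention, not anything standard):

#include <stdio.h>
#include <string.h>

static void dispatch(const char *s)
{
    static const char *cases[] = { "hello", "goodbye", "maybe" };
    size_t i;

    /* Find the matching "case" by brute force... */
    for (i = 0; i < sizeof cases / sizeof cases[0]; i++)
        if (strcmp(s, cases[i]) == 0)
            break;

    /* ...then switch on its index. */
    switch (i) {
    case 0:  puts("something");      break;
    case 1:  puts("something else"); break;
    case 2:  puts("another action"); break;
    default: puts("something");      break;
    }
}

int main(void)
{
    dispatch("goodbye");    /* prints "something else" */
    return 0;
}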

In fact, this is an example of gratuitous library functions running
wild once again. Even comparing one string to another requires use of
the strcmp() function:

char string[] = "Blah, blah, blah\n";

if (strcmp(string, "something") == 0) {
    /* do something */
}

Flushing standard I/O
---------------------

A simple microcosm of the "you can do this, but not that" philosophy of
C: flushing standard output is a one-liner, while flushing standard
input is another matter entirely.

To flush the standard output stream, the fflush() function is used
(defined by <stdio.h>). One doesn't usually need to do this after every
bit of text is printed, but it's nice to know it's there, right?

Unfortunately, fflush() can't be used to flush the contents of standard
input. The standard explicitly leaves fflush(stdin) as undefined
behaviour, but this is so illogical that even textbook authors
sometimes mistakenly use fflush(stdin) in examples, and some compilers
won't bother to warn you about it. One shouldn't even have to flush
standard input; you ask for a character with getchar(), and the program
should just read in the first character given and disregard the rest.
But I digress...

There is no 'real' way to flush standard input up to, say, the end of a
line. Instead one has to use a kludge like so:

int c;

do {
    errno = 0;
    c = getchar();

    if (errno) {
        fprintf(stderr,
                "Error flushing standard input buffer: %s\n",
                strerror(errno));
    }
} while ((c != '\n') && (!feof(stdin)));

That's right; you need to use a variable, a looping construct, two
library functions and several lines of exception handling code just to
flush the standard input buffer.
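
(Even the stripped-down version that most people actually write still
needs a variable and a loop; a sketch, assuming <stdio.h> as before:)

int c;

/* Discard everything up to and including the next newline, or EOF. */
while ((c = getchar()) != '\n' && c != EOF)
    ;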

Inconsistent error handling
---------------------------

A seasoned C programmer will be able to tell what I'm talking about
just by reading the title of this section. There are many incompatible
ways in which a C library function indicates that an error has
occurred:

* Returning zero.
* Returning nonzero.
* Returning EOF.
* Returning a NULL pointer.
* Setting errno.
* Requiring a call to another function.
* Outputting a diagnostic message to the user.
* Triggering an assertion failure.
* Crashing.

Some functions may actually use up to three of these methods. (For
instance, fread().) But the thing is that none of these are compatible
with each other and error handling does not occur automatically; every
time a C programmer uses a library function they must check manually
for an error. This bloats code which would otherwise be perfectly
readable without if-blocks for error handling and variables to keep
track of errors. In a large software project one must write a section
of code for error handling hundreds of times. If you forget, something
can go horribly wrong. For example, if you don't check the return value
of malloc() you may accidentally try to use a null pointer. Oops...
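
A sketch of three different conventions colliding within a dozen lines:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    FILE *f;
    char *p;
    int c;

    f = fopen("no-such-file", "r");   /* failure: NULL, plus errno     */
    if (f == NULL)
        fprintf(stderr, "fopen: %s\n", strerror(errno));

    p = malloc(16);                   /* failure: NULL, errno maybe    */
    if (p == NULL)
        return EXIT_FAILURE;

    c = getchar();                    /* failure *or* end of file: EOF */
    if (c == EOF)
        fprintf(stderr, "getchar: error? end of input? who knows\n");

    free(p);
    return 0;
}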

Commutative array subscripting
------------------------------

"Hey, Thompson, how can I make C's syntax even more obfuscated and
difficult to understand?"

"How about you allow 5[var] to mean the same as var[5]?"

"Wow; unnecessary and confusing syntactic idiocy! Thanks!"

"You're welcome, Dennis."

Yes, I understand that array subscription is just a form of addition
and so it should be commutative, but doesn't it seem just a bit foolish
to say that 5[var] is the same as var[5]? How on earth do you take the
var'th value of 5?
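
In case you doubt me (a sketch):

#include <stdio.h>

int main(void)
{
    int var[6] = { 10, 11, 12, 13, 14, 15 };

    /* a[b] is defined as *(a + b), and addition commutes,
       so 5[var] is *(5 + var), i.e. var[5]. */
    printf("%d %d\n", var[5], 5[var]);   /* prints: 15 15 */
    return 0;
}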

Variadic anonymous macros
-------------------------

In case you don't understand what variadic anonymous macros are,
they're macros (i.e. pseudofunctions defined by the preprocessor) which
can take a variable number of arguments. Sounds like a simple thing to
implement. I mean, it's all done by the preprocessor, right? And
besides, you can define proper functions with variable numbers of
arguments even in the original K&R C, right?

In that case, why can't I do:

#define error(...) fprintf(stderr, __VA_ARGS__)

without getting a warning from GCC?

warning: anonymous variadic macros were introduced in C99

That's right, folks. Not until late 1999, 30 years after development on
the C programming language began, were we allowed to do such a simple
task with the preprocessor.

The C standards don't make sense
--------------------------------

Only one simple quote from the ANSI C standard - nay, a single footnote
- is needed to demonstrate the immense idiocy of the whole thing.
Ladies, gentlemen, and everyone else, I present to you...footnote 82:

All whitespace is equivalent except in certain situations.

I'd make a cutting remark about this, but it'd be too easy.

Too much preprocessor power
---------------------------

Rather foolishly, half of the actual C language is reimplemented in the
preprocessor. (This should be a concern from the start; redundancy
usually indicates an underlying problem.) We can #define fake
variables, fake conditions with #ifdef and #ifndef, and look, there's
even #if, #endif and the rest of the crew! How useful!

Erm, sorry, no.

Preprocessors are a good idea for a language like C. As has been
reiterated, C is not portable. Preprocessors are vital to bridging the
gap between different computer architectures and libraries and allowing
a program to compile on multiple machines without having to rely on
external programs. The #define directive, in this case, can be used
perfectly validly to set 'flags' that can be used by a program to
determine all sorts of things: which C standard is being used, which
library, who wrote it, and so on and so forth.

Now, the situation isn't as bad as for C++, where the compile-time
machinery (templates, strictly, rather than the preprocessor) is so
packed with unnecessary rubbish that one can actually use it to
calculate an arbitrary series of Fibonacci numbers at compile-time.
However, C comes dangerously close; it allows the programmer to define
fake global variables with wacky values which would not otherwise be
proper code, and then compare values of these variables. Why? It's not
needed; the C language of the Plan 9 operating system doesn't let you
play around with preprocessor definitions like this. It's all just
bloat.

"But what about when we want to use a constant throughout a program? We
don't want to have to go through the program changing the value each
time we want to change the constant!" some may complain. Well, there's
these things called global variables. And there's this keyword, const.
It makes a constant variable. Do you see where I'm going with this?
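
Like so (a sketch; though, in fairness, C has its own gotcha here too):

/* The preprocessor way: */
#define BUFFER_SIZE 1024

/* The language way: typed, scoped, visible in a debugger...      */
static const int buffer_size = 1024;
/* ...although C, unlike C++, still won't accept buffer_size as a
   constant expression -- e.g. as a case label. Another gotcha.   */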

You can do search and replace without the preprocessor, too. In fact,
they were able to do it back in the seventies on the very first
versions of Unix. They called it sed. Need something more like cpp? Use
m4 and stop complaining. It's the Unix way.
 

Luke Wu

I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here's my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.

So your scripted programs offer the same speed as C programs? Wow,
those interpreters for your languages (most likely written in C) are
really good these days. Or maybe the credit falls to the OS/kernel
code (most likely written in C). Or maybe it's because your C
programming skills are so horrible that they border on the inefficiency
of scripted languages.

I bet around 99% of all CPUs/MPUs have C compilers (or cross compilers)
that can target them (no other language is even close to that %). If
only those system implementers had read your post before, we might not
have this mess.
 

Steven

Question #1:
WHY DID YOU POST THIS?

If you want to be babied, fine, go through life just pointing and
grunting at things. All I see is a bunch of pathetic excuses that
explain your inability to program in C. Shell scripts are as fast as
C? What is wrong with you! Why is power bad! Why do people still
program in assembly?
C++ is better? OOP might be good for some things like GUIs, but for
anything else it's overkill.
If you are unable to tough it out and accept your incompetence, fine,
don't post here and complain!
 

evolnet.regular

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.
 

infobahn

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

Then learn how to write C programs *properly*.
 

infobahn

Malcolm said:
Many other faults weren't mentioned. For instance one of the worst mistakes
K and R made was to use the same identifier for a "character" and a "byte",

What has K to do with this?
presumably because their character-mapped displays needed poking to squeeze
out decent performance. Now we're stuck with no easy way to move from ASCII.

I suspect that I'm not the only clc subscriber to have used C on
non-ASCII systems and have had no problems in this regard.
 

Malcolm

Steven said:
If you want to be babied, fine, go through life just pointing and
grunting at things. All I see is a bunch of pathetic excuses that
explain your inability to program in C. Shell scripts are as fast as
C? What is wrong with you! Why is power bad! Why do people still
program in assembly?
C++ is better? OOP might be good for some things like GUIs, but for
anything else it's overkill.
If you are unable to tough it out and accept your incompetence, fine,
don't post here and complain!
The C language does have faults, some genuine ones which were mentioned,
like the 2D array problem.
Many other faults weren't mentioned. For instance one of the worst mistakes
K and R made was to use the same identifier for a "character" and a "byte",
presumably because their character-mapped displays needed poking to squeeze
out decent performance. Now we're stuck with no easy way to move from ASCII.

However most of the post was criticising C for not being a large, high-level
language with things like built in string types. A typical processor has no
instructions for manipulating strings directly. So to provide such
facilities as part of the language proper would mean that a C program could
no longer be translated directly to assembly.

I use C a lot, partly because it is so efficient (but much easier and more
portable than assembler), partly because it is small and therefore you spend
your time getting the program written rather than playing with language
features, and partly because, being so useful, it is widely available so you
can knock up a C routine without having to wonder how you install your Perl
interpreter / Java virtual machine / C++ standard template library on your
particular machine.
 

Goran Larsson

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

It is not the language that is prone to crashing constantly; if anything
is crashing, it is your program. The solution is to write better programs.
Blaming one's own lack of skills on the tools is not productive.
 

Duck Dodgers

I would consider your arguments more seriously if you bothered to make
them seem objective. As it is, you just sound like you're whining.
Also, after reading the entire post, it's obvious that your list of
complaints is very short:

1) You want a string type
2) You don't like low level programming
3) You don't think C is portable

Come to think of it, those all sound like opinions rather than hard
facts. If you don't like C, don't use it. Nobody is forcing you. If you
don't want anyone else to use C, tough. It's impossible to use your own
opinions to change the opinions of others.
 

Jack Klein

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

Then feel free to write one. Other people have. But take your
whining someplace else. If you have sincere proposals for changes to
C, you're in the wrong group. Try comp.std.c. If you just want to
complain, start a blog.
 

CBFalconer

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

Ada and Pascal come to mind. (when properly implemented and meeting
standards). Many of the C problems arise from maintaining
compatibility, just as the pentia/AMDs have maintained 8088/6
compatibility. The cumulative software is worth considerably more
than the hardware.
 

Erik de Castro Lopo

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

I use:

C      - for device drivers, low level library code.
Python - for small scripts.
OCaml  - for large complex stuff that the others aren't good at.

I haven't done much handyman type stuff recently, but when
I do I use spanners, screwdrivers and hammers for whatever
those tools do best.

If you want to write in a high level language with
sophisticated features, that compiles to fast native
binaries and is also cross platform (*nix, win32 and
MacOSX), take a look at:

http://www.ocaml.org/

Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo (e-mail address removed) (Yes it's valid)
+-----------------------------------------------------------+
"I could never learn to use C++, because of the completely overwhelming
desire to redesign the language every time I tried to use it, but this
is the normal, healthy reaction to C++." -- Erik Naggum
 

Walter Roberson

:Yet more missing data types
:Complex numbers. They may be in C99, but how many compilers support
:that? It's not exactly difficult to get your head round the concept of
:complex numbers, so why weren't they included in the first place? Were
:complex numbers not discovered back in 1989?

I've worked in scientific programming for 20 years, including
spending years cleaning code that had been written by tens of
untrained programmers over the 20 years before that.

In that 20 years, the number of times that we've needed complex
numbers was... complex(0.0,0.0).

Seriously: there wasn't a need for them after first-year university
courses.

I'm not saying that we never used formulae that involved complex
entities, but we rarely use them -as- complex numbers. Most of the
time our programs just treat them as multi-dimensional vectors... and
we use a couple of thousand dimensions in our vectors, so there is
nothing special, as far as our algorithms go, about 2 dimensions.
For us it'd be more important to do multiply-and-add operations
on arbitrarily high dimensional vectors. I must have missed the part
of your message where you railed against C not having that feature.
After all APL had it back in 1969, so why isn't it a ksh primitive?


:One thing that is bloated is the C library. The number of
:functions in a full C library which complies with all significant
:standards runs into four digit figures. There's a great deal of
:redundancy, and code which really shouldn't be there.

And the number of modules in Python 2.4's Global Module Index is 362.
If I didn't lose count, 97 of those are marked as non-portable.
Is it progress that a quarter of your core modules are non-portable?

The Python Package Index (pypi.org) has another 759 modules in it.
Speaking of scripting languages: Perl's CPAN has some 7500 modules
in it. Did Python "do it right", so that the other 6750-odd modules in
CPAN are just different variants on what Python does inherently? Or
is it just the case that successful languages attract large bodies
of useful libraries, and that it is considered worthwhile to
standardize the best of these to avoid having to re-invent the wheel?


:Why does this need two operators? Why do I have to pick between '.' and
:'->' for a ridiculous, arbitrary reason? Oh, I forgot; it's just yet
:another of C's gotchas.

There were two really big success stories in instruction
set architectures in the 1960's: The IBM 360 series and the DEC PDP
architecture. Both offered a number of different addressing modes,
with the PDP perhaps a bit cleaner. The PDP architecture ideals
begat the Motorola 6800, 6809, and 680x0 series, which were market
leaders in their own times. [Quite a lot of Z80 and 8086 and 80x86 chips
got sold too, but everything after the 8080 was an ugly grafting
of wart upon wart, which by now has gotten so bad that the Pentium
had to be implemented as a RISC chip internally.] Anyhow, I cannot
think of -any- successful CPU architecture that offered just one
addressing mode [e.g., where all operations except moves to and from
memory were register-to-register only, with all pointer handling
done by loading the base pointer and doing arithmetic operations upon it.]
Why then should we expect that there is only one memory accessor
operator in a programming language?


:Unfortunately, fflush() can't be used to flush the contents of standard
:input.

What exactly does "flushing the contents of standard input" *mean*
when standard input might be from a terminal, a pipe, a socket,
a fifo, a pty, a mmap'd shared memory segment, a tape drive,
a file, a raw disk block, or other arcane possibilities -- each
of which might or might not be in binary mode, and each of which
might or might not be record-oriented instead of stream oriented?


:Yes, I understand that array subscription is just a form of addition
:and so it should be commutative, but doesn't it seem just a bit foolish
:to say that 5[var] is the same as var[5]? How on earth do you take the
:var'th value of 5?

But it's not foolish that in ksh if you refer to an array name
without a subscript then the reference is taken as being to element 0
of the array, rather than to the array as a collection?
 

Erik de Castro Lopo

Walter said:
I've worked in scientific programming for 20 years, including
spending years cleaning code that had been written by tens of
untrained programmers over the 20 years before that.

In that 20 years, the number of times that we've needed complex
numbers was... complex(0.0,0.0).

Seriously. there wasn't a need for them after 1st year university
courses.

I do a lot of programming of Digital Signal Processing code
and therefore use complex numbers all the time. Even C99's
complex numbers are insufficient which forces me to use C++
for this code instead even though I otherwise dislike C++
immensely.

Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo (e-mail address removed) (Yes it's valid)
+-----------------------------------------------------------+
"The object-oriented model makes it easy to build up programs by
accretion. What this often means, in practice, is that it provides
a structured way to write spaghetti code." -- Paul Graham
 

Julian V. Noble

[ snipped ]
I bet around 99% of all CPUs/MPUs have C compilers (or cross compilers)
that can target them (no other language is even close to that %). If
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Forth.

only those system implementers had read your post before, we might not
have this mess.


--
Julian V. Noble
Professor Emeritus of Physics
(e-mail address removed)
http://galileo.phys.virginia.edu/~jvn/

"For there was never yet philosopher that could endure the
toothache patiently."

-- Wm. Shakespeare, Much Ado about Nothing. Act v. Sc. 1.
 

Guillaume

I'm not asking to be "babied", but I'd like a language with as much
"power" and "functionality" as C that isn't as prone to crashing
constantly.

Huh?

1. A language doesn't crash. Your program does.

2. If your program crashes, there is something wrong with it.

3. If you can't use a hammer without crushing your fingers, maybe
you shouldn't be nailing at all.

4. Last but not least, I actually think "crashes" are good. They
make us see we made a mistake. Any engineer with some sort of ethics
wants to be able to see his errors so he can correct them. Of course,
most of the "crashing" should occur while testing. Not after the
product has shipped.

5. Testing is good.

6. Education-wise, I think the right step is to go from a very simple
language (like Basic), then to Pascal to learn procedural programming
the right way, then to C, when one is ready. A lot of older programmers
have followed this path (for historical reasons) and they are some of
the best programmers out there. I, for example, didn't start studying
C until my Master's degree. I had already developed a lot of Pascal
applications before. It helped tremendously.

7. Nobody's perfect, but I wouldn't trust a programmer that couldn't
write a single middle-sized program in C that was robust enough never to
crash at all. To me, that's a good way of evaluating their sense of
rigor.

8. If you really don't like C, I suggest Ada.

9. Remember there are millions of ways bugs can manifest themselves.
Like I said, crashes are often the "nicest" way. The worst is
definitely when you think everything is going right, whereas your
program is a total wreck inside. Even in case of a safety device, I'd
rather see one device "crash", immediately backed up by a redundant
device that could see something went wrong in a timely manner - instead
of the device running bogus code forever, leading to a disaster.

10. I, and many of us, have learnt many languages besides C. I've been
using C for over 12 years, in all kind of settings and areas. And it's
still my favorite language. Go figure.

11. Go to 1. ;-)
 

evolnet.regular

It's all well and good saying this, but if the language itself is
packed with booby traps and requires every element of the entire body
of code to be meticulously planned out to avoid bugs, it's inevitable
that no large program in the language will avoid introducing at least
one bug.
 

Eric Schmidt

I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here's my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.
>
> [snip long long rant-like thing]

This is all (except the opening paragraph) taken from
http://www.kuro5hin.org/story/2004/2/7/144019/8872

The OP copied the whole thing and pretended it was his reasons.
 

Walter Roberson

:It's all well and good saying this, but if the language itself is
:packed with booby traps and requires every element of the entire body
:of code to be meticulously planned out to avoid bugs, it's inevitable
:that no large program in the language will avoid introducing at least
:one bug.

I don't recall hearing that people had gotten very far with
automated proof of correctness in *any* Turing-equivalent
programming language. Proofs have a way of exploding badly
with even very small programs, due to branch points -- each
branch point potentially doubles the size of the proof,
and loops can imply indefinite branching.

Consider for example, the amount of effort needed to prove
the correctness of these few lines:

while ( md5(x) != 0 ) x = md5(x);

Thus it is pretty much inevitable that large programs in *any*
useful programming language will have at least one bug. Even in
very highly refined and constrained languages that are built as
academic projects in programming and metaprogramming to avoid bugs.
 

G Patel

Eric said:
I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here's my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.

[snip long long rant-like thing]

This is all (except the opening paragraph) taken from
http://www.kuro5hin.org/story/2004/2/7/144019/8872

The OP copied the whole thing and pretended it was his reasons.

If you are the real Eric Schmidt, please get your developers to fix up
google groups' treatment of leading blank spaces in usenet posts.
 
