jacob navia
In the thread "When will C have an object model?", I found a
message from Mr Heathfield that warrants a deeper discussion.
That thread was (for a change) an interesting one.
Richard Heathfield wrote:
>
> It is perhaps because software is so malleable that we are so tempted
> to change it. I doubt whether *any* computer language exists that
> *exactly* meets the needs of more than a handful of people, so it is
> natural for people to want to change the language(s) they use to suit
> their needs more closely.
>
Software does not change because the people who write it want change
for its own sake. Requirements change. Hardware evolves, people come
and go, and the machines and environments where software lives change.
Only dead things are fixed forever and cannot evolve. A computer language
MUST change or it simply dies.
> Changing a language can have advantages, but it also has
> disadvantages.
>
Yes. What *I* see as the disadvantage of change is change in the
wrong direction. Take, for instance, the C++ language, which was
originally designed to improve C and change it for the better.
In a recent paper, the creator of the language acknowledged that after
roughly three years of work on an improvement to C++, it was
impossible for him to carry out the proposed modification.
The concrete details are not important here. Briefly, it was the
introduction of "concepts" as descriptions of template arguments so that
those arguments could be checked. A reasonable proposal.
The problem is that C++ has grown so complex that even the creator
and principal force behind C++ was unable to make that change within
a time frame of years.
A change in the wrong direction, then, is one that adds more complexity
than is required, without any significant simplification gained in
return for the effort of introducing the feature.
> One obvious advantage is that the language can be made more
> expressive. For example, there is no doubt (in my mind, at least)
> that operator overloading would make bignums much more palatable. So
> it's natural to want to adapt the language to incorporate changes
> like this.
>
The advantage of operator overloading is that it is a change that
simplifies the language, allowing it to get rid of "ad hoc" stuff.
C99 introduced complex numbers into the language itself. This was
a mistake since not every C user needs complex numbers as a permanent
feature.
With operator overloading, only the people who need (or want) to use
the feature will have complex numbers. What's more, they can use
different implementations of complex numbers that better suit the
needs of their applications, instead of being forced to use a single
implementation provided by the compiler vendor that is supposed to be
perfect for all usages, which is impossible.
Operator overloading simplifies the writing of counted-string packages
(using the natural '[' ']' syntax already used for strings) instead
of forcing the people who want a different and more advanced string
type to use a horrible syntax that makes porting to the new package
much more difficult.
Operator overloading simplifies the language by adding a framework
where the users themselves can add new numeric types to the application
without requiring a language change for each type. Features like
complex numbers then, can be eliminated, making the resulting language
smaller.
> But there are disadvantages, too. If every proposal for change were
> accepted, we'd soon have a complete mess, a self-contradictory
> language with no consistency in its constructs or syntax. So, even if
> we're in favour of language change, we have to be selective if we're
> going to prevent a descent into chaos.
>
When changing things you must strike a delicate balance between the need
to preserve the language as it is, and the need to change it, to adapt
it to a new environment.
The core features of C (speed of generated code, simplicity, and
transparency) should be maintained at all costs. Operator overloading
makes the language less transparent since it is not obvious when you
read
a=b*c;
whether you are multiplying ints or doubles, as it is today. As long
as the user of the new feature uses it in the intended manner (for
numeric data), everything will work. That is why, in my implementation
of operator overloading, I explicitly did NOT overload addition to
"add" strings.
Normally, a + b <==> b + a: addition is commutative. With strings,
"abc" + "def" != "def" + "abc"
and this is not acceptable. Besides, concatenating strings is not
addition. It is just that: concatenation, and the reader of the program
understands immediately what is going on when he sees:
strcat(buf,"abc");
> And even if we /are/ selective, a high rate of change makes it
> difficult for implementors to keep up. Even the comparatively modest
> changes introduced by C99 have yet to filter through, in their
> entirety, to the real world, ten years after the changes were
> introduced to the Standard.
C99 did not bring any real change to the core problems of the C
language.
It did NOT address:
(1) The problem of a completely obsolete standard library with functions
like gets() or asctime() still there.
(2) The problem of the absence of a counted string data type.
(3) The problem of the lack of a standard way of using containers like
lists, hash tables, or similar.
There wasn't any INCENTIVE for the compiler writers to implement the
changes because many of them (complex numbers, syntactic innovations
or similar stuff) didn't bring any substantial improvement.
The problem was not that the changes were difficult to implement but
that they did not bring anything really new that customers were
really asking for.
> The inevitable result will be
> fragmentation, with implementors picking and choosing the changes
> they want to include. Portability suffers enormously, unless you
> stick to the lowest common denominator (if it exists and is
> sufficiently powerful, which - fortunately - is currently the case
> for C).
>
Mr Heathfield's pet subject: C89. Yes, that is the common denominator,
and many people are unable to see beyond it and realize that, decades
later, you just can't go on as if nothing has happened!
> So it's a trade-off. Every feature you introduce, every change you
> make, has a cost as well as (presumably) a benefit.
>
> Personally, I am of the opinion that anyone who wants to make major
> changes to the language would serve the world better either by
> switching to an existing language that has the desired features or,
> if that isn't possible, by designing a new language, and giving it a
> different name.
>
This conservative conclusion is the opinion of some of the people in
this discussion group.
Obviously it leads to the complete destruction of C as a living
language: C ceases to evolve, and it comes to be considered by most
people a dead language.
I am completely and fundamentally opposed to that point of view. I
think C is a better language than C++ and many others precisely
because of its simplicity, because of the speed of the generated code,
and because you can see what is going on in your program and within
your software.
But obviously the C++ people will not believe that, and the fossilized
group of programmers clustered around Heathfield will not believe it
either.
What to do?
I do not know.