Boost process and C


Ian Collins

jacob said:
Because everyone agrees that C is dead and should NOT be developed any
further. It should be left for embedded systems with a small RAM footprint
where C++ can never run.
C isn't dead, it's mature, there is a difference.
Still, C has great potential for growth with some minor additions like
operator overloading, something that is accepted by more conservative
languages such as Fortran.
If you want overloading, use C++.
This small change would make it possible to write good string libraries,
good numerical libraries, etc.

Another feature is function overloading, which could allow a
limited amount of generic programming.
Same here, these features exist elsewhere, if you want them, go there.
And that is all. Small but essential changes that would make C a very
good language without losing its simplicity, which is its greatest
asset. The problem of C++'s complexity is well known. C with those minor
modifications would be a very useful language.
It already exists, but it isn't called C.
 

Michael Mair

Chris said:
Michael Mair said:
As C99 is largely ignored[*] in the embedded community ...
[*] This is my perception and may be wrong.

One data point: Wind River (which sells in the "embedded" market)
is moving towards full C99 support, however slowly. It is at least
a "checkbox item", if not one of the high priority ones.

I am looking forward to the discussions at my workplace ;-)
("You see? _They_ started it. Now we should think about moving
in this direction...")
Is there any date by which Wind River wants to have arrived at
full C99 support?

Best regards
Michael
 

Ian Collins

jacob said:
Ian Collins wrote:

Why?
Because it offers what you are looking for.
Why should I swallow that big fat language?

I just want a few specific features that are part of many programming
languages, from fortran to visual basic...

Operator overloading is a well known technique, no need to swallow
all C++ to get it. Thank you
If you want a chop do you eat the entire pig? Just use the bits you
want and ignore the rest.
Same thing. Why take all that machinery when it is not needed?
The problem with ultra FAT languages like C++ is their incredible
complexity!
Which you don't have to use.
Constructors and destructors?

Who needs them?
Do I detect a rant? Obviously you don't require them, so don't use them.
Object-oriented programming can be nice in *some* situations but why
should it be FORCED onto everyone?

Who said anything about OO? The subject was function and operator
overloading, which is a complexity C can do without, but other languages
offer.
 

jacob navia

Ian Collins wrote:
Because it offers what you are looking for.



If you want a chop do you eat the entire pig? Just use the bits you
want and ignore the rest.

This is not possible. For some operators, C++ decides that they
must be defined as class members, and there you are: you are
forced to define classes, constructors, destructors, copy constructors,
and all the rest.

Besides, there are things that the C++ operator overloading
implementation gets wrong:

1) There is no overloading possible for higher dimensional arrays

array[2][3] is just impossible using overloaded operators.

2) There is no way to distinguish between assignment and reading when
accessing an array. This is especially important when you want to
implement read-only data areas.
 

jacob navia

Keith Thompson wrote:
ANSI pulled together the largest common subset of all the incompatible
C's out there, then made a few tough choices, and in fact with C89
discarded the original K&R prototypes and other nonsense to make it a
practically usable language that compiler implementers could reasonably
support.

[...]

What "original K&R prototypes" are you talking about?

K&R1 didn't have anything called "prototypes". It did have an older
style of function declarations:

int main(argc, argv)
int argc;
char **argv;
{
...
}

but that's still supported in both C90 and C99.

If int was 16 bits and void * was 32 bits, you needed to declare:

extern char *fn();

before you used it; otherwise wrong code would be generated, since an
int-returning function would be assumed.
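A minimal, compilable sketch of the point (the function body and the printed
string are illustrative, not from the thread): with the extern declaration in
scope the compiler knows fn returns a pointer; under pre-C99 rules, removing
it would make the call site assume an int return, which truncates the pointer
on a 16-bit-int / 32-bit-pointer target.

#include <stdio.h>

extern char *fn(void);   /* the declaration in question; without it, C89
                            would assume 'int fn()' at the call site     */

int main(void)
{
    printf("%s\n", fn());
    return 0;
}

char *fn(void)
{
    return "hello";
}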

jacob
 

jacob navia

Ian Collins wrote:
That's because they only make sense in the context of an object, you
have to (for example) add something to something else. There are no
restrictions on function overloading. You could use a simple struct.

Well, that's my point. In C++ I can't avoid classes, constructors,
destructors and the whole complexity, even if I want to use a simple
thing like

typedef struct __int128 { int32_t part1, part2, part3, part4; } INT128;


INT128 operator+(INT128 *a,INT128 *b)
{
....
}

INT128 operator+(INT128 *a,long long b)
{
....
}
Besides, there are things that the C++ operator overloading
implementation gets wrong:

1) There is no overloading possible for higher dimensional arrays

array[2][3] is just impossible using overloaded operators.

Indeed.

It should be possible to do; I mean, higher-dimensional arrays are not
something new...
There is a very simple technique known as proxy objects to solve this
problem, but that's a bit too OT for this forum.

No, it is not OT because that confirms what I said:

You can't just take a small piece. YOU GET THE WHOLE PIG AT EACH SERVING!

:)


jacob
 

Chris Torek

I am looking forward to the discussions at my working place ;-)
("You see? _They_ started it. Now we should think about moving
in this direction...")
Is there any date by which Wind River wants to have arrived at
full C99 support?

Nobody ever tells me that sort of thing. :)

We have two compilers, though: Diab and GCC. GCC supports "whatever
GCC supports"; Diab's C99 is "getting closer but still not all that
close" as far as I know ("restrict" support just went in recently,
for instance). The RTP libraries in 6.x are from Dinkumware and
should be fully C99, to whatever extent the C compilers get C99
right.

(I do not work on either the compilers or the libraries, except to
whatever extent we run into problems with I/O, including 64-bit
file sizes and such.)
 

websnarf

Keith said:
The people in this newsgroup who have imposed their will that this
newsgroup simply not discuss this issue are part of the problem of
course. If there is no clear place where the evolution of C can be
discussed, then it won't be, and C will not evolve.
[...]

There is. It's called comp.std.c.

I and a well-known security expert tried to discuss TR 24731 there. It's
no use. That laughable embarrassment is almost certainly going to be
included in the next C standard. The people in that newsgroup, some of
whom are presumably standards people, are irrationally obstinate about
their positions. I mean, I was "wrong" because I didn't put together a
counter-proposal to TR 24731 and Microsoft did (even if I had, I'm not
sure that's what it takes to *delete* a proposal by a highly funded
contributor ...). So they are "right" because they happened to put
effort (read: money) into it.

I also lurked for a while and saw one of the regulars bite the head off
of a technical suggestion to clarify the description of one of the
functions -- objectively speaking this change is required, since the
current language technically allows for things clearly not intended. I
mean what the hell is that? Is there no concern for technical
excellence? Perhaps they are worried that producing the documentation
to make this change would take too much time (i.e., a cost that nobody
was volunteering to pick up.)

It seems to me that that group is all about weeding and filtering
people out. Perhaps, if I got myself hired at a system vendor and flew
halfway around the world to their meetings, or something like that,
maybe they might listen to me. But I don't even have a *stake* in this
-- *I* can program high performance bignums, multithreading, pool based
self-checking heaps, graphics, device drivers and safe strings without
the standard's help. So they undermine themselves because they cannot
be receptive to my comments, because I am unwilling to jump through
their hoops to make them listen (just posting doesn't count.)
 

Chris Hills

Michael Mair said:
I am aware of the difference between what I called "Embedded C" --
BTW: You can find exactly this TR

I have the TR and some of the drafts.
and articles about it by searching
for "Embedded C" -- and C on embedded systems. I called the thing
by the name by which it was introduced to me (via CUJ, Embedded
Systems and friends).



Don't I know it...
Especially Japanese customers ask for MISRA compliance even in
generated code -- and, ideally, always all versions at once.

There are only 2 versions. You can not comply with both at once.
 

jacob navia

Bill Pursell wrote:
jacob navia wrote:



Operator overloading is, IMHO, a really, really bad idea. I've only
been coding C for just under a year, and 11 months ago I was really
bent out of shape that I couldn't write:
struct foo A,B,C;
A = B + C;
but I'm really glad now that I can't, and I would hate to see operator
overloading be expanded in C. Operator overloading in C is the root
cause of a very large number of bugs already. How many bugs are a
result of "3+4" being different that "3.0 + 4.0"?

Sorry, but 3+4 is identical to 3.0+4.0: SEVEN in BOTH cases, only the
representation of the number changes.

Maybe you meant 3/4 and 3.0/4.0 ???
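To make the distinction concrete, here is a trivial, self-contained example
(not from the original post):

#include <stdio.h>

int main(void)
{
    printf("%d\n", 3 / 4);       /* 0    : integer division truncates  */
    printf("%g\n", 3.0 / 4.0);   /* 0.75 : floating-point division     */
    printf("%d\n", 3 + 4);       /* 7    : same value as...            */
    printf("%g\n", 3.0 + 4.0);   /* 7    : ...the floating-point sum   */
    return 0;
}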
Those bugs would
have been avoided had the programmer been required to type
"int_add(3,4)" or "float_add(3,4)".

Aaaahhh what an easy syntax...

What would you do with

z = sqrt((a+b)/(b-d) + 0.5*d/(a*d-b*d));

???????

Yes, you have to COMPILE IT by hand, producing this incredible GIBBERISH!

tmp1=float_add(a,b);
tmp2=float_sub(b,d);
tmp3=float_div(tmp1,tmp2);
tmp4=float_mul(0.5,d);
tmp5=float_mul(a,d);
tmp6=float_mul(b,d);
tmp7=float_sub(tmp5,tmp6);
tmp8=float_div(tmp4,tmp7);
tmp9=float_add(tmp3,tmp8);
z=sqrt(tmp9);


IF you manage to compile that expression without making ANY MISTAKE.
That is MUCH MORE ERROR PRONE THAN

z = sqrt((a+b)/(b-d) + 0.5*d/(a*d-b*d));


Besides there isn't a single maintenance programmer that can understand
that GIBBERISH without decompiling it (by hand) into
z = sqrt((a+b)/(b-d) + 0.5*d/(a*d-b*d))
again!

And if you change the type of the data from float to long double you
have to edit all those function calls!

Now, I'm not arguing that the '+'
symbol be dropped for arithmetic on basic numeric types, but expanding
the language to allow '+' as an infix operator on user-defined structs
is just asking for trouble. The only gain is (arguably) cleaner code,
but quite frankly "A = foo_add(B,C)" is more informative than "A =
B+C" and less prone to error.


You are dreaming. LESS PRONE TO ERROR??????????
 

jacob navia

Ian Collins wrote:
No, how would you differentiate between read and write to an array in C?

lcc-win32 proposes the [] operator for reading, and the []= for assignment
 

jacob navia

Bill Pursell wrote:
As I said, I'm not suggesting that the basic arithmetic operators be
removed from the language, and this is an excellent example of their
utility. Can you come up with a similar example that doesn't rely on
fundamental types? I've never seen an object in any language that was
prone to this type of calculation, and certainly never seen a structure
in a C program to which this would apply. Any such calculation should
be performed by a function anyway, so rather than forcing the
maintenance programmer to parse:
z = sqrt((a+b)/(b-d) + 0.5*d/(a*d-b*d));
that same programmer would have to parse:
z = determinant( A);
or the like.

lcc-win32 offers qfloats, 350-bit floating point. Those numbers are easy
to use. To change your precision from double (64 bits) to qfloat (350
bits) you just #define double qfloat and do not have to change anything
in the code. That's the point of all this stuff. Numerical formulae and
computations remain the same even if the precision of the computations
goes up.

Besides, making a function for every moderately complex statement is
overkill. The equation above is just one line, but in any serious
mathematical software there are literally hundreds of them.

Of course you can misuse operator overloading, as you can misuse anything.

For instance, the famous overloading of the "+" operator for character
strings that C++ promotes is WRONG (this is not "shouting", it is just
emphasis!)

Why?

Because + is commutative in maths: a+b is equal to b+a, and that is not
the case with a+b when a and b are strings:

"Hello" + " World" is different from
"World" + "Hello"

In general, operator overloading is good with numbers because it allows
people to create their own types of numbers.
Which would be implemented as an appropriately named function.

But that function would be a lot of GIBBERISH and completely unmaintainable.

The gibberish would be encapsulated but it would be a pain to maintain
anyway!
 

extrudedaluminiu

One of the very best things that the original ANSI-ification of C did
was to write a standard that described a language that was in use, as
opposed to legislating a language from on high. That process should be
a model for all standardization attempts, everywhere - too many other
standardization processes attempt to go and invent something completely
new, which leads to standards that don't parallel reality.

Now, where did those features and ideas that came into common use after
K&R's first book was published come from? It seems that they just
popped up in compilers, everybody started using them, and so they made
sense to standardize.

These days, what groups are testing/working on new features or
extensions to the language? Comp.std.c is a great place for discussing
the current C language standards, but the C tradition seems to have new
features coming from places other than high committees.

I don't know what features would be great to have in the C of 2010 -
maybe a more powerful strings toolkit, maybe a collection of data
structures, maybe even quadragraphs. Who knows? But what is important is
that someone is considering these things and asking these questions.
The Boost group seems to be doing that for C++; who is doing it here?
 

Andrew Poelstra

One of the very best things that the original ANSI-ification of C did
was to write a standard that described a language that was in use, as
opposed to legislating a language from on high. That process should be
a model for all standardization attempts, everywhere - too many other
standardization processes attempt to go and invent something completely
new, which leads to standards that don't parallel reality.

Now, where did those features and ideas that came into common use after
K&R's first book was published come from? It seems that they just
popped up in compilers, everybody started using them, and so they made
sense to standardize.

These days, what groups are testing/working on new features or
extensions to the language? Comp.std.c is a great place for discussing
the current C language standards, but the C tradition seems to have new
features coming from places other than high committees.

I don't know what features would be great to have in the C of 2010 -
maybe a more powerful strings toolkit, maybe a collection of data
structures, maybe even quadragraphs. Who knows? But what is important is
that someone is considering these things and asking these questions.
The Boost group seems to be doing that for C++; who is doing it here?
Quite frankly, a lot of us are happy with the C of 1980. In terms of
forward-looking features, C++ seems the place to go, since C itself will
(should) never use the object-oriented paradigm.

Try posting some ideas in comp.std.c; see for yourself what reaction
you'll get.
 

jacob navia

Ben C wrote:
Some of the side-effects are less agreeable though.

You end up needing something like C++ references, in order to provide a
way of overloading the operators in expressions like a = b = c.

In expressions like a = b + c, where a, b and c are all large instances,
you really want to copy the result of b + c directly into a. What you
end up with is operator+(b, c) returning an instance, and then
operator=(a, b) copying the instance into a. Compilers have clever ways
to work around this, while still having to call the constructor for the
temporary b + c and a's assignment operator (or copy ctor if it's an
initialization), as if it were doing add followed by copy, since those
could have arbitrary side effects.


Since there are neither constructors nor copy constructors in C, the
optimization is much simpler than in C++.

True, we would need references, which would be a good improvement anyway.
Whether this nightmare is acceptable or not is a matter of opinion-- but
it strikes me as a whole new class of nightmare that C never had to deal
with before. Like anything, C has its strengths and weaknesses and one of
the strengths has always been the relative lack of nasty surprises.

No surprise here either, if you do not use this feature. Unlike in C++,
all this is completely optional. The behavior of old programs, and of
programs written without using these features, is not affected.
Another thing about C is the fact that in general the cost of every
operation is fairly intuitively obvious in what you type. If you type an
expression with operators, you will get a bit of machine arithmetic. If
you need to call a function, you will see the function call. If you copy
a lot of data (rather than just one or two words) you will have to call
memcpy or friends (with the exception of struct instance assignment).
You lose this with operator overloading.

Sorry, but on a 32-bit machine:
unsigned long long a,b,c;

...

a = b/c;

is very likely to invoke a function call...

Many operations are implemented as function calls behind the scenes,
especially in 16-bit compilers that need to implement 64-bit types.

Builtin types and user-defined types are very different things in C, and
these are sensible lines along which to design a machine-oriented
language that runs on machines which have memory for data, and registers
and ALUs for operations on builtin types.

Truncated (saturating) addition has been present in many CPUs for years,
but since C has no operator overloading nobody can express it in C.

In truncated addition

unsigned char a=200, b=200, c;

c = a+b;

c is now 255. The highest value acceptable. No wrap around.

This has been available on ALL PCs for years with the MMX instruction
set. But nobody can use it because, since operators can't be
overloaded, it would need

c = truncated_addition(a,b);
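For what it's worth, here is a plain-C sketch of such a helper (the names
truncated_addition above and saturated_add below are illustrative; MMX would
do the clamping in a single instruction):

#include <limits.h>
#include <stdio.h>

/* saturating ("truncated") addition of two unsigned chars:
   the sum is clamped to UCHAR_MAX instead of wrapping around */
static unsigned char saturated_add(unsigned char a, unsigned char b)
{
    unsigned int sum = (unsigned int)a + (unsigned int)b;
    return (unsigned char)(sum > UCHAR_MAX ? UCHAR_MAX : sum);
}

int main(void)
{
    printf("%d\n", saturated_add(200, 200));   /* prints 255 */
    return 0;
}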
In "higher-level" languages
which are further abstracted from the implementation, it's attractive to
remove this distinction-- Python for example achieves this well. But I'm
not convinced of the wisdom of the hybrid, C with operator overloading.

I am certain that the conservative option just puts a brake on the
development of the language.
 

extrudedaluminiu

The greatest strength of C I've seen is not the fact that it can run on
small machines (even though this is nice). The greatest strength is how
simply and cleanly we can construct arbitrary data structures in C. It
is very often clear how to construct whatever structure is desired, be
it a simple linked list or something like VLists. This is because there
are no artificial constraints on how you can use/abuse memory.
In terms of forward-looking features, C++ seems the place to go, since C itself will (should) never use the object-oriented paradigm.

Not all forward-looking features are object-oriented. Take for example
gcc's nested functions or Java's for-each loops. Or glib's set of data
structures. And there are no reasons why C89 can't be used to write
clean OO code, should you want to write OO code.
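(As an aside, a minimal sketch of what "OO in plain C89" can look like; the
Shape/Rect names and the vtable-style layout are purely illustrative, not
something proposed in the thread:)

#include <stdio.h>

typedef struct Shape Shape;
struct Shape {
    double (*area)(const Shape *self);   /* "virtual" method slot */
};

typedef struct {
    Shape base;                          /* "inheritance" by embedding */
    double w, h;
} Rect;

static double rect_area(const Shape *self)
{
    const Rect *r = (const Rect *)self;
    return r->w * r->h;
}

int main(void)
{
    Rect r = { { rect_area }, 3.0, 4.0 };
    Shape *s = (Shape *)&r;              /* use through the base "interface" */
    printf("area = %g\n", s->area(s));
    return 0;
}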

Stuff I'd like to see? I'd love to have a small set of portable data
structures - cons cells, lists, arrays. I'd also love to have Java's
for-each loop and Lisp's defstruct. But I'm hardly qualified to know
what is worth suggesting.

What I don't think should be the case is that the future of C be
dictated by only what extensions compilers come up with. Instead I
think that it'd be nice to have a high-quality set of libraries (a la
Boost), developed by a group for the purpose of asking questions about
the future of C, that offer innovative features. Maybe 99% of these new
features fail the test of general approval. That 1% that makes it
through would be worth it.
 

Ian Collins

The greatest strength of C I've seen is not the fact that it can run on
small machines (even though this is nice). The greatest strength is how
simply and cleanly we can construct arbitrary data structures in C. It
is very often clear how to construct whatever structure is desired, be
it a simple linked list or something like VLists. This is because there
are no artificial constraints on how you can use/abuse memory.
I agree, a bit of therapeutic C is a good thing after a solid week of
scripting language programming :)
Not all forward-looking features are object-oriented. Take for example
gcc's nested functions or Java's for-each loops. Or glib's set of data
structures. And there are no reasons why C89 can't be used to write
clean OO code, should you want to write OO code.
Conversely, not all C++ features are OO. The standard library is
anything but. So one can write clean procedural code in C++.
Stuff I'd like to see? I'd love to have a small set of portable data
structures - cons cells, lists, arrays. I'd also love to have Java's
for-each loop and Lisp's defstruct. But I'm hardly qualified to know
what is worth suggesting.
There are two subsets of wishes there, standard library extensions and
language extensions.
What I don't think should be the case is that the future of C be
dictated by only what extensions compilers come up with. Instead I
think that it'd be nice to have a high-quality set of libraries (a la
Boost), developed by a group for the purpose of asking questions about
the future of C, that offer innovative features. Maybe 99% of these new
features fail the test of general approval. That 1% that makes it
through would be worth it.
Again, consider the distinction between libraries and language features.
Boost is dedicated to the former, using the standard language as its base.

I think C would benefit from an active library group which, like Boost,
had members of the standard's library subcommittee as active
participants.

C is probably one of the only mainstream programming languages that
lacks an evolving library. Maybe C programmers just enjoy reinventing
wheels; if you don't believe me, look back through this group at all the
linked-list problem posts.
 

jacob navia

(e-mail address removed) wrote:
About libclc - how is that doing? It seems to have just dried up
without notice.

If a C programmer is writing a reasonably large program these days,
what are the first libraries that are commonly used for collections of
data structures, better strings, etc?

Nowhere. Since the C standard committee refuses to improve the language,
there are a lot of libraries, but all of them are incompatible.

Basically, the opinion here is that data structures are too much of an
intellectual effort for C programmers... :)
 

Rouben Rostamian

I think C would benefit from an active library group which, like Boost,
had members of the standard's library subcommittee as active
participants.

C is probably one of the only mainstream programming languages that
lacks an evolving library. Maybe C programmers just enjoy reinventing
wheels; if you don't believe me, look back through this group at all the
linked-list problem posts.

An attempt toward creating a set of library tools was made
by some participants of this newsgroup a few years ago but
it fell by the wayside. See:

http://libclc.sourceforge.net/

The failure of libclc to take root is not surprising.
Minimalism has been a characteristic of C from the very
beginning. Practitioners of C tend to like that aspect of the
language; otherwise they would be programming in something else.
It's no wonder that attempts to burden the language with
additional constructs/libraries/extensions have met with
resistance. The lukewarm reception of C99 is symptomatic of
that view.
 

jacob navia

Bill Pursell wrote:
If the 350-bit floating-point type is a data structure that is known to
the compiler, it seems that it's not really a user-defined type.

No, it is a user-defined type, built precisely using operator overloading.

The compiler accepts it in two special ways, though:

1) Constants:

qfloat pi = 3.14159265358979323846264338327950288419716939Q

I have not had the time to define this as a user defined function.

It could be done if I implemented something like

#pragma number_suffix(qfloat,'Q',asctoqfloat)

i.e. if I allowed the user to define an association between a numeric
suffix (in this case 'Q') and a function (which would have to be
implemented in a DLL or in some way accessible to the compiler at
compile time). I am planning to do this this summer, when I will review
the whole implementation of operator overloading in lcc-win32.
I think my concern is that I've seen a lot of horribly written C++
code, and slightly less bad C, and it seems that allowing operator
overloading will just increase the amount of bad C code that's out
there.

Yes, the danger is clear, it can be misused. But *anything* can be misused.
 
