Change in C

Flash Gordon

jacob said:
cognacc wrote:

There are two problems with that library:

(1) The more serious one:
http://library.gnome.org/devel/glib/2.20/glib-Memory-Allocation.html
<quote>
Memory allocation functions.
---------------------------
Description
These functions provide support for allocating and freeing memory.
Note
If any call to allocate memory fails, the application is terminated.
This also means that there is no need to check if the call succeeded.
<end quote>

This is completely unacceptable and actually it is a horrible idea.
This precludes any use of the whole library since all functions
that *potentially* allocate memory are tainted.

I agree, and from previous discussions here I believe a lot of others
here do as well.
(2) Another problem (or maybe not) is:
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.

I do not know if linking with gtk forces you to disclose your
source code...

Try reading the license. In any case, I'm sure that I've pointed out in
the past that it does NOT mean you have to disclose your source code;
you can keep your application closed source and sell it. Many companies
do this.
In any case, the GList function interface is very similar to
what I propose, or to many other list packages around.

Well, I've seen servers hitting 100% CPU and *also* hammering the disk
interface, and in code like that we don't want to add more indirections
than we need. This is code running on high-spec servers, not embedded
systems, so 8GB RAM minimum, high-performance CPUs, etc.
 
websnarf

It brought a number of changes that were directly related to core
problems which were significant to implementors.

For instance, restrict pointers,

This is just so some implementors could claim their code was compliant
and still win some esoteric benchmark. It also allowed implementors
to throw the responsibility of aliasing issues to the developers
rather than the system vendor.
[...] mixed declarations and code,

That is so retarded. This *ONLY* applies to C++ code. If I see mixed
code and declarations, I always assume it's either wrong or it's C++,
with the expected differences in semantics. Mixing declarations
and code in C never serves any useful purpose.
[...] and variable-length arrays;

In which you broke compatibility with gcc, which is the most widely
used compiler that implements variable-length arrays. Good job.
[...] all of these address real-world problems
which made life harder for compiler writers and coders looking at
high-performance systems.

Which explains why oh so many people were clamoring for these features,
as can be evidenced by the massive volume of technical articles and
books describing them and the need for the transition and ... oh
wait. None of that happened. Most real-world developers don't even
have a clue about C99 or what its substantive differences are, nor could
they imagine why they would need or want its features.

In the exact same period of time, security (the various viruses, worms,
buffer overflow exploits, etc.), portability (as evidenced by Java,
HTML, Perl, Samba and other portability solutions), and Unicode (now
that it's clear that China and the rest of the world will adopt
it) were becoming extremely hot topics that system-level programmers and
application developers were really focusing on. Furthermore, there was
a general transition from 32-bit systems to 64-bit systems looming.
Take a look at the C99 standard in light of that.
That problem is not likely to get addressed.  Quite simply, the functions
are there to support existing code, but you don't have to use them.

You don't get out much, do you? Microsoft's latest compiler actually
issues a warning if you use those functions. gcc issues a linker
warning if you use gets(). The implementors are going beyond the
standard, because the issue is too important.
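For anyone following along, the bounded replacement is straightforward (a
minimal sketch):
____________________________________________________________________
#include <stdio.h>
#include <string.h>

/* Unlike gets(), fgets() takes the buffer size and cannot
   overflow; it keeps the trailing '\n' if one fit. */
void
read_line(void)
{
    char line[256];

    if (fgets(line, sizeof line, stdin))
    {
        line[strcspn(line, "\n")] = '\0'; /* strip newline, if any */
        /* ... use line ... */
    }
}
____________________________________________________________________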
The library DID add several needed functions (snprintf among them) which
make it more possible or practical to write safe and efficient code for
a number of real-world circumstances.

None of the printf class of functions should ever be put in the same
sentence as the word "efficient". If you want efficiency, you need to
do compile-time or code-generation-time expansion of such code
directly. If you put printf-like semantics in the *PRE-PROCESSOR*,
then we could discuss efficiency.
I solved that problem once.  It took me maybe a month to do a library of
self-managing string-like objects, and it's worked fine ever since.

Yeah, I've seen it. You use memory patterns in the contents to detect
the countedness. It's like you don't even understand why \0
termination is such a bad idea in the first place.
[...] I'm not sure the language needs it, though I wouldn't object.

Well, technically it doesn't. The Better String Library is a
complete implementation which is portable to any C or C++ compiler I
have encountered. Just download it and use it, and stop worrying
about it: http://bstring.sf.net/ . The important point is to stop
using the old functions.
I don't think this is a problem to be solved at the language level,
although I could be persuaded otherwise if someone had a good demo.

[Comment withheld]
Only they did.  Who do you think proposed those?

*Particular* vendors with an agenda. Not surprisingly, gcc was slow
with the uptake. Does the standards committee even take feedback from
gcc?
Uh.

On the one hand, we have representatives paid for and sent by Sun, SGI,
and IBM.  On the other hand, we have you.  The people from the large companies
with active compiler development groups and hundreds or thousands of
customers said they needed to provide these features for their customers,
and they wanted standardized forms so customers could use these features
without freaking out.

Lol! Do you buy all the products you see in all the TV commercials
too?

Those vendor representatives are paid to *RIG* the language so that
they can win a benchmark against their competitors, and that is all.
The "customers" they are telling you about are selected as those
who happen to be the authors of said benchmarks, which helps to
highlight the performance of these particular systems.

Do you have people on your committee that are from Metrowerks,
Borland, WATCOM or the Free Software Foundation? No, they are from
IBM, Intel and Sun. While both classes of entities make compilers,
one class has a blatantly obvious conflict of interest (i.e., they are
willing to harm the language if it harms their competitors more than
it harms themselves). And guess which class you are taking advice from?
Honestly, if it didn't evolve, it would still remain useful for quite a
while yet.  Most of what I do can still be done reasonably in C89.

That's an understatement. In fact, there is nothing you can do in
C99 that you couldn't do in C89.

The C99 committee didn't even try to make the language better. There
are obvious candidates, like a ranged memory expansion (like realloc,
but resizing to a size in a given range, to improve the chances of a
non-moving realloc) and a widening multiply (multiply two 32-bit inputs
to get a 64-bit result, or two 64-bit inputs to get a 128-bit result,
etc.), which you didn't bother to consider (and which would have truly
made the language more powerful). No coroutines (that would have been
nice -- alas, I see them show up in languages like Lua and Python,
first). No attempt at type safety (see gcc) or lambda capabilities in
the pre-processor.
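To spell out what a widening multiply means (a sketch in C99's own
<stdint.h> terms, where it must be emulated by casting up; the
`realloc_in_range' prototype below is hypothetical, not a real function):
____________________________________________________________________
#include <stdint.h>
#include <stddef.h>

/* 32x32 -> 64 widening multiply: the casts promote the operands
   before the multiply, so the high 32 bits of the product survive. */
static uint64_t
mul_wide_u32(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}

/* Hypothetical ranged reallocation: resize p to any size in
   [min_size, max_size], preferring an in-place (non-moving)
   resize; *got would receive the size actually obtained. */
void* realloc_in_range(void* p, size_t min_size, size_t max_size,
                       size_t* got);
____________________________________________________________________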
 
Chris M. Thomasson

Seebs said:
From a design standpoint: What if anything goes in "data" to make it
part of the list? Does the list container maintain a separate linked list
of items with pointers to its data elements? If so, consider the malloc
overhead; you're now adding an additional malloc overhead to each item
in the list, by having a list element in the container. The indirections
are expensive. How do I get from the data element to the container, or
can the same data element be in several containers?

From a practical standpoint: There's no way I'd use this. I am familiar
with code using things like the list operations in the BSD kernel, which
are carefully structured to allow extremely efficient list operations.
Compared to them, for small data elements (the normal case, as it happens,
for most of what they're working on), your implementation looks like it
has a minimum cost of 20-30% or so for data, and probably closer to 50%
for execution time.

I am quite fond of the method used in the Linux kernel. I will do a simple
stack using the technique:

< pseudo-code typed in news reader; please forgive any typos ;^o >
____________________________________________________________________
#include <stddef.h> /* for offsetof */

/* Recover a pointer to the enclosing structure from a pointer
   to one of its members. */
#define CONTAINS(mp_this, mp_struct, mp_member) ((mp_struct*)( \
  ((unsigned char*)(mp_this)) - offsetof(mp_struct, mp_member) \
))


struct slist
{
    struct slist* next;
};


#define SLIST_INIT { NULL }


void
slist_init(struct slist* const self)
{
    self->next = NULL;
}


void
slist_push(struct slist* const self,
           struct slist* node)
{
    node->next = self->next;
    self->next = node;
}


struct slist*
slist_pop(struct slist* const self)
{
    struct slist* node = self->next;

    if (node)
    {
        self->next = node->next;
    }

    return node;
}
____________________________________________________________________




You can even create "generic" functions to operate on the stack; example:
____________________________________________________________________
void
slist_reverse(struct slist* const self)
{
    struct slist* n;
    struct slist rslist = SLIST_INIT;

    while ((n = slist_pop(self)))
    {
        slist_push(&rslist, n);
    }

    self->next = rslist.next;
}
____________________________________________________________________




This `slist_reverse()' procedure will work no matter what type of data the
user stores in the stack. You can use it like this:
____________________________________________________________________
#include <stdlib.h> /* for malloc and free */


static struct slist g_slist = SLIST_INIT;


struct data
{
    int blah;
    struct slist node;
    char blah2; /* renamed: two members cannot both be named `blah' */
};


void
foo(void)
{
    size_t i;
    struct data* d;
    struct slist* n;

    for (i = 0; i < 10; ++i)
    {
        if ((d = malloc(sizeof(*d))))
        {
            slist_push(&g_slist, &d->node);
        }
    }

    slist_reverse(&g_slist);

    while ((n = slist_pop(&g_slist)))
    {
        d = CONTAINS(n, struct data, node);

        free(d);
    }
}
____________________________________________________________________




Pretty straightforward and fairly "efficient", right?

;^)
 
Chris M. Thomasson

[...]
You can use it like this:
____________________________________________________________________ [...]
____________________________________________________________________




Pretty straightforward and fairly "efficient", right?

;^)


You can even store the same data in more than one stack; something like
this:
____________________________________________________________________
#include <assert.h>


static struct slist g_slist[2] = { SLIST_INIT, SLIST_INIT };


struct data
{
    int a;
    struct slist node1;
    char b;
    struct slist node2;
    double c;
};


void
foo(void)
{
    struct slist* n;
    struct data d;

    slist_push(&g_slist[0], &d.node1);
    slist_push(&g_slist[1], &d.node2);

    n = slist_pop(&g_slist[1]);
    assert(CONTAINS(n, struct data, node2) == &d);

    n = slist_pop(&g_slist[0]);
    assert(CONTAINS(n, struct data, node1) == &d);
}
____________________________________________________________________




This is an intrusive container, which means there are some caveats. The
object size increases for each stack the object must be stored in
simultaneously: if objectA needs to be stored in N stacks, it needs at
least N embedded `struct slist' objects. It sounds like Jacob is writing
about non-intrusive stacks, which are more flexible but have their own
tradeoffs. Which one is better? Well, it depends on what you're going to
use it for...
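For contrast, a minimal non-intrusive node might look something like this
(my sketch, not Jacob's actual interface): the container allocates a node
per element, so the element type needs no embedded links, at the cost of
an extra pointer and allocation per item:
____________________________________________________________________
/* Non-intrusive singly-linked stack node: the container owns the
   node; the user owns whatever `data' points at. Sketch only. */
struct nslist_node
{
    struct nslist_node* next;
    void* data;
};
____________________________________________________________________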
 
Seebs

This is just so some implementors could claim their code was compliant
and still win some esoteric benchmark. It also allowed implementors
to throw the responsibility of aliasing issues to the developers
rather than the system vendor.

I don't buy this analysis. Do you have some kind of basis for it?
The impression I got was that, through the magic of separate compilation,
it could be just plain impossible to determine while compiling a given
module whether or not aliasing of arguments was possible, but that, if you
had the qualifier, you could then optimize some code better.
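For instance (a toy example of my own): with restrict, a compiler
compiling this translation unit in isolation may assume dst and src never
overlap, so it can vectorize or keep values in registers without reloading:
____________________________________________________________________
#include <stddef.h>

/* Without the restrict qualifiers, the compiler must assume a store
   through dst could change what src points at, and so must reload
   src[i] conservatively on every iteration. */
void
scale(float* restrict dst, const float* restrict src,
      float k, size_t n)
{
    size_t i;

    for (i = 0; i < n; ++i)
    {
        dst[i] = src[i] * k;
    }
}
____________________________________________________________________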
That is so retarded. This *ONLY* applies to C++ code. If I see mixed
code and declarations, I always assume it's either wrong or it's C++,
with the expected differences in semantics. Mixing declarations
and code in C never serves any useful purpose.

Not true:

* VLAs actually care.
* Other stuff could actually care, although it probably usually doesn't.

(And actually, the for-loop-declarator was done pretty much at the same
time, and for the same reasons.)

So, while you may not think the benefit is particularly *significant* outside
of C++, it's still useful to some developers.
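A concrete case where the ordering matters (my example, not anything from
the standard): a VLA's size has to be validated and computed before the
array can be declared, which mixed declarations make natural:
____________________________________________________________________
#include <stdio.h>

void
demo(int n)
{
    if (n <= 0 || n > 1024)
        return;                  /* validate first... */

    int buf[n];                  /* ...then declare the VLA (C99) */

    for (int i = 0; i < n; ++i)  /* for-loop declarator, also C99 */
        buf[i] = i;

    printf("%d\n", buf[n - 1]);
}
____________________________________________________________________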
[...] and variable-length arrays;
In which you broke compatibility with gcc, which is the most widely
used compiler that implements variable-length arrays. Good job.

Yeah, that was definitely a clusterfsck. Back then, there was some
general distrust of standardization in GNU land. (This was back when
the gcc docs claimed that some exceptionally bogus behavior was
mandated by ANSI, so far as I can tell simply out of spite; it wasn't,
and never had been, mandated by ANSI.) As a result, there were no gcc
implementors active in the standards effort. I do agree that it would
have been better to be more aggressive about researching that, but to be
fair, I believe we did have other implementations to look at... which
had, of course, decided things differently.
Which explains why oh so many people were clamoring for these features,
as can be evidenced by the massive volume of technical articles and
books describing them and the need for the transition and ... oh
wait. None of that happened.

I didn't say these features were demanded by the vast majority of users, only
that there were users who pushed for them.
Most real-world developers don't even
have a clue about C99 or what its substantive differences are, nor could
they imagine why they would need or want its features.

Probably true.
In the exact same period of time, security (the various viruses, worms,
buffer overflow exploits, etc.), portability (as evidenced by Java,
HTML, Perl, Samba and other portability solutions), and Unicode (now
that it's clear that China and the rest of the world will adopt
it) were becoming extremely hot topics that system-level programmers and
application developers were really focusing on. Furthermore, there was
a general transition from 32-bit systems to 64-bit systems looming.
Take a look at the C99 standard in light of that.

Okay. Let's see. We adopted UCNs in identifiers and string literals,
gave reasonable support for UTF-8, -16, and -32, and dramatically improved
the quality of support for character sets other than 7-bit ASCII. C99 not
only added a 64-bit type, but also massively reworked the integral
promotion system to make it scalable and durable in the face of
possibly-larger types, clarified and standardized how to store pointers
in an integer of large enough size if possible, and reserved a suitable
namespace with suitable patterns for expressing concepts such as "at
least 32 bits", "exactly 32 bits", or "fastest thing you have with at
least 16 bits". Security and buffer overruns? snprintf solves a HUGE
chunk of that problem space, if used competently. (VLAs actually help a
lot with some common use cases, too.)

In short, a whole lot of these were issues that got specific attention and
improvements. The Unicode support and better support for larger bit sizes
and systems with a broader range of types both looked like serious work
that addressed those problems, apparently well enough to be of use to
people.

Also, some of the limit-raising has made life a lot easier for people writing
portable code.
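To make that concrete, a small sketch of the vocabulary in question (all
of these names are standard C99, from <stdint.h> and <inttypes.h>):
____________________________________________________________________
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    int32_t       exact = 1; /* exactly 32 bits, where available */
    int_least32_t least = 2; /* smallest type with >= 32 bits    */
    int_fast16_t  fast  = 3; /* fastest type with >= 16 bits     */
    intptr_t      ip    = (intptr_t)(void*)&exact; /* wide enough to
                                                      hold a pointer
                                                      (optional type) */

    printf("%" PRId32 " %" PRIdLEAST32 " %" PRIdFAST16 "\n",
           exact, least, fast);
    (void)ip;
    return 0;
}
____________________________________________________________________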
You don't get out much, do you?
Heh.

Microsoft's latest compiler actually
issues a warning if you use those functions. gcc issues a linker
warning if you use gets(). The implementors are going beyond the
standard, because the issue is too important.

That's fine, and that's what I'd expect. It's a quality-of-implementation
issue. (That said, you're arguably wrong; gcc doesn't issue that warning in
and of itself, that's managed by, or not managed by, the system C library
and additional linker magic.)

You can't take those things out without at the very least deprecating them
first. You can, however, encourage implementors to issue diagnostics.
Say, by deprecating gets, and calling it "obsolescent", warning about it
in future language directions, and so on.
None of the printf class of functions should ever be put in the same
sentence as the word "efficient". If you want efficiency, you need to
do compile-time or code-generation-time expansion of such code
directly. If you put printf-like semantics in the *PRE-PROCESSOR*,
then we could discuss efficiency.

Compare snprintf to what you had to do before it existed. It's substantially
more efficient, even if it's not efficient "enough".
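For instance (my illustration):
____________________________________________________________________
#include <stdio.h>

/* snprintf writes at most sizeof buf bytes, always NUL-terminates,
   and returns the full length it would have needed, so truncation
   is detectable -- none of which sprintf could promise. */
void
log_user(const char* name, int id)
{
    char buf[64];
    int needed = snprintf(buf, sizeof buf, "user=%s id=%d", name, id);

    if (needed >= (int)sizeof buf)
    {
        /* output was truncated; handle as appropriate */
    }

    puts(buf);
}
____________________________________________________________________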
Yeah, I've seen it. You use memory patterns in the contents to detect
the countedness. It's like you don't even understand why \0
termination is such a bad idea in the first place.

It's not a perfect design, certainly, but on the other hand, it does
noticeably reduce the *likelihood* of collisions. (In fact, the
"memory patterns" thing has no impact unless you want to try to take
advantage of the magic implicit conversions; if you don't use those,
it isn't an issue.)
*Particular* vendors with an agenda. Not surprisingly, gcc was slow
with the uptake. Does the standards committee even take feedback from
gcc?

If they offer it, absolutely! At the time, the people working on gcc
weren't that interested -- this was pre-egcs, remember.
Lol! Do you buy all the products you see in all the TV commercials
too?
Nope.

Those vendor representatives are paid to *RIG* the language so that
they can win a benchmark against their competitors, and that is all.

Your cynicism fascinates me, but it does not persuade me. I spent a
while talking to these people and seeing what they thought was important.
I am pretty sure they were not trying to partake in some huge crazy
conspiracy.
Do you have people on your committee that are from Metrowerks,
Borland, WATCOM or the Free Software Foundation? No, they are from
IBM, Intel and Sun.

Anyone can participate. I did it out of my own pocket for about a decade
just because I thought it was fun -- and I got just as much of a vote,
and I got listened to. If someone from the FSF wanted to go, I don't
think anyone would complain at all.
While both classes of entities make compilers,
one class has a blatantly obvious conflict of interest (i.e., they are
willing to harm the language if it harms their competitors more than
it harms themselves).

You say this, but again, I see no evidence. The people I saw participating
were, consistently, devotees of C who really wanted it to succeed.
That's an understatement. In fact, there is nothing you can do in
C99 that you couldn't do in C89.

Well, strictly speaking, there's nothing I can do in any language that I
can't do in any other.

But there's a ton of stuff that works enough better in C99 than C89 that
I've mostly switched to specifying that and using those features.
The C99 committee didn't even try to make the language better. There
are obvious candidates, like a ranged memory expansion (like realloc,
but resizing to a size in a given range, to improve the chances of a
non-moving realloc) and a widening multiply (multiply two 32-bit inputs
to get a 64-bit result, or two 64-bit inputs to get a 128-bit result,
etc.), which you didn't bother to consider (and which would have truly
made the language more powerful).

Fascinating to hear from someone who wasn't there what we did or didn't
consider.
No coroutines (that would have been nice -- alas, I see them
show up in languages like Lua and Python, first).

I've seen them implemented successfully often enough that I'm not sure this
is as fatal as it might seem.
No attempt at type
safety (see gcc) or lambda capabilities in the pre-processor.

There was some discussion of the pre-processor thing, which I personally
liked, but there was a pretty strong consensus that the pre-processor was
bad enough already.

.... Aren't you the guy who made it to Chapter 7 in the IAQ suggesting
"corrections"? (chromatic.com?) If so, I remain stunned; it's not just
the failure to recognize the jokes, it's that in several cases the answers
were technically correct (albeit carefully phrased poorly), and my
correspondent "corrected" them with errors.

I bring this up partially because I still wonder to this day whether the
epic set of corrections was intended as some kind of joke, and also because
it would be relevant to the question of whether I should trust your statements
about other people's intent...

-s
 
Phil Carmody

Richard Heathfield said:
My apologies for not being clearer. I meant any single proposal along
the lines that the OP was suggesting - i.e. fighting to get ISO to
endorse a standard linked list interface would itself be a tough
battle. Then you've got hash tables, BSTs (balanced or not?
red-black/AVL/something else? Just binary? How about quad-trees and
oct-trees?), tries, B+-trees, sparse arrays...

Given the not-necessarily-quick nature of qsort, there is precedent
for providing an abstract interface with no specification of the
internal implementation. The same could happen here. On creating
the structure, you could specify whether you minded/wanted reading
to be a potentially-writing operation, for example. If you had, it
might construct a splay tree instead of an AVL tree, for example.
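Something along these lines, perhaps (a purely hypothetical sketch; every
name here is invented for illustration):
____________________________________________________________________
#include <stddef.h>

/* Opaque ordered map in the spirit of qsort: behavior specified,
   internal structure (AVL, red-black, splay, ...) left entirely to
   the implementation. */
typedef struct cmap cmap;

#define CMAP_READS_MAY_WRITE 0x1u /* lookups may restructure (splay) */

cmap* cmap_create(int (*cmp)(const void*, const void*),
                  unsigned flags);
int   cmap_insert(cmap* m, const void* key, void* value);
void* cmap_find(cmap* m, const void* key);
void  cmap_destroy(cmap* m);
____________________________________________________________________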

Phil
 
Herbert Rosenau

Jacob naiva proves once again that he is a twit.

Software does not change because the people who write it need changes
just for the sake of it. Requirements change. Hardware evolves, people
come and go, and the machines and environments where software lives
change.

Now he will tell us that software must be changed because it must be
changed because ....

I use lots of tools written 25 years ago. I will use the same software,
without any change, for the rest of my life, because it does exactly
what it should do. No change needed.
Only dead things are fixed forever and can't evolve. A computer language
MUST change or it simply dies.

So naiva must buy all his hand tools new every month because he needs
new ones, because the old ones don't work anymore, because they are old.

Yes, some of the software I have in daily use really is old 16-bit
code - but it works well. It did what it was designed for across 5
generations of the OS prior to the current 32-bit one - and it still
works; it fulfills my needs perfectly, just like the day it went GA.

Why should it be replaced? Only because it is old? Only because it was
originally written for 16 bit, even though it now runs very well
natively inside my 32-bit OS? Yes, it runs, because the developers of
my OS decided some versions ago that it should remain natively 100%
compatible with 16-bit code, for portability and to avoid senselessly
rewriting anything useful.

Yes, some tools get replaced multiple times a year because there
are new versions with fewer bugs, resolved security problems and
sometimes extended functionality. So there are 2 different kinds of
software:
- one that is written and tested once - and works absolutely unchanged
until compatible processors no longer exist.
- one that needs to be rewritten or changed daily because security
requirements demand it or because extended functionality makes sense.

But jacob naiva thinks that even a simple knife must be replaced because
it is old, even when it is as good as new, as if built 1 minute ago.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of German eComStation
eComStation 1.2R in German is here!
 
Tech07

jacob said:
In the thread "When will C have an object model?", I found a
message from Mr Heathfield that warrants a deeper discussion.

That thread was (for a change) an interesting one.

Richard Heathfield wrote:

That is blather. Or are you calling me stupid? "You talkin' to me?"

I don't "doubt that" and haven't for umpteen years.

Propaganda!

Nothing beyond your propaganda bomb is relevant. Hello! Propaganda bombing
doesn't work or someone is a rapist or dumb. Are you a rapist or dumb?
 
Tech07

jacob said:
In the thread "When will C have an object model?", I found a
message from Mr Heathfield that warrants a deeper discussion.

That thread was (for a change) an interesting one.

Richard Heathfield wrote:

Noted: you want to masturbate.
 
Tech07

Richard said:
I see no reason why I should want to call you stupid. You seem
perfectly capable of informing people of that fact, all by yourself.

<snip>
Leave me alone.
 
Tech07

Richard said:
I take it you are offering to leave me alone, too?

whoa on that... "gag me with a spoon". If you're raping, I'm all over you.
It's a yes or no "question".
 
Tech07

Richard said:
So you expect from others behaviour which you are not prepared to
observe yourself. Okay, noted.

As for leaving you alone: you have the right to start threads here,
and you have the right to reply to articles posted here. I have the
same rights. But of course I have no particular desire to get on your
case (whereas the reverse does not appear to be true).

<nonsense snipped>

Lighten up, dude.
 
