Change in C



jacob navia

In the thread "When will C have an object model?", I found a
message from Mr Heathfield that warrants a deeper discussion.

That thread was (for a change) an interesting one.

Richard Heathfield wrote:
>
> It is perhaps because software is so malleable that we are so tempted
> to change it. I doubt whether *any* computer language exists that
> *exactly* meets the needs of more than a handful of people, so it is
> natural for people to want to change the language(s) they use to suit
> their needs more closely.
>

Software does not change because the people who write it want change
just for its own sake. Requirements change. Hardware evolves, people
come and go, and the machines and environments where software runs change.

Only dead things are fixed forever and can't evolve. A computer language
MUST change or it simply dies.
> Changing a language can have advantages, but it also has
> disadvantages.
>

Yes. What *I* see as disadvantages of change is when it is change in the
wrong direction. For instance, let's take the example of the C++
language that was originally designed to improve C and change it for the
better.

In a recent paper, the creator of the language acknowledged that after
roughly three years of working on an improvement to C++, it was
impossible for him to deliver the proposed modification.

The concrete details are not important here. Briefly, it was the
introduction of "concepts" as descriptions of template arguments so that
those arguments could be checked. A reasonable proposal.

The problem is that C++ has grown so complex, that even the creator
and principal force behind C++ was unable to do that change in a
time frame of years.

Change in the wrong direction, then, is change that adds more
complexity than is required, without any significant simplification
gained in return for the effort of introducing the feature.
> One obvious advantage is that the language can be made more
> expressive. For example, there is no doubt (in my mind, at least)
> that operator overloading would make bignums much more palatable. So
> it's natural to want to adapt the language to incorporate changes
> like this.
>

The advantage of operator overloading is that it is a change that
simplifies the language, allowing it to get rid of "ad hoc" stuff.
C99 introduced complex numbers into the language itself. This was
a mistake since not every C user needs complex numbers as a permanent
feature.

With operator overloading, only the people who need (or want) the
feature will have complex numbers. What's more, they can use
different implementations of complex numbers that better suit the
needs of their applications, instead of being forced to use a single
implementation, provided by the compiler vendor, that is supposed to be
perfect for all usages, which is impossible.

Operator overloading simplifies the writing of counted-string packages
(using the natural '[' ']' syntax already used for strings) instead
of forcing the people who want a different and more advanced string
type to use a horrible syntax that makes porting to the new package
much more difficult.

Operator overloading simplifies the language by adding a framework
where the users themselves can add new numeric types to the application
without requiring a language change for each type. Features like
complex numbers then, can be eliminated, making the resulting language
smaller.
> But there are disadvantages, too. If every proposal for change were
> accepted, we'd soon have a complete mess, a self-contradictory
> language with no consistency in its constructs or syntax. So, even if
> we're in favour of language change, we have to be selective if we're
> going to prevent a descent into chaos.
>

When changing things you must strike a delicate balance between the need
to preserve the language as it is, and the need to change it, to adapt
it to a new environment.

The core features of C (speed of generated code, simplicity, and
transparency) should be maintained at all costs. Operator overloading
makes the language less transparent, since when you read

a = b * c;

it is no longer obvious whether you are using ints or doubles, as it is
now. As long as the user of the new feature uses it in the intended
manner (for numeric data), everything will work. That is why, in my
implementation of operator overloading, I explicitly did NOT overload
addition to "add" strings.

Normally, a + b <==> b + a. Addition is commutative.

If you use strings
"abc" + "def" != "def" + "abc"
and this is not acceptable. Besides, concatenating strings is not
addition. It is just that: concatenation, and the reader of the program
understands immediately what is going on when he sees:
strcat(buf, "abc");
> And even if we /are/ selective, a high rate of change makes it
> difficult for implementors to keep up. Even the comparatively modest
> changes introduced by C99 have yet to filter through, in their
> entirety, to the real world, ten years after the changes were
> introduced to the Standard.

C99 did not bring any real change to the core problems of the C
language.

It did NOT address:
(1) The problem of a completely obsolete standard library with functions
like gets() or asctime() still there.
(2) The problem of the absence of a counted string data type
(3) The problem of the lack of a standard way of using containers like
lists, hash tables, or similar.

There wasn't any INCENTIVE for the compiler writers to implement the
changes because many of them (complex numbers, syntactic innovations
or similar stuff) didn't bring any substantial improvement.

The problem was not that the changes were difficult to implement but
that they did not bring anything really new that customers were
really asking for.
> The inevitable result will be
> fragmentation, with implementors picking and choosing the changes
> they want to include. Portability suffers enormously, unless you
> stick to the lowest common denominator (if it exists and is
> sufficiently powerful, which - fortunately - is currently the case
> for C).
>

Mr Heathfield's pet subject: C89. Yes, that is the common denominator,
and many people are unable to see beyond it and realize that, 30
years later, you just can't go on as if nothing had happened!
> So it's a trade-off. Every feature you introduce, every change you
> make, has a cost as well as (presumably) a benefit.
>
> Personally, I am of the opinion that anyone who wants to make major
> changes to the language would serve the world better either by
> switching to an existing language that has the desired features or,
> if that isn't possible, by designing a new language, and giving it a
> different name.
>

This conservative conclusion is the opinion of some of the people in
this discussion group.

Obviously this leads to the complete destruction of C as a living
language. It ceases to evolve, and it is considered by most people
as a dead language.

I am completely and fundamentally opposed to that point of view. I
think C is a better language than C++ and many others precisely because
of its simplicity, because of the speed of the generated code, and
because you see what is going on in your program and within your
software.

But obviously the C++ people will not believe that, and the fossilized
group of programmers that are clustered around heathfield will not
believe that either.

What to do?

I do not know.
 


Seebs

C99 did not bring any real change to the core problems of the C
language.

It brought a number of changes that were directly related to core
problems which were significant to implementors.

For instance, restrict pointers, mixed declarations and code, and
variable-length arrays; all of these address real-world problems
which made life harder for compiler writers and coders looking at
high-performance systems.
It did NOT address:
(1) The problem of a completely obsolete standard library with functions
like gets() or asctime() still there.

That problem is not likely to get addressed. Quite simply, the functions
are there to support existing code, but you don't have to use them.

The library DID add several needed functions (snprintf among them) which
make it more possible or practical to write safe and efficient code for
a number of real-world circumstances.
(2) The problem of the absence of a counted string data type

I solved that problem once. It took me maybe a month to do a library of
self-managing string-like objects, and it's worked fine ever since. I'm
not sure the language needs it, though I wouldn't object.
(3) The problem of the lack of a standard way of using containers like
lists, hash tables, or similar.

I don't think this is a problem to be solved at the language level,
although I could be persuaded otherwise if someone had a good demo.
There wasn't any INCENTIVE for the compiler writers to implement the
changes because many of them (complex numbers, syntactic innovations
or similar stuff) didn't bring any substantial improvement.

Only they did. Who do you think proposed those?
The problem was not that the changes were difficult to implement but
that they did not bring anything really new that customers were
really asking for.

Uh.

On the one hand, we have representatives paid for and sent by Sun, SGI,
and IBM. On the other hand, we have you. The people from the large companies
with active compiler development groups and hundreds or thousands of
customers said they needed to provide these features for their customers,
and they wanted standardized forms so customers could use these features
without freaking out.
Obviously this leads to the complete destruction of C as a living
language. It ceases to evolve, and it is considered by most people
as a dead language.

Honestly, if it didn't evolve, it would still remain useful for quite a
while yet. Most of what I do can still be done reasonably in C89.

-s
 

Keith Thompson

jacob navia said:
But obviously the C++ people will not believe that, and the fossilized
group of programmers that are clustered around heathfield will not
believe that either.
[snip]

jacob, is it not possible for you to discuss the language without
personal insults?

"Why not getting positive for a change?"
 

Seebs

jacob, is it not possible for you to discuss the language without
personal insults?

Oh, geeze. I just realized we're in comp.lang.c, not
alt.richard.heathfield.die.die.die. I bet he was making the same
mistake. Easy to understand, eh?

-s
p.s.: I actually like Richard. I'm not sure I agree with him about
a lot of things, but he's polite, informed, and pleasant to interact with.
 

BGB / cr88192

jacob navia said:
In the thread "When will C have an object model?", I found a
message from Mr Heathfield that warrants a deeper discussion.

That thread was (for a change) an interesting one.

Richard Heathfield wrote:

<snip>

The problem is that C++ has grown so complex, that even the creator
and principal force behind C++ was unable to do that change in a
time frame of years.

Change in the wrong direction is, then, a change that complexifies
more than what it is required, without any big simplification to
gain for the effort of introducing a feature.


I would much rather have a simpler modular language.


in this case, the core language can remain fairly well fixed, and fairly
simple, but implementations are free to dump in "extensions".

it would be nice if many of these extensions were themselves specified, but
essentially optional from the POV of the implementation (much like the
situation with 3rd party libraries).


for the core language, I would actually like a "simpler" core than C, such
as:
context independent and unambiguous syntax;
dropping a few older-style syntax conventions;
....

so, an "extension" could be the addition of a C-like pointer and
type-system, but with an implementation free to refuse this (allowing only
opaque types).

another such "extension" could be allowing dynamic types or an object
system.


means could be provided to allow the implementation to cleanly extend the
core syntactic elements, absent having to resort to terrible syntax (for
example, as for many GCC'isms or MSVC'isms).


so, the core language would be simple, and with a few bolted on extensions,
it could transform to be like other languages: C, GLSL, Java, ECMAScript,
....

granted, if done entirely cleanly though, it is unlikely the language could
strictly implement any of them (and, like anything else, the addition of
extensions greatly increases the risk of an inflexible tangled mess).

it would also require specifying the language a lot more at the level of the
parser and compiler internals than is usually done with languages like C,
which if not done well could greatly increase the complexity of an
implementation.


note, the idea would not be here that these extensions would exist "within"
the language (as is the ideal with C++), rather, that the compiler machinery
would be "pluggable" (in a formally defined but likely implementation
dependent manner...).


I don't expect such an idea to generate much interest though.

more something for some of us VM implementors to fantasize about, after
facing the horrors of implementing a compiler framework and attempting to
extend it to handle several different languages.

at nearly every level (from the machine code on up), there are layers upon
layers of tangled mess, little if anything having been cleanly designed
(some things are just ill-designed, others are blatant examples of prior
attempts at over-engineering, ...).

so, the question is how to have clean and flexible designs without adding
unnecessary complexity (or over-engineering things). people have tried, and
all their efforts simply add more to the pre-existing mess (as, like string,
each piece of complexity tangles itself into knots with the other
complexities of other people's overengineering...).


I am not sure if, in the larger sense, these problems are actually
solvable...

we just continue to live in a world of knotted string...

(and, in the end, such a language would likely turn into a bigger mess than
that which it was intended to replace...).


<snip, not much to comment>

I will note that operator overloading is a feature, but it is far from being
a silver bullet...
 

jacob navia

Richard Heathfield wrote:
The problem is not the lack of a good demo, but the existence of many,
many, many, many, many many many good demos. All different. Choose
any one to standardise, and you cheese off everyone else, because
they will have had good reasons for rejecting it. (One man's meat,
and all that.)

<snip>

This is just not true. Yes, there are a lot of libraries, but there is
no need to make them obsolete; on the contrary, the committee should
just define a common interface through which all those libraries could
communicate. Libraries that are already being used could then continue
to be used with no change.

It is the need to rewrite the same list management functions, hash
table management functions, etc., again and again that makes people
leave C as a development language. Yes, you can have a library of your
own, or use some existing library, but since there is no established
interface you will need to make adjustments whenever you have to
interface with code that uses a slightly different convention.

Many standard library functions use the convention:
function(destination,source)
like memcpy, strcpy or similar. This is easy to remember and
improves learning.

lcc-win proposes a (rather primitive, granted) containers library
using a function table approach.

Each container has as its first element a table of functions that
can be used to store the functions that apply to that
container. This allows two things:
1) Naming of the function pointers can be simple words like
Append, Remove, whatever, without impacting the user name
space, since they are just members of a structure.
2) Function pointers can be subclassed, i.e. if some library
function doesn't fill your needs, you can at any time put in
your own, which can do some partial job before, after,
or instead of the normal library function.

If you do not like
list->Insert(list,item,position)
for some reason, you store the existing function pointer
somewhere, do some preprocessing, call the old function pointer, and
do some postprocessing. Or you can replace the library function
entirely.

This approach is very flexible and can be done in standard C. There are
NO language modifications needed at all!
 


Seebs

This is just not true. Yes, there are a lot of libraries, but there is
no need to make them obsolete; on the contrary, the committee should
just define a common interface through which all those libraries could
communicate. Libraries that are already being used could then continue
to be used with no change.

Uh, no.

The problem is that they are designed to solve the problem in
substantially different ways.
Each container has as its first element a table of functions that
can be used to store the functions that apply to that
container. This allows two things:
1) Naming of the function pointers can be simple words like
Append, Remove, whatever, without impacting the user name
space, since they are just members of a structure.
2) Function pointers can be subclassed, i.e. if some library
function doesn't fill your needs, you can at any time put in
your own, which can do some partial job before, after,
or instead of the normal library function.

And

3) Overhead of extra storage (the table) for every single container,
plus overhead of indirect function calls (more expensive on many systems),
plus a huge number of potential problems. Is the table actually stored
in each object? Then it's a huge waste of space. Do we store only a
pointer to the table? Then we have allocation issues with the risk of
dangling pointers or leaked memory, and we have to establish some kind
of pattern for determining whose job it is to allocate the storage.
Oh, and of course, if it's a pointer to the table, you've just added
ANOTHER indirection.
This approach is very flexible and can be done in standard C. There are
NO language modifications needed at all!

So go write it and pitch it to people. My guess is that you'll find that
it's sufficiently inefficient and introduces enough extra complexity that
people will prefer to write and use their own.

In short, not a good candidate for standardization.

-s
 

jacob navia

Seebs wrote:
And

3) Overhead of extra storage (the table) for every single container,

It depends on how you implement the containers. The overhead in
lcc-win's implementation is one pointer per container and one table per
container type.

For a list of, say, 30 elements you have only one pointer and one
table. Since the table is the same for all lists, there is only one
table floating around, so the actual overhead is one pointer per
container, which is practically nothing. For long lists (100 elements)
the actual size of the objects stored will easily overwhelm the size of
the container structure.
plus overhead of indirect function calls (more expensive on many systems),

this is very cheap, maybe 2-3 extra instructions on most machines.
plus a huge number of potential problems.

Sure, there are infinitely many *potential* problems. A meteorite can
strike the programmer while he is using the library function, his
mother can call to remind him to wash the dishes, and he can
get nervous and use the library function incorrectly...

:)
Is the table actually stored
in each object?

No, of course not!

Then it's a huge waste of space.

Do we store only a
pointer to the table?
Yes.

Then we have allocation issues with the risk of
dangling pointers or leaked memory, and we have to establish some kind
of pattern for determining whose job it is to allocate the storage.

There is no allocated memory for the pointer! What are you talking
about? The pointer points to a static global table shared by all
containers of the same type.
Oh, and of course, if it's a pointer to the table, you've just added
ANOTHER indirection.

Yes; that allows the whole table to be replaced easily.
So go write it and pitch it to people.

Then, after having written it, heathfield says that I am spamming :)

My guess is that you'll find that
it's sufficiently inefficient and introduces enough extra complexity that
people will prefer to write and use their own.

Yes, since there is no support for it in the standard library. C++
could get that leverage because they standardised the containers and
provide code for the most common ones. A similar thing could be done in
C, but we are too busy keeping it "as it is", and rewriting and
debugging the same stuff a thousand times!
In short, not a good candidate for standardization.

You just raised some potential problems that aren't actually there.
 

jacob navia

Richard Heathfield wrote:
<software advertisement snipped - if you want to talk about this,
fine, but please do it without spamming>

Sure, sure: this allows you to avoid all the technical points of my
proposal and dismiss it as "spam".

Very easy.
 

Seebs

It depends on how you implement the containers. The overhead in
lcc-win's implementation is one pointer per container and one table per
container type.

Who owns allocation of the table?

When and where do you do the sanity checks (table pointer valid,
entries in table valid)?
For a list of, say, 30 elements you have only one pointer and one
table. Since the table is the same for all lists, there is only one
table floating around, so the actual overhead is one pointer per
container, which is practically nothing. For long lists (100 elements)
the actual size of the objects stored will easily overwhelm the size of
the container structure.

So the table's in the container, not the elements? So that doesn't
work for a lightweight list (no container object, just list members).

And if you look at a lot of the list implementations out there,
there's no "container" separate from list elements. (And think
about it; in practice, it seems you're going to need to have a pointer
back to the container in every element, or you're going to have to
pass the container around along with the elements...)
this is very cheap, maybe 2-3 extra instructions on most machines.

If there's no sanity checks, sure. But then it's pretty risky and
error-prone.
There is no allocated memory for the pointer! What are you talking
about? The pointer points to a static global table shared by all
containers of the same type.

What prevents someone from allocating a table at runtime?

If you don't let people allocate (or modify) the table at runtime, you're
killing flexibility -- the ability to dynamically load a new container
from a library would be awfully useful. If you do, you had better do
sanity checks ALL THE TIME.
Yes; that allows the whole table to be replaced easily.

And adds more cycles.
Then, after having written it, heathfield says that I am spamming :)

If you're advertising it, and some users would have to pay money to use
it, you probably are.
Yes, since there is no support for it in the standard library. C++
could get that leverage because they standardised the containers and
provide code for the most common ones. A similar thing could be done in
C, but we are too busy keeping it "as it is", and rewriting and
debugging the same stuff a thousand times!

Compare what you're looking at to, say, the queue implementations in the
BSD kernel headers.

You're not gonna get anywhere claiming that the performance difference
doesn't matter.
You just raised some potential problems that aren't actually there.

Container items are not necessarily a good implementation at all in C. You've
described something which adds multiple indirections to every operation.
Elements presumably need a pointer back to the container to be used; that
means an extra pointer for every single item, and the pointer back to the
container doesn't necessarily solve everything. There's a number of
questions to look at. Can you figure out the next element given only an
element, or do you need its container, too? If you need the container,
where is the "next element" information stored, in the element or in
the container? Are we going to have the container hold a linked list of
separately allocated items with pointers to their corresponding elements?

Once you add the container, anything you do is going to have pretty
significant costs. You might say "only a few cycles", but... Compared to
a traditional lightweight C implementation of a linked list, suitable
for inlining, you could easily be doubling or tripling the cost of
list operations, and for some lists, you might well be increasing data
storage by 20% or more.

One of the reasons I'm using C is that I get to pick the data structure which
fits the problem requirements most closely. An abstract data type which can
do anything is likely to be too expensive. (If it's not, I'm probably writing
in Ruby anyway.)

-s
 

Chris McDonald

jacob navia said:
Richard Heathfield wrote:
Sure, sure: this allows you to avoid all the technical points of my
proposal and dismiss it as "spam".
Very easy.


I agree; I did not view the technical discussion as spam because I
"subconsciously" replaced

"...in my lcc-win..."
with
"...in a development compiler that I use...".
It's a shame that more readers, possibly writers, can't do the same.
 


jacob navia

Seebs wrote:
Who owns allocation of the table?

The library. For each container type there is a static table of
pointers holding the address of each of the functions for that
container. When you create a container of a certain type (a list,
for instance), the creation function initializes the pointer in the
container to point to this table, which is shared by all containers of
the same type. There is NO allocation.
When and where do you do the sanity checks (table pointer valid,
entries in table valid)?

The table pointer should be valid at all times, and if you replace
the table with one of your own, you have to provide replacement
functions with prototypes equivalent to the functions replaced.
So the table's in the container, not the elements? So that doesn't
work for a lightweight list (no container object, just list members).

Exactly. If you want no overhead, do not use this. The list container
contains a cursor, a pointer to the last element, and a pointer to the
start of the list. This allows fast access at the expense of a few
pointers. All of this can be made optional if not needed.
And if you look at a lot of the list implementations out there,
there's no "container" separate from list elements. (And think
about it; in practice, it seems you're going to need to have a pointer
back to the container in every element, or you're going to have to
pass the container around along with the elements...)

You pass the container, not the elements for most functions. You can
iterate over the container, and the callback receives a pointer to
each element.
If there's no sanity checks, sure. But then it's pretty risky and
error-prone.

Absolutely not, as long as you do not mess with the pointer established
by the creation function.
What prevents someone from allocating a table at runtime?

Nothing, since you can replace that static table, or parts of it,
at runtime, as I explained before.
If you don't let people allocate (or modify) the table at runtime, you're
killing flexibility -- the ability to dynamically load a new container
from a library would be awfully useful. If you do, you had better do
sanity checks ALL THE TIME.

There is not a lot the program can do if the pointer in a container is
wrong, besides aborting. A crash is better, since it will point exactly
to the point of failure.

typedef struct tagList List;

typedef struct tagListVtable {
int (*AppendElement)(List *,void *);
// other function pointers
} ListVtable;

struct tagList {
ListVtable *vTable;
// Other fields
};

At any time you can replace the table in a single list container with
your own table, and the behavior will be modified IN THAT LIST, not
everywhere.

Or you can replace a pointer in the table itself, which will modify the
behavior of ALL lists. As you like.
And adds more cycles.

Maybe 2-3 cycles. Please, let's be realistic.

In most processors today, we have the computing power of the dream
machines of just a few years ago. Even in an embedded system TODAY we
are using 32-bit CPUs and quite a lot of RAM by yesterday's standards.
Obviously, in your eternal coffee machine there aren't cycles and
memory available for this kind of software, so they will just

NOT USE IT!

Why do we always have to make the worst machine the absolute
requirement for the WHOLE language?

Let's introduce optional packages that are used in MOST situations and
improve the language in MOST situations. Since they are optional (like
printf, for instance), users on microcontrollers are NOT penalized.
They do not use printf, complex numbers, or the logarithmic gamma
function.
If you're advertising it, and some users would have to pay money to use
it, you probably are.

I am distributing it free with the source code included!
I even posted the source code here in this group!
Compare what you're looking at to, say, the queue implementations in the
BSD kernel headers.

You're not gonna get anywhere claiming that the performance difference
doesn't matter.


Container items are not necessarily a good implementation at all in C. You've
described something which adds multiple indirections to every operation.

This is never as expensive as a call to printf or fread or many other
functions of the standard library! For instance, to implement a good
printf you have to implement an extended precision package in order to
correctly print denormalized floating point numbers.

Obviously, if you program a microcontroller with 16K of RAM you can't
use printf. But nobody here is complaining that printf should be
banned.

Elements presumably need a pointer back to the container to be used;

No. I do not have any back pointers in the elements; THAT would be
too intrusive and expensive, of course. sizeof(element) MUST be the
same as the size the user thinks it has, without ANY overhead.
that
means an extra pointer for every single item, and the pointer back to the
container doesn't necessarily solve everything.

Surely not. I do not see where you get that. My implementation
works without that.

There's a number of
questions to look at. Can you figure out the next element given only an
element,
No.

> or do you need its container, too?

Yes. Lists, however, do have a "Next" pointer. Flexible arrays do not.

If you need the container,
where is the "next element" information stored, in the element or in
the container?

For lists, it is stored in each element, since the elements of a list
are ordered by their "next" pointers; that's the essence of a list.

For arrays (or flexible arrays) that information is implicit. If you
receive a pointer p to some type T, you know that in C p+1 will yield
the next element, and p-1 the previous one.
> Are we going to have the container hold a linked list of
> separately allocated items with pointers to their corresponding elements?

In my implementation the container manages its own memory and copies the
received data. If you do not do this, the container won't be able to
reorganize the data as needed. Obviously it *could* be done, but it
would be complicated to free the elements when some element is deleted.
> Once you add the container, anything you do is going to have pretty
> significant costs.

A function call and 2 indirections? That is a "pretty significant
cost"?

That's nothing on most machines. In any case, if you use your
OWN functions, all you gain is those 2 indirections,
since you have to make a function call anyway.

> You might say "only a few cycles", but... Compared to
> a traditional lightweight C implementation of a linked list, suitable
> for inlining, you could easily be doubling or tripling the cost of
> list operations, and for some lists, you might well be increasing data
> storage by 20% or more.

For a linked list, you pay container costs. They are:
1 pointer for each list
1 table for all lists you are going to create.

Suppose you have a list with 100 elements of 10 bytes each (1000
bytes of storage). The overhead per list is 8 bytes for a pointer
on a 64-bit machine, or 4 bytes for a pointer on a 32-bit machine.

This isn't even 1% of overhead. For longer lists the overhead is even
LESS!

I do not see how you arrive at 20%.
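The arithmetic above is easy to check mechanically; the helper below is mine, just to make the figures concrete:

```c
#include <assert.h>
#include <stddef.h>

/* Per-list overhead is one header pointer; compare it against the
   payload of n_elems elements of elem_size bytes each. */
static double list_overhead_pct(size_t header_bytes,
                                size_t n_elems, size_t elem_size) {
    return 100.0 * (double)header_bytes / (double)(n_elems * elem_size);
}
```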
> One of the reasons I'm using C is that I get to pick the data structure which
> fits the problem requirements most closely.

Sure. But sometimes you need to change from a list to a table because
lists are expensive. Now you have to rewrite everything. Using this
approach you can keep MOST of your code intact since

foo->lpVtable->Add(foo,(void *)&data);

will stay the same for the new container!
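A minimal sketch of such a vtable-based container, assuming the layout the post describes; the identifiers and the array-backed implementation are my own illustration, not the actual lcc-win code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct Container Container;

/* The interface table: one set of function pointers per container type. */
typedef struct {
    int (*Add)(Container *c, const void *data);
    size_t (*GetCount)(const Container *c);
} ContainerVtable;

struct Container {
    const ContainerVtable *lpVtable;  /* the call site only sees this */
    size_t count, capacity, elem_size;
    char *items;                      /* container owns a copy of the data */
};

static int ArrayAdd(Container *c, const void *data) {
    if (c->count == c->capacity) {
        size_t ncap = c->capacity ? c->capacity * 2 : 8;
        char *p = realloc(c->items, ncap * c->elem_size);
        if (p == NULL) return 0;      /* report failure, don't abort */
        c->items = p;
        c->capacity = ncap;
    }
    memcpy(c->items + c->count * c->elem_size, data, c->elem_size);
    c->count++;
    return 1;
}

static size_t ArrayGetCount(const Container *c) { return c->count; }

static const ContainerVtable ArrayVtable = { ArrayAdd, ArrayGetCount };

Container *NewArrayContainer(size_t elem_size) {
    Container *c = calloc(1, sizeof *c);
    if (c) { c->lpVtable = &ArrayVtable; c->elem_size = elem_size; }
    return c;
}
```

The point of the indirection is exactly the one made above: swapping the list backend for the array backend only changes which static table `lpVtable` points to, while every `c->lpVtable->Add(c, &data)` call site stays untouched.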

> An abstract data type which can
> do anything is likely to be too expensive.

"can do anything" probably not. But there isn't any expensive
operations there.
(If it's not, I'm probably writing
in Ruby anyway.)

It would be interesting to see the ratio of my library
to ruby...

Always speaking about performance and then... a ruby interpreter!

:)
 

jacob navia

Chris McDonald a écrit :
> I agree; I did not view the technical discussion as spam because I
> "subconsciously" replaced
>
> "...in my lcc-win..."
> with
> "...in a development compiler that I use"....
>
> It's a shame that more readers, possibly writers, can't do the same.

I posted the whole source code for the list container here.
There was a discussion about containers and I proposed this solution.

Why can't I say where that solution is implemented? There isn't even
a license, all the source code is shipped with each distribution.
 

Nick Keighley

Chris McDonald a écrit :

I agree too. If a library is being offered as a possible addition
to the standard I see no harm in mentioning where the example
implementation came from.

> I posted the whole source code for the list container here.
> There was a discussion about containers and I proposed this solution.
>
> Why can't I say where that solution is implemented? There isn't even
> a license, all the source code is shipped with each distribution.

can anyone use the container source for any purpose (including
commercial and educational purposes)?
 

jacob navia

Nick Keighley a écrit :
> can anyone use the container source for any purpose (including
> commercial and educational purposes)?

There is no license, so you can do as you wish.

If interest is there I will eliminate all lcc-win specific
stuff (again) and post it here (again).
 


jacob navia

cognacc a écrit :
> What about glib from the gtk/gnome "suite"?
> Couldn't that be used as a base for some standardisation?
>
> link to page with source code download:
> http://www.cs.tut.fi/ohj/VARG/TVT/GTKsources.html
>
> after unpacking, go inside the glib/ directory;
> for example, there is something called glist.h|c.
>
> mic

Hi mic

There are two problems with that library:

(1) The more serious one:
http://library.gnome.org/devel/glib/2.20/glib-Memory-Allocation.html
<quote>
Memory allocation functions.
---------------------------
Description
These functions provide support for allocating and freeing memory.
Note
If any call to allocate memory fails, the application is terminated.
This also means that there is no need to check if the call succeeded.
<end quote>

This is completely unacceptable; actually, it is a horrible idea.
It precludes any use of the whole library, since all functions
that *potentially* allocate memory are tainted.
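The difference between the two policies is easy to show side by side. Both helpers below are my own illustration; they are not part of glib or of the library being proposed:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* alloc_or_die mimics the glib behaviour quoted above: on failure,
   the application is terminated and the caller never sees the error. */
static void *alloc_or_die(size_t n) {
    void *p = malloc(n);
    if (p == NULL)
        abort();              /* the application is terminated */
    return p;
}

/* try_dup reports failure to the caller instead, which is what a
   general-purpose library must do if callers are to recover. */
static char *try_dup(const char *s) {
    char *p = malloc(strlen(s) + 1);
    if (p == NULL)
        return NULL;          /* the caller decides what to do */
    return strcpy(p, s);
}
```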

(2) Another problem (or maybe not) is:
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.

I do not know if linking with gtk forces you to disclose your
source code...

In any case, the glist function interface is very similar to
what I propose, or to many other list packages around.
 

REH

> In a recent paper, the creator of the language acknowledged that after
> 3 years (more or less) working in an improvement of C++ it was
> impossible for him to realize the requested modification.
>
> The concrete details are not important here. Briefly, it was the
> introduction of "concepts" as descriptions of template arguments so that
> those arguments could be checked. A reasonable proposal.
>
> The problem is that C++ has grown so complex, that even the creator
> and principal force behind C++ was unable to do that change in a
> time frame of years.

Concepts was not removed because of any impossibility. It was removed
because the standard committee believed they would not be able to
complete the standard by the deadline with it in. Concepts have not
been dropped. They will be reintroduced in a TR, or possibly the next
revision of the standard.

REH
 

jacob navia

REH a écrit :
> Concepts was not removed because of any impossibility. It was removed
> because the standard committee believed they would not be able to
> complete the standard by the deadline with it in.

In other words, after some *years* working on the problem, it was
impossible to introduce that change in the given time frame.

> Concepts have not been
> dropped. They will be reintroduced in a TR, or possibly the next
> revision of the standard.

Well, the future is very long. Many things may happen.

I brought that into this discussion because it is an example of
where C++ has gone.

It was an example of the
utter complexity of the language, a complexity of such proportions
that the people who should be *the* experts in the matter are unable
to introduce any further change, because the implications can't be
known; the complexity has grown beyond what mere humans are able
to handle.

If the creators of the language are now in such a situation, what
happens with the normal users?

I have these problems very much in mind when I propose changing C.
C++ is an example that shows us how NOT to change things.

True, it is a big language, but (in my opinion) it is just too big.
 


Chris M. Thomasson

jacob navia said:
> Seebs a écrit : [...]
>> Is the table actually stored
>> in each object?

> No, of course not!

>> Then it's a huge waste of space.

>> Do we store only a
>> pointer to the table?

> Yes.

>> Then we have allocation issues with the risk of
>> dangling pointers or leaked memory, and we have to establish some kind
>> of pattern for determining whose job it is to allocate the storage.

> There is no allocated memory for the pointer! What are you speaking
> about? The pointer points to a static global table for all containers
> of the same type.

Are you doing something like this:

http://clc.pastebin.com/f52a443b1
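The layout being described, one static table per container *type* and only a pointer per container *object*, might look like this minimal sketch (the identifiers are illustrative, not the actual lcc-win names):

```c
#include <assert.h>
#include <stddef.h>

typedef struct List List;

typedef struct {
    int (*Add)(List *l, const void *data);
} ListInterface;

static int ListAdd(List *l, const void *data) {
    (void)l; (void)data;      /* body elided; the layout is the point */
    return 1;
}

/* The single, statically allocated table: nothing to allocate or
   free, so there is no risk of dangling pointers or leaked memory. */
static const ListInterface ListVtbl = { ListAdd };

struct List {
    const ListInterface *lpVtable;  /* per-object cost: one pointer */
    size_t count;
};

static List MakeList(void) {
    List l = { &ListVtbl, 0 };
    return l;
}
```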
 
