Linked list entirely in macros


Guest

| (e-mail address removed) wrote:
|> | (e-mail address removed) wrote:
|> |> Right. That was pointed out earlier. So I will change the licensing.
|> |> The question I'm still trying to resolve is which one to go with. For
|> |> libraries and programming aid code, it will be one of the permissive
|> |> licenses, like BSD or MIT. I still haven't decided between those or one
|> |> of the others.
|> |>
|> |
|> | What's wrong with GPL ? Or, maybe better suited, LGPL ?
|>
|> For the final version of my libraries, I do not care to encumber them with
|> a requirement that anything combined with them, whether it be a main program
|> that just links, or customization, be released in source form. I do not see
|> the LGPL as adequately permissive. I don't even care about the requirement
|> to be credited in any derived binary product, although I do want my notices
|> to remain present in any source code that is distributed, modified or not.
|> Even BSD seems to be more restrictive than I'd like (by my reading of it,
|> it requires this credit in all circumstances).
|>
|
| You're too kind ;)
| So why don't you write your own license ? As I understand you, it could
| be something as simple as "This author grants the permission to anybody
| to do anything with this piece of code, including but not limited to
| copying, modifying, distributing, claiming paternity, etc."
| I don't see why you spend so much energy trying to find a license
| that would fit your needs when you want to completely give your work away.

Putting a whole new license on the scene means it has to be vetted in the
open source community before it will be accepted. Were it not for this I
would make my own. But, alas, I have chosen to use the form of ISC license
that OpenBSD uses. ISC is derived from BSD, and OpenBSD recently moved to
it with a minor tweak. Looks good to me for the permissive projects.
 

Kaz Kylheku

I have an implementation of linked lists coded entirely in macros, using
GNU C extensions __typeof__() and statement expressions.

A sort of de-facto standard for this is the BSD queue.h header.
 

Guest

|> I have an implementation of linked lists coded entirely in macros, using
|> GNU C extensions __typeof__() and statement expressions.
|
| A sort of de-facto standard for this is the BSD queue.h header.

I'm amazed that not only have I not heard of that, but most C programmers
have not, either ... OR ... something is wrong with it, given so many list
implementations I see in C programs over the years.

But I most certainly would not recommend using queue.h at all because it
has a basic (and too common among macro implementations) flaw of not being
multi-evaluation safe. My implementation is multi-evaluation safe.
 

Kaz Kylheku

|> I have an implementation of linked lists coded entirely in macros, using
|> GNU C extensions __typeof__() and statement expressions.
|
| A sort of de-facto standard for this is the BSD queue.h header.

| I'm amazed that not only have I not heard of that, but most C programmers
| have not, either ... OR ... something is wrong with it, given so many list
| implementations I see in C programs over the years.

C programmers will sometimes write something from scratch even if it's already
available, suitably licensed, and they've heard of it. Not Invented Here
Syndrome.

Just because someone wrote something from scratch doesn't mean there is
anything wrong with the alternatives.
| But I most certainly would not recommend using queue.h at all because it
| has a basic (and too common among macro implementations) flaw of not being
| multi-evaluation safe. My implementation is multi-evaluation safe.

The standard getc macro may also evaluate the stream more than once, and
there is also fgetc, yet programmers still use getc.

Safe as it may be, a C macro library is still a C macro library, sitting in a
language that has undefined behaviors lurking in every nook and cranny.

Side effects /can/ sneak in even if you are careful. Suppose you write FOO(X).
FOO is obviously a macro from the naming convention, so you'd never write
FOO(X++). But suppose X is itself a macro which expands to y++.
So I would not make the argument ``just don't do that''.

Years ago (1999?) I wrote a way to trap such problems. You can write macros
that evaluate multiple times, and yet detect when arguments to these macros
have side effects. (The detection has a run-time cost, but can be compiled out
once you've achieved the desired code coverage).

Instead of:

#define unsafe_double(x) ((x) + (x))

you would write:

#define unsafe_double(x) (sfx_debug(x) + x)

sfx_debug generates code which only returns the value of x, but also passes the
stringified expression "x" to a function which parses it and detects whether x
has, or might have, side effects (hence ``sfx'').

So if you have code like

unsafe_double(a++);

which is reached at run-time, the program bails, and you know why.
If you're happy with the coverage, you turn off sfx_debug and recompile.

Of course, you still have the disadvantage that you /can't/ have side effects
where you'd sometimes like to, but at least the risk of accidents is mitigated.

This should be a compiler extension. GCC should have something like
__builtin_pure(EXPR). If EXPR isn't a pure expression with no side
effects (contains no assignments, function calls, or accesses to volatile
objects), a diagnostic is raised. Then a macro like sfx_debug could be
targeted to this kind of thing, wherever available.
 

user923005

| C programmers will sometimes write something from scratch even if it's already
| available, suitably licensed, and they've heard of it. Not Invented Here
| Syndrome.
|
| Just because someone wrote something from scratch doesn't mean there is
| anything wrong with the alternatives.
|
| The standard getc macro may also evaluate the stream more than once, and
| there is also fgetc, yet programmers still use getc.
|
| Safe as it may be, a C macro library is still a C macro library, sitting in a
| language that has undefined behaviors lurking in every nook and cranny.
|
| Side effects /can/ sneak in even if you are careful. Suppose you write FOO(X).
| FOO is obviously a macro from the naming convention, so you'd never write
| FOO(X++).  But suppose X is itself a macro which expands to y++.
| So I would not make the argument ``just don't do that''.
|
| Years ago (1999?) I wrote a way to trap such problems. You can write macros
| that evaluate multiple times, and yet detect when arguments to these macros
| have side effects. (The detection has a run-time cost, but can be compiled out
| once you've achieved the desired code coverage).
|
| Instead of:
|
|  #define unsafe_double(x) ((x) + (x))
|
| you would write:
|
|  #define unsafe_double(x) (sfx_debug(x) + x)
|
| sfx_debug generates code which only returns the value of x, but also passes the
| stringified expression "x" to a function which parses it and detects whether x
| has, or might have, side effects (hence ``sfx'').
|
| So if you have code like
|
|   unsafe_double(a++);
|
| which is reached at run-time, the program bails, and you know why.
| If you're happy with the coverage, you turn off sfx_debug and recompile.
|
| Of course, you still have the disadvantage that you /can't/ have side effects
| where you'd sometimes like to, but at least the risk of accidents is mitigated.
|
| This should be a compiler extension. GCC should have something like
| __builtin_pure(EXPR). If EXPR isn't a pure expression with no side
| effects (contains no assignments, function calls, or accesses to volatile
| objects), a diagnostic is raised. Then a macro like sfx_debug could be
| targeted to this kind of thing, wherever available.

With the ability of modern compilers to inline functions (which isn't
always the best thing to do, but in those cases the compilers will
usually detect that too) we are forced to ask the question:
"Why use function-like macros at all?"
The existence and use of function-like macros is a kludge inserted
because we often want to get the tiniest little fragment of speed out
of our code.

True, there is one additional benefit of function-like macros:
automatic adaptation to the input types. This is both a blessing
and a curse. We can apply the macro to a dozen different types, and
automatically, the macro operates as we expect because it's really
nothing more than an inline expansion of the expression we created.
At the same time, if five of our dozen types are not appropriate for
inputs to this particular macro, then we have a big problem because
function-like macros are not type safe.

So here is our quandary:
Function-like macros are neat because they allow generic code and
force inlining.
Function-like macros are evil because they are not type safe and are
fragile with respect to side-effects.

In C++ this quandary is eliminated through the use of templates. A
template allows inlining and generic code. A template is type safe
and robust with respect to side effects.

We don't have anything like this in C. So, in C, we have two choices:
1. Use function macros and use them very, very, very carefully (and
hope that a future maintainer of the code also follows this
discipline)
2. Recode each use of the function macro by type and let the compiler
inline the code.

I see a down-side with both 1 and 2. Option 1 means that correctness
is on an identical level to how careful we are. If we are super-careful,
then that will be OK, as long as someone less careful does
not maintain the code in the future. However, this expectation is
unrealistic, because someday the code will be maintained by someone
else and we are probably not in any position to determine their level
of meticulous care. Option 2 means duplication of code which is bad
on several fronts. First, if I code a simple algorithm, why should I
have to code the identical algorithm again, and again, and again?
Second, if I make 13 copies of an algorithm, and then discover a bug,
I have 13 different places to update the code.

If there were an ideal solution to this situation in C, I would be
glad to hear about it.

I think that (probably) the closest thing to a solution is to write
generic algorithms that use function pointers for methods that operate
on the data, and void pointers to the data objects (but this
introduces a new set of difficulties).
 

cognacc

Linux has it too; I don't know about its qualities.

Benefits of using it (at least the BSD variant) include extensive testing
and very careful implementation, since the whole system (OS) uses it.

Fast, low overhead.

Easy to change the list type.

Using macros that way is "old fashioned"; by this I mean it's portable
even if your compiler is 20 years old, or for some other reason doesn't
support "THE NEW THING" (TM).


Its use has been growing in the last couple of years; it's recommended by
many, but not everywhere YET, I see :)

I don't understand this:
What do you mean by it?
When is multiple evaluation used?
What makes the sys/queue macros not multi-evaluation safe, and why is
that safety useful?
Can you give examples in your code of how and why it's multi-evaluation
safe?


That's a couple of questions :)

The queue macros (and tree) seem very simple to me; they seem to use
mostly basic text substitution.

If we keep to the sys/queue.h question, just follow the manpage; they
are very easy to use.

| With the ability of modern compilers to inline functions (which isn't
| always the best thing to do, but in those cases the compilers will
| usually detect that too) we are forced to ask the question:
| "Why use function-like macros at all?"

As said before: portability; it can be applied "everywhere".

| The existence and use of function-like macros is a kludge inserted
| because we often want to get the tiniest little fragment of speed out
| of our code.

Same answer as above.

| True, there is one additional benefit of function-like macros:
| automatic adaptation to the input types.  This is both a blessing
| and a curse.  We can apply the macro to a dozen different types, and
| automatically, the macro operates as we expect because it's really
| nothing more than an inline expansion of the expression we created.
| At the same time, if five of our dozen types are not appropriate for
| inputs to this particular macro, then we have a big problem because
| function-like macros are not type safe.
|
| So here is our quandary:
| Function-like macros are neat because they allow generic code and
| force inlining.
| Function-like macros are evil because they are not type safe and are
| fragile with respect to side-effects.

The queue macros are designed, among other things, to simulate a
"next element pointer" inside your data type, to create a set of them.

| If there were an ideal solution to this situation in C, I would be
| glad to hear about it.

Unless you are designing highly critical systems, I see no reason not
to use the VERY well-tested macros provided and used by the system.

| I think that (probably) the closest thing to a solution is to write
| generic algorithms that use function pointers for methods that operate
| on the data, and void pointers to the data objects (but this
| introduces a new set of difficulties).

And what benefit do you see in this?

mic
 

Guest

|>|> I have an implementation of linked lists coded entirely in macros, using
|>|> GNU C extensions __typeof__() and statement expressions.
|>|
|>| A sort of de-facto standard for this is the BSD queue.h header.
|>
|> I'm amazed that not only have I not heard of that, but most C programmers
|> have not, either ... OR ... something is wrong with it, given so many list
|> implementations I see in C programs over the years.
|
| C programmers will sometimes write something from scratch even if it's already
| available, suitably licensed, and they've heard of it. Not Invented Here
| Syndrome.

Yes, they do. OTOH, that was not the case here for me because I was not
aware of the existing implementation. So it was not a case of wanting to
do my own because it would be "invented here". If a good implementation
were available (open source required) and I knew of it, I would have just
used it. If a good implementation were available, I think I would have
heard of it because I would see it being used instead of a lot of other
"roll your own" implementations.

OTOH, for the particular implementation being compared in this thread, the
BSD originated queue.h, had I known of it, I might well have implemented
my own a lot earlier. For me to do what I did, I had to realize it COULD
BE DONE entirely as macros. I had my doubts in the past and did not try.
Had I seen queue.h I would have seen that it could be done, and combined
that with experience using GNU extensions for __typeof__() and statement
expressions to make macros that are multiple-evaluation safe, to make an
implementation.


| Just because someone wrote something from scratch doesn't mean there is
| anything wrong with the alternatives.

Each implementation should be considered on its own merits. Being first
does not mean better. Being most recent does not mean better, either.
Just because someone wrote something from scratch doesn't mean there is
anything wrong with the new one, either. In this case, look to see which
is better ... at least for your needs.


|> But I most certainly would not recommend using queue.h at all because it
|> has a basic (and too common among macro implementations) flaw of not being
|> multi-evaluation safe. My implementation is multi-evaluation safe.
|
| The standard getc macro may also evaluate the stream more than once, and
| there is also fgetc, yet programmers still use getc.

We are stuck with the DEFINITION of it being "may also evaluate the stream more
than once" and must code for that. We must treat it as multiple-evaluation
unsafe. We can't change that even though all new implementations should have
no trouble achieving multiple-evaluation safety.

Programmers also use gets(). We can't fix its interface now, either.


| Safe as it may be, a C macro library is still a C macro library, sitting in a
| language that has undefined behaviors lurking in every nook and cranny.

Just because programmers must be aware that some macros are unsafe does
not mean we should dismiss the idea of making new ones safe. Safe
macros are easier to use in more places. It's a good idea where it can be done.
It can't be done on legacy macros like getc(). It can be done on entirely new
things.


| Side effects /can/ sneak in even if you are careful. Suppose you write FOO(X).
| FOO is obviously a macro from the naming convention, so you'd never write
| FOO(X++). But suppose X is itself a macro which expands to y++.
| So I would not make the argument ``just don't do that''.
|
| Years ago (1999?) I wrote a way to trap such problems. You can write macros
| that evaluate multiple times, and yet detect when arguments to these macros
| have side effects. (The detection has a run-time cost, but can be compiled out
| once you've achieved the desired code coverage).
|
| Instead of:
|
| #define unsafe_double(x) ((x) + (x))
|
| you would write:
|
| #define unsafe_double(x) (sfx_debug(x) + x)
|
| sfx_debug generates code which only returns the value of x, but also passes the
| stringified expression "x" to a function which parses it and detects whether x
| has, or might have, side effects (hence ``sfx'').
|
| So if you have code like
|
| unsafe_double(a++);
|
| which is reached at run-time, the program bails, and you know why.
| If you're happy with the coverage, you turn off sfx_debug and recompile.
|
| Of course, you still have the disadvantage that you /can't/ have side effects
| where you'd sometimes like to, but at least the risk of accidents is mitigated.
|
| This should be a compiler extension. GCC should have something like
| __builtin_pure(EXPR). If EXPR isn't a pure expression with no side
| effects (contains no assignments, function calls, or accesses to volatile
| objects), a diagnostic is raised. Then a macro like sfx_debug could be
| targeted to this kind of thing, wherever available.

How well does this work when you do all your testing on the few architectures
you have in your lab, and someone tries to use the same source code on a new
architecture where something you've found to be safe on all your archs is not
safe on the new arch?
 

Guest

| With the ability of modern compilers to inline functions (which isn't
| always the best thing to do, but in those cases the compilers will
| usually detect that too) we are forced to ask the question:
| "Why use function-like macros at all?"
| The existence and use of function-like macros is a kludge inserted
| because we often want to get the tiniest little fragment of speed out
| of our code.
|
| True, there is one additional benefit of function-like macros -->
| automatic type conversion to the input types. This is both a blessing
| and a cursing. We can apply the macro to a dozen different types, and
| automatically, the macro operates as we expect because it's really
| nothing more than an inline expansion of the expression we created.
| At the same time, if five of our dozen types are not appropriate for
| inputs to this particular macro, then we have a big problem because
| function-like macros are not type safe.
|
| So here is our quandry:
| Function-like macros are neat because they allow generic code and
| force inlining.
| Function-like macros are evil because they are not type safe and are
| fragile with respect to side-effects.

Type checking can be done. For example my swap macro looks like:

#define swap(a,b) (void)({ \
__typeof__(a) *swap__a; \
__typeof__(b) *swap__b; \
__typeof__(a) swap__t; \
(void) ( swap__a == swap__b ); \
swap__a=&(a); \
swap__b=&(b); \
swap__t=*swap__a; \
*swap__a=*swap__b; \
*swap__b=swap__t; \
})

Try it with two different types as the two arguments (lvalues).

And sometimes you want to deal explicitly with the types:

#define safe_index(s,c) ((__typeof__((s)))(index((s),(c))))
#define safe_rindex(s,c) ((__typeof__((s)))(rindex((s),(c))))
#define safe_memchr(s,c,n) ((__typeof__((s)))(memchr((s),(c),(n))))
#define safe_memrchr(s,c,n) ((__typeof__((s)))(memrchr((s),(c),(n))))
#define safe_strchr(s,c) ((__typeof__((s)))(strchr((s),(c))))
#define safe_strchrnul(s,c) ((__typeof__((s)))(strchrnul((s),(c))))
#define safe_strrchr(s,c) ((__typeof__((s)))(strrchr((s),(c))))
#define safe_strrchrnul(s,c) ((__typeof__((s)))(strrchrnul((s),(c))))
#define safe_strpbrk(s1,s2) ((__typeof__((s1)))(strpbrk((s1),(s2))))
#define safe_strstr(s1,s2) ((__typeof__((s1)))(strstr((s1),(s2))))
#define safe_strrstr(s1,s2) ((__typeof__((s1)))(strrstr((s1),(s2))))
#define safe_strcasestr(s1,s2) ((__typeof__((s1)))(strcasestr((s1),(s2))))
#define safe_strrcasestr(s1,s2) ((__typeof__((s1)))(strrcasestr((s1),(s2))))
#define safe_strcaserstr(s1,s2) ((__typeof__((s1)))(strcaserstr((s1),(s2))))
#define safe_wmemchr(s,c,n) ((__typeof__((s)))(wmemchr((s),(c),(n))))
#define safe_wmemrchr(s,c,n) ((__typeof__((s)))(wmemrchr((s),(c),(n))))
#define safe_wcschr(s,c) ((__typeof__((s)))(wcschr((s),(c))))
#define safe_wcschrnul(s,c) ((__typeof__((s)))(wcschrnul((s),(c))))
#define safe_wcsrchr(s,c) ((__typeof__((s)))(wcsrchr((s),(c))))
#define safe_wcsrchrnul(s,c) ((__typeof__((s)))(wcsrchrnul((s),(c))))
#define safe_wcspbrk(s1,s2) ((__typeof__((s1)))(wcspbrk((s1),(s2))))
#define safe_wcsstr(s1,s2) ((__typeof__((s1)))(wcsstr((s1),(s2))))
#define safe_wcsrstr(s1,s2) ((__typeof__((s1)))(wcsrstr((s1),(s2))))
#define safe_wcscasestr(s1,s2) ((__typeof__((s1)))(wcscasestr((s1),(s2))))
#define safe_wcsrcasestr(s1,s2) ((__typeof__((s1)))(wcsrcasestr((s1),(s2))))
#define safe_wcscaserstr(s1,s2) ((__typeof__((s1)))(wcscaserstr((s1),(s2))))

I'll leave it up to the reader to realize which type qualifier
(or lack thereof) is being carried from an argument to the return value.


| In C++ this quandry is eliminated through the use of templates. A
| template allows inlining and generic code. A template is type safe
| and robust with respect to side effects.

That's nice for C++.


| We don't have anything like this in C. So, in C, we have two choices:
| 1. Use function macros and use them very, very, very carefully (and
| hope that a future maintainer of the code also follows this
| discipline)

I have more trust in "future maintainers" who know C than among those
that do not.


| 2. Recode each use of the function macro by type and let the compiler
| inline the code.

C does not allow doing this under the same name. Not that it is too hard
to use different names for different types. We are already accustomed to
it in the Standard C Library.


| I see a down-side with both 1 and 2. Option 1 means that correctness
| is on an indentical level to how careful we are. If we are super-
| careful, then that will be OK, as long as someone less careful does
| not maintain the code in the future. However, this expectation is
| unrealistic, because someday the code will be maintained by someone
| else and we are probably not in any position to determine their level
| of meticulous care.

Among C coders, a greater proportion are in the "very very very careful"
category than among certain higher level languages that make it easier to
just code anything.

| Option 2 means duplication of code which is bad
| on several fronts. First, if I code a simple algorithm, why should I
| have to code the identical algorithm again, and again, and again?
| Second, if I make 13 copies of an algorithm, and then discover a bug,
| I have 13 different places to update the code.

Solving this is one of the reasons I'm working on a meta preprocessor as
mentioned in another thread.


| If there were an ideal solution to this situation in C, I would be
| glad to hear about it.

There is no ideal solution. Some other languages are certainly more
elegant about it ... some to obscene perfection. But these others often
carry some enormous baggage that isn't desired for those things for which
C is the preferred language. It depends on the application.


| I think that (probably) the closest thing to a solution is to write
| generic algorithms that use function pointers for methods that operate
| on the data, and void pointers to the data objects (but this
| introduces a new set of difficulties).

I balance what I do depending on what it is I'm doing. I've done things
both ways. You're never going to find one uniform solution to all problems
in C (or in any language, for that matter).
 

Ben Pfaff

| Type checking can be done. For example my swap macro looks like:
|
| #define swap(a,b) (void)({ \
| __typeof__(a) *swap__a; \
| __typeof__(b) *swap__b; \
| __typeof__(a) swap__t; \
| (void) ( swap__a == swap__b ); \
| swap__a=&(a); \
| swap__b=&(b); \
| swap__t=*swap__a; \
| *swap__a=*swap__b; \
| *swap__b=swap__t; \
| })

I don't think you can do this without GCC extensions.

You can do certain useful, interesting forms of type checking
without extensions though:

/* Cast POINTER to TYPE, issuing a warning on many compilers if
the cast changes almost anything other than an outermost
"const" or "volatile" qualifier. */
#define CONST_CAST(TYPE, POINTER) \
((void) sizeof ((TYPE) (POINTER) == (POINTER)), (TYPE) (POINTER))
 

Guest

| (e-mail address removed) writes:
|
|> Type checking can be done. For example my swap macro looks like:
|>
|> #define swap(a,b) (void)({ \
|> __typeof__(a) *swap__a; \
|> __typeof__(b) *swap__b; \
|> __typeof__(a) swap__t; \
|> (void) ( swap__a == swap__b ); \
|> swap__a=&(a); \
|> swap__b=&(b); \
|> swap__t=*swap__a; \
|> *swap__a=*swap__b; \
|> *swap__b=swap__t; \
|> })
|
| I don't think you can do this without GCC extensions.

Right. The extension __typeof__() is right in there to define working
variables.


| You can do certain useful, interesting forms of type checking
| without extensions though:
|
| /* Cast POINTER to TYPE, issuing a warning on many compilers if
| the cast changes almost anything other than an outermost
| "const" or "volatile" qualifier. */
| #define CONST_CAST(TYPE, POINTER) \
| ((void) sizeof ((TYPE) (POINTER) == (POINTER)), (TYPE) (POINTER))

This requires the programmer to specify the type, rather than extracting
it from the given lvalue.
 

lawrence.jones

Kaz Kylheku said:
| C programmers will sometimes write something from scratch even if it's already
| available, suitably licensed, and they've heard of it. Not Invented Here
| Syndrome.

Or just engineering judgement. I've never seen a linked list package
that wasn't more trouble and work to use than just doing it yourself in
open code (assuming you know what you're doing).
 

Guest

On Mon, 10 Aug 2009 18:03:13 -0400 (e-mail address removed) wrote:
|>
|> C programmers will sometimes write something from scratch even if it's already
|> available, suitably licensed, and they've heard of it. Not Invented Here
|> Syndrome.
|
| Or just engineering judgement. I've never seen a linked list package
| that wasn't more trouble and work to use than just doing it yourself in
| open code (assuming you know what you're doing).

If you know what you are doing, linked lists are not hard. Usually these
kinds of implementations are customized for the task at hand and have no
more features than actually needed. BTDT. But I still find value in now
having a linked list already implemented and ready to go, in addition to
many other things like a binary tree (not all macros ... at least not yet).
It saves time. And in this case, I know my linked list code won't have any
multiple evaluation effects on the arguments.

Right now I'm exploring three aspects of function and function-macro design.
The first is the traditional library which is a separate project producing
a shared memory mappable object library image, a static linkable library
archive, and one or more header files. The second is a set file files that
are organized for dropping into an existing project so the build process
can compile these files and the project files together. The third is a
more succint form, intended for a single compile unit and just included in
the one source file by means of #include directives. Not everything in the
library of the first aspect would be considered for the third aspect. But
many things are (including binary tree code, and linked list code).
 

Phil Carmody

| Or just engineering judgement. I've never seen a linked list package
| that wasn't more trouble and work to use than just doing it yourself in
| open code (assuming you know what you're doing).

I've never seen linked list primitives as being any more complex
than opaque-but-undoubtedly-2-lines-long helper functions.

Anyone who can't make next->next point to next shouldn't be
programming.

Phil
 

Guest

| (e-mail address removed) writes:
|>>
|>> C programmers will sometimes write something from scratch even if it's already
|>> available, suitably licensed, and they've heard of it. Not Invented Here
|>> Syndrome.
|>
|> Or just engineering judgement. I've never seen a linked list package
|> that wasn't more trouble and work to use than just doing it yourself in
|> open code (assuming you know what you're doing).
|
| I've never seen linked list primitives as being any more complex
| than opaque-but-undoubtedly-2-lines-long helper functions.
|
| Anyone who can't make next->next point to next shouldn't be
| programming.

It all depends on what the requirements are. Many needs are very simple
and only the simplest of code to manage it gets the job done. Still, many
programmers prefer to compartmentalize it to reusable code despite how
simple it is. Then you have functions or macros to do the job. At that
point, I prefer having one set do enough for everything, than having many
different sets to handle the various situations. This preference is why
I created my macros. I'm sure it was involved in creating the legacy ones.
Programs I have seen that do linked lists mostly implement their own set
of functions, specific to the data struct they use, although a few do just
code the link handling directly.
 
