Reusable source code

Thread starter: James Dow Allen

Kenny McCormack

usually when a newb comes into a group [any group] and says things
like "why not we do everything while standing on our heads?" it's
because they don't know what they're talking about.

Yup.  James Dow Allen is a newb here.  Never seen him post before, no
siree!  He's got some nerve comin' in here and stirrin' things up!

Thanks for proving my point.

If you had actually read what I wrote, I said we [most regulars]
**DO** read what they write. We just don't have to agree with it
because you say so.

There is no "we".

There is no committee.

There is no organization.

There is no central authority.

There is no funding body.

--
No, I haven't, that's why I'm asking questions. If you won't help me,
why don't you just go find your lost manhood elsewhere.

CLC in a nutshell.
 

Tom St Denis

There is no "we".

There is no committee.

There is no organization.

There is no central authority.

There is no funding body.

There are regulars, and regular trolls [e.g., you].

Also I have to question what you think you're fighting against. I
pity thee.

Tom
 

Kenny McCormack

Richard said:
Could it be your withering put-down of James's posts and an attempt to
belittle him?

Well, yeah, there was that. But, like with Republicans, it's always:
Don't look there. Look over here. Don't look at that!
In case you forgot what you said in reply, here it is in all its "regs"
glory:-

,----
| usually when a newb comes into a group [any group] and says things
| like "why not we do everything while standing on our heads?" it's
| because they don't know what they're talking about.
|
| This doesn't mean we don't listen to the newbs, it just means we don't
| follow them. It's not our fault you can't tell the difference.
`----

Indeed. Frankly, I was *amazed* at the extent to which TSD stepped in shit
with that comment.
And as for this nonsense:-

,----
| Also, sometimes paying a small penalty in performance to make
| something 100x generic/scalable is worth it in the end. I'd rather
| have a standard generic qsort() like in the C lib than a 100 different
| ones for all sorts of different data types.
`----

Yup. Toeing the party line.

--
"The anti-regulation business ethos is based on the charmingly naive notion
that people will not do unspeakable things for money." - Dana Carpender

Quoted by Paul Ciszek (pciszek at panix dot com). But what I want to know
is why this diet/low-carb food author is making pithy political/economic
statements.

Nevertheless, the above quote is dead-on, because, the thing is - business
in one breath tells us they don't need to be regulated (which is to say:
that they can morally self-regulate), then in the next breath tells us that
corporations are amoral entities which have no obligations to anyone except
their officers and shareholders, then in the next breath they tell us they
don't need to be regulated (that they can morally self-regulate) ...
 

Tom St Denis

Could it be your withering put-down of James's posts and an attempt to
belittle him?

My reply was to Kenny, not James.
In case you forgot what you said in reply, here it is in all its "regs"
glory:-

,----
| usually when a newb comes into a group [any group] and says things
| like "why not we do everything while standing on our heads?" it's
| because they don't know what they're talking about.
|
| This doesn't mean we don't listen to the newbs, it just means we don't
| follow them.  It's not our fault you can't tell the difference.
`----

If they can't have some humility it's not my fault. All I'm saying
here is I listen to people even if I don't agree with them. Calling a
novice a "newb" isn't derogatory. It's just a way of saying someone
who doesn't know what they're doing [yet]. And despite what you're
trying to read into this I'm not trying to silence or humiliate
anyone.
,----
| Also, sometimes paying a small penalty in performance to make
| something 100x generic/scalable is worth it in the end.  I'd rather
| have a standard generic qsort() like in the C lib than a 100 different
| ones for all sorts of different data types.
`----

Pfft. People use C for efficiency. And if someone is using C to implement
something which talks to HW and they want every cpu cycle to count then
a specific processor version is EXACTLY what they want. Since there is
probably about, err, ONE (maybe 2..) CPU *most* people code for, your
concerns are neither here nor there. And how to meet these types of
issues is WHY this group is here.

Ok, stop reading into it. I never said finding ways to optimize the
standard C library is off topic or uninteresting. All I said is
that under most circumstances I'd rather have standard portable code,
even if it's marginally slower, than highly platform-specific [or
bloated] code. Not everyone calling qsort() is doing so in a time-
critical operation. And I'd rather have developers use qsort() for
their fallback sorting than roll their own [and most likely get
it wrong].

In this particular case I disagree. I think the callback method that
qsort uses is fine enough for most purposes. In highly time-sensitive
operations qsort() might not even be the optimal solution anyway
[regardless of whether your compare function is inlined or not]. So
premature optimization [re: making the 100s of qsort variants part of
the C standard] is a bad idea.

Tom
 

Tom St Denis

Well, yeah, there was that.  But, like with Republicans, it's always:
Don't look there.  Look over here.  Don't look at that!

Except my reply was to you, not James. Do you even read your own
posts?
Indeed.  Frankly, I was *amazed* at the extent to which TSD stepped in shit
with that comment.

If that's stepping in shit ... Wow.

I guess you take all of your life advice from 4 yr olds too, right?

You: "Son, what should we have for dinner?"
Son: "An entire bowl of ice cream and skittles!"
You: "I listen to everyone egalitarian-like therefore ice cream and
skittles for dinner!"

Sounds about right.

So let me get this right: in your world I should follow the advice of
EVERYONE who is capable of posting to usenet?
Yup.  Toeing the party line.

If by party line you mean that of people who "write software for a
living" then sure. I guess I am.

Tom
 

Malcolm McLean

,----
| I'd rather
| have a standard generic qsort() like in the C lib than a 100 different
| ones for all sorts of different data types.
`----

Pfft. People use C for efficiency. And if someone is using C to implement
something which talks to HW and they want every cpu cycle to count then
a specific processor version is EXACTLY what they want. Since there is
probably about, err, ONE (maybe 2..) CPU *most* people code for, your
concerns are neither here nor there. And how to meet these types of
issues is WHY this group is here.
Generally you want to put the program together as quickly as possible
to prove that it works, does what the client wants, doesn't have
homunculi in the requirements specification, and so on. That means
plugging most of it together from standard parts.

Then it's very rare for every CPU cycle to count. Usually there are
ten or twenty of them in functions which are called once, as opposed
to several billion in functions in deep loops. So you focus
optimisation efforts on the loops. That's the point at which a call to
qsort() might be replaced with a hand-coded routine. Often there's
algorithmic improvement as well as micro-optimisation. Say, for
instance, we have a camera moving slowly through a scene, and we want
to sort the polygons by depth. The vast majority of polygons won't
move from frame to frame, and those that do can only move a few
places. So we can store the results of the previous sort and use a
bubblesort. Whilst we're at it we can replace the byte copying loop in
qsort with a clean swap of pointers and fold in the indirected call to
the comparison function.
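
For concreteness, here is a minimal sketch of the frame-coherent re-sort
Malcolm describes (the polygon type and the function name are invented
for illustration). An insertion sort over an array of pointers is
near-linear when almost nothing has moved since the last frame, the
comparison is a plain expression the compiler can inline, and the swap
is a pointer move rather than a byte-copying loop:

#include <stddef.h>

struct polygon { double depth; /* ... vertex data ... */ };

/* Re-sort last frame's depth-ordered pointer array; close to O(n)
   when the ordering has barely changed. */
static void depth_resort(struct polygon **p, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        struct polygon *key = p[i];
        size_t j = i;
        while (j > 0 && p[j - 1]->depth > key->depth) {
            p[j] = p[j - 1];   /* pointer move, no byte copying */
            j--;
        }
        p[j] = key;
    }
}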
 

Tom St Denis

Generally you want to put the program together as quickly as possible
to prove that it works, does what the client wants, doesn't have
homunculi in the requirements specification, and so on. That means
plugging most of it together from standard parts.

Then it's very rare for every CPU cycle to count. Usually there are
ten or twenty of them in functions which are called once, as opposed
to several billion in functions in deep loops. So you focus
optimisation efforts on the loops. That's the point at which a call to
qsort() might be replaced with a hand-coded routine. Often there's
algorithmic improvement as well as micro-optimisation. Say, for
instance, we have a camera moving slowly through a scene, and we want
to sort the polygons by depth. The vast majority of polygons won't
move from frame to frame, and those that do can only move a few
places. So we can store the results of the previous sort and use a
bubblesort. Whilst we're at it we can replace the byte copying loop in
qsort with a clean swap of pointers and fold in the indirected call to
the comparison function.

Agreed, though to clarify in the context of my OP...

I didn't mean that discussion about optimized qsort instances is a bad
idea. I just meant that I wouldn't prefer that the standard C library
had 100s of qsort functions for every possible data type/structure.
In many cases it's more worthwhile to build out of portable
constructs than fast ones.

Reminds me a bit of some work I did at a company a few years back
where their server would launch a daemon process that would prevent
multiple instances of services from running. To do this they attached
an ID to every service (at this point they only had one) and would
check that the ID didn't already exist before launching.

Sounds simple. Effective and gets the job done.

That is, until a recent master's grad showed up and re-wrote the function
using a hash table which used inline assembler to perform a UMAC and a
C++ template to perform the hashing (they only ever instantiated the
class once with a single type). Not only was it incredible overkill
for a function that even if written with a linear search would take
less than a millisecond, but their assembler code was incorrect AND
produced different results with GCC [they used ICC at the time].
Because such a simple task, "check if the application is already
running", was optimized so foolhardily, the porting effort to GCC was
stalled for weeks [trying to figure out why the hell the service would
fail to load].

Anyways, optimization is a good idea; it just can't be done a priori,
which is what a standard C library is.
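
The "boring" version Tom endorses would be something like this sketch
(names invented here; a linear scan over a handful of service IDs takes
microseconds, and there is nothing processor-specific to port):

#include <stddef.h>
#include <string.h>

/* Return nonzero if `id` is already among the n running services. */
static int service_running(const char *id,
                           const char *const *running, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(running[i], id) == 0)
            return 1;
    return 0;
}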
 

Seebs

Anyways, since you guys are just inventing the conversation, this is my
last post in this thread.

I take it you've never encountered "Kenny" and "Richard Nolastname" before?
They're regular trolls. Kenny's particularly proud of the fact that he
cannot conceive of any activity which is not primarily focused on achieving
social status, and holds in utter contempt engineers who try to make
things work even if it makes them look bad.

-s
 

James Dow Allen

Admittedly, I had underestimated the time cost of qsort, it seems; fair
enough. I don't really tend to use it anyway...

Sorry if my rejoinder to you was abrasive. There was a pompous
asshole intruding in the thread who got my ire up. I made
a hasty omnibus response.

On the (tangential) topic of qsort speed, an important point
is that many applications will have a time cost of roughly
k_1*N + k_2*N*log(N)
where qsort *is* the time-critical routine. A constant-factor
improvement in its speed *is* often important.

Another point (and I hope the pompous asshole gets someone to
show him how to display this in a larger font, as his reading
comprehension is poor) is that the source changes envisioned to
convert qsort.c to qsort_adaptable.c are AMAZINGLY trivial.
Most instances of "(*cmp)" in the source change to "COMP"
and that's the "hardest" change of all. Whether this should
be made visible to any standard library is a separate question,
but Mr. Asshole's concept ("making the 100s of qsort variants
part of the C standard" ????) shows bizarrely poor understanding:
My objective was to *reduce* the number of source code variants,
not increase them.
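
As a compilable sketch of the shape being described (the body and names
below are illustrative, not James's code; his real qsort_adaptable.c
would be an existing qsort.c with "(*cmp)" spelled "COMP"), the
application-specific header supplies the element type and comparison:

/* qsort_adaptable.c (sketch) */
#include <stddef.h>
#include "qspecif.h"   /* application-specific, e.g.:
                          typedef struct elem ELEM;
                          #define COMP(x,y) ((x)->key - (y)->key)  */

void qsort_adaptable(ELEM *base, size_t n)
{
    /* an insertion sort stands in for the full quicksort body */
    for (size_t i = 1; i < n; i++) {
        ELEM tmp = base[i];          /* known-size copy, not memcpy */
        size_t j = i;
        while (j > 0 && COMP(&base[j - 1], &tmp) > 0) {
            base[j] = base[j - 1];
            j--;
        }
        base[j] = tmp;
    }
}

Recompiled per application, the compiler sees COMP as an ordinary
expression and the element size as a constant, which is where the
inlining and known-size-swap savings come from.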

As I stated over and over and OVER, time speedup was never the
goal (qsort just presented itself as an easily understood
example). The goal was *reusing source modules* so that, for
example, three hash-table managers whose differences are so
great that normally three different source modules would be
needed can instead share a source module. Code generators
have their place, but (unless one wants to apply the term to
these "reusable .c files") will be inferior in the cases I envision.
(Consider the triviality of changing qsort.c to qsort_adaptable.c.)

James Dow Allen

PS: On a separate topic,
If they can't have some humility it's not my fault.

I wonder if Saint Dennis knows how preciously arrogant
this sounds.
 

Seebs

As I stated over and over and OVER,

I think you have GRAVELY overestimated the reading comprehension skills
of Usenet.
The goal was *reusing source modules* so that, for
example, three hash-table managers whose differences are so
great that normally three different source modules would be
needed can instead share a source module. Code generators
have their place, but (unless one wants to apply the term to
these "reusable .c files") will be inferior in the cases I envision.
(Consider the triviality of changing qsort.c to qsort_adaptable.c.)

I'm actually totally surprised that this makes a significant difference,
because I wouldn't have expected it to be even measurable, since the
callback idiom is so common. I've done a ton of things that use a
function pointer for their inner loop processing, and it never occurred
to me to try to figure out a way to inline them.

That actually leads to an interesting question, to me, which is where
the inefficiency is. Is it in the indirection through a function pointer?
The argument passing? The lack of optimization because the comparison
argument can't be inlined? In particular, I'm wondering how much of it
comes down to the extra argument to qsort(), since a lot of qsort()
implementations are presumably recursive, meaning every layer of the
call tree has a function pointer argument. If you could strip that away,
I wonder whether it would noticeably improve performance.

I guess, to me, it doesn't seem like replacing the callback model with
this #define/inlining stuff model is enough of a conceptual improvement to
be significant -- it doesn't strike me as making things much easier to
understand. But if it's making a big speed difference, maybe that IS
worth looking at. But it surprises me that it'd matter much! It would
be interesting to come up with some reasonable test cases to experiment
with and see where the efficiency is coming from -- whether it's the
inlining or the argument lists or what.

-s
 

Ralf Damaschke

Tom said:
If that's stepping in shit ... Wow.

Well, what do you expect from a troll who can't even spell the name of
his anal-retentive favorite toon.
 

Squeamizh

Anyways, since you guys are just inventing the conversation, this is my
last post in this thread.

Then you're letting yourself off easy. You should respond to the
comments that James Dow Allen addressed to you, rather than wasting
your time on a so-called invented conversation.
 

James Dow Allen

I think you have GRAVELY overestimated the reading comprehension skills
of Usenet.

Sigh. It appears I've overestimated your skills as well.
SAVING THE CALLBACK IN QSORT() WAS JUST A TRIVIAL EXAMPLE
to avoid digressing into a discussion of a reusable hash-table
manager. The hashtable manager has MANY options. Do you store
via bits or bytes? What's your collision/rehash strategy?
We're not trying to avoid one trivial callback, we're avoiding
a largish collection of switches, etc. Sometimes you need to
waste a bit for a deletion marker, sometimes you don't. Etc. etc.

Writing one hashtable manager is better than writing 3 or 4, right?
Using simple macros makes the code simpler than switches or if/elses,
right?
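
As a sketch of the kind of knob involved (all names below are invented,
since the real examples are deliberately withheld): one probe loop,
with the slot layout, hash, and deletion-marker policy supplied by the
application's header as macros rather than run-time switches:

#include <stddef.h>
#include "htspecif.h"  /* application-specific: defines KEY, SLOT,
                          HASH(k), KEY_EQ(a,b), SLOT_EMPTY(s),
                          SLOT_KEY(s), and optionally HT_TOMBSTONES
                          together with SLOT_DELETED(s) */

/* Linear-probe lookup; returns a slot index, or (size_t)-1 if absent.
   Assumes the table always keeps at least one empty slot. */
static size_t ht_find(const SLOT *table, size_t nslots, KEY k)
{
    size_t i = HASH(k) % nslots;
    while (!SLOT_EMPTY(&table[i])) {
#ifdef HT_TOMBSTONES   /* pay for a deletion marker only when needed */
        if (!SLOT_DELETED(&table[i]) && KEY_EQ(SLOT_KEY(&table[i]), k))
            return i;
#else
        if (KEY_EQ(SLOT_KEY(&table[i]), k))
            return i;
#endif
        i = (i + 1) % nslots;  /* the rehash strategy could be a macro too */
    }
    return (size_t)-1;
}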


You quote part of the purposeful discussion, but then focus on the
trivial example:
I'm actually totally surprised that this makes a significant difference,

This has *nothing* to do with my intent for this thread but:
The reason it makes a difference is that qsort() spends a *huge*
majority of its time calling (*cmp)() (which is often a trivial
function) and doing swaps.
Saving function invocation is a big win. A known-size swap is
faster than a variable-size swap.
since a lot of qsort()
implementations are presumably recursive,

I don't have a number to give you offhand, but calls to (*cmp)
sharply outnumber the recursive calls to the sorting function.
I guess, to me, it doesn't seem like replacing the callback model with
this #define/inlining stuff model is enough of a conceptual improvement to
be significant.

I tried to emphasize that the goal was the READABILITY advantage of
using one (hashtable-manager, for example) function where otherwise
multiple versions would be needed, and that qsort()
(where a speed advantage happens to obtain) was introduced solely
because it was well-known and simple, and going into detail on a REAL
example where the method is applicable would be a diversion.
Alas, it was to no avail.

Thanks anyway,
James
 

robertwessel2

Jacob's mention of his library reminds me of my own
suggestion to help with software reusability.

Much software has the form
  High Level -- application-specific code
    Mid Level -- application-specific code
      Low Level -- FREQUENT FUNCTION, use library

But often one encounters
  High Level -- application-specific code
    Mid Level -- FREQUENT FUNCTION
      Low Level -- application-specific code

A familiar example where this form is encountered and
dealt with is qsort(), with the low-level handled
simply with a function pointer.

If the qsort() source were recompiled with the
application-dependent comparison etc. it would speed
up, perhaps by 25%.  The qsort.c source wouldn't
even change; instead the variation would be provided
by a very simple application-dependent header included
by qsort.c.

For example, make this definition:
  #define COMP(x,y)      \
     (((struct elem *)x)->key - ((struct elem *)y)->key)
visible to qsort.c, rather than just passing a pointer
to the function
  int ecomp(const void *x, const void *y)
  { return COMP(x,y); }

I mention this simple case to make the idea clear,
but if all I wanted to do was speed up qsort() by 25%
I wouldn't be posting this.  Instead I'm thinking
of cases where the common function CAN'T BE USED
AT ALL, without this flexibility.

In these interesting cases, the application-dependent
variations are much too complicated to be represented
with a few function arguments, but a common source
code could still be used, with any variations
coming from #define's in an application-dependent header.
(I have specific examples in mind, but won't mention
them.  I want to focus on the idea of reusing
source code in the way I imply, rather than any
specific example.)

How about it?  Anybody do this, or want to do it?
(There are examples of it *within* a buildable
library, but I'm talking about library source where
the user is encouraged to recompile a routine,
like qsort() but probably more complicated, using
his own application-specific header.)


qsort is a good example of a function where the indirect function call
can be very expensive, particularly with small elements (say, ints) to
be sorted.

C++'s approach to that is templates, with sort() doing just what you
suggest.

There's also no particular reason a compiler couldn't inline the
comparison function into a specialized qsort function as things stand
now. While qsort is a bit special in that it's a library function, a
compiler that supports link-time code generation could quite plausibly
specialize and inline it, so long as it had the semi-compiled qsort
source available.
 

Seebs

Sigh. It appears I've overestimated your skills as well.
SAVING THE CALLBACK IN QSORT() WAS JUST A TRIVIAL EXAMPLE
to avoid digressing into a discussion of a reusable hash-table
manager.

Yes. But it's an *interesting* trivial example, because the
results obtained are so surprising.
Writing one hashtable manager is better than writing 3 or 4, right?
Using simple macros makes the code simpler than switches or if/elses,
right?

Sometimes! There is a real risk that one of two things will happen:

1. The macros stop being simple.
2. The code keeps the macros simple at the expense of some logic
improvements that would have made the macros less simple.
This has *nothing* to do with my intent for this thread but:
The reason it makes a difference is that qsort() spends a *huge*
majority of its time calling (*cmp)() (which is often a trivial
function) and doing swaps.
Saving function invocation is a big win. A known-size swap is
faster than a variable-size swap.

Makes sense.
I tried to emphasize that the goal was the READABILITY advantage of
using one (hashtable-manager, for example) function where otherwise
multiple versions would be needed, and that qsort()
(where a speed advantage happens to obtain) was introduced solely
because it was well-known and simple, and going into detail on a REAL
example where the method is applicable would be a diversion.

Hmm. I think the issue is, without that example to look at, it's hard for
me to get a feel for what kind of improvements in code readability
you're thinking about. Any time you start a discussion with C programmers
with the claim "if we used macros, this would be easier to read", you are
necessarily fighting an uphill battle... :)

Can you give a simple example of, say, two functions which are largely
similar, and how simple macros could make it into one function?
I think the basic idea of having #defines lead to changes in code behavior
is a pretty well-accepted one up to a very limited point; consider assert.

What this suggests to me, though, is possibly a variant usage. Imagine
a source file to the effect of:

static inline int cmp(blah *a, blah *b) { /* code goes here */ }
static inline void swap(blah *a, blah *b) { /* code goes here */ }
#define qsort qsort_blah
#define qsort_type blah
#include <qsort_code.c>

and a header:
#define qsort qsort_blah
#define qsort_type blah
#include <qsort_decl.h>

If this .c module were a separate translation unit, a reasonable compiler
could do the same sorts of optimizations (?) that it would with the macros,
but you'd be avoiding the "it's a macro so it has to be simple and not
re-evaluate arguments" problems. And then it would provide both a declaration
and a definition of qsort_blah(), which would be a nice and efficient thing
that happened to function much like a qsort.

Is that the kind of thing you're getting at? I'm afraid I haven't ever
really looked at anything using a hashtable manager in C enough to have much
of a sense for what they look like or where you'd want to insert stuff
in them.

-s
 

Eric Sosman

I almost prefaced my message with a disclaimer
that SPEED WAS **NOT** MY CONCERN, but felt that a
simple speed-up case led to a clear example.
It seemed unnecessary -- I mention the point when
I mention speed.

But the only one to make a joke about the speed-up
was also the only one who grasped that speed
performance was not my primary concern!



Aside: Did you suspect that, although any speedup was peripheral
to my main point, I actually did the implied experiment,
using glibc's qsort source, with num, size, and
compare made compile-time constants? The element was
struct { int a, key, c, d; }
Speedup was 25%.


One example is hash-table management. If space-efficiency
is important, low-level details essential to the table
handling will vary, while higher-level functionality
(collision handling, resizing) may be invariant.
If this isn't clear, note that a space-efficient table
may pack key AND payload into 2 or 3 bytes, while
off-the-shelf hashtables often dictate 12 bytes PLUS key.

Can't comment, as there's a great lack of specificity
about the designs of these competing hash tables. Separate
chaining, coalesced chaining, open-addressed, ... The phrase
"hash table" covers a *wide* variety of data structures with
disparate characteristics.

Besides, the possible space efficiency seems insufficient
to justify "CAN'T BE USED AT ALL;" it's entirely akin to the
time savings you already don't care about.
I'm assuming the user is an expert programmer.

s/expert/infallible/, or so it seems. Speaking for myself,
I have a hard time imagining the mind behind that level of
perfection -- except to think that such a mind wouldn't bother
with crutches like pre-written libraries, but would write all he
needed anew each time, omitting the generalities and such that
are inapplicable to the immediate case at hand. See also

http://www.pbm.com/~lindahl/mel.html
Please note that there are two quite distinct approaches:

(1) common source is in header, often in the form of
#define'd code fragments, and invoked by application
via macros.
(2) common source is in .c file, application-specific
stuff is presented to .c via user-specific .h defines.
The .c is compiled (with its user-specific header) and
then behaves just as an ordinary .c/.o.

I'm thinking of (2) specifically, but Eric's comments
apply mostly to (1).

Sorry; I fail to see any difference between the two "quite
distinct" approaches. Both employ the preprocessor to make
case-specific alterations to a batch of source code; it's just
a matter of the packaging, not of the essence.

James, I'm not trying to make fun of you. The technique of
using the preprocessor to specialize general code to a particular
use is fairly well-known -- yet the plain fact is that it's not
widely and routinely used. Considering the efficiencies it may
offer, one wonders why not? And my answers (or speculations) are
about the added difficulties the approach brings, and about the
choices experienced developers make. Occasionally, very occasionally,
the advantages are appealing enough that someone chooses to go this
route. Mostly, though, they don't -- So you're left with a choice
between "They're all dimwits" and "They've considered other factors."
It's possible that the former is the case, but "Everyone's out of
step except Johnny" is a difficult position to sustain ...
 

Eric Sosman

sallysort.o: sallysort.c
	rm qspecif.h
	ln sallysort.h qspecif.h
	cc -O -c -o sallysort.o qvariant.c

Much simpler than many makefiles.

Simple, maybe, but correct? I don't see how it could be --
but it's possible I'm blinded by the simplicity. It seems to me
that sallysort.o should have dependencies on qvariant.c and on
sallysort.h, but I don't see those expressed anywhere.

Even if cleaned up, such a "for the nonce" change to the file
system would be utter poison for parallel builds. Since we're
supposing a large project (who would go to this much trouble for
a small one?), I think we've also got to suppose long build times
and an incentive to parallelize.
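
One way to clean that up (a sketch; the QSPECIF macro spelling is
invented): give the variant explicit dependencies and pass the specific
header through the preprocessor, so no shared qspecif.h link is mutated
mid-build and parallel makes stay safe:

sallysort.o: qvariant.c sallysort.h
	cc -O -DQSPECIF='"sallysort.h"' -c -o sallysort.o qvariant.c

with qvariant.c reading its configuration via #include QSPECIF instead
of #include "qspecif.h".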
 

Ian Collins

I've been an on-and-off poster for a long time.

What I don't get about you trolls is why you think there is a
hierarchy here. There isn't. I consider a few people here more
knowledgeable about the ISO C spec than myself but it's not like I pay
homage to them or something, or seek their approval or whatever.

That's the difference between reality and the fantasy world they inhabit.

You trolls really, really, really need to get a life. Look at what
you're doing. You're trolling "computers languages C" ... can you get
any nerdier and less useful than that? For all that is decent in this
world, seriously, go outside.

Well said.
 

Ian Collins

I tried to emphasize that the goal was the READABILITY advantage of
using one (hashtable-manager, for example) function where otherwise
multiple versions would be needed, and that qsort()
(where a speed advantage happens to obtain) was introduced solely
because it was well-known and simple, and going into detail on a REAL
example where the method is applicable would be a diversion.

That (readability) may be one of the reasons the whole thing falls down.
Whenever I have seen this done using macros, readability suffers.
Pity the poor maintenance programmer.
 

Ian Collins

I guess, to me, it doesn't seem like replacing the callback model with
this #define/inlining stuff model is enough of a conceptual improvement to
be significant -- it doesn't strike me as making things much easier to
understand. But if it's making a big speed difference, maybe that IS
worth looking at. But it surprises me that it'd matter much! It would
be interesting to come up with some reasonable test cases to experiment
with and see where the efficiency is coming from -- whether it's the
inlining or the argument lists or what.

It's not easy to knock up a quick example in C, but it is in C++, where
the standard sort is a function template. Comparing the sort of
10000000 random doubles, qsort is at least as fast as std::sort with no
optimisation, but takes twice as long with optimisation on. The big
difference is inlining the comparison function. I'll post the test case
if granted immunity from flames for posting C++!
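
The same comparison can be knocked up in C (a sketch along Ian's lines,
not his test case): time the library qsort() with a callback against a
quicksort hand-specialized for double, whose comparison and swap the
compiler can inline.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Quicksort specialized for double; inclusive bounds. */
static void sort_doubles(double *a, long lo, long hi)
{
    if (lo >= hi)
        return;
    double pivot = a[lo + (hi - lo) / 2];
    long i = lo, j = hi;
    while (i <= j) {
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            double t = a[i]; a[i] = a[j]; a[j] = t;  /* known-size swap */
            i++;
            j--;
        }
    }
    sort_doubles(a, lo, j);
    sort_doubles(a, i, hi);
}

int main(void)
{
    enum { N = 10000000 };
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b);
    if (!a || !b)
        return 1;
    srand(12345);
    for (long k = 0; k < N; k++)
        a[k] = b[k] = (double)rand() / RAND_MAX;

    clock_t t0 = clock();
    qsort(a, N, sizeof *a, cmp_double);
    clock_t t1 = clock();
    sort_doubles(b, 0, N - 1);
    clock_t t2 = clock();

    printf("qsort with callback:   %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("specialized quicksort: %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    free(b);
    return 0;
}

Where the gap comes from (the indirect call, the argument passing, or
the variable-size byte swap) can then be teased apart by varying one
factor at a time, as Seebs suggests.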
 
