Improving efficiency of a bytecode interpreter


Eric Sosman

bartc said:
Mark said:
As an example: Imagine that you currently have a giant switch table.
And you replace it with a table of function pointers. There have
existed compilers such that, when you do this, and write:

*(func[opcode])(...);

that your performance will INCREASE. Dramatically.

If I did this, I'd have to pass all the same variables to every
function, wouldn't I? I have a large number of variables used by the
processing engine, which different instructions may or may not
manipulate. How could I implement a table of function pointers, but
still have the flexibility to send only the right arguments to the
right functions?

Well, you just use globals, although that is frowned upon in this group
(but then so are 10000-line functions..).

Parameters only slow things down anyway.

Globals will probably slow them down even more. (Of course,
neither your opinion nor mine is supported by the C language
itself; measurement wins.)
One problem with switch (opcode) is that there will be code generated to
ensure opcode is in range (there may or may not be a way of turning this
off). And of course it needs to be in a dispatch loop.

Again, the C language takes no position on this. But it
seems odd to fret about one compare-and-jump-not-taken while
blithely ignoring the *two* jumps plus linkage overhead plus
register-to-memory spills that a function table will likely
involve. (And it's well beyond odd to worry about "a dispatch
loop" around a switch statement while conveniently forgetting
that a solution based on function pointers needs that very
same loop ...)
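For concreteness, here is a minimal sketch of the two dispatch styles being
compared (the opcodes and the vm struct are invented for illustration). Note
that the table version keeps the very same loop, and trades the switch's
bounds check for a call and return per opcode:

    /* Hypothetical opcode set and machine state, purely illustrative. */
    enum { OP_HALT, OP_INC, OP_DEC, NUM_OPS };
    struct vm { const unsigned char *pc; long acc; int running; };

    /* Style 1: switch-based dispatch. */
    static void run_switch(struct vm *m)
    {
        while (m->running) {
            switch (*m->pc++) {
            case OP_HALT: m->running = 0; break;
            case OP_INC:  m->acc++;       break;
            case OP_DEC:  m->acc--;       break;
            default:      m->running = 0; break;  /* the range check mentioned above */
            }
        }
    }

    /* Style 2: function-pointer dispatch -- it still needs a dispatch loop,
       and it silently assumes every opcode is within range. */
    static void op_halt(struct vm *m) { m->running = 0; }
    static void op_inc(struct vm *m)  { m->acc++; }
    static void op_dec(struct vm *m)  { m->acc--; }

    static void (*const handlers[NUM_OPS])(struct vm *) = { op_halt, op_inc, op_dec };

    static void run_table(struct vm *m)
    {
        while (m->running)
            handlers[*m->pc++](m);   /* call + return replace the switch's jump */
    }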
 

BGB / cr88192

Eric Sosman said:
It's somewhat a matter of opinion, but I'll venture to
disagree with both James Kuyper and Richard Heathfield in this
instance. Ordinarily, I'd agree that a ten thousand-line function,
even a five thousand-line function, is too big and almost sure to
be too snarled. But for a function of the kind Mark Bannister
describes, I think the objection based on size largely (sorry)
disappears: Total size is no longer a good measure of complexity.

What Mark has is a function body that looks something like

{
    preamble();
    while (something) {
        switch (doodad) {
        case 0: ...
        case 1: ...
        ...
        case 999: ...
        default: abort();
        }
    }
    postamble();
}

... and with a structure like this the "size" to consider is
roughly the total size of preamble(), the gnarliest `case', and
postamble(). One thousand `case's of about ten lines each,
and there you have your ten thousand-line function, but the
conceptual burden is light.

did you actually look at the code?...

I've come across (and written) big `switch' constructs like
this in three contexts: Lexical scanners, interpreters, and
state machines. (There are certainly others, but these three
seem most susceptible to `case' proliferation.) In none of these
situations does the total number of `case's significantly affect
the conceptual complexity -- state machines can get hairy, but
most of the hair grows on the state-to-state transitions, not on
the relatively bald bulk of the function containing them all.

Size doesn't *always* matter, lads.

I don't think you have looked at it...

look before you comment...

it is a good deal more terrible, more like:

{
    declarations...
    ....
    switch (foo)
    {
    case 0x0a:
        {
            declarations...
            ...
            while (p = foo)
            {
                ...
                switch (*p)
                {
                ...
                }
            }
            ...
            more conditionals, loops, switches, ...
            ...
        }
    }
}

more-or-less (not exact, but close enough), but it is scary-bad code
organization...

I have seen some plenty big switches of the type you describe, and this was
not one of them...

"Elegant" is a word sometimes applicable, but more often
used as a synonym for "The Devil's in the details, and I shun
the Devil." There's an old joke about an engineer, a physicist,
and a mathematician asked to solve an aerodynamic drag problem:

The engineer proposes to build a detailed full-scale model
of the automobile, put it in a wind tunnel, and measure the
drag directly. It'll take several weeks, but it's The Way.

yep, yep...

The physicist pooh-poohs the engineer's naïveté and says
he can write a huge system of nonlinear partial differential
equations for the drag and solve them on a supercomputer. It'll
take a solid week to run the solution (two, counting the re-run
after the bug is fixed), but the answer will come out Sooner.

I have SEEN this approach to problems...

I am taking a physics class, and IMO the way a lot of things are approached
makes little sense from the POV of a programmer like myself...

The mathematician smiles indulgently at his colleagues'
folly, and says "Consider a spherical Ferrari ..."

The mathematician's approach was elegant. IMHO, we can
do without that sort of elegance.

yep.


but, alas, it is still advisable to look over the code before making
comments that show you haven't...


I have written an x86 interpreter, and so I have seen big-ass switches, but
what was presented was something altogether different...
 

BGB / cr88192

Eric Sosman said:
bartc said:
Mark said:
As an example: Imagine that you currently have a giant switch table.
And you replace it with a table of function pointers. There have
existed compilers such that, when you do this, and write:

*(func[opcode])(...);

that your performance will INCREASE. Dramatically.

If I did this, I'd have to pass all the same variables to every
function, wouldn't I? I have a large number of variables used by the
processing engine, which different instructions may or may not
manipulate. How could I implement a table of function pointers, but
still have the flexibility to send only the right arguments to the
right functions?

Well, you just use globals, although that is frowned upon in this group
(but then so are 10000-line functions..).

Parameters only slow things down anyway.

Globals will probably slow them down even more. (Of course,
neither your opinion nor mine is supported by the C language
itself; measurement wins.)

depends on the compiler and OS...

globals are likely to be faster on Win32 and Win64, since code direct-links
against the globals.

locals and arguments are likely to be faster on Linux, since Linux uses
position-independent-code by default, which tends to involve (in the 32-bit
case), fetching EIP and calculating the position of the GOT, and then
accessing said globals indirectly via the GOT.

this means that, almost invariably, a global will require a double-indirect
access on Linux (first the GOT, then the value), whereas a local is a plain
indirect access (relative to ESP or EBP), and a global is a direct access on
Win32/64.

Again, the C language takes no position on this. But it
seems odd to fret about one compare-and-jump-not-taken while
blithely ignoring the *two* jumps plus linkage overhead plus
register-to-memory spills that a function table will likely
involve. (And it's well beyond odd to worry about "a dispatch
loop" around a switch statement while conveniently forgetting
that a solution based on function pointers needs that very
same loop ...)

yep, yep...

ASM is about the only real way to get a "fast" interpreter, but is it worth
the cost?
For most things, probably not...

 

James Kuyper

Eric said:
James Kuyper wrote: ....

"Elegant" is a word sometimes applicable, but more often
used as a synonym for "The Devil's in the details, and I shun
the Devil." There's an old joke about an engineer, a physicist,
and a mathematician asked to solve an aerodynamic drag problem:

For me, in a programming context, an "elegant" way of doing something is
a way that clearly and obviously does precisely what needs to be done,
nothing more, nothing less. All too often, we have to settle for unclear
methods, or methods that waste time doing something only to undo it
later. It has nothing to do with avoiding the details. When I can find a
method of doing something that is truly elegant, I treasure it.
 

BGB / cr88192

Gordon Burditt said:
You might make it smaller by grouping the opcodes by type of arguments
and handling the types in common code. That assumes that there
*are* groups of opcodes that have common argument types. If there
aren't, perhaps your bytecode design should be re-examined.

yep, I think the OP did something weird here...

This will likely make the code smaller, but it probably won't speed it
up, unless you get a *LOT* of speedup from better locality of code.
You'll have an extra switch.

actually, it probably can help speed up the code, since one can essentially
batch together a bunch of things that couldn't otherwise be done, but this
depends a lot on the interpreter design.

switch (opcode)
{
case for each bytecode instruction with similar arguments:
    process arguments
    perform command
}

char argtype[256] = { 1, 2, 1, .... };

switch (argtype[opcode])
{
case for each group with similar arguments:
    process common arguments for this group;
}
switch (opcode)
{
case for each bytecode instruction:
    process arguments
    perform command
}


Where argtype[opcode] represents what group this particular opcode
falls in. For instance, you might have several instructions that
each refer to a destination register and a source virtual memory
with a 32-bit offset and an optional index register (e.g. load,
add, subtract, bitwise AND, bitwise OR, etc.). You can calculate
the addresses of these without having to look at the specific
instruction.
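A minimal concrete rendering of that two-switch idea, with invented opcodes,
argument groups, and operand encoding (none of this is taken from the
original engine.c):

    #include <stdint.h>

    /* Hypothetical opcodes and argument groups, purely for illustration. */
    enum { OP_NOP, OP_LOAD, OP_ADD, OP_SUB, NUM_OPS };
    enum { ARG_NONE, ARG_REG_IMM32 };

    static const unsigned char argtype[NUM_OPS] = {
        [OP_NOP]  = ARG_NONE,
        [OP_LOAD] = ARG_REG_IMM32,
        [OP_ADD]  = ARG_REG_IMM32,
        [OP_SUB]  = ARG_REG_IMM32,
    };

    static long regs[16];

    /* Execute one instruction starting at *pp and advance *pp past it. */
    static void step(const unsigned char **pp)
    {
        const unsigned char *p = *pp;
        unsigned opcode = *p++;
        unsigned reg = 0;
        int32_t imm = 0;

        switch (argtype[opcode]) {          /* decode operands once per group */
        case ARG_NONE:
            break;
        case ARG_REG_IMM32:
            reg = *p++;
            imm = (int32_t)(p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24));
            p += 4;
            break;
        }

        switch (opcode) {                   /* per-opcode body is now much smaller */
        case OP_NOP:                    break;
        case OP_LOAD: regs[reg]  = imm; break;
        case OP_ADD:  regs[reg] += imm; break;
        case OP_SUB:  regs[reg] -= imm; break;
        }

        *pp = p;
    }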

I personally used a strategy (for my x86 interpreter) where the first switch
essentially branched off to different functions, each doing their own
versions of the second switch.

granted, it is worth noting that I did the opcode decoding separately from the
interpretation, mostly because the x86 opcode encodings are, well, messy...

so, splitting up the 'decode' and 'interpret' steps helps greatly reduce the
overall complexity, since fairly generic logic can be used for decoding the
opcodes, and the interpretation step is mostly about getting to the right
place and doing whatever (no worries over the complexities of opcode
encoding rules, ...).

however, my current opcode decoder is likely to be terribly buggy and slow,
and I may need at some point to figure out how to optimize it... (one remote
possibility here being auto-generated mega-switches, and some very different
processing logic, another possibility being a specialized lookup table,
possibly matching on the first 1 or 2 bytes...).

the current decoder is based on table-based lookups and a lot of ugly
hacked-together logic code (I had hacked code from my disassembler into an
opcode decoder...).
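One possible shape for that table-driven route; the flags and entries below
are toy values, nothing like a complete x86 table (which would also need
prefixes, escape bytes, ModRM/SIB decoding, and so on):

    #include <stddef.h>

    enum { F_MODRM = 1, F_IMM8 = 2, F_IMM32 = 4 };

    struct opinfo {
        const char *mnemonic;
        unsigned    flags;
    };

    /* Indexed by the first opcode byte; sparse toy entries only. */
    static const struct opinfo optable[256] = {
        [0x90] = { "nop", 0 },
        [0x04] = { "add", F_IMM8 },     /* add al, imm8   */
        [0x05] = { "add", F_IMM32 },    /* add eax, imm32 */
        [0x89] = { "mov", F_MODRM },    /* mov r/m32, r32 */
    };

    /* Length implied by the (toy) table entry, ignoring SIB/displacement;
       returns 0 for bytes the table doesn't know about. */
    static size_t decode_length(const unsigned char *p)
    {
        const struct opinfo *oi = &optable[p[0]];
        size_t len = 1;

        if (!oi->mnemonic)
            return 0;
        if (oi->flags & F_MODRM) len += 1;
        if (oi->flags & F_IMM8)  len += 1;
        if (oi->flags & F_IMM32) len += 4;
        return len;
    }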

That's way too much code to look at.

yep, I skimmed...
 

bartc

Eric said:
bartc said:
Mark said:
As an example: Imagine that you currently have a giant switch
table. And you replace it with a table of function pointers. There
have existed compilers such that, when you do this, and write:

*(func[opcode])(...);

that your performance will INCREASE. Dramatically.

If I did this, I'd have to pass all the same variables to every
function, wouldn't I? I have a large number of variables used by
the processing engine, which different instructions may or may not
manipulate. How could I implement a table of function pointers, but
still have the flexibility to send only the right arguments to the
right functions?

Well, you just use globals, although that is frowned upon in this
group (but then so are 10000-line functions..).

Parameters only slow things down anyway.

Globals will probably slow them down even more. (Of course,
neither your opinion nor mine is supported by the C language
itself; measurement wins.)

That seems counterintuitive. You'd think that pushing a parameter somewhere
(or maybe two parameters, or maybe ten), creating a local stack frame and
uncreating it on return (as I assume you will suggest that using local
statics is slower), would take a bit longer than, um, not doing any of
that...
Again, the C language takes no position on this. But it
seems odd to fret about one compare-and-jump-not-taken while
blithely ignoring the *two* jumps plus linkage overhead plus
register-to-memory spills that a function table will likely
involve. (And it's well beyond odd to worry about "a dispatch
loop" around a switch statement while conveniently forgetting
that a solution based on function pointers needs that very
same loop ...)

Well I do go on to mention possible ways of eliminating the dispatch loop,
although standard C makes that difficult.

(And a simple dispatch loop using a function pointer can be unrolled more
easily.)

The best I could manage with a switch (using gcc 3.4.5 a couple of years
ago), was I think 4 or 5 instructions for the switch, including a jump, plus
at the end of the case handler, a jump for the break (I assume this was
optimised to go straight to the start of the loop, I don't think it
optimised it by replicating the switch jump code). Some half-dozen
instructions overhead per opcode decode.

The best I could achieve with asm was 2 instructions overhead per opcode,
using one dedicated register. Of course these counts are only significant
for opcodes with very simple handlers. And for a bytecode design that
doesn't have complex operand decoding.
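For what it's worth, gcc's labels-as-values extension (not standard C, which
is presumably the difficulty mentioned above) gets close to that asm figure:
each handler ends with its own indirect jump, so there is no central loop and
no bounds check. A sketch with invented opcodes:

    /* Threaded dispatch via gcc's computed goto; not portable C. */
    enum { OP_HALT, OP_INC, OP_DEC };

    static long run_threaded(const unsigned char *pc)
    {
        static void *const labels[] = { &&do_halt, &&do_inc, &&do_dec };
        long acc = 0;

    #define DISPATCH() goto *labels[*pc++]

        DISPATCH();
    do_inc:  acc++; DISPATCH();
    do_dec:  acc--; DISPATCH();
    do_halt: return acc;

    #undef DISPATCH
    }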
 

Phil Carmody

James Kuyper said:
For me, in a programming context, an "elegant" way of doing something
is a way that clearly and obviously does precisely what needs to be
done, nothing more, nothing less.

I'd say a design which recognises a likely greater class of
problems, of which this one is a single precise instance, and
which can be clearly and cleanly extended in the direction
of that greater class, would often be preferable.

So, for example, atoi8() and atoi10() may be what you were
asked for, but atoiN(...,N) might be the elegant solution.
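(As a sketch of the sort of thing meant here -- atoiN is Phil's hypothetical,
not a standard function, and this version ignores signs and overflow:)

    /* Parse a non-negative integer in the given base (2..36).
       Assumes ASCII-contiguous letters; a sketch only. */
    long atoiN(const char *s, int base)
    {
        long value = 0;

        for (; *s; s++) {
            int digit;
            if (*s >= '0' && *s <= '9')
                digit = *s - '0';
            else if (*s >= 'a' && *s <= 'z')
                digit = *s - 'a' + 10;
            else if (*s >= 'A' && *s <= 'Z')
                digit = *s - 'A' + 10;
            else
                break;
            if (digit >= base)
                break;
            value = value * base + digit;
        }
        return value;
    }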
All too often, we have to settle for
unclear methods, or methods that waste time doing something only to
undo it later. It has nothing to do with avoiding the details. When I
can find a method of doing something that is truly elegant, I treasure
it.

Yup.

Phil
 

James Kuyper

Phil Carmody wrote:
....
I'd say a design which recognises a likely greater class of
problems, of which this one is a single precise instance, and
which can be clearly and cleanly extended in the direction
of that greater class, would often be preferable.

So, for example, atoi8() and atoi10() may be what you were
asked for, but atoiN(...,N) might be the elegant solution.

While atoiN(..., N) has a virtue that I approve of, I would call that
virtue generality, not elegance. But that's hardly an issue worth
arguing about.
 

Nick Keighley

As an example:  Imagine that you currently have a giant switch table.  And you
replace it with a table of function pointers.  There have existed compilers
such that, when you do this, and write:
*(func[opcode])(...);

that your performance will INCREASE.  Dramatically.

If I did this, I'd have to pass all the same variables to every
function, wouldn't I?  I have a large number of variables used by the
processing engine, which different instructions may or may not
manipulate.  How could I implement a table of function pointers, but
still have the flexibility to send only the right arguments to the
right functions?

well, currently all the blocks of code for the operands can access all the
variables, so the function version will be no worse. As a first go I'd
simply pass the whole virtual register set to the operand handlers;
later you can worry about appropriate visibility.

Do you have a test harness? Life becomes much easier if you can run
the test, modify the code, run the test. Running the test should
involve pressing one button (ok, a couple of keystrokes on a decent
cli), and if it's ok return nothing but "Test ok".
 

Eric Sosman

bartc said:
Eric said:
bartc said:
[...]
Well, you just use globals, although that is frowned upon in this
group (but then so are 10000-line functions..).

Parameters only slow things down anyway.

Globals will probably slow them down even more. (Of course,
neither your opinion nor mine is supported by the C language
itself; measurement wins.)

That seems counterintuitive. You'd think that pushing a parameter
somewhere (or maybe two parameters, or maybe ten), creating a local
stack frame and uncreating it on return (as I assume you will suggest
that using local statics is slower), would take a bit longer than, um,
not doing any of that...

Actually, what I'd think is that passing things around
in globals prevents the compiler from passing them around in
registers, forcing actual memory accesses instead. Since the
CPU runs a few hundred times faster than memory ...

Even if you're lucky and all the globals wind up in a nearby
cache, access to them will be slower than if they were sitting
in the CPU already. But if the compiler has just computed `pc++'
and has the program counter in a register and is about to call
a function, it must first spill the register back to the global
`pc' location lest the function need it. And when the function
returns, the compiler must re-fetch the global `pc' just in case
the function changed it. Both the store and the fetch (and any
fetches and stores within the called function) are potentially
avoidable if the compiler can "see" all the accesses, which will
be more likely if they're all inside one function.

But anyhow: As I mentioned earlier, this sort of thing is
not about the C language, but about specific implementations
thereof. Measurement *still* wins.
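One common way to give the compiler that visibility without resorting to
globals is to keep the interpreter state in a struct and hand the handlers a
pointer to it. Only a sketch (the names are invented), and whether it
actually beats globals on a given compiler is exactly the sort of thing to
measure:

    /* All interpreter state in one place; handlers get a pointer to it
       instead of touching file-scope globals. */
    struct vm_state {
        const unsigned char *pc;
        long regs[16];
        long acc;
    };

    static void op_inc(struct vm_state *vm) { vm->acc++; }
    static void op_ld0(struct vm_state *vm) { vm->acc = vm->regs[0]; }

    /* Within one translation unit the compiler can often keep vm->pc and
       vm->acc in registers across these calls (or inline them outright),
       which it cannot safely do for globals whose accesses it can't all see. */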
 

Moi

I'm developing a new programming language, and at the core of this
language is a bytecode interpreter (yes, I've invented the bytecode too,
as well as assembly code and an assembler ...)

It's all written in C.

My question is, can anyone think of a way of making engine.c (see link
below) smaller and more efficient? This is the heart of the
interpreter, it's a big switch with switches inside it, a bit like this
pseudo-code:

switch (opcode)
{
case for each bytecode instruction with similar arguments:
    process arguments
    perform command
}

What troubles me is that a lot of the code for processing arguments is
very similar, but because different instructions take different
arguments I can't readily process them up-front. Take a look at the
code itself and if you have any useful suggestions please send them my
way.

http://prose.cvs.sourceforge.net/viewvc/prose/prose/src/prose/engine.c?view=markup

Thanks,
Mark Bannister.
(cambridge at sourceforge dot net)



I scrolled through the code and found that 95% of it dealt with handling
arguments/registers/addressing modes, with a lot of common code.

Personally I would put as much of the instruction set as possible into data
tables, and create some helper functions to find out instruction length /
addressing modes / register usage. That would reduce the big switch to a few
lines per case. (If you use the opcode to index an opcode table, you could
even leave out the big switch.) When needed you could always inline these
functions.

If that is still too slow, you could use these same tables to generate code.
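A sketch of the kind of per-opcode table meant here; the field names and
entries are invented, and in practice the table would be generated from the
instruction-set description rather than written by hand:

    struct insn_info {
        const char   *name;
        unsigned char length;      /* total instruction length in bytes  */
        unsigned char addr_mode;   /* how the operands are encoded       */
        unsigned char reads_reg;   /* names a source register?           */
        unsigned char writes_reg;  /* names a destination register?      */
    };

    static const struct insn_info insn_table[] = {
        /* name    len  mode          rd  wr */
        { "nop",    1,  0 /*none*/,    0,  0 },
        { "load",   6,  1 /*imm32*/,   0,  1 },
        { "add",    6,  1 /*imm32*/,   1,  1 },
        { "jmp",    5,  2 /*rel32*/,   0,  0 },
    };

    /* Helpers like these replace the per-case boilerplate in the big switch. */
    static unsigned insn_length(unsigned opcode)    { return insn_table[opcode].length; }
    static unsigned insn_addr_mode(unsigned opcode) { return insn_table[opcode].addr_mode; }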


HTH,
AvK
 

Tim Rentsch

Mark R. Bannister said:
I'm developing a new programming language, and at the core of this
language is a bytecode interpreter (yes, I've invented the bytecode
too, as well as assembly code and an assembler ...)

It's all written in C.

My question is, can anyone think of a way of making engine.c (see link
below) smaller and more efficient? This is the heart of the
interpreter, it's a big switch with switches inside it, a bit like
this pseudo-code:

switch (opcode)
{
case for each bytecode instruction with similar arguments:
    process arguments
    perform command
}

What troubles me is that a lot of the code for processing arguments is
very similar, but because different instructions take different
arguments I can't readily process them up-front. Take a look at the
code itself and if you have any useful suggestions please send them my
way.

http://prose.cvs.sourceforge.net/viewvc/prose/prose/src/prose/engine.c?view=markup

I don't know if you've thought of this, but the way the question is
asked basically precludes the most useful answers. The most significant
factors in performance have to do with program design at higher
levels. But, here we are, presented with a program where all the
higher level decisions have been made, and _now_ there's a request for
help with performance? It's too late in the development cycle. If
you really want or need better performance (and I mean really better,
not just marginally better), the way to start is to reconsider some
higher level design questions. For example, can the bytecode be
transformed to a different intermediate form that allows faster
interpretation (in the spirit of JIT compiling)? Very likely it can.

Recommendation: read "The Mythical Man-Month". If you've already
read it, read it again. Representation is the essence of programming.
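One concrete version of that suggestion is to pre-decode the bytecode once
into an array of already-decoded instructions, so the hot loop never parses
operands at all. The decoded form and opcodes below are invented purely to
show the shape of the idea:

    #include <stdint.h>

    /* Hypothetical decoded instruction: handler selector plus operands
       already extracted from the bytecode. */
    struct decoded {
        uint8_t op;
        uint8_t reg;
        int32_t imm;
    };

    /* Pass 1 (once per program) walks the raw bytecode and fills an array
       of struct decoded.  Pass 2, the hot loop, then looks like this: */
    static void run(const struct decoded *prog, long *regs)
    {
        const struct decoded *d = prog;

        for (;;) {
            switch (d->op) {
            case 0: return;                         /* halt */
            case 1: regs[d->reg]  = d->imm; break;  /* load */
            case 2: regs[d->reg] += d->imm; break;  /* add  */
            }
            d++;
        }
    }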
 

BGB / cr88192

Gordon Burditt said:
I see nothing wrong with a 10,000 line function if the purpose of
the function is to generate a fixed 10,000 line text output (e.g.
one of the shorter software license agreements for certain companies)
and it needs to be reviewable for accuracy by a non-programmer (e.g.
a lawyer) before anyone is allowed to compile it. You can do this
with 10,000 consecutive calls to puts(), and no funny backslash
sequences that lawyers don't understand.

Performance is not an issue for this function. It might work better
for its intended purpose if it slowed down to first-grade reading
speed, although the text is more suitable for a first-year law
student.

No, it's not acceptable to put the license agreement in a separate
file, open it, and output it. (It's too easy for someone to replace
the license with a copy of the GPL or the Pirate Bay logo).

if I were doing this, I would probably develop things with the license as a
separate text file, and at compile time essentially compress and C-ify the
document via a tool (it is possible to do pure-ASCII text compression for
use in string constants, although the ratio is not as good as with a more
traditional compressor).

for example, one could use an LZ77-based algo with '$' as a run escape, and
maybe base-64 encoded runs.
"LOLZ$/AD"

could emit LOLZ 16 times ("LOLZLOLZLOLZ"...).

(note, this scheme would have a 4k window and a max match of 64).

similarly, decoding would be fairly easy.

or such...
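The post doesn't spell out the packing, but one layout consistent with the 4K
window, the 64-byte match limit, and the "LOLZ$/AD" example is: '$', then one
base-64 digit giving (length - 1), then two base-64 digits giving
(distance - 1). A sketch of a decoder under that assumption (how a literal
'$' would be escaped is left open, so it isn't handled):

    #include <stddef.h>
    #include <string.h>

    /* Base-64 digit value; the A-Za-z0-9+/ alphabet is itself an assumption. */
    static int b64val(int c)
    {
        static const char alphabet[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        const char *p = c ? strchr(alphabet, c) : NULL;
        return p ? (int)(p - alphabet) : -1;
    }

    /* Decode src into dst (assumed large enough); returns bytes written. */
    static size_t lz_decode(const char *src, char *dst)
    {
        size_t out = 0;

        while (*src) {
            if (*src == '$') {
                int len  = b64val(src[1]) + 1;                       /* 1..64   */
                int dist = b64val(src[2]) * 64 + b64val(src[3]) + 1; /* 1..4096 */
                int i;
                for (i = 0; i < len; i++, out++)
                    dst[out] = dst[out - dist];  /* overlapping copy is intended */
                src += 4;
            } else {
                dst[out++] = *src++;
            }
        }
        return out;
    }

With that packing, "LOLZ$/AD" writes the four literal bytes and then copies
64 bytes from a distance of 4, i.e. 16 more copies of "LOLZ".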
 

Mark R. Bannister

As discussed a few weeks ago, I took on board the helpful comments
that I received from this group and re-engineered my bytecode
interpreter's engine.c so that all bytecode instructions were handled
by functions and called via a jump table.

Here is the result:
http://prose.cvs.sourceforge.net/viewvc/prose/prose/src/prose/engine.c?view=markup

As you will see, engine.c is no longer 10,000 lines long :)

I'd appreciate any further suggestions you may have that will help
improve the efficiency of the interpreter.

Many thanks,
Mark.
 

Mark R. Bannister

Looks like you've turned it into a maintainable program. Assuming it
still works, you deserve some congratulations.

Yes of course it still works! I have an extensive test suite (make
check) and have ensured that the product is still working as before.
I've taken so long because I only have half-hour train journeys in the
morning to do my coding...

Ultimately he could change his jump table back to a switch statement and
declare his functions static inline (or rely on compiler heuristics to
inline). That would put him more-or-less back to square zero.

I investigated this. It would seem that I'd have to put all the
functions in a single source file for inlining to work. That would
take me back to a 10,000 line engine.c again. Or should I #include
them? I don't think I've ever seen a project that #includes a .c
source file, only .h.

There's something else I'm struggling with. I'm using gcc (various
versions including 4.1.2) and I'm getting these nuisance compile
warnings:

.../../../prose-0.7.0/src/prose/stringtype.c:1793: warning: passing
argument 2 of 'mSlist_init_properties' from incompatible pointer type
.../../../prose-0.7.0/src/prose/stringtype.c:1793: warning: passing
argument 3 of 'mSlist_init_properties' from incompatible pointer type
.../../../prose-0.7.0/src/prose/stringtype.c:1793: warning: passing
argument 4 of 'mSlist_init_properties' from incompatible pointer type

These are examples. I get a lot of these warnings from numerous
source files.

Argument 2, to take one particular example, of mSlist_init_properties,
is declared in src/libIvor/slist.c as:

int (*cmpkeys)(const VOIDP *key1, const VOIDP *key2);

where VOIDP in this case is 'void'. This is basically giving a singly-
linked list library routine an entry-point for comparing two pieces of
arbitrary data.

In stringtype.c, I'm calling this function with argument 2: STRcmp.
STRcmp is a bytestring comparison routine declared as:

int STRcmp(const STRING *ss1, const STRING *ss2);

where STRING is a typedef for a structure called stringtype_struct.

So, I see where the error is coming from. The compiler is expecting a
function that takes void pointers, and I'm giving it a function that
takes different pointers instead. The code works fine and I've
ignored the warnings for many years. However, it would be nice to get
rid of these warnings from the compiler output. Any suggestions as to
how to get around this would be very much appreciated. Wouldn't it be
nice if I could use a cast....

Best regards,
Mark.
 

Ben Bacarisse

Mark R. Bannister said:
There's something else I'm struggling with. I'm using gcc (various
versions including 4.1.2) and I'm getting these nuisance compile
warnings:

../../../prose-0.7.0/src/prose/stringtype.c:1793: warning: passing
argument 2 of 'mSlist_init_properties' from incompatible pointer type
Argument 2, to take one particular example, of mSlist_init_properties,
is declared in src/libIvor/slist.c as:

int (*cmpkeys)(const VOIDP *key1, const VOIDP *key2);

where VOIDP in this case is 'void'.

OK, but that is an odd name. What is the P for? If VOIDP is not void
then some of what I say below will be off the mark a little.
This is basically giving a singly-
linked list library routine an entry-point for comparing two pieces of
arbitrary data.

In stringtype.c, I'm calling this function with argument 2: STRcmp.
STRcmp is a bytestring comparison routine declared as:

int STRcmp(const STRING *ss1, const STRING *ss2);

where STRING is a typedef for a structure called stringtype_struct.

So, I see where the error is coming from. The compiler is expecting a
function that takes void pointers, and I'm giving it a function that
takes different pointers instead. The code works fine and I've
ignored the warnings for many years.

Just so you know, I have used systems where this would break unless
you use a cast to convert the function back to its real type when you
call it with struct * arguments.
However, it would be nice to get
rid of these warnings from the compiler output. Any suggestions as to
how to get around this would be very much appreciated. Wouldn't it be
nice if I could use a cast....

There are two choices. (a) Cast the pointer to the type expected by
the function but be sure to cast it back to whatever is the real type
of the function when you call it. This is messy. (b) Make all your
functions have the type of the pointer that you actually pass, and
convert the void *s internally. This can be done without a cast.
I.e. you re-write:

int STRcmp(const STRING *ss1, const STRING *ss2) { ... }

to be

int STRcmp(const void *vp1, const void *vp2)
{
    const STRING *ss1 = vp1, *ss2 = vp2;
    /* body as before */
}
 

Mark R. Bannister

OK, but that is an odd name.  What is the P for?  If VOIDP is not void
then some of what I say below will be off the mark a little.

Well, VOIDP could be a char on a system that doesn't support void. I
guess when I started the project (a long time ago), I was not making
any assumptions about the capabilities of the compiler ... now that I've
implemented a function jump table in an array, I've negated that. So
at some point, when I find the time, I can trawl through and remove
support for very old compilers.

There are two choices.  (a) Cast the pointer to the type expected by
the function but be sure to cast it back to whatever is the real type
of the function when you call it.  This is messy.  (b) Make all your
functions have the type of the pointer that you actually pass, and
convert the void *s internally.  This can be done without a cast.
I.e. you re-write:

  int STRcmp(const STRING *ss1, const STRING *ss2) { ... }

to be

  int STRcmp(const void *vp1, const void *vp2)
  {
      const STRING *ss1 = vp1, *ss2 = vp2;
      /* body as before */
  }

STRcmp is used elsewhere as a comparison function in its own right.
If I used void pointers as parameters in STRcmp, I'd lose the type-
checking. I'm trying to avoid writing two different STRcmp functions,
one with STRING pointers, one with void pointers. That seems
unnecessary duplication.
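A middle ground (not one of the two options quoted above, just a common
idiom) is to keep the typed STRcmp for direct callers and add a thin void *
adapter whose only job is to match the callback signature; the _v name here
is invented:

    /* Assumes the project's STRING typedef and STRcmp prototype are in scope. */
    int STRcmp(const STRING *ss1, const STRING *ss2);

    /* Adapter with exactly the signature mSlist_init_properties() expects.
       Calling through this pointer type is well defined, the comparison
       logic still lives in one place, and direct callers of STRcmp keep
       their type checking. */
    static int STRcmp_v(const void *v1, const void *v2)
    {
        return STRcmp(v1, v2);
    }

Passing STRcmp_v rather than STRcmp as argument 2 silences the warning
without duplicating the comparison code.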
 

Seebs

I investigated this. It would seem that I'd have to put all the
functions in a single source file for inlining to work. That would
take me back to a 10,000 line engine.c again. Or should I #include
them? I don't think I've ever seen a project that #includes a .c
source file, only .h.

Nothing wrong with a huge source module with tons of small functions in it.
So, I see where the error is coming from. The compiler is expecting a
function that takes void pointers, and I'm giving it a function that
takes different pointers instead. The code works fine and I've
ignored the warnings for many years. However, it would be nice to get
rid of these warnings from the compiler output. Any suggestions as to
how to get around this would be very much appreciated. Wouldn't it be
nice if I could use a cast....

You can.

But!

It is undefined behavior to call a function through a pointer of a different
sort. So if you have a function which is actually declared to take a
char * argument, and you call it through a pointer with a void * argument,
you lose.

Solution:

int my_example_strcmp(const void *v1, const void *v2) {
    const unsigned char *s1 = v1, *s2 = v2;
    while (*s1 && *s2 && *s1 == *s2)
        ++s1, ++s2;
    return *s1 - *s2;
}

Convert the arguments in the function, so its signature matches what the
function pointer looks like.

-s
 

Ben Bacarisse

Mark R. Bannister said:
On 24 Nov, 15:35, Ben Bacarisse <[email protected]> wrote:

STRcmp is used elsewhere as a comparison function in its own right.
If I used void pointers as parameters in STRcmp, I'd lose the type-
checking. I'm trying to avoid writing two different STRcmp functions,
one with STRING pointers, one with void pointers. That seems
unnecessary duplication.

Then use (a) and live with all the casts. I don't see another way to
avoid the problem.
 
