[RFC] C token counter

Arthur J. O'Dwyer

Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~ajo/workshop/tokens.c

It's a token counter for the C programming language, following
the outline kind-of-described here:
http://www.kochandreas.com/home/language/tests/TOKENS.HTM
Basically, it's supposed to give a reasonable approximation of
the number of "atomic tokens" present in a C source file.

To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?

To those reading in c.l.m: Andreas, this is for you. :)
I'm just posting it generally in case anyone has any comments
along the lines of "gee, C is nifty!" or "gee, C can't do
anything!"

Please set follow-ups appropriately in your reply: comp.lang.c
probably doesn't care about Practical Language Comparison ;) and
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

-Arthur

[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.
 
Nils M Holm

In comp.lang.misc Arthur J. O'Dwyer said:
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

Why not? They /are/ two tokens:

# define x y

Nils
 
Martin Dickopp

I'm reading this in comp.lang.c, followup set.

Arthur J. O'Dwyer said:
Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~ajo/workshop/tokens.c

It's a token counter for the C programming language, following
the outline kind-of-described here:
http://www.kochandreas.com/home/language/tests/TOKENS.HTM
Basically, it's supposed to give a reasonable approximation of
the number of "atomic tokens" present in a C source file.

To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?

I have just quickly skimmed your code, so please take my comments with a
grain of salt.

I prefer `int main (void)' over `int main ()', especially since you
don't use pre-C89-style definitions for all other functions.

When `getchar ()' returns `EOF', you should check if this is due to an
error or due to end-of-file condition.

The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.

Martin
 
Jeremy Yallop

Arthur said:
Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~ajo/workshop/tokens.c [...]
To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?

It doesn't handle digraphs correctly: the following produces an error:

int main()<%%>

I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens. In order to count tokens you
either need to specify that the input is already preprocessed, or
implement most of a preprocessor yourself. The following program has
35 preprocessing tokens, but only 11 tokens:

#define str(x) # x
int main() { puts(
str(
int main() { puts
("Hello, world.");
return 0;
}));}

Your program gives "34" for this (counting "#define" as a single
token), so it seems to be counting preprocessing tokens. However,
some valid preprocessing tokens, such as certain preprocessing
numbers, are rejected:

#define str(x) # x
int main() { str(3p+); }

I think all the programs above are strictly conforming.

Arthur said:
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;) [...]
[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.

"#" is a punctuator, which is a token, and "define" is an identifier,
also a token.

Jeremy.
 
Martin Dickopp

Nils M Holm said:
In comp.lang.misc Arthur J. O'Dwyer said:
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

Since this posting might be the start of a long pedantic debate ;) about
`#define' being two tokens, I ignored the Followup-To: comp.lang.misc,
and set a Followup-To: comp.lang.c.

Nils said:
Why not? They /are/ two tokens:

# define x y

<pedantic>
While you're right, the fact that spaces are allowed between `#' and
`define' doesn't prove anything. ;) The language /could/ have defined
the sequence of `#', spaces, and `define' as a single preprocessing-
token. String literals are an example of preprocessing-tokens which may
include spaces.
</pedantic>

Martin
 
Arthur J. O'Dwyer

Combined response to Martin's and Jeremy's replies.
Thanks (so far...)!

Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~ajo/workshop/tokens.c

Things I fixed:

 - int main(void) replaces int main(). Style issue.
 - Comments inside string and character literals weren't being handled.
   Fixed now.
 - I had forgotten digraphs entirely. Fixed now.
 - Comments add spaces; e.g., makes a/**/b two tokens instead of one.
 - "# define" et al. are now two tokens instead of one. Jeremy's
   argument convinced me, I guess. :) The fact that you can stick a
   comment in between the "#" and the "define" contributed to my
   conviction, too, although I knew that before.
 - Unrecognized or probably-invalid symbols are passed through anyway,
   thus doing away with the NULL return from 'mystate'. This is to
   deal with problems involving stringizing macros: basically
   *anything* could be passed to them!


Martin said:
The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.

It seems at a quick glance that the pp-number definition would make

    0xDE+0xAD

into one token, where a more comprehensive approach would suggest it's
"really" three tokens --- 0xDE, +, and 0xAD. So while I agree I'm
doing too much with numbers, I don't yet see a better way that jibes
with the way I'm trying to define "tokens."


Jeremy said:
I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens.

Right. At least, I have to make up a definition of "token"
that makes sense for most applications. Post-preprocessing tokens
are too complicated to handle on this program's scale; and you
pointed out the "stringizing" issue, which I hadn't considered before
either.

Well, new version uploaded; same request for comments on this one. :)
Particularly, I'm not entirely sure that all numbers are handled
appropriately; and I'm not entirely sure that I did the digraph stuff
right --- especially 'PercentOp', which has to discriminate between
%, %:, %=, %>, and %:%:. FSMs are hard. ;-)

-Arthur
 
Nils M Holm

In comp.lang.misc Martin Dickopp said:
While you're right, the fact that spaces are allowed between `#' and
`define' doesn't prove anything. ;) The language /could/ have defined
the sequence of `#', spaces, and `define' as a single preprocessing-
token.

While you are right, let's apply occam's razor: what is more likely,
that the parts of /some/ specific tokens can be separated by
(otherwise useless) spaces or that these space separate individual
tokens? :)
Martin said:
String literals are an example of preprocessing-tokens which may
include spaces.

True. However, these spaces do have a purpose.

Nils
 
CBFalconer

Arthur J. O'Dwyer said:
... snip ...

Please set follow-ups appropriately in your reply: comp.lang.c
probably doesn't care about Practical Language Comparison ;) and
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.

I suggest you could have better set follow-ups yourself. How does
your code handle:

# if whatever
# define blah
# else
# undef foo
# endif

which is perfectly legal, and necessary for some older compilers.
 
Martin Dickopp

Arthur J. O'Dwyer said:
It seems at a quick glance that the pp-number definition would make

    0xDE+0xAD

into one token,

Yes, it's one pp-token.

Arthur said:
where a more comprehensive approach would suggest it's "really" three
tokens --- 0xDE, +, and 0xAD.

It's a pp-token which cannot be converted to a token. Compare the
following two programs (difference underlined):

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE + 0xAD); return 0; }
/*                                ^^^^^^^^^^^ */

and

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }
/*                                ^^^^^^^^^ */

The first one displays the number 395. The second one violates the
constraint of 6.4#2, so the implementation is free to interpret it in
any way it likes (after issuing at least one diagnostic).

Martin


PS: C specific discussion, therefore Followup-To: comp.lang.c.
 
Martin Dickopp

Nils M Holm said:
While you are right, let's apply occam's razor: what is more likely,
that the parts of /some/ specific tokens can be separated by
(otherwise useless) spaces or that these space separate individual
tokens? :)

There's not even a need to apply Occam's razor, since this aspect of the
C language is precisely defined: The likelihoods of the variants are
exactly 0 and 1, respectively. :)

Of course, I agree with your point: Defining the language such that the
sequence of `#', spaces, and `define' are a single token would have been
utterly stupid.

Martin
 
Arthur J. O'Dwyer

Martin Dickopp said:
Yes, it's one pp-token.


It's a pp-token which cannot be converted to a token. Compare the
following two programs (difference underlined):

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE + 0xAD); return 0; }
/*                                ^^^^^^^^^^^ */

and

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }
/*                                ^^^^^^^^^ */

% gcc test.c
test.c: In function `main':
test.c:2: invalid suffix on integer constant

Wow. I learn something new every day!
This looks like *very* weird behavior to me... what's C's
rationale for parsing numbers this way rather than the "common-
sense" approach I *thought* it used? It would not be hard to
make 0xDE+0xAD the addition of the hex constants 0xDE and 0xAD,
rather than a syntax error; why did C decide to do the latter,
then? Just to make the formal lexing spec simpler?

-Arthur,
confused but slightly more enlightened
 
Jeremy Yallop

Arthur said:
This looks like *very* weird behavior to me... what's C's rationale
for parsing numbers this way rather than the "common-sense"
approach I *thought* it used?

Dennis Ritchie's explanation seems plausible:

"In truth, I think that preprocessing numbers are the most
conspicuously incorrect thing X3J11 did. [...] Reportedly the idea
was worked out over lunch; it bears more signs of a late-night bar
session."
Arthur said:
It would not be hard to make 0xDE+0xAD the addition of the hex
constants 0xDE and 0xAD, rather than a syntax error; why did C
decide to do the latter, then? Just to make the formal lexing spec
simpler?

Yes. The C89 Rationale has the details:

3.1.8 Preprocessing numbers

The notion of preprocessing numbers has been introduced to simplify
the description of preprocessing. It provides a means of talking
about the tokenization of strings that look like numbers, or
initial substrings of numbers, prior to their semantic
interpretation. In the interests of keeping the description
simple, occasional spurious forms are scanned as preprocessing
numbers --- 0x123E+1 is a single token under the rules. The
Committee felt that it was better to tolerate such anomalies than
burden the preprocessor with a more exact, and exacting, lexical
specification. It felt that this anomaly was no worse than the
principle under which the characters a+++++b are tokenized as a ++
++ + b (an invalid expression), even though the tokenization a ++ +
++ b would yield a syntactically correct expression. In both cases,
exercise of reasonable precaution in coding style avoids surprises.

I don't see how it simplifies anything much, really, unless you're
writing a stand-alone preprocessor. Numbers have to be parsed
properly at some stage: the addition of preprocessing numbers means
that two number parsers (with conflicting behaviour) are needed rather
than one.

Jeremy.
 
CBFalconer

Arthur J. O'Dwyer said:
% gcc test.c
test.c: In function `main':
test.c:2: invalid suffix on integer constant

Wow. I learn something new every day!
This looks like *very* weird behavior to me... what's C's
rationale for parsing numbers this way rather than the "common-
sense" approach I *thought* it used? It would not be hard to
make 0xDE+0xAD the addition of the hex constants 0xDE and 0xAD,
rather than a syntax error; why did C decide to do the latter,
then? Just to make the formal lexing spec simpler?

Consider 0xD. This is a hex value. The following E signifies a
floating point value, with the exponent following, which is
+0xAD. No different than 2e-23 in principle.

This is the sort of thing idiots who economize on blanks are
subject to, and that they impose on the poor suffering maintenance
programmer.
 
Arthur J. O'Dwyer

CBFalconer said:
Consider 0xD. This is a hex value. The following E signifies a
floating point value, with the exponent following, which is
+0xAD. No different than 2e-23 in principle.

Wrong. "E" (or "e") signifies a floating-point exponent *only*
when used with decimal and octal constants. The floating-point
exponent signifier for hexadecimal constants is "P" (or "p"),
because "E" is a hex digit itself. 0xDE+1 is just as invalid a
C construct as 0xDE+0xAD. See, isn't that weird, now? ;)

-Arthur
 
Peter Nilsson

Arthur J. O'Dwyer said:
Combined response to Martin's and Jeremy's replies.
Thanks (so far...)!

If you're considering C99 digraphs, then...

g\u00E5

...should be reported as a single token.
 
Arthur J. O'Dwyer

Peter Nilsson said:
If you're considering C99 digraphs, then...

g\u00E5

...should be reported as a single token.

Request for clarification: First, universal character
names have nothing to do with digraphs, right? You just meant
that they're both obscure C99 features?
Second, are there any pitfalls involving these universal
characters? I just have to accept \u or \U in identifiers?
As I understand N869 6.4.3#2, I don't ever have to worry about
universal characters filling in for, say, digits or other
C tokens.
I don't think it's worth enumerating all those valid identifier
characters in Annex D of N869, for my program.

-Arthur
 
Peter Nilsson

Arthur J. O'Dwyer said:
Request for clarification: First, universal character
names have nothing to do with digraphs, right? you just meant
that they're both obscure C99 features?

Yup. I should have typed: '(e.g. digraphs)', sorry.
Arthur said:
Second, are there any pitfalls involving these universal
characters? I just have to accept \u or \U in identifiers?

The \u or \U must be followed by either 4 or 8 hex characters.

The only other constraint is...

A universal character name shall not specify a character whose
short identifier is less than 00A0 other than 0024 ($), 0040 (@),
or 0060 ('), nor one in the range D800 through DFFF inclusive.

[This differs from N869.]
Arthur said:
As I understand N869 6.4.3#2, I don't ever have to worry about
universal characters filling in for, say, digits or other
C tokens.

Yes.

Arthur said:
I don't think it's worth enumerating all those valid identifier
characters in Annex D of N869, for my program.

Your call, but fair enough. [If you do, the standard (+TC1) didn't
change any of the characters listed in appendix D from N869.]
 
Arthur J. O'Dwyer

Peter Nilsson said:
The \u or \U must be followed by either 4 or 8 hex characters.

...but if it's not, then we have undefined behavior in the C program,
so I'm allowed to do whatever I like. So the semantics of the token
counter don't need to include the counting of hex digits.
Peter said:
The only other constraint is...

A universal character name shall not specify a character whose
short identifier is less than 00A0 other than 0024 ($), 0040 (@),
or 0060 ('), nor one in the range D800 through DFFF inclusive.

[This differs from N869.]

Eek! You scared me there for a moment! s/'/`/ in the above text!
If 0027 (') could be replaced by \u0027, *that* would have been really
icky. But 0060 (`) doesn't do anything in C, so it's okay.

Peter said:
Your call, but fair enough.

Again, as far as I can tell if the user enters an invalid universal
character name in the middle of an identifier, we get undefined
behavior. My current token-counter has gone ahead with the "pp-number"
semantics for counting numeric constants, so 0x0E4+D..P-xg27 counts as
one token; I don't see why FOO\u4D99BAR should be counted (or not) any
differently.

Thanks,
-Arthur
 
Andreas Koch

Jeremy said:
I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens.

Good point. I'll add that. We want preprocessing tokens; neither
macros expanded nor files included etc.
 
