When shorts are longer than longs!

Tim Rentsch

Richard Heathfield said:
Tim Rentsch said:



Never "it was", but it can mean "it has". For example, "the value of
an object should not be accessed once it's gone out of scope".

Another thought on the matter:

If someone says, "Oh, I never said it's reasonable," it may very well
be meant in the sense of "Oh, I never said it was reasonable," rather
than "Oh, I never said it is reasonable." And I don't see anything
wrong with its being said that way.
 
Willem

Tim Rentsch wrote:
) Another thought on the matter:
)
) If someone says, "Oh, I never said it's reasonable," it may very well
) be meant in the sense of "Oh, I never said it was reasonable," rather
) than "Oh, I never said it is reasonable." And I don't see anything
) wrong with its being said that way.

Doubtful. There is no way to differentiate the two from context, contrary
to differentiating "it has" and "it is".

Besides, "it was" is commonly contracted to "'twas".
Try it: Say both uncontracted sentences really quickly.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
 
Tim Rentsch

Willem said:
Tim Rentsch wrote:
) Another thought on the matter:
)
) If someone says, "Oh, I never said it's reasonable," it may very well
) be meant in the sense of "Oh, I never said it was reasonable," rather
) than "Oh, I never said it is reasonable." And I don't see anything
) wrong with its being said that way.

Doubtful. There is no way to differentiate the two from context, contrary
to differentiating "it has" and "it is".

Doubtful that it /may/ be meant that way? It might be (or
might not be) unlikely that it /is/ meant that way, but it's
nearly impossible to rule out the possibility that it /may/
be meant that way.

Besides, "it was" is commonly contracted to "'twas".
Try it: Say both uncontracted sentences really quickly.

I brought up the example precisely because the reading "Oh,
I never said it was reasonable" seems the more likely
reading (at least to me) in this case.

And, it seems much more likely that someone would say "Oh, I
never said it's reasonable" than "Oh, I never said 'twas
reasonable." As one of my writing teachers used to say, the
latter is pure oatmeal.
 
Richard Bos

Keith Thompson said:
Taking this far more seriously than I should, an infinite loop doesn't
terminate after an eternity. It doesn't terminate *at all*.

Ah, but is that after an Aleph-null eternity, or after a gothic-c one?
And if either, does the program support or deny the CH?

Richard
 
Spiros Bousbouras

I'm changing the title of the thread to something more appropriate.

That's why I said "if so"; I did not consider that the condition of that
if was actually met.

I don't understand you; what condition?
I would agree with that conclusion, were it not for the
already-established exceptions allowed by the definition of the term
"trap representation". It is still a binary representation if every
non-trap representation is interpreted as binary. The fact that trap
representations are not interpreted at all doesn't change that.

Footnote 40 which provides the definition for "pure binary
notation" does not mention trap representations. If you could
get trap representations for unsigned integers in the absence of
padding bits the definition of "pure binary notation" would not
be satisfied.
According to your interpretation what is the significance of
"pure binary representation" ?

Its significance is probably exactly the same as what you believe it to
be, for any representation not identified by the implementation as a
trap representation. My interpretation differs from yours only in that I
believe that requirement for "pure binary representation" has no
significance for trap representations.
[...]

My reasoning is not that no mention of trap representation is needed in
order to make them applicable. My reasoning is that a single mention of
trap representations, as is already present in the standard, is
sufficient to render them relevant in all contexts for which that
mention applies, and that there is no need to redundantly mention trap
representations in later clauses of the standard.

Which later clauses? We are mainly disagreeing on the
interpretation of paragraph 2 of 6.2.6.2 which does mention trap
representations.

Note by the way the first sentence of footnote 45: "Some
combinations of padding bits might generate trap
representations, for example, if one padding bit is a parity
bit". This refers to signed integers and is identical to the
first sentence of footnote 44 which refers to unsigned integers.
All they had to do to support your interpretation would be to
modify footnote 45 so that it read "... of padding or value bits
....". But they didn't.
The difference is that trap representations are in fact defined; special
handling of Friday the 13th is not.

Is it important that they are defined rather than simply mentioned?
After all, Friday the 13th doesn't need to be defined; everyone
knows what it means. In any case the point of the example is that
there is nothing in the standard to suggest that trap
representations are relevant in the absence of padding bits (and
when the sign bit is 0 in the case of signed integers) in much the
same way that there is nothing to suggest that Friday the 13th is
relevant.

[...]
No. So I guess that defines our differences.

It seems to be a central point.
Thank you for that answer, because it fits my argument well. If I have
one quarter, two dimes, and five pennies, it's often the case that I am
not allowed to use them together to buy 50 cents of goods (think vending
machines). That strikes me as a very good analogy for the way that the
possibility of trap representations can foul up what otherwise seems
like airtight logic.

You are legally allowed to use them regardless of whether a vending
machine will accept them. You wouldn't expect it would be illegal
unless there was a law saying so.
 
Spiros Bousbouras

If that were so, the standard would be in error to define -0 and -2**N
as possible trap representations, since the formula it gives does give
these bit representations values.

The standard by definition is correct. The worst that can happen is
that it is contradictory, but I see no contradiction here. In the
same paragraph where it says "Each bit that is a value bit shall
have the same value as the same bit in the object representation of
the corresponding unsigned type..." it proceeds to describe
possible exceptions and the paragraph should be taken as a whole.
It is you (and James) who are postulating additional exceptions
to what the standard says.

And by the way I find it inaccurate to say that the standard
*defines* "-0 and -2**N as possible trap representations". -0 is
also ambiguous although this paragraph is probably not related to
our disagreement.
it is not an error, however: the statement about the values of the
individual bits is not a statement that those bit combinations are all
valid values. What it provides is a formula to map the representations
of valid values of an integer type (whatever those might be) to
mathematical integers.

I almost agree. The main purpose of 6.2.6.2 is to define a mapping
from object representations of integers to mathematical values.
Paragraph 2 in particular defines the mapping for signed integers.
(Not all of the definition appears in paragraph 2, but that is the
paragraph's purpose.) In order to define a mapping you don't just
need to define the formula but also the domain of the mapping. So
the paragraph should be interpreted in the spirit of not just
defining a formula but also a domain.

[...]
I'm referring to the bits the quoted standard text is referring to.
A bit in a signed integer, vs the same bit in an unsigned integer.

I still don't understand your argument. Note that paragraph 2 says
"Each bit that is a value bit shall *have* the same value as the same
bit in the object representation of the corresponding unsigned
type..." where I've added emphasis this time. In any case I don't
see why you think there is an important distinction to be made
between "represent" and "have".
I _expect_ to be alive and healthy next week, yet I know a traffic
accident tomorrow might prove me wrong.

I also find this sentence contradictory.
If so, I or someone else will
just have to deal with the disruption in my work when it happens. It's
well and good to live as if I might die tomorrow, but there are limits
to how thoroughly it's practical to do so.

You can be agnostic about it.

But I am being weak-willed commenting on this at all, because if
there's one thing this whole discussion is teaching me, it's that
analogies are not helpful in technical matters, so let's get back to
technical stuff.

If it happened in one of your programs that it would be useful to
apply | to two positive signed values, would you take measures to
ensure that it doesn't lead to overflow? Would you do, for example,

if ( INT_MAX - a >= b )
/* Safe to do a|b */
else
/* Overflow may occur */

I wouldn't since I don't believe it can lead to overflow.

--
Congratulations on figuring out the difference between
count nouns and mass nouns. I hope you're not overly
disappointed to learn that others have reached this peak
ahead of you.
Brian E. Clark at http://tinyurl.com/spiros-quote-2
 
James Kuyper

Spiros said:
I'm changing the title of the thread to something more appropriate.



I don't understand you , what condition ?

The word "so", when used in such a context, refers the most recent
assertion, in this case, your assertion that "paragraph 1 still needs to
explain ...". Thus, my statement "if so" was short for "if paragraph 1
still needs to explain...". The condition of that if statement was
therefore "paragraph 1 still needs to explain ...". So, when I said that
"I did not consider that the condition of that if was actually met",
what that means is "I did not consider it to be true that paragraph 1
still needs to explain ...".

It gets very tedious expanding all those verbal shortcuts, which is
precisely why they were invented (I'm NOT the one who invented them).
Were they really that obscure?

....
Footnote 40 which provides the definition for "pure binary
notation" does not mention trap representations.

Nor does it need to; the existence of 6.2.6.1p5, which does mention
them, is sufficient to exclude them from consideration while judging
whether the definition has been satisfied.
If you could
get trap representations for unsigned integers in the absence of
padding bits the definition of "pure binary notation" would not
be satisfied.

I cannot agree with that flat assertion; and as a flat assertion,
without supporting argument, it provides no basis for further discussion.

....
Which later clauses?

Footnotes 40 and 45, and section 6.2.6.2p2, just to cite the ones that
came up in your message itself.
... We are mainly disagreeing on the
interpretation of paragraph 2 of 6.2.6.2 which does mention trap
representations.

Yes, it does, but not in a way that limits them to padding bits and
negative zero; those are merely examples, not an exhaustive list of the
possible ways of being a trap representation.
Note by the way the first sentence of footnote 45: "Some
combinations of padding bits might generate trap
representations, for example, if one padding bit is a parity
bit". This refers to signed integers and is identical to the
first sentence of footnote 44 which refers to unsigned integers.
All they had to do to support your interpretation would be to
modify footnote 45 so that it read "... of padding or value bits
...". But they didn't.

That footnote was attached to a statement about padding bits; it was
unnecessary to refer to value bits to make the point that the footnote
makes, and the failure to mention them doesn't render the statement
inapplicable to them, it merely means that it wasn't applied to them.
Is it important that they are defined rather than simply mentioned?

No, you're right. It's not the definition, but the mention, that matters
- but trap representations are both defined by the standard, and
mentioned as a reason why, under certain circumstances, the behavior of
a program is undefined. "Friday the 13th" is neither defined in the
standard (which, as you point out, isn't the relevant issue) nor
identified by it as a reason why a program may have undefined behavior.

The behavior of a program can be implicitly undefined "by the omission
of any explicit definition" (4p2). Note, however, that an explicit
definition must actually be omitted. The fact that the standard doesn't
mention "Friday the 13th" isn't sufficient to make the behavior of code
run on Friday the 13th undefined; all the definitions in the standard
continue to apply on that date, even if they don't explicitly mention it.

Unlike implicitly undefined behavior, an explicit statement that the
behavior is undefined (such as that provided by 6.2.6.1p5) can and does
cause an equally explicit definition of behavior that would otherwise be
applicable (such as that provided by 6.2.6.2p2) to become irrelevant.
knows what it means. In any case the point of the example is that
there is nothing in the standard to suggest that trap
representations are relevant in the absence of padding bits

You're looking at it backwards. There's no statement in the standard
about padding bits that makes them the only allowed way to form a trap
representation.
 
Spiros Bousbouras

It's clear that the Standard expects that all value bits
of a signed integer type fully participate in forming
the value, so INT_MAX etc all will be of the form 2**N - 1.
It doesn't express this expectation very well, but if you
look in the Rationale

I had looked in the rationale and I didn't see anything which
strongly supports this view otherwise I would have mentioned
it.
 
Spiros Bousbouras

The word "so", when used in such a context, refers the most recent
assertion, in this case, your assertion that "paragraph 1 still needs to
explain ...". Thus, my statement "if so" was short for "if paragraph 1
still needs to explain...". The condition of that if statement was
therefore "paragraph 1 still needs to explain ...". So, when I said that
"I did not consider that the condition of that if was actually met",
what that means is "I did not consider it to be true that paragraph 1
still needs to explain ...".

It gets very tedious expanding all those verbal shortcuts, which is
precisely why they were invented (I'm NOT the one who invented them).
Were they really that obscure?

Not "if so" on its own. I guess my confusion (which still remains)
and I didn't manage to explain properly when I asked specifically
about the referent of "if so" is the following: I said ``The
combination algorithm *is* mentioned'' to which you replied
``That's why I said "if so"; I did not consider that the condition
of that if was actually met''. Your reply appeared in a context
where I was expecting you to agree or disagree with my claim that
the combination algorithm is mentioned and your reply doesn't seem
to do either. In other words I still don't know if you agree that
the algorithm is mentioned.
Nor does it need to; the existence of 6.2.6.1p5, which does mention
them, is sufficient to exclude them from consideration while judging
whether the definition has been satisfied.


I cannot agree with that flat assertion; and as a flat assertion,
without supporting argument, it provides no basis for further discussion.

The argument which I felt justifies the "flat assertion" second
sentence is in the first sentence namely that footnote 40 does not
mention trap representations. I guess you feel differently.

[...]
You're looking at it backwards.

Well, one of us does ;-)
 
Tim Rentsch

Spiros Bousbouras said:
I'm changing the title of the thread to something more appropriate.

[..snip..snip..snip..]

I thought I'd jump in with a few comments on the Subject.

1. ISTM that the discussion has migrated to a point where it's
better suited in comp.std.c than comp.lang.c. For all practical
purposes (ie, what may realistically be expected of all current
and future C implementations) the issue is settled.

2. Different people read the Standard in different ways. Some
people think the Standard should be read "axiomatically",
presuming that the language it uses is as abstract and as precise
as that of a mathematics textbook. It can be helpful to look at
it from that viewpoint, but I certainly don't think that's the
only viewpoint, or even the best one most of the time. IME
people who insist (or presume) that this "axiomatic" viewpoint
is the only "right" way to read the Standard aren't especially
interesting or useful to talk to past a certain point, because
their arguments are usually based on the one faulty underlying
premise.

3. As to the question -- first there is the statement in 6.2.6.2 p 2:

Each bit that is a value bit shall have the same value as the
same bit in the object representation of the corresponding
unsigned type

What is "value"? It's defined in 3.17:

value
precise meaning of the contents of an object when
interpreted as having a specific type

So, if signed value bits have the same /meaning/ as the same bit in
the corresponding unsigned type, there can be no doubt that (for
example) the largest positive value of a signed type is one less
than a power of two. A careful reading of the rest of 6.2.6.2 p 2
will produce a similar conclusion for the negative range.

Therefore: with no padding bits, signed integer types have at most
one trap representation, which (if present) is the particular object
representation identified in 6.2.6.2 p 2.
 
Keith Thompson

Tim Rentsch said:
2. Different people read the Standard in different ways. Some
people think the Standard should be read "axiomatically",
presuming that the language it uses is as abstract and as precise
as that of a mathematics textbook. It can be helpful to look at
it from that viewpoint, but I certainly don't think that's the
only viewpoint, or even the best one most of the time. IME
people who insist (or presume) that this "axiomatic" viewpoint
is the only "right" way to read the Standard aren't especially
interesting or useful to talk to past a certain point, because
their arguments are usually based on the one faulty underlying
premise.
[...]

My own opinion is that the Standard should be *written*
"axiomatically", at least more so than it is now. I don't mean that
it should use a mathematical formalism; after all, it has to be
understandable to non-mathematicians.

My biggest pet peeve is the use of "definitions" that aren't really
definitions as I understand the word. A definition of a "foobar"
should allow me to determine unambiguously, for any given entity,
whether that entity is a foobar or not. The determination needn't be
trivial; it can depend on other definitions (and ultimately it has
to). But an arbitrary statement *about* foobars, isn't necessarily a
definition of the word "foobar".

I'm thinking in particular of the standard's definitions of
"expression" and "lvalue". A strict reading of the definition of
"lvalue" implies that 42 is an lvalue -- except that an lvalue is an
expression, and a strict reading of the definition of "expression"
implies that 42 isn't an expression.

Sometimes you just have to acknowledge the flaws in the standard and
read it based on the (hopefully obvious) intent, setting aside any
overly literal readings that lead to absurdities. Of *course* 42 is
an expression and not an lvalue.
 
lawrence.jones

Keith Thompson said:
My biggest pet peeve is the use of "definitions" that aren't really
definitions as I understand the word. A definition of a "foobar"
should allow me to determine unambiguously, for any given entity,
whether that entity is a foobar or not. The determination needn't be
trivial; it can depend on other definitions (and ultimately it has
to). But an arbitrary statement *about* foobars, isn't necessarily a
definition of the word "foobar".

Most dictionary definitions do not meet that standard. And a two-page
long definition doesn't fit into the ISO Standard format very well.
I'm thinking in particular of the standard's definitions of
"expression" and "lvalue". A strict reading of the definition of
"lvalue" implies that 42 is an lvalue -- except that an lvalue is an
expression, and a strict reading of the definition of "expression"
implies that 42 isn't an expression.

The definition of "lvalue" is acknowledged to be defective. It's also
excruciatingly hard to get right. The definition of "expression" could
also be better, but it's not nearly as problematic as "lvalue".
 
Tim Rentsch

Keith Thompson said:
Tim Rentsch said:
2. Different people read the Standard in different ways. Some
people think the Standard should be read "axiomatically",
presuming that the language it uses is as abstract and as precise
as that of a mathematics textbook. It can be helpful to look at
it from that viewpoint, but I certainly don't think that's the
only viewpoint, or even the best one most of the time. IME
people who insist (or presume) that this "axiomatic" viewpoint
is the only "right" way to read the Standard aren't especially
interesting or useful to talk to past a certain point, because
their arguments are usually based on the one faulty underlying
premise.
[...]

My own opinion is that the Standard should be *written*
"axiomatically", at least more so than it is now. I don't mean that
it should use a mathematical formalism; after all, it has to be
understandable to non-mathematicians.

I agree with this up to a point, but only to a point. Abstract
mathematical objects have only the properties described in their
definitions and axioms. The C Standard is talking about terms
that bear some relationship to things in the real world (eg,
memory locations, addresses). There's no point in trying to
pretend those associations don't exist. Indeed, specifying the
requirements on such associations is one of the most important
functions of the C Standard. As such it cannot be made completely
abstract and "axiomatic".
My biggest pet peeve is the use of "definitions" that aren't really
definitions as I understand the word. A definition of a "foobar"
should allow me to determine unambiguously, for any given entity,
whether that entity is a foobar or not. The determination needn't be
trivial; it can depend on other definitions (and ultimately it has
to). But an arbitrary statement *about* foobars, isn't necessarily a
definition of the word "foobar".

I'm with you on this one. The Standard needs to distinguish (and
distinguish clearly) between "definitions" that (a) exactly
define a term, (b) define some constituent elements of a term but
leave others out, and (c) impose some requirements on what items
qualify to be put under the heading of some term (but don't
necessarily tell the whole story). There are numerous examples,
I'm pretty sure, in each of the three categories, appearing in
the Standard.
I'm thinking in particular of the standard's definitions of
"expression" and "lvalue". A strict reading of the definition of
"lvalue" implies that 42 is an lvalue -- except that an lvalue is an
expression, and a strict reading of the definition of "expression"
implies that 42 isn't an expression.

The definition for expression may be just poorly worded. Clearly
it's an oversight that 42 isn't an expression.

For lvalue, I don't know whether it's just poor wording, or
if there was some other more conceptual difficulty. Certainly
the definition of lvalue could stand some improvement.
Sometimes you just have to acknowledge the flaws in the standard and
read it based on the (hopefully obvious) intent, setting aside any
overly literal readings that lead to absurdities. Of *course* 42 is
an expression and not an lvalue.

I agree, but I also think it's important to distinguish between
different levels of glitches. An oversight in the wording is
one thing; using "definition" to mean either (a) or (b) or (c)
above is another thing, and it's less obvious what was meant
when such confusions occur.
 
Tim Rentsch

Most dictionary definitions do not meet that standard. And a two-page
long definition doesn't fit into the ISO Standard format very well.

Yes, but the Standard isn't defining terms the same way a
dictionary does. A dictionary defines words that are used
elsewhere and tries to describe how they are used, including
multiple meanings. The Standard defines terms only as it
means that they will be used in that context. For writing
in the Standard, it seems self evident that precise definitions
should be the grail for terms it deems important enough to
give definitions for.

The definition of "lvalue" is acknowledged to be defective. It's also
excruciatingly hard to get right. The definition of "expression" could
also be better, but it's not nearly as problematic as "lvalue".

ISTM that it isn't that hard to give a definition for "lvalue"
that's fairly short and exactly covers those things that are
lvalues. The problem is, in different contexts different kinds
of lvalues behave differently, and the way the Standard is written
these differences are sometimes intermingled with the notion
of 'lvalue' (as the Standard uses the term). A crisper definition
of lvalue, and one kept separated from the different rules that
apply for different kinds of lvalues in different contexts,
would (I suggest) lead to a better text all around.

Yes, I know, it's easy to say that here from the cheap seats. :)
I might be motivated to say more later, but right now I have
to leave the matter here.
 
Tim Rentsch

Richard Heathfield said:
(e-mail address removed) said:



And yet we all know what an lvalue is. Ultimately, *any* definition
is bound to be defective. Here's some evidence to support that
assertion:

An lvalue is an object. (A modifiable lvalue is an object whose
value you are allowed to change.)

One specific example, especially one so clearly chosen poorly,
provides /at best/ only very poor evidence for a completely
general assertion. Is there an actual point you're trying to
make here, or was this posted just for the enjoyment of being
a contrarian?
 
Keith Thompson

Tim Rentsch said:
Keith Thompson said:
Tim Rentsch said:
2. Different people read the Standard in different ways. Some
people think the Standard should be read "axiomatically",
presuming that the language it uses is as abstract and as precise
as that of a mathematics textbook. It can be helpful to look at
it from that viewpoint, but I certainly don't think that's the
only viewpoint, or even the best one most of the time. IME
people who insist (or presume) that this "axiomatic" viewpoint
is the only "right" way to read the Standard aren't especially
interesting or useful to talk to past a certain point, because
their arguments are usually based on the one faulty underlying
premise.
[...]

My own opinion is that the Standard should be *written*
"axiomatically", at least more so than it is now. I don't mean that
it should use a mathematical formalism; after all, it has to be
understandable to non-mathematicians.

I agree with this up to a point, but only to a point. Abstract
mathematical objects have only the properties described in their
definitions and axioms. The C Standard is talking about terms
that bear some relationship to things in the real world (eg,
memory locations, addresses). There's no point in trying to
pretend those associations don't exist. Indeed, specifying the
requirements on such associations is one of the most important
functions of the C Standard. As such it cannot be made completely
abstract and "axiomatic".
Agreed.
My biggest pet peeve is the use of "definitions" that aren't really
definitions as I understand the word. A definition of a "foobar"
should allow me to determine unambiguously, for any given entity,
whether that entity is a foobar or not. The determination needn't be
trivial; it can depend on other definitions (and ultimately it has
to). But an arbitrary statement *about* foobars, isn't necessarily a
definition of the word "foobar".

I'm with you on this one. The Standard needs to distinguish (and
distinguish clearly) between "definitions" that (a) exactly
define a term, (b) define some constituent elements of a term but
leave others out, and (c) impose some requirements on what items
qualify to be put under the heading of some term (but don't
necessarily tell the whole story). There are numerous examples,
I'm pretty sure, in each of the three categories, appearing in
the Standard.
I'm thinking in particular of the standard's definitions of
"expression" and "lvalue". A strict reading of the definition of
"lvalue" implies that 42 is an lvalue -- except that an lvalue is an
expression, and a strict reading of the definition of "expression"
implies that 42 isn't an expression.

The definition for expression may be just poorly worded. Clearly
it's an oversight that 42 isn't an expression.

Right. The definition is

An _expression_ is a sequence of operators and operands that
specifies computation of a value, or that designates an object or
a function, or that generates side effects, or that performs a
combination thereof.

The oversight is that in the expression 42, there are no operators,
and therefore no operands. One could argue that 42 is the operand of
some mythical invisible operator, but there's no support for this idea
in the standard.

But we all understand the intent -- and if we don't, we can study the
syntax in section 6.5.

IMHO the best solution would be to *define* expression as a construct
that satisfies the grammar for the non-terminal "expression".
Something like the current definition can be kept, but not as a
definition, just as a statement of what expressions are for.
For lvalue, I don't know whether it's just poor wording, or
if there was some other more conceptual difficulty. Certainly
the definition of lvalue could stand some improvement.

The term "lvalue" is actually difficult to define. C90 had:

An _lvalue_ is an expression (with an object type or an incomplete
type other than void) that designates an object.

This certainly expressed the intent, but the problem is that, if taken
literally, whether an expression is an lvalue could depend on the
current run-time value of some object:

int x;
int *ptr = &x;  /* *ptr designates x */
ptr = NULL;     /* *ptr no longer designates any object */

C99 replaced this flaw with another one:

An _lvalue_ is an expression with an object type or an incomplete
type other than void; if an lvalue does not designate an object
when it is evaluated, the behavior is undefined.

42 is an expression with an object type (namely int), so by this
definition 42 is an lvalue -- and evaluating 42 invokes undefined
behavior.

Having a flawed definition where the intent is sufficiently obvious
isn't too bad; I think the C90 definition is an example of that. But
the C99 definition is nearly impossible to understand unless you
already know what an lvalue is. The whole point of an lvalue is that
it designates an object; the C99 definition doesn't directly express
this.

The actual intent is something like this:

An _lvalue_ is an expression (with an object type or an incomplete
type other than void) that *potentially* designates an object. If
an lvalue does not designate an object when it is evaluated *in a
context that requires an lvalue*, the behavior is undefined.

I've suggested something like this, but apparently defining
"potentially designates" in standardese is too difficult. In my
opinion a footnote with a couple of examples, showing that *ptr where
ptr is currently a null pointer is an lvalue, would be sufficient; it
would certainly be an improvement over the C90 and C99 definitions.
I agree, but I also think it's important to distinguish between
different levels of glitches. An oversight in the wording is
one thing; using "definition" to mean either (a) or (b) or (c)
above is another thing, and it's less obvious what was meant
when such confusions occur.

Agreed.
 
Kaz Kylheku

Right. The definition is

An _expression_ is a sequence of operators and operands that
specifies computation of a value, or that designates an object or
a function, or that generates side effects, or that performs a
combination thereof.

Yet another standard C contradiction. Which do you believe? The syntax for
primary-expression, or the above?

Is it reasonable to use the above definition in place of understanding what an
expression is based on the phrase structure grammar?
The oversight is that in the expression 42, there are no operators,
and therefore no operands. One could argue that 42 is the operand of
some mythical invisible operator, but there's no support for this idea
in the standard.

Isn't there?

What if a term can be an operand, but not with respect to an
expression-level operator, but, more generally, with respect to any
special syntactic form in the language which coordinates evaluation?

We might say that E is an operand of the if statement in if (E) S;

In the following, 42 is the operand of the expression-statement:

{ 42; }

The expression statement evaluates the expression which is its operand,
and discards the value. The semantics of doing this belongs to the
expression-statement; the evaluation and discarding of the value
is done because 42 is the operand of that statement.

Here, 42 is the operand of a selection statement:

if (42) { }

The if statement is a kind of operator: a ``special form'' built into the
language which has operands and coordinates their evaluation.

We can also regard 42 as an operator that is its own operand, denoting
self-evaluation. That is to say, 42 is an operator that denotes the
construction of the value 42. That value may serve as an operand to other
things, which makes 42 an operator and an operand simultaneously.

:)
IMHO the best solution would be to *define* expression as a construct
that satisfies the grammar for the non-terminal "expression".

But we have that!!!

3. Terms, definitions, and symbols

1 For the purposes of this International Standard, the following definitions
apply. Other terms are defined where they appear in italic type or on the
left side of a syntax rule.
^^^^^^^^^^^^^^^^^^^^^^^^^^^

If a term appears as a syntax nonterminal, that's a definition.

So the definition of the expression nonterminal is on equal footing
with the italicized introduction of the term in 6.5, paragraph 1.

If these two conflict in any way, it is a defect.
 
lawrence.jones

Keith Thompson said:
The actual intent is something like this:

An _lvalue_ is an expression (with an object type or an incomplete
type other than void) that *potentially* designates an object. If
an lvalue does not designate an object when it is evaluated *in a
context that requires an lvalue*, the behavior is undefined.

I'm not so sure about the second part of that -- it seems to me that
evaluating an lvalue expression that doesn't designate an object is bad
news no matter what the context.
 
