Does your C compiler support "//"? (was Re: using structures)


Douglas A. Gwyn

David said:
... You're arguing that there are no such
platforms, but that programmers should not be allowed to rely on there
being no such platforms -- which makes absolutely no sense.

It makes sense to me, and I suspect to others.
Just because current platforms share a certain
characteristic X, doesn't mean that it is wise
to make one's programs *depend* on characteristic
X, especially when it is easy to avoid such
dependence.
I recall when the GNU project was insisting that
it was reasonable for them to assume C ints were
exactly 32 bits wide. Is that something POSIX
should also require? Why?
 

Richard Kettlewell

Douglas A. Gwyn said:
I recall when the GNU project was insisting that it was reasonable
for them to assume C ints were exactly 32 bits wide.

standards.info has said assume int >= 32 bits (not = 32 bits), for at
least many years.
 

Brian Inglis

Douglas A. Gwyn said:
It makes sense to me, and I suspect to others.
Just because current platforms share a certain
characteristic X, doesn't mean that it is wise
to make one's programs *depend* on characteristic
X, especially when it is easy to avoid such
dependence.

ISTM that making definitions restrictive is short-sighted when we
really don't know much about the characteristics of the platforms
we'll be running on in five years, and we really don't support many of
the capabilities of current architectures.
Perhaps IBM's decimal IEEE FP will become more desirable than the
current binary IEEE FP; with that success, they could then decide to
push e.g. 64-bit decimal integers, backing that with generous funding
to all the necessary standards committees, but invalidating
assumptions made in some programs.
I recall when the GNU project was insisting that
it was reasonable for them to assume C ints were
exactly 32 bits wide.

That was their original target and unfortunately it was valid for
about 20 years.
Is that something POSIX
should also require? Why?

I just wonder if 64 bits can last as long as 10 years nowadays.
 

Chris Hills

Brian Inglis said:
That was their original target and unfortunately it was valid for
about 20 years.


I just wonder if 64 bits can last as long as 10 years nowadays.


Since the majority of the MCUs in the world are 8-bit, followed by 16-bit,
the majority of systems still use int == 16 bits.
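
For illustration, a minimal sketch of the kind of assumption that breaks on such targets (the values here are arbitrary):

int main(void)
{
    /* Portable: the Standard only guarantees INT_MAX >= 32767,
     * and this value stays well inside that.
     */
    int small = 30000;

    /* Not portable as an int: 100000 does not fit in a 16-bit int,
     * so a long (at least 32 bits) is required here.
     */
    long big = 100000L;

    printf("INT_MAX here is %d\n", INT_MAX);
    printf("%d %ld\n", small, big);
    return 0;
}

(with the obvious #include <stdio.h> and #include <limits.h> at the top.)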

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
 

CBFalconer

Richard said:
standards.info has said assume int >= 32 bits (not = 32 bits), for
at least many years.

I have always considered that a poor assumption. Positing the
existence of 'long' is quite enough.
 

David Hopwood

Douglas said:
It makes sense to me, and I suspect to others.
Just because current platforms share a certain
characteristic X, doesn't mean that it is wise
to make one's programs *depend* on characteristic
X, especially when it is easy to avoid such
dependence.

This is not only a property of current platforms; it is extremely likely
to be a property of future platforms. If making a particular assumption
simplifies programs, and the assumption is technically reasonable and
does not unduly limit implementations, why not make it?

You have previously argued, if I remember correctly, that the technical
advantages of each signed integer representation in specific circumstances
justify supporting multiple representations. I disagree -- the advantages of
having just one kind of signed integer representation outweigh any relative
advantages of each representation. Much the same applies to 8-bit bytes:
standardizing on a single definition of a byte has been more important
than which specific size was chosen.
I recall when the GNU project was insisting that
it was reasonable for them to assume C ints were
exactly 32 bits wide.

Since no individual speaks for "the GNU project", I very much doubt it.
Anyone who did say that was simply wrong: it was not reasonable.
Assuming two's complement (and 8-bit bytes), OTOH, is reasonable. The
differences between these assumptions are in their technical merits,
and the range of real-world hardware platforms and ABIs that are
consistent with them.
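
Where code does rely on those two assumptions, they can at least be stated
explicitly, so a port to an exotic platform fails at compile time rather
than misbehaving quietly. A minimal sketch, using the old negative-array-size
trick (the macro name is made up for the example):

#include <limits.h>

/* Compile-time assertion: if the condition is false, the array size is
 * negative, which is a constraint violation the compiler must diagnose.
 */
#define ASSERT_AT_COMPILE_TIME(name, cond) \
    typedef char assert_ ## name[(cond) ? 1 : -1]

/* Bytes are 8 bits wide. */
ASSERT_AT_COMPILE_TIME(eight_bit_bytes, CHAR_BIT == 8);

/* Two's complement: -1 has all value bits set, so -1 & 3 is 3.
 * (It would be 2 on ones' complement and 1 on sign-magnitude.)
 */
ASSERT_AT_COMPILE_TIME(twos_complement, (-1 & 3) == 3);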

David Hopwood <[email protected]>
 

Douglas A. Gwyn

Brian said:
I just wonder if 64 bits can last as long as 10 years nowadays.

The reason that C99 adopted an open-ended approach
is so that we don't have to keep adding "long long long"
etc. just to keep up.
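
That open-ended approach shows up in practice as C99's <stdint.h> and
<inttypes.h>, where widths are named directly and wider types can be added
without new keywords. A minimal sketch, assuming a hosted C99 implementation:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int_least32_t a = 2000000000;   /* smallest type of >= 32 bits; always exists */
    int_fast16_t  b = 12345;        /* "fastest" type of >= 16 bits; always exists */
    intmax_t      c = INTMAX_MAX;   /* widest integer type the implementation has */

#ifdef INT64_MAX
    /* Exact-width types are optional; use them only when they exist. */
    int64_t d = INT64_C(1) << 40;
    printf("d = %" PRId64 "\n", d);
#endif

    printf("%" PRIdLEAST32 " %" PRIdFAST16 " %" PRIdMAX "\n", a, b, c);
    return 0;
}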

What I hope for is good machine-level support for
arbitrary-precision integers, which I'm sure most
know are of great importance for network security
(but unfortunately are a bottleneck using current
software implementations). Maybe by the time these
show up, or even before (if we decide that it would
be sufficiently useful to have a standard way to
use them effectively even when run-time library
support is needed), we'll have figured out a good
way to embed them in the language. (They almost
certainly cannot be lumped in with fixed-width
integer types.)
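
In the meantime, that run-time library support typically means something like
GNU GMP. A small sketch (GMP is just one such library, and nothing here is part
of the C standard) of the modular exponentiation that dominates public-key work:

/* Build with something like: cc bignum.c -lgmp */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t base, exp, mod, result;

    /* Operands far too wide for any fixed-width integer type. */
    mpz_init_set_str(base, "98765432109876543210987654321098765432109876543210", 10);
    mpz_init_set_str(exp,  "12345678901234567890123456789012345678901234567890", 10);
    mpz_init(mod);
    mpz_ui_pow_ui(mod, 10, 40);          /* modulus = 10^40 */
    mpz_init(result);

    mpz_powm(result, base, exp, mod);    /* result = base^exp mod 10^40 */
    gmp_printf("%Zd\n", result);

    mpz_clear(base);
    mpz_clear(exp);
    mpz_clear(mod);
    mpz_clear(result);
    return 0;
}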
 

Douglas A. Gwyn

David said:
Since no individual speaks for "the GNU project", I very much doubt it.
Anyone who did say that was simply wrong: it was not reasonable.

Of course it was not reasonable, but Stallman did say it.
Assuming two's complement (and 8-bit bytes), OTOH, is reasonable.

Not to me, since I have to deal with platforms using
other (native) representations. What happens is that
entire large packages such as POSIX libraries get
constructed with some hidden dependencies on the
representation, "because POSIX says so", not because
it was actually necessary. That means that I can't
use such packages on other kinds of platform, at least
not without a lot of work that should never have been
necessary. We went through this exercise when porting
UNIX and its main apps to newer platforms, and one
would think that some lessons would have been learned.
 

Clive D. W. Feather

David said:
I have now read the original posting:

http://groups.google.co.uk/[email protected]

You have a point in saying that Clive's quoting did not preserve important
context. However, the statement that "the POSIX people have not understood
what was suggested by the Standard C people" is still inconsistent with the
rest of the post.

It's also still wrong.

The POSIX people understood perfectly well what was suggested by the
Standard C people. They chose, however, not to go that way.
 

Richard Kettlewell

Douglas A. Gwyn said:
Of course it was not reasonable, but Stallman did say it.

*Did* he? The versions of the GNU coding standards I've seen all say
to assume ints are at least 32 bits, not exactly 32 bits.
 

Douglas A. Gwyn

Richard said:
*Did* he? The versions of the GNU coding standards I've seen all say
to assume ints are at least 32 bits, not exactly 32 bits.

I no longer recall the exact circumstances,
but possibly it was on an e-mail reflector
(mailing list) shortly after he started the
GNU project. I certainly exchanged e-mail
with RMS about this issue.
 

Paul D. Smith

dag> I no longer recall the exact circumstances, but possibly it was
dag> on an e-mail reflector (mailing list) shortly after he started
dag> the GNU project. I certainly exchanged e-mail with RMS about
dag> this issue.

This is certainly possible (anyway there's no way to disprove it :)) but
I've been doing work on various GNU projects for 16+ years and I've
never heard anything like that.

As Clive says, the documented GNU position has always been to not bother
trying to write code that's portable to systems where int is less than
32 bits. From the GNU coding standards:
Even GNU systems will differ because of differences among CPU
types--for example, difference in byte ordering and alignment
requirements. It is absolutely essential to handle these differences.
However, don't make any effort to cater to the possibility that an
`int' will be less than 32 bits. We don't support 16-bit machines in
GNU.

 

Brian Inglis

Paul D. Smith said:
dag> I no longer recall the exact circumstances, but possibly it was
dag> on an e-mail reflector (mailing list) shortly after he started
dag> the GNU project. I certainly exchanged e-mail with RMS about
dag> this issue.

This is certainly possible (anyway there's no way to disprove it :)) but
I've been doing work on various GNU projects for 16+ years and I've
never heard anything like that.

As Clive says, the documented GNU position has always been to not bother
trying to write code that's portable to systems where int is less than
32 bits. From the GNU coding standards:

With the goal of writing code free of historical Unix limitations,
like 512-byte buffers and 256-byte lines, which were required because
of Unix's 16-bit addressing heritage, and given the advances in
machines in the years since then, it would have been unrealistic to
shackle GNU projects with the same limitations and expect the goals to
be met in anything but the smallest projects.
But a lot of the general GNU utilities were easily ported to DOS
systems, using compilers with somewhat Unix-compatible libraries,
requiring only minor changes to makefiles and CFLAGS options like
-Dint=long.
But I still can't bring myself to use an int for any purpose that
isn't well within the Standard-defined 32K limit.
 

Douglas A. Gwyn

Brian said:
But I still can't bring myself to use an int for any purpose that
isn't well within the Standard defined 32K limit.

Yeah, the problem is that making unnecessary assumptions hampers
portability. -Dint=long sort of works, but not quite: a declaration
such as "long int l;" gets macro-expanded to "long long l;", which is
a different type (and not valid C89 at all).
If one wants an integer type that is at least 32
bits wide, instead of int one should use long (or
int_least32_t or some similar header-defined type).
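
A minimal sketch of that advice, assuming C99's <stdint.h> is available
(with long as the C89 fallback):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* This total may exceed 32767, so plain int is not portable;
     * int_least32_t is required to exist and is at least 32 bits.
     */
    int_least32_t total = 0;
    int i;                          /* stays well under 32K: int is fine */

    for (i = 0; i < 1000; i++)
        total += 100;               /* reaches 100000 */

    printf("%" PRIdLEAST32 "\n", total);
    return 0;
}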
 

Thomas Pornin

According to Brian Inglis:
But I still can't bring myself to use an int for any purpose that
isn't well within the Standard defined 32K limit.

In my own code, I often see that the 32K limit on "int" is not
restrictive because "int" is rarely an appropriate type except
for booleans (granted, I could use the C99 "_Bool") or symbolic
enumerations. For instance, if I need an index for an array, the type
I use is "size_t" which will always be appropriate: 16-bit on 16-bit
machines, 32-bit on my PC, 64-bit on my Alpha. "size_t" strikes me as
"natural" for offsets in an object which fits in memory.


--Thomas Pornin
 

Niklas Matthies

Thomas Pornin said:
For instance, if I need an index for an array, the type I use is
"size_t" which will always be appropriate: 16-bit on 16-bit
machines, 32-bit on my PC, 64-bit on my Alpha. "size_t" strikes me
as "natural" for offsets in an object which fits in memory.

Wouldn't the "natural" type for offsets much rather be ptrdiff_t?

-- Niklas Matthies
 

Brian Inglis

Niklas Matthies said:
Wouldn't the "natural" type for offsets much rather be ptrdiff_t?

ISTM he's using the term offsets in the unsigned sense relative to a
base address, not in the signed sense relative to a location within a
composite object.
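
For what it's worth, the two usages look like this in code (a small sketch):
size_t for an unsigned position measured from the start of an object,
ptrdiff_t for the signed result of subtracting two pointers into it.

#include <stddef.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    char  buf[] = "hello, world";
    char *comma = strchr(buf, ',');

    /* Unsigned offset from the base of the object: size_t. */
    size_t offset = (size_t)(comma - buf);

    /* Signed difference between two pointers into the same object:
     * ptrdiff_t, which may legitimately be negative.
     */
    ptrdiff_t back = buf - comma;

    printf("offset = %lu, back = %ld\n",
           (unsigned long)offset, (long)back);
    return 0;
}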
 
