jacob navia
Al said:For *some* Unix systems, perhaps.
On HP-UX, a long is still 32 bits.
Interesting...
What does gcc do on those systems?
Does it follow HP, or stay compatible with itself?
Al Balmer said:For *some* Unix systems, perhaps.
On HP-UX, a long is still 32 bits.
Keith said:gcc has 32-bit longs on some systems, 64-bit longs on others. For the
most part, the division is between "32-bit systems" and "64-bit
systems", though neither phrase is particularly meaningful. I know
that some versions of HP-UX are 32-bit systems, so it wouldn't
surprise me to see 32-bit longs. If there are 64-bit versions of
HP-UX, I'd expect gcc to have 64-bit long on that system, though I
wouldn't depend on it.
I *think* that gcc typically has sizeof(long) == sizeof(void*). In
fact, all the systems I currently use, perhaps even all the systems
I've ever used, have sizeof(long)==sizeof(void*) (either 32 or 64
bits), though of course there's no requirement for them to be the same
size.
Ian said:My comment was based on my experience with Sparc, I'm sure LP64 was
adopted as the 64 bit ABI (mainly to support mixed 32 and 64
applications IIRC) before gcc had a 64 bit Sparc port.
A bit OT, but can you mix 32 and 64 bit applications on 64 bit windows?
Yes, that's the reason.
You have to, if you want to link with the system libraries!
jacob said:For unix systems, gcc decided that
char 8, short 16, int 32, long 64, long long 64
Ancient_Hacker said:<soap>
I think it's high time a language had the ability to do the very basic
and simple things programmers need to write portable software: the
ability to specify, unambiguously, what range of values they need to
represent, preferably in decimal, binary, floating, fixed, and even
arbitrary bignum formats. Not to mention, perhaps, hardware-dependent
bit widths. There's no need for the compiler to be able to actually
*do* any arbitrarily difficult arithmetic, but at least give the
programmer the ability to ASK, and if the compiler is capable, to get
DEPENDABLE math. I don't think this is asking too much.
Ancient_Hacker said:The business of C having wink-wink recognized de facto binary integer
widths is IMHO just way below contempt. The way was shown many
different times since 1958; why can't we get something usable, portable,
and reasonable now, quite a ways into the 21st century?
> Case (1): I need an integer that can represent the part numbers in a
> 1996 Yugo, well known to range from 0 to 12754. Pascal leads the way
> here: var PartNumber: 0..12754.
> Case (2): I need a real that can represent the age of the universe, in
> Planck units, about 3.4E-42 to 12.E84. The accuracy of measurement is
> only plus or minus 20 percent, so two significant digits is plenty.
> PL/I is the only language I know of that can adequately declare this:
> DECLARE AGE FLOAT BINARY (2,85). Which hopefully the compiler will map
> to whatever floating-point format can handle that exponent.
> case (3): I need an integer that can do exact math with decimal prices
> from 0.01 to 999,999,999,99. COBOL and PL/I can do this.
> case (4): I need a 32-bit integer. Pascal and PL/I are the only
> languages that can do this: var I32: -$7FFFFFFF..$7FFFFFFF; PL/I:
> DECLARE I32 FIXED BINARY (32).
> case (5): I need whatever integer format is fastest on this CPU and
> is at least 32 bits wide. Don't know of any language that has this
> capability.
jacob navia said:That's a VERY wise decision.
Microsoft's decision of making sizeof(long) < sizeof(void *)
meant a LOT OF WORK at a customer's site recently. It was a basic
assumption of their code.
Richard said:Darling, if you want Ada, you know where to find her.
Richard
Ancient_Hacker said:!! ewwww. yuck. Somehow after I took one look at Ada, I just put it
out of my mind. I think a lot of people did that. Maybe when there's
an open source Ada compiler that generates good code and doesn't give me
the feeling I'm writing a cruise-missile guidance program...
COBOL more than made up for its innovative and genuinely useful approach to [...]
Ancient_Hacker said:
> The funny thing is this issue was partly solved in 1958, 1964, and in
> 1971.
> In 1958 Grace Hopper and Co. designed COBOL so you could actually
> declare variables and their allowed range of values! IIRC something
> like:
> 001 DECLARE MYPAY PACKED-DECIMAL PICTURE "999999999999V999"
> 001 DECLARE MYPAY USAGE IS COMPUTATIONAL-/1/2/3
> Miracle! A variable with predictable and reliable bounds! Zounds!
Well, it's still a C group, so let's look at how far our language of [...]
> What the typical user REALLY needs, and may not be doable in any simple
> or portable way, is a wide choice of what one wants:
Although C offers less flexibility and safety in this regard, an adequate [...]
> Case (1): I need an integer that can represent the part numbers in a
> 1996 Yugo, well known to range from 0 to 12754. Pascal leads the way
> here: var PartNumber: 0..12754.
And which will hopefully not introduce any glaring rounding or accuracy [...]
> Case (2): I need a real that can represent the age of the universe, in
> Planck units, about 3.4E-42 to 12.E84. The accuracy of measurement is
> only plus or minus 20 percent, so two significant digits is plenty.
> PL/I is the only language I know of that can adequately declare this:
> DECLARE AGE FLOAT BINARY (2,85). Which hopefully the compiler will map
> to whatever floating-point format can handle that exponent.
C cannot do this natively, so you'll need libraries. Luckily, C also makes [...]
> case (3): I need an integer that can do exact math with decimal prices
> from 0.01 to 999,999,999,99. COBOL and PL/I can do this.
You can do this in C, but only if your platform actually provides a native [...]
> case (4): I need a 32-bit integer. Pascal and PL/I are the only
> languages that can do this: var I32: -$7FFFFFFF..$7FFFFFFF; PL/I:
> DECLARE I32 FIXED BINARY (32).
That's odd, because this is just one of those things C was practically made [...]
> case (5): I need whatever integer format is fastest on this CPU and
> is at least 32 bits wide. Don't know of any language that has this
> capability.
> The business of C having wink-wink recognized de facto binary integer
> widths is IMHO just way below contempt.
> <soap>
> I think it's high time a language had the ability to do the very basic
> and simple things programmers need to write portable software:
> the ability to specify, unambiguously, what range of values they need to
> represent, preferably in decimal, binary, floating, fixed, and even
> arbitrary bignum formats. Not to mention, perhaps, hardware-dependent bit
> widths. There's no need for the compiler to be able to actually *do*
> any arbitrarily difficult arithmetic, but at least give the programmer
> the ability to ASK, and if the compiler is capable, to get DEPENDABLE
> math. I don't think this is asking too much.
Reasonability is in the eye of the beholder, but anyone who would deny C's [...]
> The way was shown many different times since 1958; why can't we get
> something usable, portable and reasonable now, quite a ways into the 21st
> century?
You can use a typedef to abstract away from the actual type and convey the
purpose to humans, but C does not allow true subtyping, so you'd have to do
any range checks yourself. This obviously encourages a style where these
checks are done as little as possible, or possibly never, which is a clear
drawback.
jacob navia said:Microsoft disagrees...
I am not saying that gcc's decision is bad; I am just stating this
as a fact, without any value judgement. Gcc is by far the most widely
used compiler under Unix, and they decided on LP64, which is probably
a good decision for them.
I have to follow the lead compiler in each system. By the way, the
lead compiler in an operating system is the compiler that compiled
the Operating System: MSVC under windows, gcc under linux, etc.
Yes. If we just have one variable in scope, a bit of gibberish at the start [...]
Stephen Sprunk said:Do you really care so much about the extra two or three letters it takes
to use a type that is _guaranteed_ to work that you're willing to accept
your program randomly breaking?