size_t problems


Keith Thompson

Ed Jensen said:
Where can one find the original charters for both comp.lang.c and
comp.std.c?

Though comp.lang.c has no charter, the article announcing its creation
(as net.lang.c) has been posted several times in this thread.

I've tried and failed to find the corresponding announcement for
comp.std.c. Perhaps someone else will have better luck. I'd be
interested in reading it.
 

Kelsey Bjarnason

[snips]

No, it's a campaign for people implementing the standard to make int 64
bits, where that is the natural integer size suggested by the architecture.
Or in other words, where an arbitrary index variable needs 64 bits because
the architecture supports huge arrays.

So let me see if I understand.

On a 64-bit implementation, you want ints to be 64-bit, rather than
something else. So far so good.

Your reasoning for this is so that those ints can be used to index memory
regions larger than those index-able by smaller - say 32-bit - ints.

So far so good.

Yet you dislike and want to abolish size_t, which has pretty much as its
sole reason for existence the ability to index memory regions larger than
those index-able by ints.

Hmm. You want to get rid of a type whose apparent sole reason for
existing is to accomplish the very task you want to have accomplished.

This, to you, makes sense?
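
[For illustration, a minimal sketch of the kind of loop size_t exists
for, assuming an implementation where int is 32 bits but a single
object can be larger than INT_MAX bytes; the 3 GB figure is only an
example:]

#include <stdlib.h>

int main(void)
{
    size_t n = (size_t)3 * 1024 * 1024 * 1024;   /* 3 GB, > INT_MAX */
    unsigned char *p = malloc(n);

    if (p != NULL) {
        size_t i;                 /* an int index would overflow here */
        for (i = 0; i < n; i++)
            p[i] = 0;
        free(p);
    }
    return 0;
}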
 

Kelsey Bjarnason

[snips]

Thank you for clarifying. That is indeed one of the portability issues
that I knew about when posting that code - i.e. it was one of my
illustrations.

In a *perfect* example of non-infallibility, I managed to brain fart from
"stack" to "fifo", rather than "lifo" - and have this caught by
a sharp-eyed reader.

Just goes to show, ain't nobody prefect. :)

Also goes to show why having a competent editor is probably a good idea
for an author.
 

Flash Gordon

Malcolm McLean wrote, On 03/09/07 22:34:
No, it's a campaign for people implementing the standard to make int 64
bits, where that is the natural integer size suggested by the
architecture.

Well, Intel say not to have a 64 bit int on their 64 bit processor. A
point you have yet to address.

Or in other words, where an arbitrary index variable needs
64 bits because the architecture supports huge arrays.

Actually, that is a distinct issue.

It is not a campaign to change the standard.

If you want the size of int to change on Windows, go lobby MS, if you
want it to change on Linux, go lobby the Linux people, if you want it
changed for Posix, go lobby the Posix people. As far as I can see no one
is interested in your ideas here.
 

CBFalconer

Chris said:
jacob navia said:
Yes, and I have posted that many times. Here it is again:

[some snippage]

Feel free to post that sort of thing in *net* dot lang dot c (but,
please, not *comp* dot lang dot c). :)

Seriously, the USENET world was rather different in 1982 (as was
the C language, for that matter). On the one hand, you seem to
want change; and yet here, on the other, you want us to pretend it
is still 1982. I find this ... curious.

Yet the list he quoted adequately carries the present thread of
topicality. Repeated below:

All that is required is a proper definition of C, which is provided
to us by the standard. Note that the standard did not exist in
1982.
 

Martin Wells

Flash:
Then we had better stick with 32 bit int because that is what Intel say
is most efficient on their spanking new high end 64 bit processors.


Then why are they calling them 64-Bit processors? I can use arrays of
unsigned char's in conjunction with bitwise operators to give the
illusion of 256-Bit numbers on any machine... but that doesn't mean
I'm gonna go calling the machine 256-Bit.
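
[For illustration, a minimal sketch of that kind of trick: a 256-bit
counter built from arrays of unsigned char, assuming CHAR_BIT == 8 so
that 32 bytes really are 256 bits:]

#include <stdio.h>

#define WORDS 32   /* 32 * 8 bits = 256 bits, assuming CHAR_BIT == 8 */

/* Add b into a; both are little-endian arrays of bytes.  The carry is
   propagated by hand with masks and shifts. */
static void add256(unsigned char a[WORDS], const unsigned char b[WORDS])
{
    unsigned carry = 0;
    int i;

    for (i = 0; i < WORDS; i++) {
        unsigned sum = (unsigned)a[i] + b[i] + carry;
        a[i] = (unsigned char)(sum & 0xFF);
        carry = sum >> 8;
    }
}

int main(void)
{
    unsigned char x[WORDS] = {0xFF, 0xFF};   /* 65535 */
    unsigned char y[WORDS] = {0x01};         /* 1     */

    add256(x, y);
    printf("%02X %02X %02X\n", x[2], x[1], x[0]);   /* prints 01 00 00 */
    return 0;
}

[The same approach works at any width; the hardware word size affects
only how fast it runs, not what can be represented.]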

Many 32-Bit machines also have 16-Bit registers, however the 32-Bit
ones are (naturally) more efficient. If the same goes for these
alleged 64-Bit machines, then they should be called 32-Bit.

Martin
 

Keith Thompson

Martin Wells said:
Flash:


Then why are they calling them 64-Bit processors?

Probably because they have 64-bit addresses.

[...]
Many 32-Bit machines also have 16-Bit registers, however the 32-Bit
ones are (naturally) more efficient. If the same goes for these
alleged 64-Bit machines, then they should be called 32-Bit.

The "bitness" of a processor has always been a vague concept. There's
a real difference between a processor that can address 2**32 bytes of
(virtual) memory and one that can address 2**64 bytes, even if the
recommended size of int is the same on both.
 

CBFalconer

Malcolm said:
.... snip ...

If presented as a portable program that will produce correct
output on any conforming implementation, it is bugged, because an
array of UCHAR_MAX is not guaranteed to fit in available stack
space, sorry, automatic memory. However if you restrict it to 8
bit char platforms, by far the most common, it is OK.

From which I gather that you wouldn't trust a system with CHAR_BIT
set to 9 or 10?
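
[A minimal sketch, not Malcolm's code, of why CHAR_BIT matters to that
sort of declaration: with CHAR_BIT == 8 the hypothetical count table
below holds 256 counters, with CHAR_BIT == 16 it would need 65536, and
automatic storage may not stretch that far:]

#include <limits.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* One counter per possible unsigned char value. */
    unsigned long count[UCHAR_MAX + 1] = {0};
    const char *s = "hello, world";
    size_t i;

    for (i = 0; i < strlen(s); i++)
        count[(unsigned char)s[i]]++;

    printf("'l' occurs %lu times\n", count['l']);
    return 0;
}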
 

CBFalconer

Malcolm said:
No, it's a campaign for people implementing the standard to make
int 64 bits, where that is the natural integer size suggested by
the architecture. Or in other words, where an arbitrary index
variable needs 64 bits because the architecture supports huge
arrays.

It is not a campaign to change the standard.

Maybe you should notice that, at present, any implementor is
perfectly free to make an int any bit length that equals or exceeds
16. This (surprise) does not exclude 64. Specifying that
"(CHAR_BIT * sizeof int) == 64" would be a (surprise) drastic
change to the standard and would immediately invalidate most
existing implementations and code. The committee (surprise) frowns
on this, as do I.
 

Ed Jensen

pete said:
just in case you feel like making a post in the future
concerning the future of the language
and you want to appear to be serious.

Thanks, Pete. I added comp.std.c to my (already too large :) list of
newsgroups I read.
 

Ian Collins

Malcolm said:
No, it's a campaign for people implementing the standard to make int 64
bits, where that is the natural integer size suggested by the
architecture. Or in other words, where an arbitrary index variable needs
64 bits because the architecture supports huge arrays.

The "natural size" for int on most common 64 bit systems is 32 bits.
Unless you pick a specific definition of "natural" that excludes code
size and performance.
 

Ian Collins

Flash said:
What, none of the arguments used by Intel which I've pointed you at and
you have not addressed are good? How about the arguments you have been
pointed at by those responsible for the Unix standard that you have not
dealt with?

Notice he always avoids responding when these are mentioned. This tends
to be the usual behavior of a troll.
 

Flash Gordon

Martin Wells wrote, On 04/09/07 00:52:
Flash:


Then why are they calling them 64-Bit processors?

Because they are 64 bit processors.

I can use arrays of
unsigned char's in conjunction with bitwise operators to give the
illusion of 256-Bit numbers on any machine... but that doesn't mean
I'm gonna go calling the machine 256-Bit.

Many 32-Bit machines also have 16-Bit registers, however the 32-Bit
ones are (naturally) more efficient. If the same goes for these
alleged 64-Bit machines, then they should be called 32-Bit.

I believe the main reason for not having a 64 bit int is probably memory
bandwidth and bloat: even though the ALU is as fast for 64 bit numbers
as for 32 bit numbers, the overall performance with 64 bit numbers is
slower.
 

Army1987

Why would you want to assign an unsigned value to an int? Why do you
think it makes sense to have a negative size?

That's what I thought on seeing the type of the second argument to
fgets.
(What happens when I call fgets(s, -3, stream)? The behavior isn't
undefined by omission because the Standard does specify what will
happen. Too bad that it is a logically impossible thing. (Unless
we interpret it in a creative way and conclude that it should
ungetc() the characters *(s - 1), *(s - 2) and so on down to
*(s - 6)...))
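
[A hedged sketch of how a negative second argument can arise at all:
fgets is declared as char *fgets(char *s, int n, FILE *stream), so a
size_t request bigger than INT_MAX silently converts to an
implementation-defined int value, commonly a negative one:]

#include <limits.h>
#include <stdio.h>

int main(void)
{
    char buf[64];
    size_t want = (size_t)INT_MAX + 2;   /* hypothetical oversized request */
    int n = (int)want;                   /* implementation-defined result  */

    printf("requested %lu bytes, fgets would see n = %d\n",
           (unsigned long)want, n);

    /* Only call fgets with a value that actually makes sense. */
    if (n > 0 && (size_t)n <= sizeof buf)
        fgets(buf, n, stdin);

    return 0;
}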
 

Army1987

I fail to see any reason why that 'i' should not be declared as
unsigned. Well, maybe extreme laziness.

Because an idiot programmer could write unsigned i;
for (i--; i >= 0; i--) and disasters would happen.
(In particular, I made this particular mistake the first time I
ever used a C compiler.)
Of course using while (i--) would do exactly the same, but it also
works with unsigned types.
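
[A minimal sketch of both loops, with the runaway version left
commented out so the program actually terminates:]

#include <stdio.h>

int main(void)
{
    unsigned int n = 3;
    unsigned int i;

    /* The disaster: with an unsigned counter, i >= 0 is always true,
       so when i wraps from 0 to UINT_MAX the loop never stops.

       for (i = n - 1; i >= 0; i--)
           printf("%u\n", i);
    */

    /* while (i--) tests the old value and then decrements, so it
       prints 2, 1, 0 and stops - for signed and unsigned alike. */
    i = n;
    while (i--)
        printf("%u\n", i);

    return 0;
}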
 

Martin Wells

Chuck:
Maybe you should notice that, at present, any implementor is
perfectly free to make an int any bit length that equals or exceeds
16. This (surprise) does not exclude 64.


More specifically, value bits >= 16. There can be any number of
padding bits.

Specifying that
"(CHAR_BIT * sizeof int) == 64" would be a (surprise) drastic
change to the standard and would immediately invalidate most
existing implementations and code. The committee (surprise) frowns
on this, as do I.


quantity_val_bits(int) == 64

There's a macro out there called IMAX_BITS (or something very similar)
for working this out at compile time.
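
[For the curious, the version of that macro that usually gets passed
around; it counts the bits in a maximum of the form 2^n - 1, so it
reports value bits only and never sees padding, and it can be used in
#if directives because everything involved is a constant expression:]

#include <limits.h>
#include <stdio.h>

/* Number of bits in m, where m is 2^n - 1 (e.g. INT_MAX, UINT_MAX). */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

int main(void)
{
    printf("int has %d value bits (plus the sign bit)\n",
           (int)IMAX_BITS(INT_MAX));
    printf("unsigned int has %d value bits\n",
           (int)IMAX_BITS(UINT_MAX));

#if IMAX_BITS(INT_MAX) >= 63
    puts("this implementation's int would satisfy the 64 bit campaign");
#endif

    return 0;
}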

Martin
 

Army1987

The campaign for 64 bit ints wants int to be 64 bits. Then basically it's
ints for everything - no need for unsigned, 63 bits hold a number large
enough to count most things. Other types will be kept for special purposes.

Why should I waste 64 bits to store a flag variable, a value which
can either be a character or EOF, or an enumeration constant, or
the return value of main(), or ...
 

Martin Wells

Army:
Because an idiot programmer could write unsigned i;
for (i--; i >= 0; i--) and disasters would happen.
(In particular, I made this particular mistake the first time I
ever used a C compiler.)
Of course using while (i--) would do exactly the same, but it also
works with unsigned types.


That's still no reason to use signed types when we shouldn't. We don't
accommodate incompetence. Anyway, i >= 0 should yield a warning if i is
unsigned.

Martin
 

Craig Gullixson

Look, it is not the C standard that runs my code.

It is a mindless processor, churning instruction after instruction, no
mind, no standards, no nothing.

I have an aesthetic view of code. What is important in it, from my
viewpoint, is clarity of design and above all, that
IT WORKS.

Code can be written up to the best standards, but if it doesn't work or
if it doesn't perform very well I do not care. It is ugly.

The code I am porting is the code of the IDE of lcc-win, and the code of
the debugger. I started writing it around 1992.

The IDE was one of the few pieces of code I salvaged from my failed
lisp interpreter project, that was a technical success but a commercial
failure.

It has been ported to windows 16 bits (from a 32 bit DOS emulator with
Delorie), then ported to windows 32 bits in 1996 (windows 95), then
ported to linux 32 bits, under GTK, and then to windows 64 bits.

Believe me, I know what porting means, what is important in code and
what is not.


Yeah. I have to cope with the possibility of strings larger than 2GB.
Gosh!



You're making not necessarily valid assumptions regarding int,
size_t, and the maximum size of a string that your application
may want to deal with.

I think that the fix proposed will fit the bill.


You will agree with me that THAT is much more serious than a few compiler
warnings because of size_t, I suppose...

I adopted C89 immediately when it came out, because of the prototypes.
It was an INCREDIBLY relaxing thing that I did NOT have to worry
anymore if I always passed the right number of parameters to my
functions. The compiler would yell at me. What a fantastic feeling.
I still remember it!


Let me get this straight - you adopted C89 because of the prototyping,
yet when you started your project in 1992, you didn't bother to use
size_t, included in that same standard, and now you are complaining
about compiler warnings due to not using size_t?

Sorry but did you contact the vendor? If they still exist and
sell that package they have surely upgraded it...


Nope. I'm using the latest version of their software, dated August
2005.

---Craig



________________________________________________________________________
Craig A. Gullixson
Instrument Engineer INTERNET: (e-mail address removed)
National Solar Observatory/Sac. Peak PHONE: (505) 434-7065
Sunspot, NM 88349 USA FAX: (505) 434-7029
 

Richard

Mark McIntyre said:
Nobody here is averse to new ideas, and it's scurrilous to suggest
otherwise.

What many people here /are/ averse to is offtopic discussion. Good
groups filled with serious experts exist to discuss posix extensions
to C, networking, graphics, UI development, specific compilers etc
etc. There's no NEED to discuss absolutely everything in one place.

Threads develop. And when they do it is NOT natural to up and
leave. Don't like it? Kill the thread. The objection I, and from what I
can gather, quite a few others have is simply that small-minded busybodies
keep popping up in the middle of an ongoing thread and playing net
cop. Ignore it. Let it go. Stop polluting lively debate.

And of course the worst offenders are the netcops themselves who feel
that when they stroll "OT" it is amusing, stimulating and a bit of a rib
tickler for the "boys". The news group Narcissus is particularly prone
to that.
 
