Request


Richard Heathfield

Spiros Bousbouras said:
What's wrong with "less" ?

Others have answered this whilst I slept. But whether it is "wrong" depends
on whether one is trying to be precise and clear. I would not have
mentioned such a trivial thing in such a newsgroup as this one, but for the
contrast between your sentence's message and its construction.
 

CBFalconer

alex said:
CBFalconer said:

... snip ...
54135^2 is going to be on the order of 2.5G. That is a pretty
fair hunk of memory.

---------

[nouser@pamc ~]$ cat test3.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p;
    int i;

    for (i = 65535; i > 0; i--) {
        if ((p = calloc(i, i)) != NULL) {
            printf("%d is apparently the largest integer that will succeed.\n",
                   i);
            free(p);
            break;
        }
    }
    return 0;
}
[nouser@pamc ~]$ gcc test3.c
[nouser@pamc ~]$ ./a.out
54135 is apparently the largest integer that will succeed.
[nouser@pamc ~]$

With DJGPP 2.03 (specifies the library) that crashes immediately in
memset. With nmalloc linked in in place of the library it yields
23093.

Advisory cross-post to comp.os.msdos.djgpp, f'ups set to clc.

Even the first calloc request, 65535*65535, is nearly 4 GB, and I doubt you have that much ;-)

Which should simply fail. The code is looking for the largest
assignable value and for foul-ups in the calloc multiply operation.
It shouldn't crash. nmalloc fails and returns NULL. DJGPP malloc
claims to succeed, but doesn't, and the memset crashes. This also
shows the need to include calloc in the nmalloc package, to ensure
the same limits are observed.
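The guard being described, rejecting a calloc request whose element count times element size wraps around, can be sketched in portable C. This is a hypothetical wrapper to illustrate the check, not the actual nmalloc or DJGPP code:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical calloc-style wrapper: refuse the request when
 * nmemb * size would wrap around SIZE_MAX, instead of silently
 * allocating a too-small block and letting memset run off the end. */
void *checked_calloc(size_t nmemb, size_t size)
{
    void *p;

    if (nmemb != 0 && size > SIZE_MAX / nmemb)
        return NULL;            /* the multiplication would overflow */
    p = malloc(nmemb * size);
    if (p != NULL)
        memset(p, 0, nmemb * size);
    return p;
}
```

The division-based test never overflows itself, which is the whole point: the check is done before the dangerous multiply.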

Why did you override the follow-up I had set? And top-post.
Please go to comp.lang.c for further general discussion of this.
Meanwhile, be aware of the DJGPP bug.
 

Keith Thompson

christian.bau said:
Implementations where sizeof (char *) == 4, and sizeof (size_t) == 2,
have been quite common in the past. On an 80386 processor, an
implementation with char* = 48 bits and size_t = 32 bits wouldn't have
been completely unreasonable (I have never seen one myself).

Right. The point I missed (I think I mentioned this later) is that
there has to be a distinct char* value for each byte of each object,
but size_t only has to span a single object (and perhaps only a single
declared object).

In a system with 32-bit char* and 16-bit size_t, I presume that no
single object can exceed 65535 bytes; is that correct? (I'm
accustomed to systems where the theoretical maximum size of a single
object is the same as the size of the entire memory space.)
 

Chris Torek

Right. The point I missed (I think I mentioned this later) is that
there has to be a distinct char* value for each byte of each object,
but size_t only has to span a single object (and perhaps only a single
declared object).

In a system with 32-bit char* and 16-bit size_t, I presume that no
single object can exceed 65535 bytes; is that correct?

That is, I think, the only *sensible* way to handle this in C.
(Of course, if you use Vendor-Specific Extensions, all bets are
off.) I am not certain that this is actually required by the
standards, though. (That is, perhaps a compiler can make objects
where sizeof(obj) exceeds SIZE_MAX, under various peculiar conditions.
One constraint would be that the code never actually uses the size
of the object, i.e., if sizeof(obj) appears anywhere in the code,
the value must be discarded.)
(I'm accustomed to systems where the theoretical maximum size of
a single object is the same as the size of the entire memory space.)

These are "flat" memory systems. Segmented systems must have
"locally flat" memory, that is, flat within each C-language-level
object, in order to conform to the requirements of the C standards.
(Exceptions can be arranged for objects whose address is never
taken, and perhaps others, as noted above.)
 

av

Yes. *Another* problem is the lack of operations on size_t (and
unsigned types in general) that behave in a manner other than the one
chosen by the standard. This inflexibility of the language makes some
useful operations difficult to implement, and in some cases impossible
to implement both portably and efficiently.

On the other hand, "fixing" this "problem" would make the language
more complicated, and it's not entirely sure that it would be worth
it.
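One mitigating fact: because unsigned arithmetic is defined to wrap, the wrap can at least be detected after the fact. A small illustration (the function name is mine):

```c
#include <stddef.h>

/* size_t addition wraps modulo (SIZE_MAX + 1), so a wrapped sum is
 * always smaller than either operand; that gives a cheap
 * after-the-fact overflow test with no special language support. */
int size_add_wrapped(size_t a, size_t b)
{
    return (size_t)(a + b) < a;
}
```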

so is it not possible to save "capra e cavoli" (have it both ways)?

On a PC with a 32-bit CPU, what is the problem with having a 31-bit
unsigned size_t (behaving exactly as the C standard says) and using the
32nd bit to flag whether some "overflow" (or modular wrap) occurred in
the middle of a computation?

On a PC with a 64-bit CPU, likewise: a 63-bit unsigned size_t, with the
64th bit flagging overflow.

If a function wants to see whether some size_t variable has overflowed,
it just looks at that top bit.

Then another type could be added that uses (-1) for error and does not
have the "mod" behaviour, like the code below, for computations that
must not silently wrap:
---------------------
/* it could do size_t multiplication
for numbers [0..2^32-2]
*/
asm{
multix:                     ; a*b, or -1 on overflow or error input
    push edx
    mov  eax, [esp+ 8]
    cmp  dword [esp+12], -1
    je   .1
    cmp  eax, -1
    je   .1
    mul  dword [esp+12]     ; edx:eax = a*b
    test edx, edx           ; high dword nonzero: overflowed 32 bits
    jnz  .1
    cmp  eax, -1            ; low dword may collide with the sentinel
    jne  .f
.1:
    mov  eax, -1
.f:
    pop  edx
    ret  8
}

asm{
addsize:                    ; a+b, or -1 on overflow or error input
    mov  eax, [esp+4]
    cmp  dword [esp+8], -1
    je   .1
    cmp  eax, -1
    je   .1
    add  eax, [esp+8]
    jc   .1                 ; carry set: wrapped past 2^32
    cmp  eax, -1            ; sum may collide with the sentinel
    jne  .f
.1:
    mov  eax, -1
.f:
    ret  8
}

asm{
subsize:                    ; a-b, or -1 on underflow or error input
    mov  eax, [esp+4]
    cmp  dword [esp+8], -1
    je   .1
    cmp  eax, -1
    je   .1
    sub  eax, [esp+8]
    jnc  .f                 ; no borrow: result is valid
.1:
    mov  eax, -1
.f:
    ret  8
}

unsigned _stdcall multix(unsigned a, unsigned b);
uns32 operator*(uns32 x, uns32 y) {return multix(x, y);}
unsigned _stdcall addsize(unsigned a, unsigned b);
uns32 operator+(uns32 x, uns32 y) {return addsize(x, y);}
unsigned _stdcall subsize(unsigned a, unsigned b);
uns32 operator-(uns32 x, uns32 y) {return subsize(x, y);}
etc.
uns32 result, a, b, c, d, h;
.....
result = a*b + c*d - h;

I forgot to say: if there was an overflow anywhere along the way, you
can then check

if (result == -1) error;

The same could be done for a signed int type.
 

santosh

av said:
so is it not possible to save "capra e cavoli" (have it both ways)?

On a PC with a 32-bit CPU, what is the problem with having a 31-bit
unsigned size_t (behaving exactly as the C standard says) and using the
32nd bit to flag whether some "overflow" (or modular wrap) occurred in
the middle of a computation?

On a PC with a 64-bit CPU, likewise: a 63-bit unsigned size_t, with the
64th bit flagging overflow.

No. The C standard has to apply to more than just PCs. There's no way
to safely detect overflow on all machines for which a C implementation
is possible. The standard can't simply ignore such systems, since one
of the strengths of C is its portability.
 

Richard Tobin

What's wrong with "less" ?
Others have answered this whilst I slept. But whether it is "wrong" depends
on whether one is trying to be precise and clear.

I'm all for having plenty of words so that we can make finer
distinctions, but I don't think ambiguity often, if ever, arises from
using "less" instead of "fewer".

As evidence for this, consider that the opposite of both words is "more"
(there's no English word "manier' corresponding to "many"). Have you ever
encountered a case where you felt that "more" was ambiguous?

-- Richard
 

Richard Heathfield

Richard Tobin said:
Others have answered this whilst I slept. But whether it is "wrong"
depends on whether one is trying to be precise and clear.

I'm all for having plenty of words so that we can make finer
distinctions, but I don't think ambiguity often, if ever, arises from
using "less" instead of "fewer".

You're probably right. Neverthefewer, I stand by my reply. :)

Have you ever encountered a case where you felt that "more" was ambiguous?

Such a case is easy to construct, although in this case "ambiguous"
under-describes the *three* possible meanings:

Many people were oppressed by King Henry VII. More suffered under King Henry
VIII.
 

Richard Tobin

Have you ever encountered a case where you felt that "more" was ambiguous?
Such a case is easy to construct, although in this case "ambiguous"
under-describes the *three* possible meanings:
Many people were oppressed by King Henry VII. More suffered under King Henry
VIII.

I can only see two meanings there ("a greater number of people than under
Henry VII", and "some additional people"), and it's not clear that a
less/fewer type distinction would resolve it (quite likely "manier"
would be idiomatic for both). What is the third?

-- Richard
 

Richard Heathfield

Richard Tobin said:
Such a case is easy to construct, although in this case "ambiguous"
under-describes the *three* possible meanings:
Many people were oppressed by King Henry VII. More suffered under King
Henry VIII.

I can only see two meanings there ("a greater number of people than under
Henry VII", and "some additional people"), and it's not clear that a
less/fewer type distinction would resolve it (quite likely "manier"
would be idiomatic for both). What is the third?

Cf http://www.luminarium.org/renlit/morebio.htm :)
 

Dik T. Winter

In article <[email protected]>,

I can only see two meanings there ("a greater number of people than under
Henry VII", and "some additional people"), and it's not clear that a
less/fewer type distinction would resolve it (quite likely "manier"
would be idiomatic for both). What is the third?

Sir Thomas.
 

rjkematick

CBFalconer said:
Why did you override the follow-up I had set? And top-post.
Please go to comp.lang.c for further general discussion of this.
Meanwhile, be aware of the DJGPP bug.

I'd wager he sent a short response to (e-mail address removed), replying to
something that showed up in his e-mail. Maybe doesn't give a hoot about
the c.l.c. traffic or top-posting cops.
 

Keith Thompson

santosh said:
No. The C standard has to apply to more than just PCs. There's no way
to safely detect overflow on all machines for which a C implementation
is possible. The standard can't simply ignore such systems, since one
of the strengths of C is its portability.

That's not quite true. It's always possible to detect overflow; if
nothing else, you can perform tests on the operands before performing
the operation. If C required overflow checking on all arithmetic
operations, it *could* be implemented, but the resulting code might be
significantly slower on some systems. Also, compiler technology at
the time the language was developed wasn't as advanced as it is now.
The designers of the language chose to keep the language simpler in
this area.

It's been argued that the effect of this is to get wrong answers more
quickly. There's some truth to that, but it also produces *correct*
answers more quickly, at the cost of putting the burden of avoiding
errors in the first place on the shoulders of the programmer.
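The operand test mentioned above looks like this for signed int addition (a sketch; the function name is mine). Because signed overflow is undefined behaviour in C, the check must be phrased so that it cannot itself overflow:

```c
#include <limits.h>

/* Test the operands *before* the operation: a + b would overflow
 * exactly when these bounds are violated, and the subtractions here
 * stay within the representable range, so the check itself is safe. */
int int_add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b)
        return 1;   /* a + b would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b)
        return 1;   /* a + b would fall below INT_MIN */
    return 0;
}
```

This is the kind of code a compiler could emit automatically around every arithmetic operation; the cost on some machines is what the language designers chose to avoid.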
 

Mark McIntyre

Mark McIntyre wrote:

Excuse me but then the pointer is bigger than 32 bits!!!

It still doesn't follow. The pointer returned by malloc need not be a
complete absolute reference. Magic can go on behind the scenes, much
as it did with 16-bit pointers used to reference 20-bit memory.

Of course, for this to work, the pointer must not be treated as a
plain integer...

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Mark McIntyre

Others have answered this whilst I slept. But whether it is "wrong" depends
on whether one is trying to be precise and clear.

I'm all for having plenty of words so that we can make finer
distinctions, but I don't think ambiguity often, if ever, arises from
using "less" instead of "fewer".[/QUOTE]

I'm all for clarity, but not at the expense of correct English usage.
"less" is correctly applicable only to matters of degree or value,
whereas fewer is applicable to number. Less is also commonly applied
to plural nouns, though this is Bad Form. Thus fewer choices are less
useful.
As evidence for this, consider that the opposite of both words is "more"

Er, false logic. Consider "happy" and "smut-free". The opposite of
both is "blue", but that doesn't make them synonyms :)

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Ben Pfaff

Mark McIntyre said:
It still doesn't follow. The pointer returned by malloc need not be a
complete absolute reference. Magic can go on behind the scenes, much
as it did with 16-bit pointers used to reference 20-bit memory.

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
extensions.
 

Richard Tobin

Excuse me but then the pointer is bigger than 32 bits!!!
It still doesn't follow. The pointer returned by malloc need not be a
complete absolute reference. Magic can go on behind the scenes, much
as it did with 16-bit pointers used to reference 20-bit memory.

So how can it be used to access all the bytes of the object?

-- Richard
 

jacob navia

Richard Tobin wrote:
It still doesn't follow. The pointer returned by malloc need not be a
complete absolute reference. Magic can go on behind the scenes, much
as it did with 16-bit pointers used to reference 20-bit memory.


So how can it be used to access all the bytes of the object?

-- Richard

You could NOT address more than 64 k with 16 bit pointers.

You could address the full 1 MB real-mode address space (of which DOS
left 640K for programs) with a 32-bit pointer composed of
a segment and an offset.

Can't you understand that? There were two halves to
the pointers, a segment part, addressed with the segment
registers es, ds, and ss for extra segment, data segment
and stack segment, and a pointer part, i.e. a 16 bit integer
that addressed the bytes from zero to 65535.

In some cases the addressing was implicit, and declared
in those famous

ASSUME

statements in the assembly code.

Obviously each 16 bit pointer could address only 64K!!!
Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!
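The arithmetic behind that segment:offset scheme is easy to show (a sketch of real-mode 8086 address formation, not any particular compiler's pointer format):

```c
#include <stdint.h>

/* Real-mode 8086 address formation: a far pointer stores a 16-bit
 * segment and a 16-bit offset, and the hardware computes the 20-bit
 * physical address as segment * 16 + offset. */
uint32_t linear_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + (uint32_t)offset;
}
```

Note that many segment:offset pairs alias the same byte: 0x0001:0x0000 and 0x0000:0x0010 both yield physical address 0x00010. That aliasing is one reason far pointers could not safely be compared as raw 32-bit integers.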
 

Rod Pemberton

CBFalconer said:
alex said:
CBFalconer said:

... snip ...

54135^2 is going to be on the order of 2.5G. That is a pretty
fair hunk of memory.

---------

[nouser@pamc ~]$ cat test3.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p;
    int i;

    for (i = 65535; i > 0; i--) {
        if ((p = calloc(i, i)) != NULL) {
            printf("%d is apparently the largest integer that will succeed.\n",
                   i);
            free(p);
            break;
        }
    }
    return 0;
}
[nouser@pamc ~]$ gcc test3.c
[nouser@pamc ~]$ ./a.out
54135 is apparently the largest integer that will succeed.
[nouser@pamc ~]$

With DJGPP 2.03 (specifies the library) that crashes immediately in
memset. With nmalloc linked in in place of the library it yields
23093.

Advisory cross-post to comp.os.msdos.djgpp, f'ups set to clc.

Even the first calloc request, 65535*65535, is nearly 4 GB, and I doubt you have that much ;-)

Which should simply fail. The code is looking for the largest
assignable value and for foul-ups in the calloc multiply operation.
It shouldn't crash. nmalloc fails and returns NULL. DJGPP malloc
claims to succeed, but doesn't, and the memset crashes. This also
shows the need to include calloc in the nmalloc package, to ensure
the same limits are observed.

Why did you override the follow-up I had set? And top-post.
Please go to comp.lang.c for further general discussion of this.
Meanwhile, be aware of the DJGPP bug.

http://groups.google.com/group/comp.lang.c/msg/2a4bcbb6f8224d28?hl=en

Chuck,

1) you've encouraged DJGPP (and c.l.c) users to use nmalloc for a few years
with little response
2) you never built DJGPP demand for nmalloc by submitting it to the DJGPP
archives
3) since you've said your health isn't good, you could've asked if someone
on comp.os.msdos.djgpp was willing to prepare and submit nmalloc to the
DJGPP archives for you, but you haven't
4) DJ has never publicly responded to your posts on nmalloc...
5) although you believe in nmalloc, you've never bothered to ask DJ why he's
shown zero interest in nmalloc... Without asking, you'll never know if it
was something simple holding back nmalloc.

Just how serious are you about getting nmalloc into the DJGPP archives or
the DJGPP C library?


Rod Pemberton
PS. Sent to both groups...
 
