Signed and unsigned int


jacob navia

A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.

Nothing happens if you do not attempt to read all the files in the directory or, even worse, copy them to another drive.

Worried by mysterious disk errors, I tried to save my work. While making the copy, however, there was an error: the copy reached one image and could not read it.

Foolish me, I tried to copy the file again.

That was it: the disk started to make a periodic mechanical noise, and
it was GONE ALONG WITH ALL MY DATA IN THE DISK!!!!!!!!!!!!!!!!!!!!!

Why?

When a signed integer is increased beyond 2147483648, it becomes NEGATIVE. This means that the system will issue a NONSENSE movement order to the read heads, destroying the disk instantly.

I was lucky: I had a full backup of the Mars data on my Linux system. Ahh, Microsoft. I fired up the Linux machine, running the ext2 file system, issued the command to copy the files, and started doing other things during the copy.

When the amount of data transferred reached approximately 2GB, I heard with horror the disk start making the SAME repeating mechanical noise, and my Linux system WAS GONE. I HAVE LOST several months of work without any means of getting my data back.

A signed integer can contain up to 2147483648 bytes. Not a single byte more.

I have developed a compile-time switch to check for overflows in lcc-win32 and posted a message about it here several weeks ago. Nobody cared to answer.

SHIT !!!!!!!!!!!

C is a nice language in which to write file systems. But IT WOULD BE BETTER TO BE CAREFUL WITH THOSE "int"s, OK?

You do not believe me?

Try it. Make several directories with several thousand files of 100-200K each, until you get more than 2GB.

But backup your drive first...

jacob
 

Christian Bau

"jacob navia said:
Try it. Make several directories with several thousand files of 100-200K
each, until
you get more than 2GB.

But backup your drive first...

A digital video camera records exactly 3.6 million bytes per second. That is 216 million bytes per minute, or 12,960,000,000 bytes per hour. I have directories on my hard disk with about 13 gigabytes of content; my neighbour has about 30 gigabytes in a single directory - just 2 1/2 hours of digital video.

Both machines are Macs using the HFS+ filesystem, but I would think that
there is video editing software for Windows and Linux, and I am sure it
must be able to handle similar amounts of data.
 

Emmanuel Delahaye

jacob navia wrote on 10/08/04:
A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

Of course you meant

"A signed long can contain at least to 2Gig, 2 147 483 648, to be
exact."

or

"On my machine, a signed int can contain up to 2Gig, 2 147 483 648, to
be exact."
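
A minimal, standard-C sketch of how to check what your own implementation actually promises, using only the macros from <limits.h> (nothing here is specific to any one compiler):

#include <stdio.h>
#include <limits.h>

/* Print the actual ranges of int and long on this implementation.
   The standard only guarantees that int is at least 16 bits and
   that long is at least 32 bits. */
int main(void)
{
    printf("INT_MIN  = %d\n",  INT_MIN);
    printf("INT_MAX  = %d\n",  INT_MAX);
    printf("LONG_MIN = %ld\n", LONG_MIN);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}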
 

Eric Sosman

jacob said:
A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

"Exact" if you ignore the off-by-one error ...
Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.
[... and two different O/Ses trashed the data].

C is a nice language in which to write file systems. But IT WOULD BE BETTER TO BE CAREFUL WITH THOSE "int"s, OK?

Jacob, while I'm genuinely sorry for your data loss I think
you may have fingered the wrong culprit. It seems unlikely that
any file system nowadays would have a 31-bit limit built in; disks
have been larger than 2GB for a long time now. In fact, disks have
been larger than 4GB for a long time; you'd probably need to look
long and hard to find a drive that small today. And if the file
systems using these larger drives had been limited by 31-bit or
even 32-bit byte counts, plenty of people other than you would
have noticed ...
You do not believe me?

Try it. Make several directories with several thousand files of 100-200K each, until you get more than 2GB.

Well, I'm using files of 50-100MB instead, but I hope you'll
accept the (slightly edited) evidence anyhow:

Filesystem           kbytes     used      avail      capacity  Mounted on
/dev/dsk/c14t40d0s6  356770888  37071528  316131656  11%       (withheld)
/dev/dsk/c14t40d1s6  356770888  37140512  316062672  11%
/dev/dsk/c13t40d0s6  356770888  37150672  316052512  11%
/dev/dsk/c16t40d0s6  356770888  37206864  315996320  11%
/dev/dsk/c13t40d1s6  356770888  37356592  315846592  11%
/dev/dsk/c15t40d0s6  356770888  37412760  315790424  11%
/dev/dsk/c16t40d1s6  356770888  53911776  299291408  16%
/dev/dsk/c15t40d1s6  356770888  54248872  298954312  16%

I'd bet that the 31-bit limit (the number in your sad tale
is too coincidental to ignore) has nothing to do with the file
system. It may, perhaps, have something to do with the utility
you were using to mass-copy all that data.

My condolences.
 

Gordon Burditt

A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

I know of no system where this is the limit. A 32-bit signed int
cannot contain 2 147 483 648. It can contain 2 147 483 647. It
is not guaranteed that a signed int has more than 16 bits, nor is
there anything preventing it from having 47 or 128 bits.
Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.

Nothing happens if you do not attempt to read all the files in the directory or, even worse, copy them to another drive.

Most of the problems I have heard of involve having a SINGLE FILE
containing more than 2GB, or 4GB. I hope there is not much software
in use where a whole disk drive is constrained by size limits of
2GB or 4GB. I don't understand why there is a limit on reading a
bunch of files in a directory (unless you were copying them into a
single file, e.g. with tar or cpio) rather than a limit on individual
file size or disk partition size.

A reasonable OS will not let you write a file beyond the limit,
rather than corrupting the file system, whatever that limit is,
even if it's something huge like 512YB.
Worried by mysterious disk errors, I tried to save my work. While making the copy, however, there was an error: the copy reached one image and could not read it.

Foolish me, I tried to copy the file again.

That was it: the disk started to make a periodic mechanical noise, and
it was GONE ALONG WITH ALL MY DATA IN THE DISK!!!!!!!!!!!!!!!!!!!!!

I'm sorry about your lost files but I'm not entirely convinced that
copying too much at once is the reason you lost them.
Why?

When a signed integer is increased beyond 2147483648, it becomes NEGATIVE. This means that the system will issue a NONSENSE movement order to the read heads, destroying the disk instantly.

A reasonable OS will not issue such an order to the disk drive.
The driver should range-check the values it is computing and object
if the values are out of range. So, for that matter, should the
disk drive. Disk drives nowadays are much smarter than the old
mechanical floppy drives where you could keep stepping and slam the
heads into things.
I was lucky: I had a full backup of the Mars data on my Linux system. Ahh, Microsoft. I fired up the Linux machine, running the ext2 file system, issued the command to copy the files, and started doing other things during the copy.

When the amount of data transferred reached approximately 2GB, I heard with horror the disk start making the SAME repeating mechanical noise, and my Linux system WAS GONE. I HAVE LOST several months of work without any means of getting my data back.

A signed integer can contain up to 2147483648 bytes. Not a single byte more.

I have developed a compile-time switch to check for overflows in lcc-win32 and posted a message about it here several weeks ago. Nobody cared to answer.

SHIT !!!!!!!!!!!

C is a nice language in which to write file systems. But IT WOULD BE BETTER TO BE CAREFUL WITH THOSE "int"s, OK?

You do not believe me?

Try it. Make several directories with several thousand files of 100-200K each, until you get more than 2GB.

I routinely make single files that are greater than 4GB on FreeBSD. (The size of a burnable DVD image is about 4.7GB if you pack things to use the maximum capacity. I'm using DVDs to store DATA, not video or audio. The number goes up if you use double-sided or blue-laser DVDs.) Yes, I've got some directory trees somewhat like what you describe, only the total is 22GB (most of it downloaded with a single multi-file FTP command). Yes, there's a backup of it on another system as a single 22GB tar file. I need to split it up into chunks small enough to burn on DVDs. (Actually, I've done this several times, but I don't like how the files end up getting split - I want related files on the same disk. I do need to check the program that does the splitting, but I think it's using off_t values for holding file lengths (64 bits on FreeBSD).) After I do that, the program that makes a DVD image is essentially a copy program that puts all the files in one big file (with assorted headers & stuff). I have had no problems.

I believe I have done this on Linux also, even old versions of Linux
(think TiVo's OS - was that built in 1996?) only supporting LBA32
and not LBA48 (meaning Linux won't support DRIVES larger than 137GB
or so). Modern versions of Linux don't have that limit.

Gordon L. Burditt
 

Keith Thompson

jacob navia said:
A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

Typically 2 147 483 647, to be even more exact, but of course it can
be any value greater than or equal to 32767.
Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.

Nothing happens if you do not attempt to read all the files in the directory or, even worse, copy them to another drive.

Worried by mysterious disk errors, I tried to save my work. While making the copy, however, there was an error: the copy reached one image and could not read it.

Foolish me, I tried to copy the file again.

That was it: the disk started to make a periodic mechanical noise, and
it was GONE ALONG WITH ALL MY DATA IN THE DISK!!!!!!!!!!!!!!!!!!!!!

Why?

Because there's a bug in your OS's filesystem code. (Do you know, or
are you assuming, that that code was written in C?)

[...]
I have developed a compile-time switch to check for overflows in lcc-win32 and posted a message about it here several weeks ago. Nobody cared to answer.

Would this have prevented your problem? Detecting overflow is only
part of the problem; the programmer has to decide what to do once the
overflow is detected. Using a compiler that traps overflows might
have caused whatever program you were using (or the OS itself) to
crash rather than continue with invalid data, which is a bit of an
improvement but not enough of one.
C is a nice language in which to write file systems. But IT WOULD BE BETTER TO BE CAREFUL WITH THOSE "int"s, OK?

"Be careful" is always good advice, but you have to know *how* to be
careful. If you want to count more than 2**31-1 of something, you
just need to use something bigger than a 32-bit signed type.
Detecting overflow is a good way to detect bugs, but it's seldom a good way to correct them; it's rare to want an overflow handler to be executed during the normal execution of a program.
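
A minimal sketch of "use something bigger" (the name copy_stream is invented for this example, and it is plain standard C): a copy loop that counts the bytes it moves in an unsigned long long never brings a 32-bit signed int anywhere near the total.

#include <stdio.h>

/* Copy one stream to another and return the number of bytes moved.
   The running total is an unsigned long long, so it can pass 2 GB
   (and 4 GB) without involving int at all. */
unsigned long long copy_stream(FILE *in, FILE *out)
{
    char buf[65536];
    size_t n;
    unsigned long long total = 0;

    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n)
            break;      /* write error: stop and let the caller check ferror() */
        total += n;
    }
    return total;
}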

<OT>I'm surprised that a Linux ext2 filesystem would behave the way
you describe, but of course this isn't the place to discuss the
specifics.</OT>
 

Keith Thompson

Christian Bau said:
A digital video camera records exactly 3.6 million bytes per second. That is 216 million bytes per minute, or 12,960,000,000 bytes per hour. I have directories on my hard disk with about 13 gigabytes of content; my neighbour has about 30 gigabytes in a single directory - just 2 1/2 hours of digital video.

Both machines are Macs using the HFS+ filesystem, but I would think that
there is video editing software for Windows and Linux, and I am sure it
must be able to handle similar amounts of data.

<WAY_OT>
For Windows, FAT32 vs. NTFS might be significant.
</WAY_OT>
 

Joe Wright

jacob said:
A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.

Nothing happens if you do not attempt to read all the files in the directory or, even worse, copy them to another drive.

Worried by mysterious disk errors, I tried to save my work. While making the copy, however, there was an error: the copy reached one image and could not read it.

Foolish me, I tried to copy the file again.

That was it: the disk started to make a periodic mechanical noise, and
it was GONE ALONG WITH ALL MY DATA IN THE DISK!!!!!!!!!!!!!!!!!!!!!

Why?

When a signed integer is increased beyond 2147483648, it becomes NEGATIVE. This means that the system will issue a NONSENSE movement order to the read heads, destroying the disk instantly.

I was lucky: I had a full backup of the Mars data on my Linux system. Ahh, Microsoft. I fired up the Linux machine, running the ext2 file system, issued the command to copy the files, and started doing other things during the copy.

When the amount of data transferred reached approximately 2GB, I heard with horror the disk start making the SAME repeating mechanical noise, and my Linux system WAS GONE. I HAVE LOST several months of work without any means of getting my data back.

A signed integer can contain up to 2147483648 bytes. Not a single byte more.

I have developed a compile-time switch to check for overflows in lcc-win32 and posted a message about it here several weeks ago. Nobody cared to answer.

SHIT !!!!!!!!!!!

C is a nice language in which to write file systems. But IT WOULD BE BETTER TO BE CAREFUL WITH THOSE "int"s, OK?

You do not believe me?

Try it. Make several directories with several thousand files of 100-200K each, until you get more than 2GB.

But backup your drive first...

jacob
The largest signed int on your system is 2147483647, not 2147483648.
Navia should know this. 2^31-1 is a magic number on virtually all
32-bit computer architectures.

01111111 11111111 11111111 11111111 is magic. Magic +1 is..

10000000 00000000 00000000 00000000 and too much magic.

Having said that, I'm surprised and saddened that both Windows and
Linux would actually fall over past the upper bound. I would have
expected something more graceful.
 

CBFalconer

jacob said:
A signed int can contain up to 2Gig, 2 147 483 648, to be exact.

No, a signed int can contain values from INT_MIN through INT_MAX,
if you #include <limits.h>. Those values must be at least -32767
through 32767.

If you need values from 0 to 2,147,483,648, use an unsigned long
and check for overflow yourself.
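
A minimal sketch of the "check for overflow yourself" part (add_checked is an invented name for this example). Unsigned arithmetic is defined to wrap modulo 2^N rather than invoke undefined behaviour, so the wrap can be detected simply by comparing the sum with one of the operands:

#include <stdio.h>

/* Add to an unsigned long running total, refusing to wrap.
   If the sum would exceed ULONG_MAX it comes out smaller than the
   old total, which is exactly what the comparison detects. */
int add_checked(unsigned long *total, unsigned long amount)
{
    if (*total + amount < *total)
        return 0;       /* would wrap: report failure, leave total alone */
    *total += amount;
    return 1;
}

int main(void)
{
    unsigned long total = 0;
    if (add_checked(&total, 3000000000UL))
        printf("total = %lu\n", total);
    else
        puts("overflow detected");
    return 0;
}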
 

Keith Thompson

Joe Wright said:
Having said that, I'm surprised and saddened that both Windows and
Linux would actually fall over past the upper bound. I would have
expected something more graceful.

Note the "[OT]" tag.

I'd be very surprised if the failures Jacob encountered were caused by
something as simple as a signed 32-bit integer overflow in the
filesystem implementation. I'd be especially surprised by this for
the Linux system; for Windows, I could almost believe it for FAT32,
but not for NTFS. (Not that I have any particular expertise for any
of these, but I've used them.) The failure at 2 gigabytes is
suspicious, but I'd expect any of the relevant filesystems to handle
at least that much data in a partition, and I wouldn't expect any of them
to have a limitation that depends on the total size of the files in a
single directory.

Perhaps there's a bug in some program he was using to copy the files
-- though it's still surprising that such a bug would trash the target
Linux system.

Never underestimate the destructive power of an unlucky coincidence.
 

Richard Bos

Keith Thompson said:
<WAY_OT>
For Windows, FAT32 vs. NTFS might be significant.
</WAY_OT>

Nope. Not on this machine. I don't know _what_ Jacob's been up to, but
blaming his gaffes on C's ints sounds rather cheap to me.

Richard
 

jacob navia

Sorry for not replying, but I had to rebuild my machine and my mail isn't back yet.

I repeat:

The bug is not about SINGLE files bigger than 2GB.

It is when the contents of a directory exceed 2GB. Each
of the files is between 30-300K.

Maybe it is relevant to the bug that the upper and lower directories are also quite big (0.8-1.5GB).

I repeat that the bug is related to the 2GB limit, since it appeared when I crossed it, around Monday, when I updated the directory from the NASA site.

Sorry but I do not know if I can answer more, but it
is VERY EASY TO REPRODUCE IT.

Just make a program that makes random files of
33-300K until you get 3GB, then do some directories and
make files.

I would be *very* interested in hearing from you!

jacob
 

boa

jacob said:
Sorry but I do not know if I can answer more, but it
is VERY EASY TO REPRODUCE IT.
Ok...


Just make a program that makes random files of
33-300K until you get 3GB, then do some directories and
make files.

So I did: 34701 files in one directory, with a total size of 5.7G. Did a umount+fsck+mount. No errors. The file system was ext2 on Linux 2.4.25 running under VMware.

boa@home
 

Gordon Burditt

It is when the contents of a directory exceed 2GB. Each
of the files is between 30-300K.

What do you think is overflowing here? What is it that even bothers
to compute the total size of files in a directory? (Well, the
commands "ls" and "du" might, but that's not what you were running
at the time of the problem, was it?) To the best of my knowledge,
*NO* file I/O operation (open, close, read, write, seek, or whatever,
and no, those are not references to C function names) is the slightest
bit interested in the total amount of space taken by files in the
same directory as the file in question, and it does not bother to
compute such a number (so it can't overflow).

Could the problem be related to the total size of files *IN THE FILESYSTEM*? How big is this filesystem? How much data, in total, was in it? (Could you have just crossed 4GB for this number?) Could the problem be related to RUNNING OUT OF DISK SPACE?

Show me the command that you were running that caused the problem.
Also, what OS versions did this failure happen on?
Maybe it is relevant to the bug that the upper and lower directories are also quite big (0.8-1.5GB).

What is an "upper directory"?
I repeat that the bug is related to the 2GB limit, since it appeared when I crossed it, around Monday, when I updated the directory from the NASA site.

Kerry was nominated the Democratic candidate for President and a
lot of equipment I had connected to my phone line and network got
fried, so it must be his fault. My air conditioning repairman (the
A/C electronics got fried too) believes it had something to do with
lightning, though.
Sorry but I do not know if I can answer more, but it
is VERY EASY TO REPRODUCE IT.

Just make a program that makes random files of
33-300K until you get 3GB, then do some directories and
make files.

Did you check this filesystem for consistency before (and after,
if possible) trying this? (Linux: fsck. Windows: scandisk or
whatever they're calling it now).

I find it very easy to FAIL to reproduce it, on Linux and FreeBSD.

I doubt I'm going to try this on Windows. I can't figure out
how to install a version of Windows that won't get instantly infected
if I connect it to the net. (Don't say Windows Update unless you
know how to use it *WITHOUT* connecting to the net until *AFTER* the
update is complete.)

Gordon L. Burditt
 

jim green

jacob navia said:
It is when the contents of a directory exceed 2GB. Each
of the files is between 30-300K.

Just make a program that makes random files of
33-300K until you get 3GB, then do some directories and
make files.

OK, I've 3GB spare. My script is:

# file size = 100k (200 blocks of 512 bytes)
$count = 200;
$bs    = 512;

# 30,000 files is, then, 3GB
foreach $n (1..30000)
{
    $file = sprintf "2gb/2gb-%.5i.dat", $n;
    $cmd  = "dd if=/dev/zero of=$file bs=$bs count=$count";
    print "$cmd\n";
    system($cmd);
}

(perl, of course). The directory now contains 30,000 files

hardy:/data/other> ls -1 2gb | grep dat | wc -l
30000

they are all in the size range 30-300K

hardy:/data/other> ls -l 2gb | head
total 3120000
-rw-r--r-- 1 jjg users 102400 Aug 11 20:51 2gb-00001.dat
-rw-r--r-- 1 jjg users 102400 Aug 11 20:51 2gb-00002.dat
-rw-r--r-- 1 jjg users 102400 Aug 11 20:51 2gb-00003.dat
-rw-r--r-- 1 jjg users 102400 Aug 11 20:51 2gb-00004.dat

and the total size is > 2GB

hardy:/data/other> du -k 2gb
3120712 2gb

No problems with the file system etc. This is an ext3 filesystem
on a software raid partition on Linux 2.4.20

hardy:/data/other> df -k 2gb
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md0 115346344 105401392 4085608 97% /data

I don't think this is a generic problem.

-j
 

jacob navia

Gordon Burditt said:
What do you think is overflowing here? What is it that even bothers
to compute the total size of files in a directory? (Well, the
commands "ls" and "du" might, but that's not what you were running
at the time of the problem, was it?) To the best of my knowledge,
*NO* file I/O operation (open, close, read, write, seek, or whatever,
and no, those are not references to C function names) is the slightest
bit interested in the total amount of space taken by files in the
same directory as the file in question, and it does not bother to
compute such a number (so it can't overflow).
I do not know WHAT
Could the problem be related to the total size of files *IN THE FILESYSTEM*?
How big is this filesystem? How much in files, total were there in
there? (Could you have just crossed 4GB for this number?)
Could the problem be related to RUNNING OUT OF DISK SPACE?
Partition size was 30 GB, used 15GB
Show me the command that you were running that caused the problem.

scp (secure copy to my windows machine)

No internet connection was up; the two machines were connected directly through a crossover Ethernet cable.
Also, what OS versions did this failure happen on?

Mandrake 9.2 I think but now the machine is dead.
What is an "upper directory"?

If directory A contains directory B, then A is the "upper" (I do not know a better English word).
Kerry was nominated the Democratic candidate for President and a
lot of equipment I had connected to my phone line and network got
fried, so it must be his fault. My air conditioning repairman (the
A/C electronics got fried too) believes it had something to do with
lightning, though.

The crash reproduced with EXACTLY THE SAME SYMPTOMS as the crash under Windows. The machine is a 3-year-old PC that had worked flawlessly under Linux for 1.5 years. The file system is ext2, I suppose.

The crash happened when copying the SAME directory that made
NTFS crash the drive.

Note that the DISK FAILS; it is not a software issue, i.e. you hear a "pang" of the disk head repeatedly trying to get to a bad position each second or so. After a few "pangs" the drive is DEAD.
 

Arthur J. O'Dwyer

If directory A contains directory B then A is the "upper"
(I do not know a better English word)

I recommend "superdirectory," by analogy to the established term
"subdirectory" for a directory contained within another.
"Parent directory" is common, too, especially in Apache-served
directory listings. ;)

Anyway, my guess is that your problem is caused by either a
bug in your version of 'scp', or a bug somewhere else. :) It's
certainly OT here, except as a cautionary tale---and even then,
only if you track down the bug and find that it /was/ caused by
incompetent C programming, and not by incompetent assembly
programming or Python programming or whatever.

HTH,
-Arthur
 

Keith Thompson

jacob navia said:
"Gordon Burditt" <[email protected]> a écrit dans le message de


If directory A contains directory B, then A is the "upper" (I do not know a better English word).

Parent directory.

[...]
The crash reproduced with EXACTLY THE SAME SYMPTOMS as the crash under Windows. The machine is a 3-year-old PC that had worked flawlessly under Linux for 1.5 years. The file system is ext2, I suppose.

The crash happened when copying the SAME directory that made NTFS crash the drive.

Note that the DISK FAILS; it is not a software issue, i.e. you hear a "pang" of the disk head repeatedly trying to get to a bad position each second or so. After a few "pangs" the drive is DEAD.

As you say, this doesn't sound like a software issue. If the disk
drive itself is functioning properly, it shouldn't even be possible
for software to cause the disk head to bang against the inside of the
case (I think).

NTFS and ext2 are different file systems; their implementations almost
certainly contain no common code. Both are known to be capable of
handling directories containing multiple gigabytes worth of files.

I don't know why your drives failed, and you have my sympathy, but I
see no reason to believe it has anything to do with signed 32-bit
integer overflow.
 

jacob navia

Great. Now please
cat * > /dev/null

Does that work?

Linux crashed when copying those files.
NOT when writing them.

Thanks for your time
 

Old Wolf

jacob navia said:
Since The Mars rovers landed, I have been storing the
photographs in two directories, Spirit and Opportunity. I had
more than 18 000 files in a single directory. Without being
aware of it, I crossed the 2 147 483 648 border last week.

Foolish me, I tried to copy the file again.

That was it: the disk started to make a periodic mechanical noise, and
it was GONE ALONG WITH ALL MY DATA IN THE DISK!!!!!!!!!!!!!!!!!!!!!

When a signed integer is increased beyond 2147483648, it becomes NEGATIVE.

Undefined behaviour actually (although becoming negative is a common
situation).

Add this post to the archive of posts to throw at people who say 'UB doesn't matter' or 'signed int overflow doesn't matter'.
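
Because signed overflow is undefined behaviour, the portable guard has to run before the addition, not after it; testing the result afterwards is already too late. A minimal sketch (the helper name add_would_overflow is invented for illustration):

#include <limits.h>

/* Return 1 if a + b would overflow the range of int, 0 otherwise.
   The checks use only values that are themselves in range, so no
   undefined behaviour is ever triggered. */
int add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return 1;   /* a + b would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return 1;   /* a + b would go below INT_MIN */
    return 0;
}
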
This means that the system will issue a NONSENSE movement order to the read heads, destroying the disk instantly.

You should file a bug report with the operating system in question then.
Which OS calls (or C functions) were you calling that caused the error?
I was lucky. I had a full backup of the mars data in my linux system.
Ahh Microsoft.

Many people have directories with dozens (or hundreds) of gigabytes of data in them, no problem, on Windows and Linux. Reading and copying files is no problem; what were you doing, exactly? Creating one large file? Or maybe your OS has a problem when you over-fill a stream.
I HAVE LOST several months of work without any means of getting
my data back.

Maybe this will motivate you to start keeping backups
C is a nice language in which to write file systems. But IT WOULD
BE BETTER TO BE CAREFUL WITH THOSE "int" s OK?

Whose code was at fault?
 
