What is wrong with this C program?

  • Thread starter Albert van der Horst
  • Start date
A

Albert van der Horst

I have the following program that consistently gives a segfault with
--------------------
46344 1
46345 1
46346 1
46347 1
46348 1
46349 0
Segmentation fault
-------------------

The program itself doesn't give any complaints with gcc 4.4
on a stable Debian.
It is a classical prime sieve.
I can't find any fault with it.

--------------------------

#include <stdio.h>
#include <assert.h>
#include <inttypes.h>

#define BYTE uint8_t

/* largest number */
#define MAXSIZE 20000000
#define SIZE 2000000

long int TheLimit;

/*****************************************************************************/
/* */
/* Fill the `nprimes' table up to `n' */
/* */
/*****************************************************************************/


static BYTE composite[ MAXSIZE+1];

void fill_primes( int n)
{
    int i,j;

    assert(n<=MAXSIZE);
    for (i=0; i<=n; i++) composite[i] = 0;
    for (i=2; i<=n; i++)
    {
        printf("%d %d \n", i, composite[i]);
        if (!composite[i])
        {
            for (j=i*i; j<=n; j+=i) composite[j] = 1;
        }
    }
}

int main( int argv, char** argc)
{
    int i;
    TheLimit = atoi(argc[1]);
    printf("n= %d\n", TheLimit);
    fill_primes(TheLimit);
    printf("primes ready\n");
}

------------------------------


Must I suspect the installation of my compiler?

Groetjes Albert
 
D. Aaron Sawyer

[original post quoted in full; snipped]

1. you fail to handle the case of a missing argument.
2. what is the size of int j? what is the maximum value an int can hold?
3. what is the value of 46349*46349 ?
4. compare the answer to 3 with the answer to the second part of 2.
5. your compiler is fine. your code needs repair.
 
Luuk

[original post snipped down to the relevant line]

for (j=i*i; j<=n; j+=i) composite[j] = 1;

46349 * 46349 = -2146737495

you have an overflow.
 

Øyvind Røtvold

Get yourself a debugger:

(gdb) run 60000
.......
46348 1
46349 0

Program received signal SIGSEGV, Segmentation fault.
0x080485f0 in fill_primes (n=60000) at nprim.c:34
34 for (j=i*i; j<=n; j+=i) composite[j] = 1;
(gdb) print j
$1 = -2146737495
(gdb)


You will probably find that 46349*46349 > 2^31.
 
Geoff

I can't find any fault with it.

You're not looking very hard. See below.
--------------------------

#include <stdio.h>
#include <assert.h>
#include <inttypes.h>
-----------^^^^^^^^^^
Non-standard header used.
#define BYTE uint8_t

Why do this? Why not just use standard type unsigned char?
/* largest number */
#define MAXSIZE 20000000

#define SIZE 2000000
This is never referenced again, delete it.
long int TheLimit;

/*****************************************************************************/
/* */
/* Fill the `nprimes' table up to `n' */
/* */
/*****************************************************************************/


static BYTE composite[ MAXSIZE+1];

void fill_primes( int n)
{
    int i; unsigned long long j;

    assert(n<=MAXSIZE);
    for (i=0; i<=n; i++) composite[i] = 0;
    for (i=2; i<=n; i++)
    {
        printf("%d %d \n", i, composite[i]);
        if (!composite[i])
        {
            for (j=i*i; j<=n; j+=i) composite[j] = 1;
        }
    }
}

int main( int argv, char** argc)
{
    int i;

Unused variable i.

    TheLimit = atoi(argc[1]);
    printf("n= %d\n", TheLimit);
    fill_primes(TheLimit);
    printf("primes ready\n");
}
No, your compiler is fine, your usage of it is flawed.

Use the -Wall switch and your compiler will talk to you.
 

Noob

Geoff said:
-----------^^^^^^^^^^
Non-standard header used.

inttypes.h /is/ a standard C99 header.
https://en.wikibooks.org/wiki/C_Programming/C_Reference/inttypes.h
Why do this? Why not just use standard type unsigned char?

@Geoff: unsigned char /may/ be wider than 8 bits.

@Albert: a typedef seems more appropriate than a macro.
typedef uint8_t byte;
Use INT_MAX from <limits.h> instead

Allocating INT_MAX+1 ints is unlikely to succeed (except on Win64)
Use the -Wall switch and your compiler will talk to you.

Some people also add -Wextra -O2 to enable even more warnings.
 

Keith Thompson

Noob said:
Geoff said:
Albert van der Horst wrote: [...]
#define BYTE uint8_t

Why do this? Why not just use standard type unsigned char?

@Geoff: unsigned char /may/ be wider than 8 bits.

And if it is, then uint8_t will not exist. (Which means the program
will fail to compile; if it depends on 8-bit bytes, that's probably a
good thing.)

[...]
 

BartC

Geoff said:
On 25 Dec 2013 13:09:30 GMT, (e-mail address removed)4all.nl (Albert van der
Horst) wrote:

Why do this? Why not just use standard type unsigned char?

Because 'byte' (as I would write it) is a lot shorter (and easier on the eye
than 'uint8_t'). And it explains more, because everyone knows what a byte
is. Also that a byte* type probably isn't going to be used for strings.
 

Kenny McCormack

This is wrong. It is _not_ true that "everyone knows what a byte is".
In fact, it seems that _you_ do not know what a byte is.

Boys, boys, boys!

You guys really need to get a room!

 

Noob

BartC said:
'byte' [...] is a lot shorter (and easier on the eye) than 'uint8_t'.
And it explains more, because everyone knows what a byte is.

No. Use "octet" if you need a word for uint8_t, not "byte".

"byte" is a pun on "bit/bite". It conveys no size information.
Also that a byte* type probably isn't going to be used for strings.

I don't think anyone would use (unsigned char *) for strings either.
 

Seebs

Because 'byte' (as I would write it) is a lot shorter (and easier on the eye
than 'uint8_t').

It might be easier on the eye, but it's also unfamiliar.
And it explains more, because everyone knows what a byte is.

C programmers had better know what an unsigned char is. Furthermore,
"byte" has a lot of implications which appear to be possibly-false and
definitely-irrelevant to your program.
Also that a byte* type probably isn't going to be used for strings.

But that's misleading, because of course it could be.

-s
 

BartC

Noob said:
BartC said:
'byte' [...] is a lot shorter (and easier on the eye) than 'uint8_t'.
And it explains more, because everyone knows what a byte is.

No. Use "octet" if you need a word for uint8_t, not "byte".
"byte" is a pun on "bit/bite". It conveys no size information.

No? If you had to make an educated guess as to the number of bits in a
'byte' type, and your life depended on it, what would be your best guess;
11?

I always liked to think, myself, that a bit was an eighth of a byte, in the
same way that a bit is an eighth of a dollar.

But apart from my opinion, the Wikipedia article suggests that an 8-bit byte
is the 'de facto standard'; that it has ubiquitous acceptance (other than
in c.l.c obviously); and that that meaning is ratified by a formal
international standard.

While 'octet' is used where unambiguity is needed (ignoring the ambiguity
caused by people who've never heard of an octet).

Anyway, if the width of a 'byte' type alias in C source code is ambiguous,
then so is the width of a 'char' type.
I don't think anyone would use (unsigned char *) for strings either.

I use a macro 'uchar', with strings being uchar*, since I don't want any
nasty surprises from the possible signedness of a char type.
 

Keith Thompson

BartC said:
Noob said:
BartC wrote: [...]
Also that a byte* type probably isn't going to be used for strings.

I don't think anyone would use (unsigned char *) for strings either.

I use a macro 'uchar', with strings being uchar*, since I don't want any
nasty surprises from the possible signedness of a char type.

Why is "uchar" a macro rather than a typedef?

And what's the advantage of "uchar" over "unsigned char"? If I see
"unsigned char" in your code, I know exactly what it means; if I see
"uchar", I'm only about 98% sure.
 

James Kuyper

On 12/25/2013 11:46 PM, Martin Ambuhl wrote:
....
Why is it that novice programmers always want to blame their compiler
for their own errors?

They know, from a user perspective, that defective software is
commonplace, so they suspect the compiler. They have not yet learned
that the reason why defective software is commonplace is that it's very
hard to write defect-free code, so they underestimate the likelihood
that it's a mistake in their own code.
 

BartC

Keith Thompson said:
Why is "uchar" a macro rather than a typedef?

And what's the advantage of "uchar" over "unsigned char"? If I see
"unsigned char" in your code, I know exactly what it means; if I see
"uchar", I'm only about 98% sure.

98% is good enough. It helps stop the code from sprawling and saves typing.
I'm sure you also appreciate C choosing 'int' and 'char' rather than
'integer' and 'character'.
 

Keith Thompson

BartC said:
<shrug> It just is. It saves the slight pause that happens with typedef
where I try and remember which way around things are supposed to be.

Ok. Not the way I'd do it, though. And a bit of practice should
alleviate the problem of remembering how typedefs are defined.
98% is good enough. It helps stop the code from sprawling and saves typing.
I'm sure you also appreciate C choosing 'int' and 'char' rather than
'integer' and 'character'.

Not particularly, no. I've used languages that spell out their type
names, and I've never had any particular problem with it.

On the other hand, in C "integer" and "character" are more generic terms
(there are several integer types, of which "int" is just one; likewise
for character types). That might be considered a rationale for using
the shorter names, though I don't believe that was the original reason.

98% tends not to be good enough *for me* when I can get 100% at the
trivial cost of a little extra typing. YMMV.
 

Geoff

I stand corrected. I looked it up in N1570 and saw I was mistaken only
after I had sent the post. I blame the eggnog. :)
@Geoff: unsigned char /may/ be wider than 8 bits.

A good time to use the #error directive then, isn't it?
@Albert: a typedef seems more appropriate than a macro.
typedef uint8_t byte;

More eggnog abuse. Upon review he's not using that value as I
initially perceived him to be using it.
Allocating INT_MAX+1 ints is unlikely to succeed (except on Win64)


Some people also add -Wextra -O2 to enable even more warnings.

Agreed.
 

Geoff

Because 'byte' (as I would write it) is a lot shorter (and easier on the eye
than 'uint8_t'). And it explains more, because everyone knows what a byte
is. Also that a byte* type probably isn't going to be used for strings.

Upon further review, if I were rewriting his code I'd bypass the byte vs.
BYTE vs. char issue altogether and use bool for the type of
composite[], because that's what he's using it as. It has only two
values and he could write it much more clearly using true and false.
 

Geoff

[email protected] (Albert van der Horst) said:
[original post quoted; snipped down to the parts commented on]

/*****************************************************************************/
/* */
/* Fill the `nprimes' table up to `n' */
/* */
/*****************************************************************************/

Others have pointed out lots of errors. Here's one of the biggest and
one no compiler can protect you against: a stonking great comment that
doesn't actually tell you much and - crucially - what it tells you is
wrong: there is no nprimes table.
static BYTE composite[ MAXSIZE+1];

void fill_primes( int n)

I find calling a function "fill_primes" when what it does is to fill a
table called "composite" a little confusing.
[rest of the program snipped]

Must I suspect the installation of my compiler?


No.


A lovely post and exactly right. I'll bet you smoke a pipe, don't you
doctor?
 
