santosh said:
Hello all,
In K&R2 one exercise asks the reader to compute and print the limits of
the basic integer types. This is trivial for the unsigned types, but is
it possible for the signed types without invoking the undefined
behaviour triggered by signed overflow? Remember that the constants in
limits.h cannot be used for the direct computation. (Assume C90.)
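For the unsigned types, for instance, a minimal sketch could rely only
on the guaranteed modular conversion of -1:

#include <stdio.h>

int main(void)
{
    unsigned int umax = -1;     /* -1 converted to unsigned int is UINT_MAX */
    unsigned long ulmax = -1;   /* likewise, ULONG_MAX */

    printf("UINT_MAX = %u\n", umax);
    printf("ULONG_MAX = %lu\n", ulmax);
    return 0;
}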
The original exercise text as provided by Richard Heathfield is:
Exercise 2-1. Write a program to determine the ranges of char, short,
int, and long, both signed and unsigned, by printing appropriate values
from standard headers and by direct computation. Harder if you compute
them: determine the ranges of the various floating-point types.
Any ideas?
I think the exercise text gives us room, in the signed int case, to
check whether INT_MIN == -INT_MAX - 1 or INT_MIN == -INT_MAX, and then
act accordingly using unsigned arithmetic:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned x = -1;    /* UINT_MAX; assumes UINT_MAX == 2 * INT_MAX + 1 */
    int INTMIN, INTMAX;

    printf("INT_MIN= %d, INT_MAX= %d\n", INT_MIN, INT_MAX);

    /* INT_MIN < -INT_MAX detects a two's complement range without
       evaluating -INT_MAX - 1, which would itself overflow on an
       implementation where INT_MIN == -INT_MAX */
    if (INT_MIN < -INT_MAX)
    {
        INTMAX = x / 2;
        INTMIN = -INTMAX - 1;   /* asymmetric range */
    }
    else
    {
        INTMAX = x / 2;
        INTMIN = -INTMAX;       /* symmetric range */
    }

    printf("INTMIN= %d, INTMAX= %d\n", INTMIN, INTMAX);
    return 0;
}
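The same trick should extend to the other signed types; for example, a
sketch for long, under the analogous assumption that ULONG_MAX ==
2 * LONG_MAX + 1:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned long y = -1;       /* ULONG_MAX, by defined conversion */
    long LONGMIN, LONGMAX;

    LONGMAX = y / 2;            /* assumes ULONG_MAX == 2 * LONG_MAX + 1 */
    if (LONG_MIN < -LONG_MAX)
        LONGMIN = -LONGMAX - 1; /* asymmetric (two's complement) range */
    else
        LONGMIN = -LONGMAX;     /* symmetric range */

    printf("LONGMIN= %ld, LONGMAX= %ld\n", LONGMIN, LONGMAX);
    return 0;
}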
Probably this is a stupid solution. But what exactly are we allowed to
take for granted, given that the exercise says "by printing appropriate
values from standard headers"?