Sunil said:
What I want to know is: what will be the effect if I use int in
place of long in applications running on Linux? And on what factors
does the size of data types depend?
The size depends on the implementation. Incidentally, sizes are
measured in units of the space occupied by a char, which is not
guaranteed to be exactly 8 bits, though it often is: the standard only
requires at least 8.
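You can check what your own implementation does with the standard
CHAR_BIT macro from <limits.h> (a minimal sketch):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; CHAR_BIT says how many
           bits that one unit holds: at least 8, but possibly more. */
        printf("bits in a char: %d\n", CHAR_BIT);
        return 0;
    }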
In most cases, the exact amount of space something takes should not
concern the programmer:

- use char when talking about characters;
- use unsigned char when talking about raw memory;
- use short or unsigned short when saving space is important, and
  signed or unsigned char for even more savings;
- use long and unsigned long when dealing with large integers, and
  long long and unsigned long long when the integers may be really
  large;
- use int and unsigned int when you want whatever is the most
  `natural' integer representation in the implementation (i.e. int or
  unsigned int ought to be what you use unless there is reason to
  deviate).

Use the signed types if you think of the values as integers; use the
unsigned types if you treat them as bit patterns, need the extra range
at the large positive end, or the logic of the program is such that
blindly converting negative numbers to large positive integers is the
`right thing' to do!
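As a rough illustration of those choices (the declarations and names
here are invented for the example):

    char name[32];          /* talking about characters */
    unsigned char raw[512]; /* raw memory: a buffer of bytes */
    short delta;            /* space savings matter */
    long total;             /* a potentially large integer */
    unsigned int flags;     /* a bit pattern, not an arithmetic value */
    int count;              /* the `natural' default choice */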
The C standard does guarantee a minimum range of values for each of
these types: look at those, and use them, in interpreting the last
paragraph, to decide when you want space savings and when your
integers may become large in magnitude. But don't apply the rules
blindly ... experience teaches you what is likely to be the best data
type. C99 also gives you more fine-grained control over integral
types: look at them. In rare cases, bitfields might also be useful.
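For example, <stdint.h> from C99 offers exact-width, minimum-width,
and `fastest' variants (a sketch; note that the exact-width types are
optional, so an implementation need not provide them):

    #include <stdint.h>

    int32_t       exact;   /* exactly 32 bits, if the implementation
                              has such a type at all */
    int_least16_t least;   /* the smallest type with at least 16 bits */
    int_fast32_t  fast;    /* the `fastest' type with at least 32 bits */
    uintmax_t     widest;  /* the widest unsigned type available */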
Do not gratuitously put in `Linux' dependencies: not all Linux
platforms will have exactly the same behaviour anyway. Even if they
are the same size, int and long are not interchangeable: a program
will often become `incorrect' if you change ints to longs without
changing anything else. Even though this makes the behaviour
undefined, in practice, on current implementations, it is not likely
to make a difference beyond warnings from the compiler. But why take
the chance?
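A classic instance of such an `incorrect' change is a printf format
specifier left behind (a sketch of the pitfall):

    #include <stdio.h>

    int main(void)
    {
        long n = 42;        /* was: int n = 42; */
        printf("%d\n", n);  /* undefined behaviour: %d expects an int;
                               changing the type meant this had to
                               become "%ld" as well */
        return 0;
    }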
Note that sizeof gives you the space occupied in memory, and it is
possible for an implementation not to use all of that space
effectively, so use the <limits.h> macros (INT_MAX, LONG_MAX and
friends) if you need the exact ranges. An implementation may also use
different in-memory representations for types of the same size (for
example, there is no bar to using little endian for ints and big
endian for longs, as long as the compiler takes care to do the bit
manipulations properly; no implementation I know of does that yet).
The implementation can further require different alignments for the
two types (thus, it might decide on a 2-byte alignment for ints and a
4-byte alignment for longs, planning on using 16-bit bus operations
for ints and 32-bit operations for longs; again, I know of no
implementation that does that).
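A small sketch of the distinction: sizeof reports storage, while the
<limits.h> macros report the usable range:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Storage, counted in char-sized units... */
        printf("sizeof(int) = %zu, sizeof(long) = %zu\n",
               sizeof(int), sizeof(long));
        /* ...versus the actual value ranges, which can be narrower
           than the storage suggests if there are padding bits. */
        printf("INT_MAX = %d, LONG_MAX = %ld\n", INT_MAX, LONG_MAX);
        return 0;
    }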
In short, you are asking questions which you `should' not be asking.
C tries to provide a level of abstraction: a division of labour
between the programmer and the implementation. The programmer
describes what he/she wants, plus some low-level preferences such as
whether space or range is more important; the implementation takes
care of the hardware and makes the program run according to the
specification. The standard provides the language for unambiguous
communication. Your questions cross that border, violating one of the
raisons d'être of high-level languages.
Sure, there are occasions when you need to know your precise hardware
and how your implementation maps your code to it. The phrasing of your
question seems to suggest you are not in that situation, though.