Richard Heathfield
santosh said:
Is it also correct to conclude that you have a strong preference for
calloc over malloc?
I can see why you ask, of course. Curiously, however, the answer is "no".
That probably sounds inconsistent. In fact, there are no two ways about it
- it *is* inconsistent!
My "reasoning" (such as it is) is this: either I know exactly how much
memory I need (in which case I allocate exactly that amount and populate it
immediately with good data), or I don't, in which case I allocate more than
I need, populate what I can, and keep track of the first unused slot. This
seems to work for me, and undoubtedly it works for other people here too.
Otherwise, calloc would be much more popular here.
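A minimal sketch of that second pattern, in case it isn't clear what I mean
by "keep track of the first unused slot". The capacity of 64 and the squares
are made up purely for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t capacity = 64;  /* deliberately more than we expect to need */
    size_t used = 0;       /* index of the first unused slot */
    size_t i;
    int value;
    int *p = malloc(capacity * sizeof *p);

    if (p == NULL)
    {
        fputs("allocation failed\n", stderr);
        return EXIT_FAILURE;
    }

    /* populate what we can; only slots [0, used) hold good data */
    for (value = 1; value <= 10; value++)
    {
        p[used++] = value * value;
    }

    /* never read beyond the first unused slot */
    for (i = 0; i < used; i++)
    {
        printf("%d\n", p[i]);
    }

    free(p);
    return 0;
}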
But yes, it's inconsistent. I guess it's a "gut feel" thing - the difference
between int i; and int i = 0; is so small as to be not worth bothering
about, but the difference between int *p = malloc(n * sizeof *p) and int *p
= calloc(n, sizeof *p) clearly depends on n, at least for sufficiently
large n.
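To make that concrete, here is the same allocation written both ways; the
value of n is just an example:

#include <stdlib.h>

int main(void)
{
    size_t n = 1000000;

    int *p = malloc(n * sizeof *p); /* contents indeterminate; no per-element work */
    int *q = calloc(n, sizeof *q);  /* every byte zero-filled, so the extra cost grows with n */

    free(p);
    free(q);
    return 0;
}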
So it's a trade-off between debuggability and performance. We all make this
trade-off at some point - we just draw our lines at different places in the
sand, that's all.