Dynamically resizing a buffer


cr88192

Flash Gordon said:
cr88192 wrote, On 23/08/07 16:09:

You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shit to halve a
floating point number as see how far it gets you.

but, a float and an integer are different concepts though...

not sure of why one would consider using a shift on a float...

Shift is for moving bits, divide is for scaling. Otherwise shift would
work on floating point and would have behaviour defined by the C standard
for negative numbers (it leaves it for the implementation to define the
result of right shifting a negative number).

ok, makes sense.

No, it is a question of using the wrong word. One that in some situations
happens to be similar, but in many situations means something completely
different.

the situations are different though.
as noted above, an integer and a float are conceptually different.

if I am speaking to someone do I count in floats? no, I count in integers...

Yes, it is a trick.


The shift operator is a basic operator for moving bits, using it to divide
is a trick and one that does not work in all situations.

it works on integers, integers are the context and the situation.
sizes are generally not negative either.

there is no problem as I see it.

Code riddled with casts probably is bad.

Code using a lot of shifts for shifting bits, NOT for division, could be
good.

ok, but then one has to differentiate: when is the concept shifting bits and
when is it division?...
this is a subjective matter.

No. However a shift works in terms of bits, not in terms of arithmetic.

but on integers, bit ops are arithmetic...

makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...

Several of the things above are reasons for using divide rather than
shift.

if one is confused as to whether or not they are dealing with an integer...

what if I meant to grab an apple but instead grabbed a potato and proceeded
to eat it as said apple. people would look oddly, and then maybe one's
stomach gets irritated from the raw potato...

so, casually eating an object raw is valid for an apple but not for said
potato.
likewise, we put potatoes in soup but not apples.

same difference really...


or a bigger mystery (OT, but as an example):
when someone can claim adherence to a certain religion and do things which
are condemned within the doctrine of said religion, meanwhile knowing the
doctrine, and then believe that their actions are moral (presumably within
the bounds of said religion).

this has happened recently, and after several months I still haven't figured
it out.

do they deny their actions? no.
do they deny the doctrine or the existence (or interpretation) of the
indicated statements? no.
do they admit that their actions are immoral? no.

this makes little sense really...

like some kind of bizarre paradox as to their reasoning...
(throwing philosophy at the problem still does not resolve it...).

The issue of you making a mistake that you would not have done had you
stuck to doing simple integer division is a good reason for using integer
division.

potentially, but this makes an assumption about one's thought process.
if one always types shifts, one thinks shifts, one does not think division
unless one were first thinking of division, which I assert that I was not
(which is why I think I missed such an obvious answer).

What do you get if you right shift -1? On some machines it will be 32767
on others it will be -1. Does that sound like division to you?

different context.

one machine is probably an x86 in real mode with an int being 16 bits and
using shr.
the other is probably using sar.
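
a minimal sketch of that difference, for illustration (what the signed line
prints is implementation-defined, so it is not portable):

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 23;

    /* right-shifting a negative signed value is implementation-defined:
       an arithmetic shift typically yields -1, a logical shift a large
       positive value (32767 on a 16-bit int) */
    printf("-1 >> 1 = %d\n", s >> 1);

    /* for non-negative values the result is well defined: 23 >> 1 == 11 */
    printf("23 >> 1 = %u\n", u >> 1);
    printf("23 / 2  = %u\n", u / 2);
    return 0;
}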

Not relevant since a shift is not a different representation for division,
it is a different operation.

it is an operation relevant to a situation, that situation being positive
integers...
I was not claiming it was also sane for floats or for negatives.

Code is written for other people, not just the original author or the
compiler.

maybe.
if other people ever actually read said code.

Linear is another natural growth curve.

except that linear is not a curve...

Ah, so you realise now that suggesting a 4/3 curve makes less sense than
suggesting using a curve that will give good performance. My example above
was extreme, but there are plenty of less extreme examples.

the question is then what curve gives the best performance, which may depend
on the situation.
it is a tradeoff really between number of reallocations and wasted space.

4/3 is more conservative with space than 3/2, which is still more
conservative than the golden ratio.

so the question is then of rate of growth and likelihood of continued
growth.
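
to make the tradeoff concrete, a small sketch (the starting size of 16 bytes
and the one-million-byte target are arbitrary assumptions) that counts how
many reallocations each growth factor needs:

#include <stdio.h>

/* count how many reallocations a growth factor needs to reach 'target'
   bytes starting from 'start' bytes; purely illustrative numbers */
static int count_reallocs(double factor, size_t start, size_t target)
{
    size_t size = start;
    int n = 0;
    while (size < target) {
        size = (size_t)(size * factor) + 1;  /* +1 guarantees progress */
        n++;
    }
    return n;
}

int main(void)
{
    printf("4/3   : %d reallocs\n", count_reallocs(4.0 / 3.0, 16, 1000000));
    printf("3/2   : %d reallocs\n", count_reallocs(1.5, 16, 1000000));
    printf("1.618 : %d reallocs\n", count_reallocs(1.618, 16, 1000000));
    printf("2     : %d reallocs\n", count_reallocs(2.0, 16, 1000000));
    return 0;
}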


and, for the previous topic:
it is by some odd convention that when we see a division of indivisible
integers we realize that it has a non-integer output.

I guess it is also by similar convention that we realize that even roots of
negatives are imaginary and that non-integral exponents of negative bases are
complex...

 

cr88192

santosh said:
cr88192 said:
Flash Gordon said:
cr88192 wrote, On 23/08/07 11:12:

not optimizations. shifts are how one typically does these things...

Not if one is being sensible. [ ... ]
I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.

yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think
divides...
Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need
to.
'tricks'?...

shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...

The point, I think, is that using right shift for achieving the effect of
division is not completely portable. It used to be a viable alternative at
a time when optimisation was primitive, but it's useless and flaky
nowadays, since almost any compiler is going to optimise a division into
shifts, if it can.

yes, ok.

Not the devices themselves, but their uses in specific cases, e.g., type
punning, casts that discard data, etc.

ok.



No, in terms of their effect according to the rules of arithmetic. Numbers
may be represented as bits in computers, but that is no reason to think
of every arithmetic operation in terms of its effect at the bit
representation level, except when it _is_ required.

ok. people then thinking in decimal rather than bits (or in math where we
think of everything in terms of transforms and rules), ok...

Personally, an amalgamation of all three, plus imagination of other types of
sensory inputs, as appropriate.

I see the text. sometimes there is a faint echo of words (though primarily
when writing or re-reading/thinking). text and images are often remembered
as text and images.

maths are often worked out on visual representations (3D geometric, or
visual representations of the expressions, or sometimes they are
intermixed).

I don't visualize things when reading. then again, I usually also don't
really read (or terribly much enjoy) fiction either... have read a few
fictional books though.

memories of stories have some vague partial imagery, but mostly of things
that were explicitly described I suspect...


watching movies or animes is good though.
music is also good.
 

CBFalconer

cr88192 said:
.... snip ...

but on integers, bit ops are arithmetic...

makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...

I can see ways of deriving 4, 19, and 3. The base must be at least
octal, and the items must be at least 4 bits wide. However I see
no way of arriving at 8.
 

Peter J. Holzer

Philip said:
struct mybuffer_t {
    unsigned char *data;
    size_t size;  /* size of buffer allocated */
    size_t index; /* index of first unwritten member of data */
};
[...]
int resizeBuffer(MyBuffer *buf) {
    size_t inc = buf->size;
    unsigned char *tmp = NULL;
    while (inc > 0 &&
           (tmp = realloc(buf->data, buf->size + inc)) == NULL) {

You can change this test slightly to catch size_t overflow at the same
time:

    while ((size_t)(buf->size + inc) > buf->size &&
           (tmp = realloc(buf->data, buf->size + inc)) == NULL) {
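
A minimal sketch of how the complete function might look with that check
folded in, assuming a MyBuffer typedef for struct mybuffer_t and a
halve-the-increment retry in the elided loop body (both are assumptions, not
part of the original post):

#include <stdlib.h>

typedef struct mybuffer_t {
    unsigned char *data;
    size_t size;   /* size of buffer allocated */
    size_t index;  /* index of first unwritten member of data */
} MyBuffer;

/* Try to grow buf->data by buf->size bytes, retrying with smaller
   increments while realloc fails. The first clause of the loop condition
   also stops if buf->size + inc would wrap around SIZE_MAX (and it covers
   inc == 0, since size + 0 is not greater than size).
   Returns 0 on success, -1 on failure. */
int resizeBuffer(MyBuffer *buf)
{
    size_t inc = buf->size ? buf->size : 1;
    unsigned char *tmp = NULL;

    while ((size_t)(buf->size + inc) > buf->size &&
           (tmp = realloc(buf->data, buf->size + inc)) == NULL) {
        inc /= 2;   /* assumed retry strategy: halve the increment */
    }
    if (tmp == NULL)
        return -1;

    buf->data = tmp;
    buf->size += inc;
    return 0;
}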

hp
 

Peter J. Holzer

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think divides...


if I were thinking of a divide, then i/3 is an obvious difference from i/2.
if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks that
'i>>1' is actually 'i/2'...

But you were thinking of divides - you explicitly stated that you
wanted an increase of 50% (1/2), 33% (ca. 1/3) or 25% (1/4). Since 1/2,
1/3 and 1/4 is a nice progression, I find it much more likely that you
derived ((i>>2) + (i>>4) + (i>>6)) as an approximation of 1/3 than
the other way around. If you had been thinking of shifts, you would
probably have chosen ((i>>2) + (i>>3)) as the middle point between
(i>>1) and (i>>2).
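
As a quick sanity check on those sums (a throwaway sketch with an arbitrary
sample value): (i>>2)+(i>>4)+(i>>6) is 21/64 of i, roughly a third, while
(i>>2)+(i>>3) is 3/8 of i, exactly halfway between a half and a quarter.

#include <stdio.h>

int main(void)
{
    unsigned int i = 1024;   /* arbitrary sample value */

    /* 1/4 + 1/16 + 1/64 = 21/64, about 0.328, close to 1/3 */
    printf("(i>>2)+(i>>4)+(i>>6) = %u, i/3 = %u\n",
           (i >> 2) + (i >> 4) + (i >> 6), i / 3);

    /* 1/4 + 1/8 = 3/8 = 0.375, midway between 1/2 and 1/4 */
    printf("(i>>2)+(i>>3) = %u, 3*i/8 = %u\n",
           (i >> 2) + (i >> 3), 3 * i / 8);
    return 0;
}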

hp
 

cr88192

Peter J. Holzer said:
Or just in terms of numbers regardless of any specific representation.

however this would work I guess...


actually, I think if I think of a specific-sized integer, I see a fixed
square with hex-digits inside.
if I think of other numbers, I see them as a decimal version. floats are
decimal but associated with a square (the number is fit inside the square).

I think they may also be overlaid with a name and other information.

I think idle thinking results in a good deal of visual "shadowing". I think
about something, and stuff goes on in some kind of semi-diagrammatic head-UI.

things in said head-UI look different than in traditional UIs, mostly light
on dark rather than dark on light (as is typical in windows...).


or something...
 

Peter J. Holzer

Of course that's what the original proposal did: increase by 100% of
the current size. If you start from one, this means you only allocate
power-of-two sized blocks.

A problem with any approach is that it might interact badly with the
malloc() implementation. Imagine an implementation that always
allocates powers of two but uses sizeof(void *) bytes of it to record
the size - if you always allocated powers of two you would end up
allocating nearly four times as much as you need (the power-of-two buffer is
already up to twice the live data, and the allocator's rounding up to the
next power of two doubles that again).

I actually ran into this problem when I tested my dynamic buffer
implementation on several systems in the early 1990's. I found that
increasing by a factor of 1.5 worked well across all the implementations
I tested at that time, but of course that's no guarantee that it works
as well for all possible implementations (or even most implementations
today).
I recommend separating out the increment algorithm so that it can
easily be changed if it proves to be bad on some platform.

Yup. Did that, too, at the time:

/* Macro: DA_GROW
* Purpose: Return the new size of an array which had size |a| and must
* include index |b|.
* This macro may be changed by the application, but the default is
* expected to be useful for many applications.
*
* Algorithm: It is assumed that most arrays will grow linearly,
* that is, |b| will equal |a|. To avoid calling realloc for every
* element added, the size is multiplied by a constant factor.
* The factor is 1.5 because a factor of 2 has produced extreme
* fragmentation with some allocators.
* For the case where this expansion is not sufficient to reach
* index |b| we do not guess about the future growth of the array
* and make it just large enough.
* This algorithm is good enough to start at zero, but the
* steps will be very small at first (0, 1, 2, 3, 4, 6, 9, 13, ...)
* so it might be a good idea to start with some moderate index.
*/
#ifndef DA_GROW
#define DA_GROW(a,b) MAX((a) * 3 / 2, (b) + 1)
#endif

(the whole thing was entirely implemented as macros, not as functions)
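
A usage sketch, assuming a MAX definition and an ensure_capacity wrapper
that are not part of the original macro package:

#include <stdlib.h>

#ifndef MAX
#define MAX(x, y) ((x) > (y) ? (x) : (y))
#endif

#ifndef DA_GROW
#define DA_GROW(a,b) MAX((a) * 3 / 2, (b) + 1)
#endif

/* Make sure the array *p (currently *size elements) can be indexed at
   'index', growing it with DA_GROW if necessary.
   Returns 0 on success, -1 if realloc fails. */
static int ensure_capacity(int **p, size_t *size, size_t index)
{
    size_t new_size;
    int *tmp;

    if (index < *size)
        return 0;

    new_size = DA_GROW(*size, index);
    tmp = realloc(*p, new_size * sizeof **p);
    if (tmp == NULL)
        return -1;

    *p = tmp;
    *size = new_size;
    return 0;
}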

hp
 

Flash Gordon

cr88192 wrote, On 24/08/07 02:46:
but, a float and an integer are different concepts though...
not sure of why one would consider using a shift on a float...

Because you consider shift an appropriate method of halving a number.
ok, makes sense.

So will you move to the less error prone system then?
the situations are different though.
as noted above, an integer and a float are conceptually different.
if I am speaking to someone do I count in floats? no, I count in integers...

I measure sizes in all sorts of different systems not all of which are
integers.
it works on integers, integers are the context and the situation.
sizes are generally not negative either.

They are sometimes.
If a spec says a number is in bits 3 to 7 then getting it to the correct
place is shifting.
there is no problem as I see it.

I thought you admitted to an error that would not be made if using
simple division. For one value you certainly suggested something more
complex than simple division.
ok, but then one has to differentiate: when is the concept shifting bits and
when is it division?...
this is a subjective matter.

I honestly cannot conceive of why one would consider shifting to be the
natural way of scaling a value. Would you convert from inches to
centimetres by shifting? Would you talk about doubling your salary or
shifting it one bit to the left? Would you tell someone that you
have doubled the storage capacity of a disk array or shifted it one bit
to the left? Or that the free space has been shifted one bit to the
right instead of halved?

Changing the size of a buffer is NOT linked to the representation of the
size, shifting is.
but on integers, bit ops are arithmetic...

Nope. Right shift is a logical shift on some processors *not*
arithmetic. If anyone ever builds a trinary machine then the shift
operation provided by the processor will *not* be multiply/divide by 2.
On a machine using BCD it isn't either, and BCD has been used for
representing integer values.
makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...

I can't see what point you are trying to make, unless it is why you
should *not* use a shift for scaling.
if one is confused as to whether or not they are dealing with an integer...

You still keep forgetting that it only works for half the integer types
C provides, a sure sign in my opinion that you should stay well away
from the shift operator in C.

You are confused about values and specific representations. Binary is
not the only representation for numbers, it just happens to be the
current vogue in computing.
what if I meant to grab an apple but instead grabbed a potato and proceeded
to eat it as said apple. people would look oddly, and then maybe one's
stomach gets irritated from the raw potato...

so, casually eating an object raw is valid for an apple but not for said
potato.
likewise, we put potatoes in soup but not apples.

same difference really...

All valid analogies for why you should be using division for scaling not
shifting.

<snip philosophy ramblings that seem to have no bearing on the matter at
hand>
potentially, but this makes an assumption about one's thought process.
if one always types shifts, one thinks shifts, one does not think division
unless one were first thinking of division, which I assert that I was not
(which is why I think I missed such an obvious answer).

So if someone tells you to increase a buffer by 81% you would think in
terms of shifts?
different context.

one machine is probably an x86 in real mode with an int being 16 bits and
using shr.
the other is probably using sar.

Nope, one machine is not an x86 and does not have an arithmetic shift at
all.
it is an operation relevant to a situation, that situation being positive
integers...

So when you have spent half your money do you think that you have
shifted your resources one bit to the right?
I was not claiming it was also sane for floats or for negatives.

Why? They are just numbers. In any case, you keep referring to integers
rather than unsigned integers, so either you are repeatedly making a
mistake and therefore showing why you should not be using shift or you
are being sloppy in a way that leads to mistakes thus showing why using
shift is inadvisable.
maybe.
if other people ever actually read said code.

You were advising on how someone else should do things, therefore
someone other than you *will* see the code.
except that linear is not a curve...

Check the definition of a curve in maths and you will find that it *is*
a curve. You will also find plenty of other growth curves in nature if
you look.
the question is then what curve gives the best performance, which may depend
on the situation.
it is a tradeoff really between number of reallocations and wasted space.

4/3 is more conservative with space than 3/2, which is still more
conservative than the golden ratio.

so the question is then of rate of growth and likelihood of continued
growth.

Not only the likelihood of continued growth, but how much it is likely
to grow if it does grow. All of which was one of my points. Why when
someone says you need to select the best growth curve would you
"correct" that to saying they should use some specific growth curve when
you don't know the details of what the input is?
and, for the previous topic:
it is by some odd convention that when we see a division of indivisible
integers we realize that it has a non-integer output.

Not if you know C you don't. Or Pascal. Or Modula 2. Or assembler. If
you see integer division you expect an integer result because you know
that when working in integers you are working in integers.
I guess it is also by similar convention that we realize that even roots of
negatives are imaginary and that non-integral exponents of negative bases are
complex...

All completely irrelevant to the point. Why use something dependent on
*representation* when there is a more natural operator for scaling, i.e.
something that does not depend on representation.

Please don't quote signatures, the bit typically after a "-- ", unless
you are commenting on them.
 

cr88192

Flash Gordon said:
cr88192 wrote, On 24/08/07 02:46:

Because you consider shift an appropriate method of halving a number.

as noted, if that number is an integer.

I personally regard integers and non-integers as different concepts, with
different behaviors, semantics, and rules.

after all, with integers, 3/4==0, but with non-integers or reals, the answer
is 0.75...

likewise I consider reals and complexes to be different concepts.

So will you move to the less error prone system then?

I write whatever I write really.
in the past, I have never really seen any major problem with it.

if there were some problem, as I perceive it, then I probably would have
changed it long ago (before writing many hundreds of kloc using these kind
of conventions).

I measure sizes in all sorts of different systems not all of which are
integers.

lengths are reals, often.

sizes of arrays are not.
counting is not.

we don't say '3.85 apples' because one of them is small, or '4.25' because
one is large...
the count is 4 apples.

They are sometimes.
If a spec says a number is in bits 3 to 7 then getting it to the correct
place is shifting.

I do this often as well.
for example, I have a good deal of packed-integer based types which I modify
via masks and shifts...

I thought you admitted to an error that would not be made if using simple
division. For one value you certainly suggested something more complex
than simple division.

yes, this was a faulty thought, something I would have likely noticed and
fixed later for looking stupid...

I think it was because I was thinking of percentages, and this is the
typical way I manipulate things via percentages.

'17% of i' => '(i*17)/100'.

"i's percentage of j" '((i*100)/j)'.

I honestly cannot conceive of why one would consider shifting to be the
natural way of scaling a value. Would you convert from inches to
centimetres by shifting? Would you talk about doubling your salary or
shifting it one bit to the left? Would you tell someone that you have
doubled the storage capacity of a disk array or shifted it one bit to the
left? Or that the free space has been shifted one bit to the right instead
of halved?

would probably not say it as such, but mentally I often use shifting in
performing calculations, as I find it easier than multiplication or
division.

Changing the size of a buffer is NOT linked to the representation of the
size, shifting is.

a buffer's size, however, is naturally constrained to being a positive
integer.

Nope. Right shift is a logical shift on some processors *not* arithmetic.
If anyone ever builds a trinary machine then the shift operation provided
by the processor will *not* be multiply/divide by 2. On a machine using
BCD it isn't either, and BCD has been used for representing integer
values.

maybe...

however, I reason, almost none of my crap is ever likely to be run on
something non-x86-based, much less something so far-reachingly different,
which I would be unlikely even to consider coding for, assuming I ever even
encountered such a beast...

I can't see what point you are trying to make, unless it is why you should
*not* use a shift for scaling.

the operations make sense to humans as well, if they know them...

You still keep forgetting that it only works for half the integer types C
provides, a sure sign in my opinion that you should stay well away from
the shift operator in C.

was never saying it provably worked on negative integers.
however, it does work with the compilers I am familiar with.

You are confused about values and specific representations. Binary is not
the only representation for numbers, it just happens to be the current
vogue in computing.

theoretical argument, maybe, but IMO of little practical concern. I don't
think binary will go away anytime soon, as doing so would break much of the
software in existence at this point, and assuming such a change occurs, it
will not matter since the mass of software would be getting
rewritten/replaced anyway.

I say though, not only is it valid for computers, but also humans...

All valid analogies for why you should be using division for scaling not
shifting.

<snip philosophy ramblings that seem to have no bearing on the matter at
hand>


So if someone tells you to increase a buffer by 81% you would think in
terms of shifts?

naturally, I would have thought in one of the options I originally provided
'((i*81)/100)'.
why: because this value, as it so happens, does not have an integer
reciprocal.

Nope, one machine is not an x86 and does not have an arithmetic shift at
all.

ok, abstraction failing, it uses a 16 bit int.

So when you have spent half your money do you think that you have shifted
your resources one bit to the right?

very often, this is how I reason about some things.

Why? They are just numbers. In any case, you keep referring to integers
rather than unsigned integers, so either you are repeatedly making a
mistake and therefore showing why you should not be using shift or you
are being sloppy in a way that leads to mistakes thus showing why using
shift is inadvisable.

'unsigned integer' is longer to type, however in context I repeatedly
indicate that the numbers are positive, an indication that, whether the
storage is an integer or unsigned integer, the value is constrained to be
positive.

'23>>1' is '11' regardless of it being an int or uint.

You were advising on how someone else should do things, therefore someone
other than you *will* see the code.

well, this is usenet, not one's codebase.

Check the definition of a curve in maths and you will find that it *is* a
curve. You will also find plenty of other growth curves in nature if you
look.

x^2 is a curve.
2^x is a curve.
x^0.5 is a curve.

x is not, it is linear, and thus not a curve.

Not only the likelihood of continued growth, but how much it is likely to
grow if it does grow. All of which was one of my points. Why when someone
says you need to select the best growth curve would you "correct" that to
saying they should use some specific growth curve when you don't know the
details of what the input is?


Not if you know C you don't. Or Pascal. Or Modula 2. Or assembler. If you
see integer division you expect an integer result because you know that
when working in integers you are working in integers.

I was talking about fractions or rationals here.

humans have a convention that division of two integers leads to a non-integer
rather than an integer; that computers do not is a difference. it shows
that integers are not numbers in a strictly traditional sense, because the
behavior is different.

All completely irrelevant to the point. Why use something dependent on
*representation* when there is a more natural operator for scaling, i.e.
something that does not depend on representation.

representation is value, and implementation is definition...


Please don't quote signatures, the bit typically after a "-- ", unless you
are commenting on them.
 

cr88192

Peter J. Holzer said:
But you were thinking of divides - you explicitly stated that you
wanted an increase of 50% (1/2), 33% (ca. 1/3) or 25% (1/4). Since 1/2,
1/3 and 1/4 is a nice progression, I find it much more likely that you
derived ((i>>2) + (i>>4) + (i>>6)) as an approximation of 1/3 than
the other way around. If you had been thinking of shifts, you would
probably have chosen ((i>>2) + (i>>3)) as the middle point between
(i>>1) and (i>>2).

an odd assertion; however, I am not sure how that chain of thought would
work, so it was not likely employed in my case.

in any case, my memory has faded out now, I no longer remember.
 

Peter J. Holzer

however this would work I guess...

Works for me. The number nineteen is a specific number (the successor of
eighteen in the natural numbers), regardless of whether it is
represented as "19", "13", "10011", "IXX", "||||| ||||| ||||| ||||",
"nineteen", "neunzehn", "19.0", "1.9E1", or whatever.

actually, I think if I think of a specific-sized integer, I see a fixed
square with hex-digits inside.

I think of objects in a similar sense (I prefer rectangles to squares),
but not of numbers. Numbers are values which can be put into those
boxes, not the boxes themselves. And I usually don't imagine even
numeric objects to be in a specific base (unless the fact that they are
in fact stored as binary is important - then I think of them in binary).
if I think of other numbers, I see them as a decimal version. floats are
decimal but associated with a square (the number is fit inside the square).

Floats are the same as integers: Normally the base doesn't matter and I
don't imagine any specific base (Oh, I write FP numbers in decimal, but
I don't imagine them to be decimal - that's just a notation). If it
does matter, I imagine them to be in the base they are actually stored
in (binary, usually).

hp
 

Peter J. Holzer

cr88192 wrote, On 24/08/07 02:46:

I honestly cannot conceive of why one would consider shifting to be the
natural way of scaling a value. Would you convert from inches to
centimetres by shifting?

Since lengths are rarely represented in base 2.54, no.

However, I consider conversion from centimeters to meters shifting in
base 10 (throw away the last two digits, or move the decimal point two
places to the left, whichever you prefer). Scaling by a power of your
base is shifting. And that does work for real numbers, too, as long as
they are written without an exponent:

002.54 cm = 0.0254 m

If you use an exponent in the notation, the shift turns into an
addition/subtraction on the exponent:

2.54E0 cm = 2.54E-2 m

So whether a scaling process can be seen as a shift depends on the
representation.
Would you talk about doubling your salary or
shifting it one bit to the left? Would you tell someone that you
have doubled the storage capacity of a disk array or shifted it one bit
to the left? Or that the free space has been shifted one bit to the
right instead of halved?

I don't consider these "scaling". They are changes of actual quantities.

Changing the size of a buffer is NOT linked to the representation of the
size, shifting is.
Right.



Nope. Right shift is a logical shift on some processors *not*
arithmetic. If anyone ever builds a trinary machine then the shift
operation provided by the processor will *not* be multiply/divide by 2.
On a machine using BCD it isn't either, and BCD has been used for
representing integer values.

This is wrong. N1124 says:


| The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits
| are filled with zeros. If E1 has an unsigned type, the value of the
| result is E1 × 2^{E2}, reduced modulo one more than the maximum value
| representable in the result type. If E1 has a signed type and
| nonnegative value, and E1 × 2^{E2} is representable in the result type,
| then that is the resulting value; otherwise, the behavior is undefined.
|
| The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has
| an unsigned type or if E1 has a signed type and a nonnegative value, the
| value of the result is the integral part of the quotient of E1 / 2^{E2}. If
| E1 has a signed type and a negative value, the resulting value is
| implementation-defined.

(6.5.7: Bitwise Shift Operators. I indicated superscript with ^{...})

So the effect of the shift operations in C (as opposed to some generic
concept of shifting) is explicitly defined in terms of multiplications
and divisions by powers of two. Even on a trinary machine, 1 << 3 must
be equal to 8. If that makes the implementation slow, tough luck.
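
A minimal illustration of those guarantees (the unsigned and non-negative
results are fixed by the standard; the negative cases, noted in the comment,
are not):

#include <stdio.h>

int main(void)
{
    unsigned int u = 5;
    int s = 40;

    printf("1 << 3  = %d\n", 1 << 3);    /* guaranteed to be 8 */
    printf("5u << 2 = %u\n", u << 2);    /* 5 * 2^2 = 20, reduced mod UINT_MAX+1 */
    printf("40 >> 3 = %d\n", s >> 3);    /* 40 / 2^3 = 5, value is non-negative */
    /* (-1) >> 1 is implementation-defined and (-1) << 1 is undefined,
       so neither belongs in portable code */
    return 0;
}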

hp
 

cr88192

Peter J. Holzer said:
Works for me. The number nineteen is a specific number (the successor of
eighteen in the natural numbers), regardless of whether it is
represented as "19", "13", "10011", "IXX", "||||| ||||| ||||| ||||",
"nineteen", "neunzehn", "19.0", "1.9E1", or whatever.

I guess I work different.

my thoughts often work much like a notepad.
so, something is written down, that is how I think of it.

a number has to be converted to some other form for some tasks.
much of the time, abstract/approximate arithmetic is done with variable-length
lines, or visual representations of geometry.

not always consistent though...

I think of objects in a similar sense (I prefer rectangles to squares),
but not of numbers. Numbers are values which can be put into those
boxes, not the boxes themselves. And I usually don't imagine even
numeric objects to be in a specific base (unless the fact that they are
in fact stored as binary is important - then I think of them in binary).

ok.

if I see them, they are specified in some way or another. for data, it is
very close to the typical diagrams used in many specs, where the size/shape
of the box indicates the type, and the ordering the layout in memory.

for other things, I see graphs.
pointers are more often seen as links on a graph (and allocated memory as
various items attached to the graphs), and not as a spot on some super-long
line.


but, as noted, nothing is consistent. how I am thinking of it would
seemingly be the most major component of how I see it.

Floats are the same as integers: Normally the base doesn't matter and I
don't imagine any specific base (Oh, I write FP numbers in decimal, but
I don't imagine them to be decimal - that's just a notation). If it
does matter, I imagine them to be in the base they are actually stored
in (binary, usually).

in my case, I think I see them as decimal usually...
haven't dealt much with the binary/hex versions of floats though
(usually only in assembler).

I will usually see something like '1.0' and not something like '0x3F800000'.

or such...
 

Richard Heathfield

CBFalconer said:
After which dereferencing p is undefined behaviour.

Indeed. So what? The object representation is safely stored in a
separate object, so p is irrelevant now. The *value* is safe.
So what? Who cares if q == oldvalueof(p).

I suggest you review the discussion again.
You seem to have something mixed up.

For a certain value of "You", perhaps. :)
 

Peter J. Holzer

in my case, I think I see them as decimal usually...
haven't dealt much with the binary/hex versions of floats though
(usually only in assembler).

I will usually see something like '1.0' and not something like '0x3F800000'.

1.0 is also 1.0 in binary (0x3F800000 is hex, not binary, and it is a
reinterpretation of the bits of a floating-point number as an integer
- which is a very different thing from a fp number, although useful if
you have to write a software fp implementation).

Thinking about floating-point values as binary is very useful if you
have to worry about rounding. For example, if you think about fp numbers
as decimal, you expect this code to print exactly 10 lines:

float x;
for (x = 0; x != 1; x += 0.1) {
    printf("%f\n", x);
}

But if you think about them as binary, it is immediately obvious that
the loop may not terminate. 0.1 is just a different notation for 1/10,
and 1/10 is a periodic number in binary (just like 1/3 is in decimal):

0.0001100110011001100110011001100110011001100110011001...

So it needs to be approximated, and if you add 0.1+eps 10 times,
rounding after each step, you are unlikely to arrive at exactly 1.0.
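
A common way around that, sketched here for illustration rather than taken
from the thread, is to drive the loop with an integer and derive the float
from it:

#include <stdio.h>

int main(void)
{
    int i;

    /* the integer counter guarantees exactly 10 iterations; the float is
       recomputed from i each time instead of being accumulated */
    for (i = 0; i < 10; i++) {
        printf("%f\n", i / 10.0);
    }
    return 0;
}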

hp
 

cr88192

Peter J. Holzer said:
1.0 is also 1.0 in binary (0x3F800000 is hex, not binary, and it is a
reinterpretation of the bits of a floating-point number as an integer
- which is a very different thing from a fp number, although useful if
you have to write a software fp implementation).

ok, I misinterpreted the comment.

I know of binary and hex being different, but usually use/think hex given
the length of binary (maintaining binary numbers mentally takes a lot more
effort than hex numbers, so I primarily work in hex and can 'explode' into
binary if needed).

now, where I see them is when working with my compiler's assembler output,
where in many cases said floats end up in assembler as hex numbers.

Thinking about floating-point values as binary is very useful if you
have to worry about rounding. For example, if you think about fp numbers
as decimal, you expect this code to print exactly 10 lines:

float x;
for (x = 0; x != 1; x += 0.1) {
    printf("%f\n", x);
}

But if you think about them as binary, it is immediately obvious that
the loop may not terminate. 0.1 is just a different notation for 1/10,
and 1/10 is a periodic number in binary (just like 1/3 is in decimal):

0.0001100110011001100110011001100110011001100110011001...

So it needs to be approximated, and if you add 0.1+eps 10 times,
rounding after each step, you are unlikely to arrive at exactly 1.0.

ok, yes...

the frequent need of using an 'epsilon' (whose exact optimal value is highly
variable but often also very critical...).

or such...
 

cr88192

CBFalconer said:
I can see ways of deriving 4, 19, and 3. The base must be at least
octal, and the items must be at least 4 bits wide. However I see
no way of arriving at 8.

crap, it was 4...

bad at these stupid and obvious errors...
this kind of thing was a major killer back in my calculus classes...
 

Barry Schwarz

cr88192 wrote, On 23/08/07 16:09:
snip


You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shit to halve a
floating point number as see how far it gets you.

I nominate this for best typo of the year. Do I hear a second?


Remove del for email
 
