RE: PEP 327: Decimal Data Type

Discussion in 'Python' started by Batista, Facundo, Jan 30, 2004.

  1. Stephen Horne wrote:

    #- My concern is that many people will use a decimal type just because
    #- it is there, without any consideration of whether they actually need it.

    Speed considerations are raised. You'll *never* get the performance of
    floats or ints (unless you have a coprocessor that handles decimal).


    #- I don't know what the solution should be, but I do think it needs to
    #- be considered.

    (In my dreams) I want "float" to be decimal. Always. No more binary.
    Maybe in ten years the machines will be fast enough to make this
    possible. Or it'll be implemented in hardware.

    Anyway, until then I'm happy having decimal floating point as a module.

    .. Facundo
     
    Batista, Facundo, Jan 30, 2004
    #1

  2. > (In my dreams) I want "float" to be decimal. Always. No more binary.
    > Maybe in ten years the machines will be fast enough to make this
    > possible. Or it'll be implemented in hardware.
    >
    > Anyway, until then I'm happy having decimal floating point as a module.



    In my dreams, data is optimally represented in base e, and every number
    is represented with a roughly equivalent amount of fudge-factor (except
    for linear combinations of the powers of e).

    Heh, thankfully my dreams haven't come to fruition.


    While decimal storage is useful for people and money, it is arbitrarily
    limiting. Perhaps a generalized BaseN module is called for. People
    could then generate floating point numbers in any base (up to perhaps
    base 36, [0-9a-z]). At that point, having a Money version is just a
    specific subclass of BaseN floating point.

    Of course then you have the same problem with doing math on two
    different bases as with doing math on rational numbers. Personally, I
    would favor a generalized BaseN class over just a single Base10 class.
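
    To make the idea concrete, a hypothetical sketch (not a worked
    proposal) that parses a BaseN digit string into an exact value:

    from fractions import Fraction

    def basen_to_fraction(digits, base):
        # '0.11' in base 2 -> Fraction(3, 4); one optional radix point
        whole, _, frac = digits.partition('.')
        value = Fraction(int(whole or '0', base), 1)
        for i, ch in enumerate(frac, start=1):
            value += Fraction(int(ch, base), base ** i)
        return value

    print(basen_to_fraction('0.11', 2))   # 3/4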

    - Josiah
     
    Josiah Carlson, Jan 30, 2004
    #2

  3. Josiah Carlson <> wrote in message news:<bvef14$919$>...
    > > (In my dreams) I want "float" to be decimal. Always. No more binary.


    I disagree.

    My reasons for this have to do with the real-life meaning of figures
    with decimal points. I can say that I have $1.80 in change on my
    desk, and I can say that I am 1.80 meters tall. But the two 1.80's
    have fundamentally different meanings.

    For money, it means that I have *exactly* $1.80. This is because
    "dollars" are just a notational convention for large numbers of cents.
    I can just as accurately say that I have an (integer) 180 cents, and
    indeed, that's exactly the way it would be stored in my financial
    institution's database. (I know because I used to work there.) So
    all you really need here is "int". But I do agree with the idea of
    having a class to hide the decimal/integer conversion from the user.
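
    A minimal sketch of such a class in modern Python (the Cents name and
    interface are hypothetical):

    class Cents:
        """Store money as integer cents; hide the dollars/cents conversion."""
        def __init__(self, dollars=0, cents=0):
            self.total = dollars * 100 + cents

        def __add__(self, other):
            result = Cents()
            result.total = self.total + other.total
            return result

        def __str__(self):
            return "$%d.%02d" % divmod(self.total, 100)

    print(Cents(1, 80))   # $1.80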

    On the other hand, when I say that I am 1.80 m tall, it doesn't imply
    that human height comes in discrete packets of 0.01 m. It means that
    I'm *somewhere* between 1.795 and 1.805 m tall, depending on my
    posture and the time of day, and "1.80" is just a convenient
    approximation. And it wouldn't be inaccurate to express my height as
    0x1.CC (=1.796875) or (base 12) 1.97 (=1.7986111...) meters, because
    these are within the tolerance of the measurement. So number base
    doesn't matter here.

    But even if the number base of a measurement doesn't matter, precision
    and speed of calculations often does. And on digital computers,
    non-binary arithmetic is inherently imprecise and slow. Imprecise
    because register bits are limited and decimal storage wastes them.
    (For example, representing the integer 999 999 999 requires 36 bits in
    BCD but only 30 bits in binary. Also, for floating point, only binary
    allows the precision-gaining "hidden bit" trick.) Slow because
    decimal requires more complex hardware. (For example, a BCD adder has
    more than twice as many gates as a binary adder.)
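
    Those figures check out in a couple of lines of modern Python:

    # 999 999 999 is nine BCD digits at 4 bits each = 36 bits,
    # but needs only 30 bits in plain binary.
    n = 999999999
    print(len(str(n)) * 4)   # 36
    print(n.bit_length())    # 30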

    > In my dreams, data is optimally represented in base e, and every number
    > is represented with a roughly equivalent amount of fudge-factor (except
    > for linear combinations of the powers of e).
    >
    > Heh, thankfully my dreams haven't come to fruition.


    Perhaps we'll have an efficient implementation within the next
    102.1120... years or so ;-)

    > While decimal storage is useful for...money


    Out of curiosity: Is there much demand for decimal floating point in
    places that have a fractionless currency like the Japanese yen?

    > Perhaps a generalized BaseN module is called for. People
    > could then generate floating point numbers in any base (up to perhaps
    > base 36, [0-9a-z]).


    If you're going to allow exact representation of multiples of 1/2,
    1/3, 1/4, ..., 1/36, 1/49, 1/64, 1/81, 1/100, 1/121, 1/125, 1/128,
    1/144, etc., I see no reason not to have exact representations of
    *all* rational numbers. Especially considering that rationals are
    much easier to implement. (See below.)

    > ... Of course then you have the same problem with doing math on two
    > different bases as with doing math on rational numbers.


    Actually, the problem is even worse.

    Like rationals, BaseN numbers have the problem that there are multiple
    representations for the same number (e.g., 1/2=6/12, and 0.1 (2) = 0.6
    (12)). But rationals at least have a standardized normalization. We
    can agree that 1/2 should be represented as 1/2 and not
    -131/-262, but should BaseN('0.1', base=2) + BaseN('0.1', base=4) be
    BaseN('0.11', 2) or BaseN('0.3', 4)?
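
    For what it's worth, the fractions module that later landed in Python
    2.6 picks exactly that normalization (lowest terms, positive
    denominator):

    from fractions import Fraction

    print(Fraction(6, 12))        # 1/2
    print(Fraction(-131, -262))   # 1/2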

    The same potential problem exists with ints, but Python (and afaik,
    everything else) avoids it by internally storing everything in binary
    and not keeping track of its representation. This is why "print 0x68"
    produces the same output as "print 104". BaseN would violate this
    separation between numbers and their notation, and imho that would
    create a lot more problems than it solves.

    Including the problem that mixed-based arithmetic will require:
    * approximating at least one of the numbers, in which case there's no
    advantage over binary, or
    * finding a "least common base", but what if that base is greater than
    36 (or 62 if lowercase digits are distinguished from uppercase ones)?
     
    Dan Bishop, Jan 31, 2004
    #3
  4. On 31 Jan 2004 01:01:41 -0800, (Dan Bishop) wrote:

    >I disagree.


    <snip>

    >But even if the number base of a measurement doesn't matter, precision
    >and speed of calculations often does. And on digital computers,
    >non-binary arithmetic is inherently imprecise and slow. Imprecise
    >because register bits are limited and decimal storage wastes them.
    >(For example, representing the integer 999 999 999 requires 36 bits in
    >BCD but only 30 bits in binary. Also, for floating point, only binary
    >allows the precision-gaining "hidden bit" trick.) Slow because
    >decimal requires more complex hardware. (For example, a BCD adder has
    >more than twice as many gates as a binary adder.)


    I think BCD is a slightly unfair comparison. The efficiency of packing
    decimal digits into binary integers increases as the size of each
    packed group of digits increases. For example, while 8 BCD digits
    require 32 bits, those 32 bits can encode 9 decimal digits, and while
    16 BCD digits require 64 bits, those 64 bits can encode 19 decimal
    digits.
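
    Checking the packing figures:

    # Nine full decimal digits fit in 32 bits, nineteen in 64.
    print(10**9 - 1 < 2**32)    # True
    print(10**10 - 1 < 2**32)   # False: a tenth digit doesn't fit
    print(10**19 - 1 < 2**64)   # True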

    The principle is correct, though - binary is 'natural' for computers
    where decimal is more natural for people, so decimal representations
    will be relatively inefficient even with hardware support. Lower
    precision, because a mantissa with the same number of bits can only
    represent a smaller range of values. Slow (or expensive), because of
    the relative complexity of handling decimal using binary logic.

    >> Perhaps a generalized BaseN module is called for. People
    >> could then generate floating point numbers in any base (up to perhaps
    >> base 36, [0-9a-z]).


    <snip>

    >> ... Of course then you have the same problem with doing math on two
    >> different bases as with doing math on rational numbers.

    >
    >Actually, the problem is even worse.
    >
    >Like rationals, BaseN numbers have the problem that there are multiple
    >representations for the same number (e.g., 1/2=6/12, and 0.1 (2) = 0.6
    >(12)). But rationals at least have a standardized normalization. We
    >can agree that 1/2 should be represented as 1/2 and not
    >-131/-262, but should BaseN('0.1', base=2) + BaseN('0.1', base=4) be
    >BaseN('0.11', 2) or BaseN('0.3', 4)?


    I don't see the point of supporting all bases. The main ones are of
    course base 2, 8, 10 and 16. And of course base 8 and 16
    representations map directly to base 2 representations anyway - that
    is why they get used in the first place.

    If I were supporting loads of bases (and that is a big 'if') I would
    take an approach where each base type directly supported arithmetic
    with itself only. Each base would be imported separately and be
    implemented using code optimised for that base, so that the base
    wouldn't need to be maintained as - for instance - a member of the
    class. There would be a way to convert between bases, but that would
    be the limit of the interaction.

    If I needed more than that, I'd use a rational type - I speak from
    experience as I set out to write a base N float library for C++ once
    upon a time and ended up writing a rational instead. A rational, BTW,
    isn't too bad to get working but that's as far as I got - doing it
    well would probably take a lot of work. And if getting Base N floats
    working was harder than for rationals, getting them to work well would
    probably be an order of magnitude harder - for no real benefit to 99%
    or more of users.

    Just because a thing can be done, that doesn't make it worth doing.

    >but what if that base is greater than
    >36 (or 62 if lowercase digits are distinguished from uppercase ones)?


    For theoretical use, converting to a list of integers - one integer
    representing each 'digit' - would probably work. If there is a real
    application, that is.
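
    That conversion is a short loop (a hypothetical helper, for any
    base >= 2):

    def to_digits(n, base):
        # non-negative integer -> list of digits, most significant first
        digits = []
        while n:
            n, r = divmod(n, base)
            digits.append(r)
        return digits[::-1] or [0]

    print(to_digits(104, 16))   # [6, 8], i.e. 0x68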


    --
    Steve Horne

    steve at ninereeds dot fsnet dot co dot uk
     
    Stephen Horne, Jan 31, 2004
    #4
  5. > If I needed more than that, I'd use a rational type - I speak from
    > experience as I set out to write a base N float library for C++ once
    > upon a time and ended up writing a rational instead. A rational, BTW,
    > isn't too bad to get working but that's as far as I got - doing it
    > well would probably take a lot of work. And if getting Base N floats
    > working was harder than for rationals, getting them to work well would
    > probably be an order of magnitude harder - for no real benefit to 99%
    > or more of users.


    I also wrote a rational type (last summer). It took around 45 minutes.
    Floating point takes a bit longer to get right.

    > Just because a thing can be done, that doesn't make it worth doing.


    Indeed :)

    - Josiah
     
    Josiah Carlson, Jan 31, 2004
    #5
  6. On Sat, 31 Jan 2004 09:35:09 -0800, Josiah Carlson
    <> wrote:

    >> If I needed more than that, I'd use a rational type - I speak from
    >> experience as I set out to write a base N float library for C++ once
    >> upon a time and ended up writing a rational instead. A rational, BTW,
    >> isn't too bad to get working but that's as far as I got - doing it
    >> well would probably take a lot of work. And if getting Base N floats
    >> working was harder than for rationals, getting them to work well would
    >> probably be an order of magnitude harder - for no real benefit to 99%
    >> or more of users.

    >
    >I also wrote a rational type (last summer). It took around 45 minutes.
    > Floating point takes a bit longer to get right.


    Was your implementation the 'not too bad to get working' or the 'doing
    it well'?

    For instance, there is the greatest common divisor that you need for
    normalising the rationals.

    I used the Euclidean algorithm for the GCD. Not too bad, certainly
    better than using prime factorisation, but as I understand it doing
    the job well means using a better algorithm for this - though I never
    did bother looking up the details.
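
    For reference, Euclid's algorithm is only a few lines of Python:

    def gcd(a, b):
        # Euclid: repeatedly replace (a, b) with (b, a mod b).
        while b:
            a, b = b, a % b
        return abs(a)

    print(gcd(12, 18))   # 6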

    Actually, as far as I remember, just doing the arbitrary length
    integer division functions took me more than your 45 minutes. The long
    division algorithm is simple in principle, but I seem to remember
    messing up the decision of how many bits to shift the divisor after a
    subtraction. Of course in Python, that's already done.

    Maybe I was just having a bad day. Maybe I remember it worse than it
    really was. Still, 45 minutes doesn't seem too realistic in my memory,
    even for the 'not too bad to get working' case.


    --
    Steve Horne

    steve at ninereeds dot fsnet dot co dot uk
     
    Stephen Horne, Jan 31, 2004
    #6
  7. > Was your implementation the 'not too bad to get working' or the 'doing
    > it well'?


    I thought it did pretty well. But then again, I didn't really much
    worry about it or use it much. I merely tested to make sure it did the
    right thing and forgot about it.

    > For instance, there is the greatest common divisor that you need for
    > normalising the rationals.
    >
    > I used the Euclidean algorithm for the GCD. Not too bad, certainly
    > better than using prime factorisation, but as I understand it doing
    > the job well means using a better algorithm for this - though I never
    > did bother looking up the details.


    I also used Euclid's GCD, but last time I checked, it is a pretty
    reasonable algorithm. It runs in log(n) time, where n is the larger of
    the two values. Technically, it runs linearly in the amount of space
    the values take up, which is about as well as you can do.

    > Actually, as far as I remember, just doing the arbitrary length
    > integer division functions took me more than your 45 minutes. The long
    > division algorithm is simple in principle, but I seem to remember
    > messing up the decision of how many bits to shift the divisor after a
    > subtraction. Of course in Python, that's already done.


    Ahh, integer division. I solved a related problem with long integers
    for Python in a programming competition my senior year of college
    (everyone else was using Java, the suckers) in about 15 minutes. We
    were to calculate 1/n, for some arbitrarily large n (where 1/n was a
    fraction that could be represented by base-10 integer division). Aside
    from I/O, it was 9 lines.
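
    A sketch of that kind of digit-by-digit computation (the contest
    details are assumed): the first decimal digits of 1/n by long
    division.

    def one_over_n(n, digits=20):
        # Long division: each step brings down a zero and emits one digit.
        out = []
        rem = 1
        for _ in range(digits):
            rem *= 10
            out.append(str(rem // n))
            rem %= n
        return "0." + "".join(out)

    print(one_over_n(7))   # 0.14285714285714285714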

    Honestly, I never implemented integer division in my rational type. For
    casts to floats,
    float(self.numerator)/float(self.denominator)+self.whole seemed just
    fine (I was using rationals with denominators in the range of 2-100 and
    total value < 1000).

    Thinking about it now, it wouldn't be very difficult to pull out my 1/n
    code and adapt it to the general integer division problem. Perhaps
    something to do later.

    > Maybe I was just having a bad day. Maybe I remember it worse than it
    > really was. Still, 45 minutes doesn't seem too realistic in my memory,
    > even for the 'not too bad to get working' case.


    For all the standard operations on a rational type, all you need is to
    reduce each operand to a single numerator/denominator pair; then all
    the numeric manipulation is trivial:
    a.n = a.numerator + a.whole*a.denominator
    a.d = a.denominator
    b.n = b.numerator + b.whole*b.denominator
    b.d = b.denominator

    a + b = rational(a.n*b.d + b.n*a.d, a.d*b.d)
    a - b = rational(a.n*b.d - b.n*a.d, a.d*b.d)
    a * b = rational(a.n*b.n, a.d*b.d)
    a / b = rational(a.n*b.d, a.d*b.n)
    a ** b, b is an integer >= 1 (binary exponentiation)


    One must remember to normalize on initialization, but that's not
    difficult. Functionally that's how my rational turned out. It wasn't
    terribly full featured, but it worked well for what I was doing.
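
    As runnable modern Python, a minimal sketch of the approach described
    above (normalizing once, in the constructor; the class name and method
    set are illustrative):

    from math import gcd

    class Rational:
        def __init__(self, n, d):
            if d == 0:
                raise ZeroDivisionError("zero denominator")
            if d < 0:                  # keep the sign in the numerator
                n, d = -n, -d
            g = gcd(n, d)
            self.n, self.d = n // g, d // g

        def __add__(self, o):
            return Rational(self.n * o.d + o.n * self.d, self.d * o.d)

        def __sub__(self, o):
            return Rational(self.n * o.d - o.n * self.d, self.d * o.d)

        def __mul__(self, o):
            return Rational(self.n * o.n, self.d * o.d)

        def __truediv__(self, o):
            return Rational(self.n * o.d, self.d * o.n)

        def __repr__(self):
            return "%d/%d" % (self.n, self.d)

    print(Rational(1, 2) + Rational(1, 3))   # 5/6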

    - Josiah
     
    Josiah Carlson, Feb 1, 2004
    #7
  8. Josiah Carlson <> writes:

    > One must remember to normalize on initialization, but that's not
    > difficult. Functionally that's how my rational turned out. It wasn't
    > terribly full featured, but it worked well for what I was doing.


    Straightforward rational implementations *are* easy. But when you
    start to look at some of the more subtle numerical issues, life
    rapidly gets hard.

    The key point (easy enough with Python, but bear with me) is that the
    numerator and denominator *must* be infinite-precision integers.
    Otherwise, rationals have as many rounding and representational issues
    as floating point numbers, and the characteristics of the problems
    differ in ways that make them *less* usable without specialist
    knowledge, not more.

    With Python, this isn't an onerous requirement, as Python Longs fit
    the bill nicely. But the next decision you have to make is how often
    to normalise. You imply (in your comment above) that you should only
    normalise on initialisation, but if you do that, your representation
    rapidly blows up, in terms of space used. Sure,
    8761348763287654786543876543/17522697526575309573087753086 is the same
    as 1/2, but the former uses a lot more space, and is going to be
    slower to compute with.

    But if you normalise every time, some theoretically simple operations
    can become relatively very expensive in terms of time. (Basically,
    things like addition, which suddenly require a GCD calculation).

    So you have to work out a good tradeoff, which isn't easy.
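
    The blow-up half is easy to see: add a value to itself repeatedly with
    the textbook rule and no reduction, and the digit counts roughly
    double at each step.

    n, d = 1, 3
    for step in range(6):
        n, d = n * d + n * d, d * d   # x + x, with no GCD reduction
        print(step, len(str(n)), len(str(d)))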

    There are other issues to consider, but that should be enough to
    demonstrate the sort of issues an "industrial strength" rational
    implementation must address.

    Of course, this isn't to say that every implementation *needs* to be
    industrial-strength. Only the user can say what's good enough for his
    needs.

    Paul.
    --
    This signature intentionally left blank
     
    Paul Moore, Feb 2, 2004
    #8

  9. > But if you normalise every time, some theoretically simple operations
    > can become relatively very expensive in terms of time. (Basically,
    > things like addition, which suddenly require a GCD calculation).


    If we are to take cues from standard Python numeric types, any
    mathematical calculation results in a new immutable object. Thusly,
    only normalizing on initialization is sufficient. Since that is the
    only time you ever get anything new, doing GCD on initialization is the
    minimum and maximum requirement.

    - Josiah
     
    Josiah Carlson, Feb 2, 2004
    #9
  10. In article <bvmh58$4hc$>,
    Josiah Carlson <> wrote:
    >
    >> But if you normalise every time, some theoretically simple operations
    >> can become relatively very expensive in terms of time. (Basically,
    >> things like addition, which suddenly require a GCD calculation).

    >
    >If we are to take cues from standard Python numeric types, any
    >mathematical calculation results in a new immutable object. Thus,
    >only normalizing on initialization is sufficient. Since that is the
    >only time you ever get anything new, doing GCD on initialization is the
    >minimum and maximum requirement.


    I agree, but that means we do a lot of initializations,
    so the performance in doing a computation would be about the
    same.

    I tried a decimal floating-point package just lately, for
    fun, based on long mantissas and int exponents. I used this
    approach to normalization, because I think it's natural, but
    I've been scared to benchmark the package. I should, I
    guess.
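
    A bare-bones sketch of that representation (the details of Mel's
    package are assumed, not known): value = mantissa * 10**exponent,
    normalized by folding trailing zeros into the exponent.

    class DecFloat:
        # value = m * 10**e, with m an arbitrary-precision integer
        def __init__(self, m, e=0):
            while m and m % 10 == 0:   # normalize trailing zeros
                m //= 10
                e += 1
            self.m, self.e = m, e

        def __add__(self, other):
            e = min(self.e, other.e)   # align exponents, add exactly
            return DecFloat(self.m * 10 ** (self.e - e)
                            + other.m * 10 ** (other.e - e), e)

        def __mul__(self, other):
            return DecFloat(self.m * other.m, self.e + other.e)

        def __repr__(self):
            return "%dE%+d" % (self.m, self.e)

    print(DecFloat(180, -2) * DecFloat(3))   # 54E-1, i.e. 5.4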

    Regards. Mel.
     
    Mel Wilson, Feb 3, 2004
    #10
  11. In article <>,
    Dan Bishop <> wrote:
    >
    >For money, it means that I have *exactly* $1.80. This is because
    >"dollars" are just a notational convention for large numbers of cents.
    > I can just as accuately say that have an (integer) 180 cents, and
    >indeed, that's exactly the way it would be stored in my financial
    >institution's database. (I know because I used to work there.) So
    >all you really need here is "int". But I do agree with the idea of
    >having a class to hide the decimal/integer conversion from the user.


    Really. What kind of financial institution was this? They didn't need
    to deal with any form of fractional pennies?
    --
    Aahz () <*> http://www.pythoncraft.com/

    "The joy of coding Python should be in seeing short, concise, readable
    classes that express a lot of action in a small amount of clear code --
    not in reams of trivial code that bores the reader to death." --GvR
     
    Aahz, Feb 5, 2004
    #11
  12. On 5 Feb 2004 09:18:12 -0500, (Aahz) wrote:

    >In article <>,
    >Dan Bishop <> wrote:
    >>
    >>For money, it means that I have *exactly* $1.80. This is because
    >>"dollars" are just a notational convention for large numbers of cents.
    >> I can just as accurately say that I have an (integer) 180 cents, and
    >>indeed, that's exactly the way it would be stored in my financial
    >>institution's database. (I know because I used to work there.) So
    >>all you really need here is "int". But I do agree with the idea of
    >>having a class to hide the decimal/integer conversion from the user.

    >
    >Really. What kind of financial institution was this? They didn't need
    >to deal with any form of fractional pennies?


    Does it really matter if they did? They may not deal in whole pennies,
    but I seriously doubt that they need infinite precision - integers
    with a predefined scaling factor (ie fixed point arithmetic) will, I
    suspect, handle those few jobs that counting in pennies can't.

    For instance, while certainly exchange rates involve fractional
    amounts (specified to a fixed number of places), the converted amounts
    will be rounded as account balances are recorded to the nearest penny,
    unless I'm very badly mistaken. The same applies to interest - the
    results get rounded before the balance is affected.

    So if the exchange rate is 1.83779 dollars to the UK pound, who can't
    cope with the following code?

    exchange_rate = 183779

    result = pounds * exchange_rate / 100000

    Assuming that rounding matches the programming language's default
    behaviour, of course, and that the width of the integers is
    sufficient.
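
    One caveat: plain integer division truncates rather than rounds. A
    hypothetical helper that rounds half up, with amounts in pence and the
    rate scaled by 10**5:

    RATE = 183779   # 1.83779 dollars/pound, scaled by 10**5

    def pence_to_cents(pence):
        # round half up to the nearest cent instead of truncating
        return (pence * RATE + 50000) // 100000

    print(pence_to_cents(100))   # 184 cents for one pound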


    That said, as I understand it, a lot of financial institutions have a
    lot of COBOL code. And from what I remember of programming in COBOL,
    the typical representation of numbers in both files and working
    storage uses decimal digits stored in a character string - at least
    that's what the picture strings specify in the source code. Given that
    the compiler knows the precision of every number, and assuming that
    there is no conversion to a more convenient representation internally,
    it shouldn't make much difference whether the number has a point or
    not.


    Personally, I wouldn't want to contradict Dan Bishop's claims - he has
    the experience in a financial institution, not me - but I suspect
    there is a fair amount of code used in many financial institutions
    that does in fact use a decimal representation, if only because of old
    COBOL code.


    --
    Steve Horne

    steve at ninereeds dot fsnet dot co dot uk
     
    Stephen Horne, Feb 6, 2004
    #12
  13. In article <>,
    Stephen Horne <> wrote:
    >On 5 Feb 2004 09:18:12 -0500, (Aahz) wrote:
    >>In article <>,
    >>Dan Bishop <> wrote:
    >>>
    >>>For money, it means that I have *exactly* $1.80. This is because
    >>>"dollars" are just a notational convention for large numbers of cents.
    >>> I can just as accurately say that I have an (integer) 180 cents, and
    >>>indeed, that's exactly the way it would be stored in my financial
    >>>institution's database. (I know because I used to work there.) So
    >>>all you really need here is "int". But I do agree with the idea of
    >>>having a class to hide the decimal/integer conversion from the user.

    >>
    >>Really. What kind of financial institution was this? They didn't need
    >>to deal with any form of fractional pennies?

    >
    >Does it really matter if they did? They may not deal in whole pennies,
    >but I seriously doubt that they need infinite precision - integers
    >with a predefined scaling factor (ie fixed point arithmetic) will, I
    >suspect, handle those few jobs that counting in pennies can't.


    That's mostly true (witness Tim Peters's FixedPoint.py). If you really
    want to debate this issue, read Cowlishaw first:
    http://www2.hursley.ibm.com/decimal/decarith.html
    --
    Aahz () <*> http://www.pythoncraft.com/

    "The joy of coding Python should be in seeing short, concise, readable
    classes that express a lot of action in a small amount of clear code --
    not in reams of trivial code that bores the reader to death." --GvR
     
    Aahz, Feb 6, 2004
    #13
  14. Stephen Horne <> wrote in message news:<>...
    > On 5 Feb 2004 09:18:12 -0500, (Aahz) wrote:
    >
    > >In article <>,
    > >Dan Bishop <> wrote:
    > >>
    > >>For money, it means that I have *exactly* $1.80. This is because
    > >>"dollars" are just a notational convention for large numbers of cents.
    > >> I can just as accurately say that I have an (integer) 180 cents, and
    > >>indeed, that's exactly the way it would be stored in my financial
    > >>institution's database. (I know because I used to work there.) So
    > >>all you really need here is "int". But I do agree with the idea of
    > >>having a class to hide the decimal/integer conversion from the user.

    > >
    > >Really. What kind of financial institution was this? They didn't need
    > >to deal with any form of fractional pennies?

    >
    > Does it really matter if they did? They may not deal in whole pennies,
    > but I seriously doubt that they need infinite precision - integers
    > with a predefined scaling factor (ie fixed point arithmetic) will, I
    > suspect, handle those few jobs that counting in pennies can't.


    And you would be right. For example, interest rates were always
    stored in thousandths of a percent.

    The only problem was that some of the third-party software we used
    made this scaling completely visible to the user. Our employees would
    occasionally forget the scaling factor, and this resulted in mistakes
    like having one of our CDs pay 445% interest instead of 4.45%.

    > That said, as I understand it, a lot of financial institutions have a
    > lot of COBOL code. And from what I remember of programming in COBOL,
    > the typical representation of numbers in both files and working
    > storage uses decimal digits stored in a character string - at least
    > that's what the picture strings specify in the source code.


    We had a lot of numbers in EBCDIC signed decimal. Even though our
    mainframe used ASCII.
     
    Dan Bishop, Feb 7, 2004
    #14
  15. Josiah Carlson <> wrote in message news:<bvjj49$c94$>...
    > > Was your implementation [of rationals] the 'not too bad to get working' or
    > > the 'doing it well'?

    ....
    > For all the standard operations on a rational type, all you need is to
    > make sure all you have is two pairs of numerators and denominators, then
    > all the numeric manipulation is trivial:

    ....
    > a + b = rational(a.n*b.d + b.n*a.d, a.d*b.d)
    > a - b = rational(a.n*b.d - b.n*a.d, a.d*b.d)
    > a * b = rational(a.n*b.n, a.d*b.d)
    > a / b = rational(a.n*b.d, a.d*b.n)


    Also,

    floor(a) = a.n // a.d
    a // b = floor(a / b)

    > a ** b, b is an integer >= 1 (binary exponentiation)


    It's even more trivial when b=0: The result is 1.

    And when b < 0, a ** b can be calculated as (1 / a) ** (-b)
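
    Putting the three cases together with binary exponentiation (reusing
    the hypothetical Rational sketch from earlier in the thread):

    def rat_pow(a, b):
        if b < 0:
            return rat_pow(Rational(a.d, a.n), -b)   # a**b == (1/a)**-b
        result = Rational(1, 1)                      # covers b == 0
        while b:
            if b & 1:
                result = result * a
            a = a * a
            b >>= 1
        return result

    print(rat_pow(Rational(2, 3), -2))   # 9/4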
     
    Dan Bishop, Feb 7, 2004
    #15
  16. In article <>,
    Dan Bishop <> wrote:
    >Stephen Horne <> wrote in message news:<>...
    >> On 5 Feb 2004 09:18:12 -0500, (Aahz) wrote:
    >>>In article <>,
    >>>Dan Bishop <> wrote:
    >>>>
    >>>>For money, it means that I have *exactly* $1.80. This is because
    >>>>"dollars" are just a notational convention for large numbers of cents.
    >>>> I can just as accurately say that I have an (integer) 180 cents, and
    >>>>indeed, that's exactly the way it would be stored in my financial
    >>>>institution's database. (I know because I used to work there.) So
    >>>>all you really need here is "int". But I do agree with the idea of
    >>>>having a class to hide the decimal/integer conversion from the user.
    >>>
    >>>Really. What kind of financial institution was this? They didn't need
    >>>to deal with any form of fractional pennies?

    >>
    >> Does it really matter if they did? They may not deal in whole pennies,
    >> but I seriously doubt that they need infinite precision - integers
    >> with a predefined scaling factor (ie fixed point arithmetic) will, I
    >> suspect, handle those few jobs that counting in pennies can't.

    >
    >And you would be right. For example, interest rates were always
    >stored in thousandths of a percent.
    >
    >The only problem was that some of the third-party software we used
    >made this scaling completely visible to the user. Our employees would
    >occasionally forget the scaling factor, and this resulted in mistakes
    >like having one of our CDs pay 445% interest instead of 4.45%.


    ...and that's a good argument for having a built-in type that handles
    the conversions automatically. Another issue is the different kinds of
    rounding. All in all, there are many kinds of already-solved problems
    that are taken care of by using the decimal float standard.
    --
    Aahz () <*> http://www.pythoncraft.com/

    "The joy of coding Python should be in seeing short, concise, readable
    classes that express a lot of action in a small amount of clear code --
    not in reams of trivial code that bores the reader to death." --GvR
     
    Aahz, Feb 11, 2004
    #16
