bradjpeek
The second edition of _The_Ruby_Way_ has an example similar to the
following:
irb(main):001:0> puts 'not equal' unless (3.2 - 2.0) == 1.2
not equal
=> nil
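For what it's worth, printing a few more digits shows where the mismatch
comes from: Float uses IEEE 754 doubles, so the subtraction doesn't land
exactly on 1.2 (these are the digits from my irb session; they may vary
slightly by platform):
irb(main):002:0> 3.2 - 2.0
=> 1.2000000000000002
irb(main):003:0> format("%.17f", 3.2 - 2.0)
=> "1.20000000000000018"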
The point is to illustrate why you might want to use BigDecimal (i.e.
so that 3.2 - 2.0 actually equals 1.2):
require 'bigdecimal'

# BigDecimal stores the values as exact decimals, so the subtraction
# and comparison behave the way the written math suggests.
x = BigDecimal("3.2")
y = BigDecimal("2.0")
z = BigDecimal("1.2")
p x - y == z ? "equal" : "not equal"   # prints "equal"
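If pulling in BigDecimal is more than you want, the usual Float-only
workaround is to compare within a small tolerance instead of with ==
(just a sketch; the 1e-9 epsilon here is an arbitrary choice):
# Treat two Floats as equal if they differ by less than a small epsilon.
EPSILON = 1e-9
diff = (3.2 - 2.0) - 1.2
puts diff.abs < EPSILON ? "equal" : "not equal"   # prints "equal"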
I'm fairly new to Ruby and don't do much programming, but when I saw
this example I was surprised that the default behavior is that
3.2 - 2.0 != 1.2.
To me, this violates the "Principle of least surprise", but I guess it
isn't a big deal, because I don't remember it being discussed in the
Programming Ruby book (though it certainly may have been).
Do other languages work this way?