I wrote a function to do this which seems slightly faster, but could
perhaps stand some optimization:
def pack_int32(n)
  str = '    '               # four-character buffer, one slot per byte
  str[3] = (n >> 24) & 0xff
  str[2] = (n >> 16) & 0xff
  str[1] = (n >> 8) & 0xff
  str[0] = n & 0xff
  str
end
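For what it's worth, here is a quick standalone sanity check (my addition, rewritten with String#setbyte so it runs on current Rubies) that the byte order produced above matches pack's explicit little-endian 'V' directive:

```ruby
# Standalone check that a byte-by-byte pack matches pack('V')
# (32-bit unsigned, little-endian).
def pack_int32(n)
  str = ("\0" * 4).b                  # four-byte binary buffer
  str.setbyte(3, (n >> 24) & 0xff)
  str.setbyte(2, (n >> 16) & 0xff)
  str.setbyte(1, (n >> 8) & 0xff)
  str.setbyte(0, n & 0xff)
  str
end

[0, 1, 2_000_000, 2**31 - 1, 2**32 - 1].each do |n|
  raise "mismatch for #{n}" unless pack_int32(n) == [n].pack('V')
end
```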
Here are the benchmark results vs the other methods mentioned:
                  user     system      total        real
[].pack(i):   6.234000   0.235000   6.469000 (  6.500000)
pack_int32:   5.719000   0.015000   5.734000 (  5.734000)
Marshal.dump: 6.594000   0.219000   6.813000 (  6.813000)
I included Marshal.dump for completeness, but agree that it doesn't
appear to be meant for this sort of thing. Here's the source to run
the benchmark:
require 'benchmark'

number = 2_000_000
n = 1_000_000

Benchmark.bm(12) do |x|
  x.report('[].pack(i):')   { n.times { [number].pack('i') } }
  x.report('pack_int32:')   { n.times { pack_int32(number) } }
  x.report('Marshal.dump:') { n.times { Marshal.dump(number) } }
end
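As an aside (my addition), String#unpack with the same directive reverses the transformation; note that 'i' packs in the machine's native byte order, while 'V' pins it to 32-bit little-endian:

```ruby
# Round trip: integer -> 4 bytes -> integer.
packed = [2_000_000].pack('V')   # => "\x80\x84\x1E\x00"
n = packed.unpack('V').first
raise 'round-trip failed' unless n == 2_000_000
```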
Using only the single number 2_000_000 seems to skew the results. I can
reproduce your numbers with your test, but if I change it slightly to use
a variety of integers, I get more balanced results:
require 'benchmark'

MAX = 2**30
n = 1_000_000
nums = (0..n).map { (rand * MAX).to_i }

Benchmark.bmbm do |x|
  x.report('pack(i):') { nums.each { |num| [num].pack('i') } }
  x.report('pack32:')  { nums.each { |num| pack_int32(num) } }
  x.report('Dump:')    { nums.each { |num| Marshal.dump(num) } }
end
Rehearsal --------------------------------------------
pack(i):   5.813000   0.109000   5.922000 (  5.984000)
pack32:    5.234000   0.000000   5.234000 (  5.281000)
Dump:      5.906000   0.125000   6.031000 (  6.063000)
---------------------------------- total: 17.187000sec

               user     system      total        real
pack(i):   5.687000   0.125000   5.812000 (  5.875000)
pack32:    5.141000   0.016000   5.157000 (  5.188000)
Dump:      6.000000   0.078000   6.078000 (  6.141000)