Brad Volz
Hello,
I seem to have an issue with file descriptors that aren't being closed
when I attempt to put some parallelization into one of my scripts. I
am trying to make use of the forkoff gem, but I guess that I am not
using it correctly.
If it's useful, here is what the code looks like currently:
def measure_n(start_time, stop_time, direction, pattern, records)
  counters = [ :flows, :packets, :octets ]

  # single process
  stats = Hash.new
  counters.each { |c| stats[c] = 0 }
  puts "starting single process execution"
  single_start = Time.now.to_f
  pattern.each do |p|
    r = self.measure_1(start_time, stop_time, direction,
                       "#{direction} AS #{p}", records)
    counters.each { |c| stats[c] += r[c] }
  end
  single_stop = Time.now.to_f
  puts stats.inspect
  puts "single execution time: #{single_stop - single_start}"

  # multiple processes
  stats = Hash.new
  counters.each { |c| stats[c] = 0 }
  puts "starting multi process execution"
  multi_start = Time.now.to_f
  asn_stats = pattern.forkoff! :processes => 4 do |asn|
    a = Netflow::Nfdump.new
    a.measure_1(start_time, stop_time, direction,
                "#{direction} AS #{asn}", records)
  end
  asn_stats.each do |asn|
    counters.each { |c| stats[c] += asn[c] }
  end
  multi_stop = Time.now.to_f
  puts stats.inspect
  puts "multi execution time: #{multi_stop - multi_start}"

  return stats
end
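For what it's worth, my rough mental model of what forkoff does (an assumption on my part -- I haven't read its source closely) is one fork plus one pipe per element, with the result marshalled back to the parent. If the parent held every pipe's read end open until the very end, the open descriptors would grow with the array, which would match a failure around the 255th element. A minimal sketch of that leak pattern using plain Process.fork and IO.pipe (hypothetical, not forkoff's actual code):

```ruby
# Sketch: fork once per element, and keep the read end of each child's
# pipe open in the parent until all children have run. The parent's
# open-descriptor count grows with the array, so a large enough array
# would eventually hit the `ulimit -n` ceiling (EMFILE).
readers = []
10.times do |i|
  r, w = IO.pipe
  pid = fork do
    r.close                        # child writes, so close its read end
    w.write(Marshal.dump(i * 2))   # send the "result" back to the parent
    w.close
    exit!(0)                       # skip at_exit handlers in the child
  end
  w.close            # parent reads, so close its write end
  readers << r       # read end stays open -- this is the accumulation
  Process.wait(pid)
end

# Only now does the parent drain and close each pipe.
results = readers.map { |r| v = Marshal.load(r.read); r.close; v }
```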
The part that I find really odd is that I can replace the block I am
attempting to parallelize with a simple:
puts "#{asn}"
and the script still dies in the same place -- at the 255th element of
the array, which matches the limit on open file descriptors:
bradv:bvolz:$ ulimit -a | grep files
open files (-n) 256
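As a stopgap while I debug this (not a fix for the leak itself), I believe the soft limit can be raised from inside the process, up to the hard limit, with Process.getrlimit/setrlimit. A sketch:

```ruby
# Raise this process's soft open-files limit toward the hard limit.
# (Workaround sketch only; the real fix is to close the leaked fds.)
soft, hard = Process.getrlimit(:NOFILE)
# If the hard limit is unlimited, pick an arbitrary sane cap instead.
target = hard == Process::RLIM_INFINITY ? 4096 : hard
Process.setrlimit(:NOFILE, target, hard)
```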
Since I see the problem even with a simple 'puts', does that mean the
issue is not in my code and lies elsewhere? Or have I misunderstood how
to use the forkoff gem? In either case, how can I figure out what these
open file descriptors actually are?
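In case it helps: on Linux (or any system with a /dev/fd-style directory) a process can list its own descriptors and see what each one points at; `lsof -p <pid>` gives the same view from outside. A sketch (the helper name is mine):

```ruby
# List this process's open file descriptors and what they point at.
# Uses /proc/self/fd on Linux, falling back to /dev/fd elsewhere;
# each entry is a symlink named after the descriptor number.
def open_fds
  dir = File.directory?('/proc/self/fd') ? '/proc/self/fd' : '/dev/fd'
  Dir.children(dir).map do |fd|
    target = File.readlink(File.join(dir, fd)) rescue '?'
    [fd.to_i, target]
  end
end

leaked = File.open('/dev/null')   # deliberately left open
fds = open_fds                    # the leaked descriptor shows up here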
Thanks,
Brad