Performance and timing issues with a UDP socket

mastermagrath

Hi all,

I've written a fairly simple tool that essentially creates a new UDP
IO::Socket::INET object.
A loop is run which uses print to send payloads of between 1 and 1024
bytes.
Each iteration is separated by a delay of X milliseconds, which
effectively lets the user select the number of offered packets per second.
The main thread creates a simple Tk GUI that lets the user
select the byte size and packets per second.
The GUI displays what the (theoretical) throughput should be, based on
the selected byte size * (packets per sec), but also tallies the actual
number of packets sent and thus the actual throughput.
As a test bed I simply connect to the loopback address 127.0.0.1 of my
PC. (Yes, I use Windows!!)
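The loop described above might look something like the following minimal sketch. The variable names, port choice, and the use of a second socket as a local receiver are all illustrative assumptions, not taken from the original script; a portable select()-based sleep stands in for Win32::Sleep so the sketch runs anywhere.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# A local listener so the loopback sends have somewhere to go
# (illustrative only; the original tool just sends to 127.0.0.1).
my $rx = IO::Socket::INET->new(
    Proto     => 'udp',
    LocalAddr => '127.0.0.1',
    LocalPort => 0,            # let the OS pick a free port
) or die "rx socket: $!";

my $tx = IO::Socket::INET->new(
    Proto    => 'udp',
    PeerAddr => '127.0.0.1',
    PeerPort => $rx->sockport,
) or die "tx socket: $!";

my $payload_bytes = 512;   # user-selected size, 1..1024
my $pps           = 30;    # user-selected packets per second
my $payload       = 'x' x $payload_bytes;

my $sent = 0;
for (1 .. 5) {                               # a few iterations for the sketch
    print $tx $payload or die "send: $!";    # one UDP datagram per print
    $sent++;
    select(undef, undef, undef, 1 / $pps);   # original used Win32::Sleep
}
print "sent $sent packets of $payload_bytes bytes each\n";
```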

Everything works perfectly until the user goes above ~30 packets per
sec. At that point the theoretical and actual throughputs and packets
per sec start to diverge:
the actual number of packets sent falls below what the user
selected (theoretical).

I use Win32::Sleep((1 / $packetspersec) * 1000); in the loop to adjust
the delay to match the user-selected packets per sec.
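For reference, the per-packet delay that formula requests shrinks quickly as the rate rises; this is pure arithmetic, independent of any Windows API:

```perl
use strict;
use warnings;

# Requested per-packet delay for a few rates, from (1 / $pps) * 1000:
for my $pps (30, 100, 200) {
    my $delay_ms = (1 / $pps) * 1000;
    printf "%3d pps -> %.1f ms between packets\n", $pps, $delay_ms;
}
```

At 100 packets per sec the loop is asking Win32::Sleep for only a 10 ms pause, and at 200 packets per sec only 5 ms.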

Any ideas what the problem might be?
Also, there seems to be an absolute maximum of around 100 packets per
sec. The thing is, I have a small console-based tool
(written in C++, I think) that can generate much higher packet rates for
any byte size, and which actually transmits what it says it does.
Are there timing issues with threads, or do I need to look into setting
my script to a higher priority?

Thanks in advance.
 
xhoster

mastermagrath said:
Hi all,

I've written a fairly simple tool that essentially creates a new UDP
IO::Socket::INET object.
A loop is run which uses print to send payloads of between 1 and 1024
bytes.
Each iteration is separated by a delay of X milliseconds, which
effectively lets the user select the number of offered packets per second.
The main thread creates a simple Tk GUI that lets the user
select the byte size and packets per second.
The GUI displays what the (theoretical) throughput should be, based on
the selected byte size * (packets per sec), but also tallies the actual
number of packets sent and thus the actual throughput.
As a test bed I simply connect to the loopback address 127.0.0.1 of my
PC. (Yes, I use Windows!!)

Everything works perfectly until the user goes above ~30 packets per
sec. At that point the theoretical and actual throughputs and packets
per sec start to diverge:
the actual number of packets sent falls below what the user
selected (theoretical).

I use Win32::Sleep((1 / $packetspersec) * 1000); in the loop to adjust
the delay to match the user-selected packets per sec.

What is the minimum time granularity of Win32::Sleep? If you can't find it
documented, you could test it pretty easily (in the absence of all the
socket code).
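A test along those lines could look like the sketch below: repeatedly request a 1 ms sleep and time how long each call actually takes with Time::HiRes. The select()-based sleep is a portable stand-in so the sketch runs anywhere; on Windows you would substitute Win32::Sleep(1), which is the call actually in question.

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

my $reps  = 20;
my $total = 0;
for (1 .. $reps) {
    my $t0 = time;
    select(undef, undef, undef, 0.001);   # request a 1 ms sleep
    $total += time - $t0;                 # measure what we actually got
}
printf "average actual sleep: %.2f ms (requested 1 ms)\n",
    1000 * $total / $reps;
```

If the average comes back at 10-15 ms rather than 1 ms, the sleep granularity, not the socket code, is capping the packet rate.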

Any ideas what the problem might be?
Also, there seems to be an absolute maximum of around 100 packets per
sec.

Then it sounds like that is probably the minimum granularity of
Win32::Sleep.

Xho
 
