Algorithm - UDP Throughput Calculation

Gordon Beaton

1 x 1000-byte UDP packet every second, the measured rate at the
server is very close.

If I keep the same rate but this time use 20 x 50-byte UDP packets
every second, the measured rate is much lower, but again I know there
are no dropped packets.

How much lower? There is less overhead per byte with larger packets,
so I would expect a somewhat lower throughput with the smaller
packets.
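
As a rough worked example, assuming plain IPv4 with no options: each UDP
datagram carries 8 bytes of UDP header plus 20 bytes of IP header, so
1 x 1000-byte payload costs 1028 bytes on the wire per second, while
20 x 50-byte payloads cost 20 x 78 = 1560 bytes on the wire for the same
1000 payload bytes per second.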

How long did you run the test with many small packets? The receiver
won't drop any packets until the socket receive buffer fills so that
an incoming packet won't fit. Perhaps the buffer is filling slowly, at
a rate that matches the discrepancy you've measured in your
application. If you run it long enough with a slow server, I think you
will eventually see dropped packets unless you provide some flow
control.
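
As a minimal sketch of inspecting and enlarging that buffer from Java
(the port and the 1 MB request are arbitrary examples, and the OS may
silently cap whatever you ask for):

import java.net.DatagramSocket;
import java.net.SocketException;

public class ReceiveBufferCheck {
    public static void main(String[] args) throws SocketException {
        DatagramSocket socket = new DatagramSocket(4445); // example port
        System.out.println("default SO_RCVBUF: " + socket.getReceiveBufferSize());

        // Ask the OS for a larger buffer so bursts of small packets are less
        // likely to overflow it before the application drains them.
        socket.setReceiveBufferSize(1 << 20); // request ~1 MB
        System.out.println("after request:     " + socket.getReceiveBufferSize());

        socket.close();
    }
}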

/gordon
 
mastermagrath

Hi all,

I wonder if anyone can help. After using Perl happily, I am turning my
attention to Java as a better language for GUI development. However, I
came across a strange thing that I'm sure I'll come up against in Java
as well, given that I'll be using the same algorithm. I had written a
UDP tool that streams x packets/sec of y bytes each to a server I also
wrote. The server basically listens on the relevant port and calculates
the average receive rate of the UDP packets.
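
For concreteness, in Java the sender would be something like this minimal
sketch (the host, port, rate and packet size are placeholders, and
Thread.sleep gives only approximate pacing):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBlaster {
    public static void main(String[] args) throws Exception {
        InetAddress host = InetAddress.getByName("localhost"); // placeholder target
        int port = 4445;            // placeholder port
        int packetsPerSecond = 20;  // e.g. 20 x 50-byte packets...
        int payloadBytes = 50;      // ...for 1000 bytes/sec

        byte[] payload = new byte[payloadBytes];
        long gapMillis = 1000L / packetsPerSecond;

        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                socket.send(new DatagramPacket(payload, payload.length, host, port));
                Thread.sleep(gapMillis); // crude pacing; sleep granularity adds jitter
            }
        }
    }
}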

The way I did this in the server was essentially to take a time tick
when the first packet was received: the start time. Using an averaging
window of 10 packets, once the 10th packet is received (not counting
the very first one) the time is ticked again. The elapsed time is the
difference between the two ticks, and the received throughput is the
total number of bytes in the 10 packets divided by the elapsed time.
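
In Java, a minimal sketch of that receiver (a blocking DatagramSocket,
a 10-packet window, and a placeholder port; the millisecond clock mirrors
what I'm doing now) would be roughly:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpRateMeter {
    public static void main(String[] args) throws Exception {
        final int WINDOW = 10;            // packets per averaging window
        byte[] buf = new byte[64 * 1024]; // big enough for any UDP payload

        try (DatagramSocket socket = new DatagramSocket(4445)) { // placeholder port
            DatagramPacket packet = new DatagramPacket(buf, buf.length);

            // The very first packet only starts the clock; its bytes are not counted.
            socket.receive(packet);
            long start = System.currentTimeMillis(); // millisecond resolution

            long bytes = 0;
            for (int i = 0; i < WINDOW; i++) {
                packet.setLength(buf.length); // receive() shrinks the length field
                socket.receive(packet);
                bytes += packet.getLength();
            }

            long elapsedMs = Math.max(System.currentTimeMillis() - start, 1);
            System.out.printf("%.1f bytes/sec over %d packets%n",
                              1000.0 * bytes / elapsedMs, WINDOW);
        }
    }
}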

When I run this it works fine, and the transmit rate and the received
rate match up pretty well for any packet size. The trouble is that as I
increase the send rate, the calculated receive rate in the server gets
progressively lower than the real throughput, even though there are no
dropped packets. So let's say I choose a send rate of 1000 bytes/sec by
sending 1 x 1000-byte UDP packet every second: the measured rate at the
server is very close. If I keep the same rate but this time use
20 x 50-byte UDP packets every second, the measured rate is much lower,
but again I know there are no dropped packets.

I guess my question is, is there a particular algorithm programmers use
to calculate this?

Thanks in advance
 
Roedy Green

The way I did this in the server was essentially to take a time tick
when the first packet was received: the start time. Using an averaging
window of 10 packets, once the 10th packet is received (not counting
the very first one) the time is ticked again. The elapsed time is the
difference between the two ticks, and the received throughput is the
total number of bytes in the 10 packets divided by the elapsed time.

Consider that on sending, the time you are measuring is the time to
send 9 packets and register one more packet for sending. Perhaps it
returns immediately rather than waiting until every last bit has gone
out. You have layers of OS and hardware buffering to consider.

On receiving, you are measuring the time to wait for the first packet
to start arriving, plus the time for ten packets to be transmitted.

If your link is very fast, the problem could be that the resolution
of the timer you are using at one end is too low. Consider using a
higher-resolution timer; see http://mindprod.com/jgloss/time.html
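
For instance, System.currentTimeMillis() can have a granularity of 10-15 ms
on some platforms, which is coarse when the whole 10-packet window lasts
only half a second. A sketch comparing it with the monotonic
System.nanoTime() clock:

public class TimerResolution {
    public static void main(String[] args) throws InterruptedException {
        long startMs = System.currentTimeMillis(); // wall-clock, coarse granularity
        long startNs = System.nanoTime();          // monotonic, high resolution

        Thread.sleep(500); // stand-in for receiving one 10-packet window

        long elapsedMs = System.currentTimeMillis() - startMs;
        double elapsedSec = (System.nanoTime() - startNs) / 1e9;

        System.out.println("currentTimeMillis: " + elapsedMs + " ms");
        System.out.printf("nanoTime:          %.4f s%n", elapsedSec);
    }
}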
 
