How can I simulate 50% load on a point-to-point WAN circuit in order to measure latency for a predetermined SLA?

I've just built a remote DMVPN spoke that has a 300 Mbps private point-to-point circuit back to the data center.  My manager promised an SLA of < 20 ms at 50% bandwidth utilization (I think he made that up?!).  I'm trying to sort out the best way to test/document/baseline this statement for the customer.  My question is...

How best can I simulate 50% load over the circuit?  My thought was to configure some sort of maximum-bandwidth QoS policy between two hosts, push a large amount of traffic across the link, and then average the ICMP latency statistics from a ping between the same two hosts.
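The approach described above can be sketched roughly as follows, assuming iperf (v2) is available on both hosts and a server is already listening on the far end with `iperf -s -u`. The addresses are placeholders, not real topology details:

```shell
# Hypothetical sketch: generate ~150 Mbps (50% of a 300 Mbps circuit) of UDP
# load with iperf, and average ping RTT to the same host while the load runs.
iperf -c 10.1.1.1 -u -b 150m -t 300 &

# Sample 60 pings and average the reported RTTs with awk.
ping -c 60 10.1.1.1 \
  | awk -F'time=' '/time=/ { sum += $2; n++ } END { if (n) printf "avg RTT: %.2f ms\n", sum/n }'
wait
```

The awk filter splits each reply line on `time=` so the second field begins with the RTT value; awk's numeric conversion discards the trailing " ms".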

Without getting into too much detail, the remote site is using ASR1001s (DMVPN spokes) with tunnel interfaces back to ASR1002s (DMVPN hubs), and 3750X switches at the access layer.

 

3750X > MM fiber > ASR1001 (spoke) > 300 Mbps circuit > 3750X > ASR1002 (hub) > core infrastructure.

 

Thanks in advance!

 

1 Accepted Solution

On the client side of iperf, use the -b option to specify the target bandwidth (UDP mode).

Hope this helps.

See here:

http://openmaniak.com/iperf.php

UDP tests: (-u), bandwidth settings (-b)
Also check the "Jperf" section.

The UDP tests with the -u argument give invaluable information about jitter and packet loss. If you don't specify the -u argument, iperf uses TCP.
To maintain good link quality, packet loss should not exceed 1%. A high packet-loss rate will generate a lot of TCP segment retransmissions, which will reduce the usable bandwidth.

Jitter is essentially latency variation and does not depend on the latency itself: you can have high response times and very low jitter. The jitter value is particularly important on links carrying voice over IP (VoIP), because high jitter can break a call.
The -b argument allows allocation of the desired bandwidth.

 Client side:
 

#iperf -c 10.1.1.1 -u -b 10m

------------------------------------------------------------ 
Client connecting to 10.1.1.1, UDP port 5001 
Sending 1470 byte datagrams 
UDP buffer size: 108 KByte (default) 
------------------------------------------------------------ 
[ 3] local 10.6.2.5 port 32781 connected with 10.1.1.1 port 5001 
[ 3]   0.0-10.0 sec   11.8 MBytes   9.89 Mbits/sec 
[ 3] Sent 8409 datagrams 
[ 3] Server Report: 
[ 3]   0.0-10.0 sec   11.8 MBytes   9.86 Mbits/sec   2.617 ms   9/ 8409   (0.11%) 


 Server side:
 

#iperf -s -u -i 1

------------------------------------------------------------ 
Server listening on UDP port 5001 
Receiving 1470 byte datagrams 
UDP buffer size: 8.00 KByte (default) 
------------------------------------------------------------ 
[904] local 10.1.1.1 port 5001 connected with 10.6.2.5 port 32781 
[ ID]   Interval         Transfer        Bandwidth         Jitter        Lost/Total Datagrams 
[904]   0.0- 1.0 sec   1.17 MBytes   9.84 Mbits/sec   1.830 ms   0/ 837   (0%) 
[904]   1.0- 2.0 sec   1.18 MBytes   9.94 Mbits/sec   1.846 ms   5/ 850   (0.59%) 
[904]   2.0- 3.0 sec   1.19 MBytes   9.98 Mbits/sec   1.802 ms   2/ 851   (0.24%) 
[904]   3.0- 4.0 sec   1.19 MBytes   10.0 Mbits/sec   1.830 ms   0/ 850   (0%) 
[904]   4.0- 5.0 sec   1.19 MBytes   9.98 Mbits/sec   1.846 ms   1/ 850   (0.12%) 
[904]   5.0- 6.0 sec   1.19 MBytes   10.0 Mbits/sec   1.806 ms   0/ 851   (0%) 
[904]   6.0- 7.0 sec   1.06 MBytes   8.87 Mbits/sec   1.803 ms   1/ 755   (0.13%) 
[904]   7.0- 8.0 sec   1.19 MBytes   10.0 Mbits/sec   1.831 ms   0/ 850   (0%) 
[904]   8.0- 9.0 sec   1.19 MBytes   10.0 Mbits/sec   1.841 ms   0/ 850   (0%) 
[904]   9.0-10.0 sec   1.19 MBytes   10.0 Mbits/sec   1.801 ms   0/ 851   (0%) 
[904]   0.0-10.0 sec   11.8 MBytes   9.86 Mbits/sec   2.618 ms   9/ 8409  (0.11%) 



7 Replies

You can use a tool like Iperf to pump traffic down a link to measure bandwidth and performance: http://iperf.fr/

You can front-end it with the GUI tool Jperf if you don't like using the command line: http://www.softpedia.com/get/Network-Tools/Network-Testing/JPerf.shtml


I'd also like to add to Sean's response: you can use ttcp on the ASRs, together with iperf or Jperf.

hth


Thanks for the response.  I'm familiar with iperf; the challenge is how to simulate a 50% bandwidth load (150 Mbps) over the available 300 Mbps link.  If I push a large file, all available bandwidth is used.

Additionally, this is partly a capacity-planning exercise.  I specced the deployed hardware to support 1 Gbps encrypted through to the core, so I'd like to define a test I can run every 2-3 months to plan for bandwidth increases.

 

 


If you think one host can push that much data, then just put a QoS policy in place to limit the traffic between the hosts to 150 Mbps over the WAN link.
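A rough IOS sketch of such a shaping policy follows. The ACL/class/policy names, host addresses, and the tunnel interface are assumptions for illustration; verify the exact syntax against your platform's QoS documentation:

```
ip access-list extended TEST-HOSTS
 permit ip host 10.6.2.5 host 10.1.1.1
!
class-map match-all HOST-TEST
 match access-group name TEST-HOSTS
!
policy-map LIMIT-150M
 class HOST-TEST
  shape average 150000000
!
interface Tunnel0
 service-policy output LIMIT-150M
```

Shaping (rather than policing) queues excess traffic instead of dropping it, which keeps a TCP test stream smooth at the 150 Mbps ceiling.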
 



Thank you for this info/example!

MikeG


Yes, I realized that with UDP you can specify the bandwidth, and I then validated my results using NetFlow/SolarWinds.  I next initiated a ping over the link and documented the RTT.  I also learned in this process that meaningful RTT delay isn't really introduced until you reach 95%+ of your bandwidth.  This was a good exercise in learning how to baseline and plan for future bandwidth growth.
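A repeatable version of that baseline, for the quarterly runs mentioned earlier, might look like the sketch below. It assumes iperf (v2) on both ends with a server already listening (`iperf -s -u -i 1`); the target address is a placeholder:

```shell
#!/bin/sh
# Hypothetical quarterly baseline script: drive 50% load, record iperf and
# ping results in dated files for capacity-planning comparisons over time.
TARGET=10.1.1.1          # far-end iperf server (placeholder address)
RATE=150m                # 50% of the 300 Mbps circuit
OUT="baseline-$(date +%Y%m%d)"

# Run the UDP load in the background, ping under load, then wait for iperf.
iperf -c "$TARGET" -u -b "$RATE" -t 60 > "$OUT.iperf" 2>&1 &
ping -c 30 "$TARGET" > "$OUT.ping"
wait
echo "results saved to $OUT.iperf and $OUT.ping"
```

Keeping the dated output files makes it easy to spot the RTT creep as utilization grows toward the point where delay sets in.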