
Can I introduce a delay on end-to-end packet delivery?

J.GUILFOYLE
Level 1

Hi

If possible, how can I introduce delay between a source and a destination?

If you look at my Topology here :-

http://putfile.com/pic.php?pic=1/1515563511.gif&s=x12

I have an Openview Main Station on the Top LAN and a Remote station on the bottom LAN.

I have tried to mimic a customer's network.

On an end-to-end ping the customer gets a delay of 14 ms, whereas I get <10 ms (he has different equipment, etc.).

Is there any way I can put latency on the routed network?

I'm running a simple OSPF network on each of the routers.

Also, what is the best way to look at BW utilisation on the serial ports of the routers?

I'd appreciate any advice.

Br

Jimmy

9 Replies

mheusinger
Level 10

Hello Jimmy,

very interesting task, usually people want the opposite :-)

Well you can slow down transmission with the help of a shaper on BBR1.

class-map Slow
 match access-group 100
policy-map MyPol
 class Slow
  shape average 8000
interface Serial0
 service-policy output MyPol
access-list 100 permit ip host <NMS IP address> any

This way the traffic from the NMS will be throttled down to 8 kbps.

The interface utilization can be checked with show interface.
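
To verify that the shaper is actually catching the NMS traffic (assuming the policy is applied on Serial0 as above), you can also check the class counters:

show policy-map interface Serial0

The counters under class Slow should show packets being matched and shaped, while show interface gives the overall load on the link.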

Hope this helps! Please rate all posts

Regards, Martin

Hi Martin

Many thanks for the reply.

Why would I want to induce latency?

We have a customer who has an HP OpenView main station and also a remote station. These connect using NFS.

They are connected via WAN links.

This customer has an E1 controller and some MPLS in the traffic path as well.

It takes 15 minutes for the HP NMS (remote station) to open a session to the main station.

I believe this is a BW issue, but the customer says that his ROUTER-->ROUTER E1 link is only 50% utilised.

This may of course be true, but there are other segments in his network that he can't vouch for, as there is also some VPN connectivity (not via the www) on his internal network.

On an end-to-end ping between the two machines he gets an average ping of 14 ms.

So he believes his slow connection times are down to latency, which is also very closely connected with BW: if there's not enough BW, packets will be buffered, dropped and re-transmitted, which introduces latency. Would you agree?

In my simulation I only have some 2500s and one 2600.

The BW has been set at 4000000, but that's irrelevant as I believe I have a max of 1544 kbps (T1) on my routers. I could quite easily create a bottleneck at the remote station end by reducing the BW, or would using your policy maps be a better way?
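
(As an aside, if I understand it correctly, the bandwidth statement only changes routing metrics and QoS calculations rather than the real line rate, so to create a genuine bottleneck on my back-to-back serial links I think I would have to lower the clock rate on the DCE side, something like:

interface Serial1
 clock rate 64000
 bandwidth 64

or use a shaper as you suggest.)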

I have shown below the access router that the Remote Station connects to.

Can you help interpret the counters/utilisation on my Cisco router?

I can also add the routers to HP OpenView if it would help for stats.

Appreciate any advice.

Thanks

Jimmy

p1r1#show int s1
Serial1 is up, line protocol is up
  Hardware is HD64570
  Internet address is 12.1.1.2/24
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)
  Last input 00:00:04, output 00:00:01, output hang never
  Last clearing of "show interface" counters 10:47:55
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations 0/2/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 1158 kilobits/sec
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     128859 packets input, 13470998 bytes, 0 no buffer
     Received 4530 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     129625 packets output, 10855402 bytes, 1 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out

In my opinion, 14 ms is a very good latency, and that should not be responsible for the application opening very slowly. If the claim that the E1 is just 50% utilised is true, then bandwidth should not be the issue either, unless there are other bottlenecks along the line. I think there are a number of other things you could watch out for (a few commands to check each one are sketched after the list):

1. Input and output errors. These could lead to retransmissions, which could significantly slow down the link.

2. MTU issues and fragmentation. Especially since you mentioned that the path crosses an MPLS domain, fragmentation caused by packets exceeding the MTU of an interface can lead to slow application performance.

3. Routing issues. Multiple paths, with some invalid, could lead to packets being dropped along the path.

4. Lack of router resources, such as memory and CPU, can lead to packets being dropped.
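
A rough sketch of the commands I would use to check each point (assuming the Serial1 interface from your output, and substituting the remote station's address for x.x.x.x):

show interface serial 1 (point 1: errors, drops and queue counters)
ping x.x.x.x size 1500 df-bit (point 2: a 1500-byte ping with the DF bit set; on older IOS use the interactive extended ping and set the size and DF bit there)
show ip route x.x.x.x (point 3: the route towards the remote station)
show processes cpu (point 4: CPU load)
show memory (point 4: free memory)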

With respect to monitoring utilisation, watch out for the following two lines from the output of show interface:

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

They give a 5 minute average utilisation of the interface. From the above, the interface is presently idle, as utilisation is 0 bps.

You will need to give some specifics of the customer network for proper troubleshooting of the problem.

ian.selkirk
Level 1

A couple of tips for confirming congestion:

1. Reduce the interface load interval. The 5 minute input and output rates are averages, so after an idle period of longer than 5 minutes you need a constant traffic rate for over 5 minutes before the reading is true. Type "load-interval 30" under the WAN interface config for a 30 second average (see the snippet after this list).

2. Ask the client to "ping" the target during the application loading process, the response times will indicate congestion through packet drops and increased round trip times. You can increase the benefit of "ping" by increasing the MTU to 1500 (ping x.x.x.x -t -l 1500). The MTU should be at least 1526 to allow for MPLS overhead to allow the application to operate the full 1500Byte frame allowed over Ethernet, otherwise fragmentation could be introducing delay, again this can be tested through the use of "ping" but with the dont-fragment function set (ping x.x.x.x -t -f -l 1500). With this enabled the packets will not incur delay though fragmentation, instead they will be dropped and timeouts will occur.

NB - To use the same ping functions on the router rather than the NMS station, use extended ping.
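
For point 1, a minimal snippet (assuming the Serial1 interface from the earlier output):

interface Serial1
 load-interval 30

show interface Serial1 will then report 30 second input and output rates rather than the 5 minute average. For the router-side test, the interactive extended ping lets you set the datagram size to 1500 and the DF bit; on more recent IOS the same can be done in one line, e.g. ping x.x.x.x size 1500 df-bit repeat 100.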

I don't think your problem is likely to be congestion in this case as you would expect to see drops in the interface counter output, although the counters were reset only 10 hours prior to the command being issued:

Last clearing of "show interface" counters 10:47:55

Hi Ian

Some really interesting information you've sent me there.

I have some more info for you guys.

We ran a throughput check (using Q Check) end to end on the machines concerned, and it comes out consistently between 7-8 Mbps.

We originally thought there was a WAN E1 link in the chain, but there is no WAN link, just a mixture of Layer 3, Layer 2 and MPLS.

Now, what I don't have is an Ethereal trace of what is happening on the Remote Station, but I do have a trace of the same application in my test network.

My test network has even less bandwidth (2-3 Mbps) than theirs, but my remote station opens up in under 2 minutes.

I have attached this trace to show you guys what is happening between my machines.

So on their remote machine I should run ping x.x.x.x -t -f -l 1500, where -f sets the don't-fragment flag and -l 1500 sets the packet size. So if their MPLS system needs to fragment our packets and we have flagged them not to be fragmented, it will drop the packets. Would this show up without the application running?

Do you trust Q Check?

We do, which points to this definitely not being a BW issue.

Many thanks

Jimmy

Hello Jimmy,

I just scanned through your trace and found something quite unusual in there. The IP header checksums of the packets coming from 192.168.201.170 are ALL zero!?

Ethereal claims that they are not what they should be. This is strange. If this is really true, I am surprised there is any communication at all.

So can you check whether 192.168.201.170 behaves the same for other traffic? Can you just telnet to the default gateway and trace the traffic?

If it is reproduced, can you try repeating the transfer from another host?

Is this problem specific to the host? If so, it would point to a faulty NIC or a really messed-up TCP/IP stack.

It could still be an artefact or an analyzer problem.

I am stumped.

Regards, Martin

Hi mheusinger

That was my test system's remote station connecting to the main machine; it is very strange, and it actually worked!

I actually managed to get a trace from the customer's remote machine; unfortunately it was only one way, from the remote-->main, and not a simultaneous trace from the main to the remote.

The file is quite large so I had to WinRAR it up, but the IP header checksums look OK in their trace.

I have asked the customer to experiment with ping x.x.x.x -t -f -l 1500 to see if fragmentation is occurring, as if you look at the trace the DF bit is set on traffic from the remote to the main.

I can't say I'm an Ethereal expert, but I can't seem to find anything conclusive in the trace.

Any help is appreciated !

Br

Jimmy

Really appreciate this advice !

Hi Gents

Just for your info, our customer typed :-

ping x.x.x.x -t -f -l 1500

Which resulted in :-

Packet needs to be fragmented but DF set.

Packet needs to be fragmented but DF set.

Packet needs to be fragmented but DF set.

Any comments here ?

Br

Jimmy

Hello,

on a Microsoft box you will have to use

ping x.x.x.x -t -f -l 1472

because Windows does not count the 20-byte IP header and the 8-byte ICMP header in the -l value, which adds 28 bytes (1472 + 28 = 1500).

So use the command given, and 1500-byte IP packets should be created with the DF bit set.
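
If the ping now succeeds with -l 1472 but fails with anything larger, the end-to-end path MTU is 1500 and MPLS fragmentation is unlikely to be the cause. If it still fails at 1472, you can step the size down (for example -l 1452, then -l 1400) to narrow down the real path MTU:

ping x.x.x.x -f -l 1472
ping x.x.x.x -f -l 1452
ping x.x.x.x -f -l 1400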

Hope this helps! Please rate all posts.

Regards, Martin