Queue time does not match

maya davidson
Level 1

Hello, 

I have a network topology as below:

R2 - R3 - R4
     |
     R1

My bandwidth is 100 Mbps, the queue sizes are 1000 packets (FIFO), and I am sending 12,500 frames/second at 1000 bytes per frame,
one flow from R2 to R3
and another from R1 to R3.
Loss is around 1.96% for both flows, but the delay is 0.007 s for the first flow and 0.4 s for the second flow.
In theory I can calculate the queuing time as below:

queuing time = queue size / (bandwidth / packet size) = 1000 / (100,000,000 / (1000 * 8)) = 1000 / 12,500 = 0.08 s
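
For reference, this is the arithmetic behind my 0.08 s figure as a rough Python sketch (it assumes the output queue is completely full, so it is only an upper bound):

# Sketch of the expected worst-case FIFO queuing delay (assumes a full queue).
QUEUE_DEPTH_PKTS = 1000          # output queue size, packets
LINK_BPS = 100_000_000           # 100 Mbps egress
FRAME_BYTES = 1000               # frame size used by the generator

service_rate_pps = LINK_BPS / (FRAME_BYTES * 8)          # 12,500 frames/s drained
max_queue_delay = QUEUE_DEPTH_PKTS / service_rate_pps    # 0.08 s when the queue is full
print(service_rate_pps, max_queue_delay)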

R2 and R3 are Cisco Catalyst 9200 switches and R1 is a Cisco C1111 router.

I don't know where I am going wrong: with the same configs I am getting delays, and neither of them matches my calculation.

Thanks for your help in advance

30 Replies

Joseph W. Doherty
Hall of Fame

Sorry, but I'm unable to understand both what you're doing and what's the perceived problem.

Although 1000 bytes at 12,500 per second does make for 100 Mbps, does this 1000 also include the 20 bytes of Ethernet L1 overhead?

If you send 100 Mbps to a FE interface there should be no queuing nor packet loss.

Hello, 
Yes, it does include that; 1000 bytes is the total frame size, and the interfaces are GigabitEthernet, which I set to 100 Mbps with the speed command.
I also get a gap between these two flows when I'm sending 1500-byte frames at 130000 packets/second;
the losses make sense, but the delays don't.

Hello @maya davidson ,

you need to take into account the inter-frame gap, so the L1 bandwidth used is actually something more: about 20 bytes (preamble, SFD and inter-frame gap) are added to each L2 frame.
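
A rough sketch of that per-frame accounting (assuming your 1000 bytes is the complete L2 frame including the FCS):

# Standard Ethernet L1 overhead per frame: 7 B preamble + 1 B SFD + 12 B inter-frame gap = 20 B
L2_FRAME_BYTES = 1000              # as stated in the post
L1_OVERHEAD_BYTES = 7 + 1 + 12

wire_bytes = L2_FRAME_BYTES + L1_OVERHEAD_BYTES   # 1020 bytes actually on the wire
offered_bps = 12_500 * wire_bytes * 8             # ~102 Mbps at 12,500 frames/s
print(wire_bytes, offered_bps)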

By the way what are you using to generate and receive the test traffic flows ?

Hope to help

Giuseppe

 

@Giuseppe Larosa, thanks for your reply.
1000 bytes is the final packet size with everything accounted for. I'm using IXIA.
My question is what I should look into on the routers, because with the same configs this delay gap doesn't make sense.

Hello @maya davidson ,

OK, you are using an IXIA, which is a good instrument for testing routers and multilayer switches.

You are sending traffic over two different interfaces and the traffic flows are terminated on a third port out of R3.

>> R2 and R3 are Cisco Catalyst 9200 switches and R1 is a Cisco C1111 router.

So R2 and R3 are two Catalyst 9200s and R1 is a low-end C1111 router.

Be aware also that traffic sent at a constant packet rate with instruments like IXIA is good for the lab but not realistic.

>> Loss is around 1.96% for both flows, but the delay is 0.007s for the first flow and 0.4s for the second flow, 

I don't know how you made this measurement. However, the QoS implementation of a SW-based router and that of a Catalyst switch are quite different.

Is 12,500 frames per second your total packet rate across both flows, or the packet rate of each path/flow?

12500*1000*8 = 100,000,000 bps

Because otherwise, as already noted, your packet losses should be around 50%.
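
A quick sketch of that arithmetic (it assumes both flows are offered at the full 12,500 frames/s toward the same 100 Mbps egress):

# If both flows hit one 100 Mbps egress at full rate, roughly half the packets must be dropped.
flow_bps = 12_500 * 1000 * 8                     # 100 Mbps per flow
offered_bps = 2 * flow_bps                       # 200 Mbps total offered
egress_bps = 100_000_000
loss_fraction = 1 - egress_bps / offered_bps     # ~0.5, i.e. ~50% loss
print(loss_fraction)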

Hope to help

Giuseppe

 

Ramblin Tech
Spotlight

I am missing something very basic...

Where are the Ixia ports connected? Ixia-->R2-->R3-->R4-->Ixia and Ixia-->R1-->R3-->R4-->Ixia ?

Two 100Mbps flows ingressing R3 and egressing a single 100Mbps interface in order to cause queueing for the purpose of measuring queue latency? If that were the case, then loss from each flow would be 50%, not 2%, so I am definitely missing something.

Regardless of the answer above, forwarding latency (switching a packet from ingress to egress interface) will not be the same between a Catalyst 9200 switch (forwards in h/w) and a Cisco ISR 1111 (forwards in s/w). Queueing latency will be additive to the switch/router forwarding and transmission (clocking the bits onto the media) latencies.

Disclaimer: I am long in CSCO

IXIA ports are connected to all routers, 
So by sending traffic from R1 to R3, I mean sending traffic from the port connected to R1 and receiving it on the port connected to R3.

Also, can you explain more about the last paragraph? Or if there is any link I can read, I'd really appreciate it.

Thanks.

OK, so a single flow with all interfaces at 100Mbps...

Ixia-->R1-->R3-->Ixia

Latency = Ixia transmission + media propagation + R1 forwarding + R1 queueing + R1 transmission + media propagation + R3 forwarding + R3 queueing + R3 transmission + media propagation

In testbeds like these, media length is often measured in a few feet (or meters), so media propagation time (typically 2/3 c) will be measured in a few nanoseconds, which we can ignore when the other latency components will be measured in milliseconds or microseconds. Also, with a single flow and the interface speeds all being the same (100Mbps), there should be no queueing due to ingress-egress speed mismatches (assuming R1 and R3 can forward at 100Mbps), so we can ignore queueing delays. That leaves us with:

Latency = Ixia transmission + R1 forwarding + R1 transmission + R3 forwarding + R3 transmission.

The transmission times will all be the same at (1000Bytes/frame * 8bits/Byte) / 100,000,000bit/sec = 0.00008sec/frame.

Latency = 80usec + R1 forwarding + 80usec + R3 forwarding + 80usec

The forwarding times are how long it takes for the router to look-up the frame's (or packet's) next-hop in some table (MAC address table for L2 forwarding, FIB for L3, LFIB for MPLS, etc), determine the egress interface for the next-hop, re-write headers as required, and "switch" the frame/packet to the egress interface for transmission. In addition to merely forwarding, a router can also spend time applying "services" to a packet like firewall rules, ACLs, NAT'ing, QoS, etc.  Specialized h/w (ASIC/NPU) in R3 can do this much faster than s/w running on a CPU as in R1.
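
A back-of-the-envelope version of that budget as a Python sketch (the forwarding values below are hypothetical placeholders to be replaced with measured numbers, not vendor figures):

# Latency budget sketch: three serializations plus two forwarding latencies
# (propagation and queueing ignored, as discussed above).
FRAME_BITS = 1000 * 8
LINK_BPS = 100_000_000
tx_time = FRAME_BITS / LINK_BPS        # 80 us per transmission (Ixia, R1, R3)

def end_to_end_latency(fwd_r1_s, fwd_r3_s):
    # fwd_r1_s / fwd_r3_s: measured forwarding latencies of the C1111 and Cat9200
    return 3 * tx_time + fwd_r1_s + fwd_r3_s

# Example with purely hypothetical forwarding latencies: 500 us (s/w router), 5 us (h/w switch)
print(end_to_end_latency(500e-6, 5e-6))   # ~0.000745 s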

What are the forwarding latencies through the 1111 and 9200? Connect each directly to the Ixia and test individually to measure each element's forwarding latency.

Disclaimer: I am long in CSCO

Jim, didn't see your reply until after I posted my reply, but I believe we're saying about the same thing, well except for my latency being in ms and yours in microseconds.  ; )

Ah, I think I may finally understand.

I'm aware of Ixia, but have never used it.

However, suppose your 1000-byte frames are just that, i.e. they don't account for the L1 overhead that both I (initially) and @Giuseppe Larosa mentioned.  If so, you're placing 1020 bytes on the wire where only 1000 were expected, i.e. you're oversubscribing a FE interface at your frame PPS rate.  Hmm, by how much?  Well, 1020/1000 = a 2% overage, or the reciprocal is (about) .98039, and taking that from 1 is about .0196, or 1.96%.  Does that last number seem somewhat familiar?
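
If it helps, here's that arithmetic as a tiny Python sketch (assuming the generator's 1000 bytes excludes the ~20 bytes of L1 overhead):

# Oversubscription if 1000 B frames become 1020 B on the wire at a PPS rate sized for 1000 B.
wire_bytes = 1000 + 20
overage = wire_bytes / 1000            # 1.02 -> 2% more bits offered than the line can carry
loss_fraction = 1 - 1 / overage        # ~0.0196 -> ~1.96% steady-state drops
print(loss_fraction)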

Your queuing latency really had me confused until I read @Ramblin Tech's reply.  What you're looking at is forwarding latency, I believe.

At 100 Mbps, it will take 0.0816 ms (another number that should look familiar) to transmit or receive a 1020-byte frame with L1 overhead.  Such would be the time for a NIC to move those bits on or off 100 Mbps media.  Next, though, we need to get the frame to the egress port, and that requires time for processing.  How much time is very device dependent, and depends on what logic has to be applied in making the forwarding decision.

Switches, as Jim also mentions, tend to have hardware to make forwarding as rapid as possible.  Software-based routers will generally be slower, possibly/often much slower.  (For example, a lengthy ACL will likely incur much more forwarding latency on a software-based router than on a switch.)  So, any time in excess of the .08 ms should be device processing/forwarding latency.

BTW, traditionally, the fastest L2 forwarding was done by hubs, whose forwarding delay might be only 1 bit time, at the media's wire-rate.  Such a hub would be about 1000x faster than a switch, in the realm of forwarding latency.  (This is why cut-through switches came to be, to offset some of the inherent latency of "store-and-forward" switches.)

Also BTW, even a software-based router's forwarding latency is usually a non-problem, as long as it can keep up with the line-rate and you don't have lots and lots of hops.  (Again, though, there are some apps that want minimal end-to-end latency in getting a packet, and that's where you'll often see cut-through switches, fewer hops, low-utilization links, higher-bandwidth links, and the least physical distance possible, all to mitigate various causes of latency.)

Thanks @Joseph W. Doherty, @Ramblin Tech and @Giuseppe Larosa.

I'm sorry for the confusion; the delay I mentioned in my query is the end-to-end delay measured by IXIA.
My thought was that, considering negligible processing delay and propagation delay (as the links are 1-2 meters), this delay represents queuing delay.
Something that still doesn't make sense to me is how these devices handle queuing time, because changing the output queue buffer size using "hold-queue 1000 out" didn't change the delay at all.
At least for the Catalyst 9200, given the 100 Mbps output rate and 1020-byte frame size (which gives almost 12,254 packets/sec), I should have (queue size) / output rate = 1000 / 12,254 ≈ 0.08 s of queuing time at a minimum, but I don't see this on the Catalyst 9200 and the delay is lower than my calculation. I don't know which part I am getting wrong and I really appreciate your help.

Hi @maya davidson 

I believe the basic misunderstanding here is the expectation that there will be queueing, and queueing delays, in the absence of congestion. In our case here, the arrival rate at the ingress interface equals the service (transmission) rate at the egress interface. Packets will be transmitted out the egress interface as quickly as they arrive at the ingress interface, hence there will be no/minimal congestion, no/minimal packets queued, and no/minimal queuing delay. You are seeing this yourself in that changing the buffer size does not change the latency measurement.

In order to see and measure queueing latency, you will have to congest the egress interface by offering more ingress traffic than the egress interface can transmit. This over-subscription will congest the interface and cause excess packets to be queued, with the max number of packets enqueued being determined by the hold-queue configuration. 
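
A small sketch of what that looks like (illustrative numbers only: it assumes 1020 bytes on the wire and, say, 2x oversubscription of the egress):

# Congest a 100 Mbps egress and see how the hold-queue behaves.
HOLD_QUEUE_PKTS = 1000                 # "hold-queue 1000 out"
WIRE_BYTES = 1020                      # assumed on-wire frame size
LINK_BPS = 100_000_000
OVERSUB = 2.0                          # offered load / egress rate (e.g. two full-rate flows)

service_pps = LINK_BPS / (WIRE_BYTES * 8)                     # ~12,255 pps drained by the egress
arrival_pps = OVERSUB * service_pps                           # offered to the egress queue
fill_time = HOLD_QUEUE_PKTS / (arrival_pps - service_pps)     # ~0.082 s until the queue is full
max_queue_delay = HOLD_QUEUE_PKTS / service_pps               # ~0.082 s once the queue stays full
print(fill_time, max_queue_delay)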

Disclaimer: I am long in CSCO

Laugh, again Jim describes similar points to my prior reply, but posted about a minute sooner.

Again, two points: first, your egress FIFO queue depth (routers only?) may be hold-queue plus tx-ring.  Secondly, hold-queue might be ignored on switches.

If your Ixia is using a physical speed of 100 Mbps, and your test device also has a physical egress speed of 100 Mbps, no extended queue should form on the test device.  Possibly it might on the Ixia if it is generating 1000-byte frames at your noted PPS.

However, if ingress to the test device is gig but egress is FE, then with 1020 wire bytes, the egress will queue.

Some things to keep in mind about egress queuing: a software-based router should use the 1000-deep hold queue, but interfaces often have their own hardware tx queue.  Also, software-based routers do software-based buffer management.

I suspect switches may totally ignore your hold queue settings, and manage egress queuing via their hardware.

Basically, the foregoing is a long way of saying that the innate differences between software-based routers and hardware-based switches may account for your test result differences.
