
Performance issue 10Gb/s to 1Gb/s (6880-X/6800-IA) 15.2(1)SY0a

hsticher
Level 1

For a TCP session in which a server sends data over a Te interface to a client connected on a Gi interface, throughput slows down significantly. This happens because the client cannot accept data as fast as the server sends it, and the switch drops the excess packets. The client therefore asks the server to resend the lost packets, which slows the throughput between server and client down to about 25% of the nominal rate of the 1 Gb/s interface.

Server -->10Gbit/s--> C6880-X -->10Gbit/s--> C6800-IA -->1Gbit/s--> Client

Is there any way to configure flow control between the client and the switch to reduce the drops and get better performance?
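(For reference, a minimal sketch of what link-level 802.3x flow control looks like on a Catalyst port; the interface name is illustrative, and as the replies below point out, pause frames only act between directly connected link partners, so they would not slow the server down end to end:)

interface GigabitEthernet101/2/0/31
 ! honor pause frames received from the client
 flowcontrol receive on
 ! sending pause frames toward the client (flowcontrol send on)
 ! is only available on some platforms/line cards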


13 Replies

I don't think flow control can help in this situation.

Flow control only works between directly connected link partners, and the design itself does not look good: you may end up with a lot of output drops on the Gig interface of the switch.

 

Hope this helps.

 

Thanks,

Madhu
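(A quick way to confirm where the drops occur is the output drop counter on the client-facing port; the interface name here is illustrative:)

show interfaces GigabitEthernet101/2/0/31 | include output drops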

 

Sorry, but I'm not sure I understand what you mean. I'm looking for a way to set up the communication (TCP stream) between server (W2k12) and client (W7/W8) so that the drops on the client side are minimized, in other words to reduce the server's sending rate to a level the client is able to accept.

Kind regards

Hans-Juergen

If a client connected to a 6800-IA port starts a download from a server connected on a 10Gb interface, the 6800-IA port drops all traffic above roughly 250 Mb/s, because there is only one default queue per 6800-IA port and it cannot buffer the burst of traffic coming from the server.
In other words, you will never reach 1 Gb/s on a 6800-IA port when a 10Gb server is sending the data. If you need a 1 Gb/s download rate for your clients, you need different switches for them.
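(A rough back-of-the-envelope calculation, with illustrative numbers, shows why a small egress buffer collapses under this speed mismatch: a 1 MB burst sent at 10 Gb/s arrives within about 0.8 ms, but takes about 8 ms to drain out of the 1 Gb/s port. During the burst the queue must therefore absorb roughly (10 Gb/s - 1 Gb/s) x 0.8 ms ≈ 0.9 MB, far more than a small per-port queue can hold, so the tail of every burst is dropped and TCP backs off.)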

 

Answer from Cisco: it is how it is; there is no way to prevent the drops towards 6800-IA clients and achieve good throughput when sending data from the 10G server to a 1G client.

Instant Access host ports and fex-fabric uplink ports operate in trust-DSCP mode with no packet modification. The C6800IA supports 4 queues, 1 priority and 3 standard (1P3Q3T), on C6800IA host ports and on the 10G fabric uplinks towards the Instant Access parent. A packet already marked with DSCP is queued based on the preconfigured default table-map shown in the following output, i.e. into the priority queue or a standard queue according to its DSCP marking.

H-C240-F26-S02#show queueing interface gigabitEthernet101/2/0/31
Interface GigabitEthernet101/2/0/31 queueing strategy:  Weighted Round-Robin

  Port QoS is disabled globally
  Queueing on Gi101/2/0/31: Tx Enabled Rx Disabled

Trust boundary disabled

  Trust state: trust DSCP
  Trust state in queueing: trust DSCP
  Default COS is 0
    Queueing Mode In Tx direction: mode-dscp
    Transmit queues [type = 1p3q3t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         Priority            3
       2         WRR                 3
       3         WRR                 3
       4         WRR                 3

Markus Benz
Level 1
Level 1

Hi,

I was running into the same problem on a customer implementation.

This seems to be a bug in the 15.2SY releases.

I currently have a TAC case open for it.

 

You can downgrade to 15.1SY and the problems should disappear.

If you're not running more than 21 IA modules, you should probably downgrade until the bug is fixed.

 

Please post any solution in case you found a workaround for the 15.2SY release.

 

Thanks,

Markus

Hi,

 

I have the same problem at two customers with 6880 and IA, and currently also have a case open with Cisco, with no results yet. Could you please share the resulting bug ID from your case so I can point my TAC engineer in the right direction?

 

br

Andy

Update:

I just put the following configuration into my customer's environment, which seems to solve the problem for the moment.

Adapt this config to your QoS settings.

 

class-map type lan-queuing match-any TEST-32
 match dscp 32
class-map type lan-queuing match-any TEST-24
 match dscp 24 
 
policy-map type lan-queuing TEST
 class type lan-queuing TEST-32
    priority
    queue-buffers ratio 20
 class type lan-queuing TEST-24
    bandwidth remaining percent 10
    queue-buffers ratio 20
 class class-default
    queue-buffers ratio 60

interface GigabitEthernet1xx/x/x/x
 service-policy type lan-queuing output TEST
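(To confirm the new allocation actually reached the FEX port, the buffer split can be checked from the parent switch; the FEX number and port below are illustrative, and the command itself appears later in this thread:)

remote command fex 101 show mls qos interface gi1/0/1 buffers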

 

Hope this helps...

Regards,

Markus

Many thanks for your answer. I tried what you suggested, but it results in the following error message:
H-C240-F26-S02(config)#int gi101/2/0/31
H-C240-F26-S02(config-if)# service-policy type lan-queuing output TEST


  HQM features are not supported in interface GigabitEthernet101/2/0/31
  MQC features are not supported for this interface

 

Interface configuration:
H-C240-F26-S02#sh run int gi101/2/0/31
Building configuration...

Current configuration : 294 bytes
!
interface GigabitEthernet101/2/0/31
 description F34-01
 switchport
 switchport trunk allowed vlan 1,6
 switchport mode access
 switchport port-security maximum 2
 switchport port-security violation restrict
 load-interval 30
 spanning-tree portfast edge
 spanning-tree bpduguard enable
end

I didn't find any explanation from Cisco about what this means and how to circumvent the issue. Do you have any idea?

Hi,

yes, you need the latest software release. I think buffer tuning was only added in the latest release and was not possible before. I have now fine-tuned the buffer allocation for my environment and we achieve line rate. So this should really solve the problem.

Regards,
Markus

Hi,

thanks for your answer.

After sorting out the ISSU/eFSU upgrade disaster, I finally upgraded both VSS switches to version 15.2(1)SY1 and the FEX stacks to version 15.2(3m)E1.
After that, I configured the switches and FEX as Cisco advised.

Router(config)#
class-map type lan-queuing match-all cs1
match dscp cs1
 
class-map type lan-queuing match-all cs2
match dscp cs2
 
policy-map type lan-queuing policy1
class type lan-queuing cs1
priority
class type lan-queuing cs2
    bandwidth remaining percent 10

Router(config)#int gi101/2/0/31
Router(config-if)#service-policy type lan-queuing output policy1
Propagating [attach] lan queueing policy "policy1" to Gi101/1/0/1,....

Router(config-if)#no service-policy type lan-queuing output policy1
Propagating [remove] lan queueing policy "policy1" to Gi101/1/0/1,....


Router(config)#do remote command fex 101 show mls qos interface gi2/0/31 queueing

GigabitEthernet2/0/31
Egress Priority Queue : enabled
Shaped queue weights (absolute) :  0 0 0 0
Shared queue weights  :  0 84 84 84
The port bandwidth limit : 100  (Operational Bandwidth:100.0)
The port is mapped to qset : 1
    
Now the tremendous drop counts and retransmissions that throttled the throughput are history.
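(The per-queue enqueue/drop counters on the FEX port can be re-checked the same way after tuning, to verify the drops stay at zero; FEX and port numbers as used above:)

remote command fex 101 show mls qos interface gi2/0/31 statistics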

Accepted Solution

Hi...

good to hear it's working.

This is actually not the Cisco-recommended config; it is just an example of how to tune the buffers (which resolves the issue). The problem is that the default allocations Cisco provides are not good.

So you should tune the buffer allocation according to your QoS configuration.
If you don't run QoS and there is just default traffic, you should assign as much as possible to the default queue.
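(A minimal sketch of that idea, assuming the platform accepts a class-default-only lan-queuing policy; the policy name and ratio value are illustrative:)

policy-map type lan-queuing MOSTLY-DEFAULT
 class class-default
  ! give the default queue the bulk of the port buffer
  queue-buffers ratio 90

interface GigabitEthernet1xx/x/x/x
 service-policy type lan-queuing output MOSTLY-DEFAULT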

Let me know if you need help with it.

PS: Please mark correct answers...

Regards,
Markus

Hi chaps,

I decided to re-open this thread as it seems to be quite useful, so let's just add some more content to it.

Right, so, in short: I noticed some trouble with traffic marked DSCP 0x08 (which is CS1). By default, no 6800IA buffers are allocated for that queue (strange?!), so I allocated them as per this document. However, that didn't help much.

 

Here is the config that I used:

class-map type lan-queuing match-any priority
 match dscp cs4 33 cs5 41 42 43 44 45 
 match dscp ef 47 
!
class-map type lan-queuing match-any queue-4
 match dscp 8 9 11 13 15 
 match dscp af11 af12 af13 
!
class-map type lan-queuing match-any queue-3
 match dscp 1 2 3 4 5 6 7
 match dscp 25 
!
policy-map type lan-queuing FEX-QOS-OUT
class priority
 priority
 queue-buffers ratio 10
class queue-4
 bandwidth remaining percent 30
 queue-buffers ratio 30
class queue-3
 bandwidth remaining percent 30
 queue-buffers ratio 30
class class-default
 queue-buffers ratio 30
!

interface gi 107/1/0/1
 service-policy type lan-queuing output FEX-QOS-OUT

And here are the results:

#show queueing interface gi107/1/0/1                    

Interface GigabitEthernet107/1/0/1 queueing strategy:  Weighted Round-Robin
  Port QoS is enabled globally
  Queueing on Gi107/1/0/1: Tx Enabled Rx Disabled

Trust boundary disabled
  Trust state: trust COS
  Trust state in queueing: trust DSCP
  Default COS is 0
    Class-map to Queue in Tx direction

    Class-map           Queue Id
    ----------------------------
    priority                  1
    queue-4                   4
    queue-3                   3
    class-default             2
    Queueing Mode In Tx direction: mode-dscp

    Transmit queues [type = 1p3q3t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         Priority            3
       2         WRR                 3
       3         WRR                 3
       4         WRR                 3

    WRR bandwidth ratios:   40[queue 2]  30[queue 3]  30[queue 4] 
    queue-limit ratios:     10[Pri Queue]  30[queue 2]  30[queue 3]  30[queue 4] 
    queue thresh dscp-map
    ---------------------------------------
    1     1      32 33 40 41 42 43 44 45 46 47 
    1     2      
    1     3      
    2     1      0 16 17 18 19 20 21 22 23 24 26 27 28 29 30 31 34 35 36 37 38 39 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 
    2     2      
    2     3      
    3     1      1 2 3 4 5 6 7 25 
    3     2     
    3     3      
    4     1      8 9 10 11 12 13 14 15 
    4     2      
    4     3      

#remote command fex 107 show mls qos int gi1/0/1 queuein    

GigabitEthernet1/0/1
Egress Priority Queue : enabled
Shaped queue weights (absolute) :  25 0 0 0
Shared queue weights  :  0 102 76 76
The port bandwidth limit : 100  (Operational Bandwidth:100.0)
The port is mapped to qset : 1 

#remote command fex 107 show mls qos int gi1/0/1 buffers

GigabitEthernet1/0/1
The port is mapped to qset : 1 
The allocations between the queues are : 10 30 30 30  

#remote command fex 107 show mls qos int gi1/0/1 stati      

GigabitEthernet1/0/1 (All statistics are in packets)
[cut]
  output queues enqueued: 
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:        4160         673        1122 
 queue 1:     2310497           0           0 
 queue 2:     3995878           0        3057 
 queue 3:     4633158           0           0 
          
  output queues dropped: 
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0 
 queue 1:         591           0           0 
 queue 2:           0           0           0 
 queue 3:        2850           0           0 

 My short questions are:

 

1. Do you see anything wrong with my config?

2. Why are there packet drops observed in queue 1 and queue 3 when the traffic I am generating is less than 100 Mbps, which in theory shouldn't cause any congestion whatsoever?

3. Is there any way to manipulate the thresholds (perhaps using random-detect within the classes)? I actually tried that, but my config was removed from the FEX interfaces. I couldn't find anything written about it.

 

Many thanks in advance for your time.

D.
