
QoS issue on 2960x

Amafsha1
Level 2

I've noticed that all branches that have a 2960X switch don't seem to be able to download anything from the share back at the main office at more than 100-200k down; it seems to keep getting capped. Uploading from the branch to the file server back here is almost always solid, more than 1.5 meg, but I can't get a branch to download faster regardless of the circuit size. Any suggestions on what to look at? My co-worker told me it's a QoS issue (his guess, I think).

 

The QoS configs are below.  

 

 

!
mls qos map policed-dscp 0 10 18 24 46 to 8
mls qos map cos-dscp 0 8 16 24 32 46 48 56
mls qos srr-queue output cos-map queue 1 threshold 3 4 5
mls qos srr-queue output cos-map queue 2 threshold 1 2
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 2 threshold 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 0
mls qos srr-queue output cos-map queue 4 threshold 3 1
mls qos srr-queue output dscp-map queue 1 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue output dscp-map queue 1 threshold 3 46 47
mls qos srr-queue output dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 1 26 27 28 29 30 31 34 35
mls qos srr-queue output dscp-map queue 2 threshold 1 36 37 38 39
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 1 8 9 11 13 15
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 50 200
mls qos queue-set output 1 buffers 15 25 40 20
mls qos
!
!
spanning-tree mode pvst
spanning-tree extend system-id
auto qos srnd4
!
!
!
!
vlan internal allocation policy ascending
!
!
class-map match-all AUTOQOS_VOIP_DATA_CLASS
 match ip dscp ef
class-map match-all AUTOQOS_DEFAULT_CLASS
 match access-group name AUTOQOS-ACL-DEFAULT
class-map match-all AUTOQOS_VOIP_SIGNAL_CLASS
 match ip dscp cs3
!
policy-map AUTOQOS-SRND4-CISCOPHONE-POLICY
 class AUTOQOS_VOIP_DATA_CLASS
  set dscp ef
  police 128000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_VOIP_SIGNAL_CLASS
  set dscp cs3
  police 32000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_DEFAULT_CLASS
  set dscp default
  police 900000000 8000 exceed-action policed-dscp-transmit

!
!
!
A typical switchport; they're all configured the same everywhere:

 

Branch 2960x switch:

interface GigabitEthernet1/0/7
 switchport access vlan 20
 switchport voice vlan 300
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
 mls qos trust device cisco-phone
 mls qos trust cos
 auto qos voip cisco-phone
 spanning-tree portfast
 service-policy input AUTOQOS-SRND4-CISCOPHONE-POLICY
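One thing worth noting about the policy above: exceed-action policed-dscp-transmit does not drop traffic that exceeds the policers, it re-marks it using the policed-dscp map, so anything over the configured rates ends up as DSCP 8 (CS1), which this config places in queue 4. Whether the policers or the egress queue thresholds are actually biting can be checked with the per-port MLS QoS counters; this is only a sketch, using the example access port above (the uplink toward the router is worth checking the same way):

show mls qos interface gigabitethernet1/0/7 statistics
! "output queues dropped" shows per-queue tail drops; egress packets counted
! under dscp 8 would suggest the policers are re-marking traffic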

 

 

Thank you in advance.  This issue has been very difficult to resolve. 

9 Replies

Joseph W. Doherty
Hall of Fame
Are any of your interfaces showing drops?

What's the WAN physical topology?

We are connected to a RAD ETX device from the ISP (10 meg circuit), with an ISR4321 for our router and a 2960X for the switch. Even during hours when no one is working and the circuit isn't being used, we can never download anything from the share (the server at the main office) at more than 200k, but uploading something to the share is a consistent 1.5 meg or better. I also ran iPerf to verify this while comparing against another branch with the exact same setup, configs, hardware, and circuit; iPerf showed great results both ways there, but for the branch in question it only showed good results one way. There are drops, but they are negligible. The interface that connects to the ISP usually accumulates a small number of "total output drops", but it's something like two thousand packets over the span of a whole day, and the same happens at other sites.
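Since the router interface shows "Queueing strategy: Class-based queueing" (output further down), the per-class counters would tell you whether those output drops are concentrated in one class of an outbound service policy, for instance a shaper queue. A minimal sketch, assuming a policy is applied outbound on the ISP-facing interface:

Branch Router# show policy-map interface GigabitEthernet0/0/1 output
! look at the offered rate, drop rate, and queue depth per class; a class with a
! non-zero drop counter here accounts for the interface's "total output drops"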

 

 

OK, so I initiated a large file copy just now. Let's say the interface on our router that connects to the ISP is receiving, as shown below, at 1.65 Mbit/s, and when checking the trunk I see the same speed going down to the switch. Could that rule out the issue being on our side? Since the first entry point is only getting 1.65 meg and the trunk is receiving pretty much the same, that would show the packets are arriving at our first interface at that rate and there is nothing we can do about it. Sorry if that's a confusing question. If 1.65 is the rate that int g0/0/1 on the router reports, then that's the speed the traffic must be coming in at, right? It can't be getting throttled at the interface or anything, right?

 

Branch Router#sh int g0/0/1 hum | in 5 min
5 minute input rate 1.65 mega-bits/sec , 181 pps
5 minute output rate 320.0 kilobits , 163 pps
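A caveat when reading those numbers: the 5 minute rates are decayed averages, so a short transfer can under-report what is actually arriving. Dropping the load interval gives a near-real-time reading; a minimal sketch on the same interface:

Branch Router(config)# interface GigabitEthernet0/0/1
Branch Router(config-if)# load-interval 30
Branch Router(config-if)# end
Branch Router# show interfaces GigabitEthernet0/0/1 | include rate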

Router#

GigabitEthernet0/0/1 is up, line protocol is up
Hardware is ISR4321-2x1GE,
Description: //  to Eth 10 meg circuit ISP - MPLS
Internet address is 67.x.x.82/30
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 4/255
Encapsulation ARPA, loopback not set
Keepalive not supported
Full Duplex, 100Mbps, link type is force-up, media type is RJ45
output flow-control is on, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 02:32:10, output 00:00:36, output hang never
Last clearing of "show interface" counters 4d23h
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 10111
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
5 minute input rate 1722000 bits/sec, 198 packets/sec
5 minute output rate 470000 bits/sec, 173 packets/sec
38433963 packets input, 45823617870 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
32495651 packets output, 7238655767 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
5 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
Driver Configuration Block:
Admin State Up MTU 1500 Speed 100mbps Duplex full Autoneg Off Stats Interval 5

 


GigabitEthernet 0/0/1 Cumulative Statistics:

Input packet count 79044330
Input bytes count 87145158368
Input mcast packets 0
Input bcast packets 0
Input CRC errors 0
Input overruns 0
Runt packets 0
Giant packets 0
Input pause frames 0
Output packet count 70161100
Output bytes count 19084670970
Output mcast packets 21766
Output bcast packets 49880
Output underruns 0
Output pause frames 0

 

When I asked about WAN topology, I was hoping you could explain your WAN topology end-to-end. You mention branches, plural, and using an ISP, so are you tunneling across the Internet? Is the problem only seen at one branch?

We have MPLS service; we connect about 100 branches through our MPLS provider. Correct, this is only happening at this branch. I baselined a bunch of other branches that have 10 meg circuits and ran iPerf, and every branch is fine except this one. I engaged Cisco TAC, and they're saying the issue is coming from the LEC. The LEC has denied this for the third time after doing frame flood testing. This is starting to become really bizarre.

Does this branch have notably higher latency?

Has your bandwidth testing been TCP or UDP based?

Just took the circuit down and plugged our laptop directly into the LEC. Same results. All tests were done with UDP and TCP; same results.

Well, if you're unable to get the expected bandwidth using UDP, I too would wonder whether there's an issue within your provider's network. (Providers do occasionally have issues, which you sometimes have to prove to them before they'll resolve them.)

They kicked the ticket right back to us, saying we didn't specify a window size and our test must have defaulted to an 8k TCP window. I'm looking through the documentation to see what the default window size in iperf3 is, because it doesn't say in the CLI. I'll probably do some pcaps to find out.
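For rough context on why the window size matters: a single TCP flow is bounded by roughly window / round-trip time. Assuming, purely for illustration, a 40 ms RTT on the MPLS path:

max TCP throughput ≈ window / RTT
8 KB window:  (8192 x 8 bits) / 0.040 s ≈ 1.6 Mbit/s
64 KB window: (65536 x 8 bits) / 0.040 s ≈ 13 Mbit/s

So a small default window can cap a TCP test well below the circuit speed, which is why a UDP test (no window involved) is the cleaner way to demonstrate the 10 meg to the provider.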

 

Joseph, would you know the syntax for properly doing a udp iperf test? 

"Joseph, would you know the syntax for properly doing a udp iperf test? "

Sorry, I don't know off the top of my head, but RWIN doesn't come into play with UDP testing. Many factors impact TCP bandwidth testing, but since you used UDP too, and UDP just generates packets at a given bandwidth, the TCP factors don't apply.
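For reference, a UDP run in iperf3 is typically done along these lines; the target bandwidth, duration, and server address are placeholders, and -R reverses the direction so the branch can exercise the troublesome download path:

# on the server at the main office
iperf3 -s
# on the branch client: UDP, 10 Mbit/s target, 30 seconds (branch -> server)
iperf3 -c <server-ip> -u -b 10M -t 30
# same test reversed, so the server sends toward the branch (server -> branch)
iperf3 -c <server-ip> -u -b 10M -t 30 -R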