C3560 Etherchannel load balance issue

vmanhng01
Level 1

Dear all,

In my company we use C3560X switches, and a port-channel is configured between the two core switches as shown below:

MOB_CAM_SVR_L3_1_99.61# sh run int po20
Building configuration...

Current configuration : 140 bytes
!
interface Port-channel20
description [Trunk to MOB_CAM_SVR_L3_2_99.62]
switchport trunk encapsulation dot1q
switchport mode trunk
end

--------------------

MOB_CAM_SVR_L3_1_99.61#sh int po20
Port-channel20 is up, line protocol is up (connected)
Hardware is EtherChannel, address is d8b1.9032.f6b0 (bia d8b1.9032.f6b0)
Description: [Trunk to MOB_CAM_SVR_L3_2_99.62]
MTU 1500 bytes, BW 6000000 Kbit, DLY 10 usec,
reliability 255/255, txload 67/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is unknown
input flow-control is off, output flow-control is unsupported
Members in this channel: Gi0/43 Gi0/44 Gi0/45 Gi0/46 Gi0/47 Gi0/48
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 17:51:20
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 22966759
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 46105000 bits/sec, 18605 packets/sec
5 minute output rate 1593771000 bits/sec, 186491 packets/sec
1247115948 packets input, 362800707662 bytes, 0 no buffer
Received 1479979 broadcasts (1407265 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1407265 multicast, 0 pause input
0 input packets with dribble condition detected
12139453377 packets output, 12961881582539 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

----------------------

Abbreviated "sh interface" output for each member of the port-channel:

GigabitEthernet0/43 is up,
Total output drops: 5677292
5 minute input rate 11180000 bits/sec, 5031 packets/sec
5 minute output rate 426688000 bits/sec, 51771 packets/sec
GigabitEthernet0/44 is up,
Total output drops: 6450945
5 minute input rate 14099000 bits/sec, 5318 packets/sec
5 minute output rate 412218000 bits/sec, 49959 packets/sec
GigabitEthernet0/45 is up,
Total output drops: 4454822
5 minute input rate 9957000 bits/sec, 4477 packets/sec
5 minute output rate 302825000 bits/sec, 34663 packets/sec
GigabitEthernet0/46 is up,
Total output drops: 111606
5 minute input rate 1659000 bits/sec, 1923 packets/sec
5 minute output rate 148701000 bits/sec, 19528 packets/sec
GigabitEthernet0/47 is up,
Total output drops: 2980285
5 minute input rate 6586000 bits/sec, 1050 packets/sec
5 minute output rate 163475000 bits/sec, 16400 packets/sec
GigabitEthernet0/48 is up,
Total output drops: 2934729
5 minute input rate 6039000 bits/sec, 924 packets/sec
5 minute output rate 162015000 bits/sec, 16298 packets/sec

Total traffic through this Po is about 1.7 Gbps, but the bandwidth is not shared evenly across the 6 member ports of the Po, and we are getting too many output drops.
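For reference, here is a quick tally of the 5-minute output rates pasted above (with an even split, each of the 6 members would carry roughly 270 Mbps, about 16.7% of the total). This is just a small Python sketch using only the numbers already shown:

# Quick tally of the per-member 5-minute output rates pasted above.
rates_bps = {
    "Gi0/43": 426_688_000, "Gi0/44": 412_218_000, "Gi0/45": 302_825_000,
    "Gi0/46": 148_701_000, "Gi0/47": 163_475_000, "Gi0/48": 162_015_000,
}
total = sum(rates_bps.values())
print(f"total: {total / 1e9:.2f} Gbps")  # ~1.62 Gbps across the bundle
for port, bps in rates_bps.items():
    # an even split would be ~16.7% per member
    print(f"{port}: {bps / 1e6:6.1f} Mbps ({bps / total:5.1%})")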

Below is the load-balance type configured on the device:

MOB_CAM_SVR_L3_1_99.61#sh etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
IPv4: Source XOR Destination IP address
IPv6: Source XOR Destination IP address

Does anybody know how to share the bandwidth across the 6 ports more symmetrically?

Thank you for reading.


5 Replies

vmanhng01
Level 1

Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches - Cisco

"i don'n understand why Gi port 1000M/s capacity, but when traffic up to ~400-500M/s it got many drop packet like that?"

That's often due to "bursting". Your 400-500 Mbps is an overall average over some time period, such as 5 minutes; bursts happen at the millisecond level. (You might also want to research "microbursts".)

The Catalyst 3560/3750 are a bit infamous for interface drops because they don't have much buffer RAM, and their buffer management, especially when default QoS is enabled, "reserves" RAM per port (and per QoS queue). Sometimes reconfiguring the buffer settings can dramatically reduce interface drops. (I had one case where I went from multiple drops per second to a few drops per day.)
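As a rough back-of-the-envelope illustration of why a short burst can overflow a port's egress buffer even when the average is only ~400-500 Mbps, here is a sketch. The per-port buffer value is an assumed round number for illustration, not a 3560X specification:

# Rough illustration of why short bursts overflow a small egress buffer
# even when the 5-minute average is well below line rate.
# The buffer size below is an ASSUMED illustrative value, not a 3560X spec.
LINE_RATE_BPS = 1_000_000_000        # 1 Gbps member link
ASSUMED_PORT_BUFFER_BYTES = 200_000  # assumed ~200 KB usable egress buffer per port

def burst_backlog_bytes(burst_rate_bps, burst_ms, drain_rate_bps=LINE_RATE_BPS):
    """Bytes queued when a burst arrives faster than the port can drain it."""
    arrived = burst_rate_bps / 8 * burst_ms / 1000
    drained = drain_rate_bps / 8 * burst_ms / 1000
    return max(arrived - drained, 0)

# Example: several senders burst 3 Gbps toward one member port for 1 ms.
backlog = burst_backlog_bytes(burst_rate_bps=3_000_000_000, burst_ms=1)
print(f"backlog after 1 ms burst: {backlog:,.0f} bytes")
print("overflow -> output drops" if backlog > ASSUMED_PORT_BUFFER_BYTES else "fits in buffer")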

Hi,

 

EtherChannel does not balance load evenly. Having two physical interfaces in an EtherChannel does not mean each will carry 50% of the traffic: for each flow (where a flow is determined by the EtherChannel load-balance algorithm), one and only one physical interface is used. As I can see, in your case the algorithm is src-dst-ip, so between the same pair of nodes the port-channel will use the same physical interface even for different applications.
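To make this concrete, here is a minimal toy model of a src-dst-ip hash in Python. It is not the 3560X's actual hash: the source-XOR-destination idea and the 8-bucket allocation (which, per the Cisco EtherChannel load-balancing document linked earlier in the thread, gives a 6-member bundle a 2:2:1:1:1:1 split) come from the documentation, while the IP addresses and everything else are made up for illustration:

# Minimal sketch of src-dst-ip EtherChannel hashing (illustrative, not the
# switch's exact hash). Key points it demonstrates:
#  1. every flow between the same two hosts maps to the same member link,
#  2. with 6 members the 8 hash buckets cannot be spread evenly (2:2:1:1:1:1).
import ipaddress
from collections import Counter

MEMBERS = ["Gi0/43", "Gi0/44", "Gi0/45", "Gi0/46", "Gi0/47", "Gi0/48"]

# 8 hash buckets mapped onto 6 members: two members own 2 buckets each.
BUCKET_TO_MEMBER = [MEMBERS[i] for i in (0, 0, 1, 1, 2, 3, 4, 5)]

def member_for(src_ip: str, dst_ip: str) -> str:
    """XOR the source and destination IPs and keep the low 3 bits (8 buckets)."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return BUCKET_TO_MEMBER[h & 0b111]

# IP addresses below are made up for illustration.
# Every session between these two hosts lands on the same member link:
print(member_for("10.0.99.61", "10.0.99.10"))

# Across many host pairs the spread is only roughly proportional to bucket count:
load = Counter(member_for(f"10.0.1.{i}", "10.0.99.10") for i in range(1, 255))
print(load)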

Unfortunately, the 3560X does not support a load-balancing algorithm based on L4 information.

 

Note: the output drops may be an IOS bug, so you may not actually have drops. Search for known bugs for your IOS version.

 

HTH,
Please rate and mark as an accepted solution if you have found any of the information provided useful.

Thank you, Kanan Huseynli.

As I checked on other models (C6509 and 4506), traffic is also not symmetrical across the EtherChannel member ports (with 2-, 4-, and 8-port bundles).

I'll try to search for an IOS bug for this issue.

On higher-end Catalyst switches you can configure an algorithm based on L4 parameters such as src-dst-port. There are even options that mix different types of mechanisms (both IP and port).
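For comparison, here is the same toy hash extended with L4 ports, the way a src-dst-port style algorithm would use them on platforms that support it. Again this is only a sketch with made-up addresses and port numbers, not any platform's actual hash:

# Illustrative only: the toy hash keyed also on L4 ports, as higher-end
# platforms allow. Same two hosts, different TCP ports -> flows can spread
# across different member links.
import ipaddress

MEMBERS = ["Gi0/43", "Gi0/44", "Gi0/45", "Gi0/46", "Gi0/47", "Gi0/48"]
BUCKET_TO_MEMBER = [MEMBERS[i] for i in (0, 0, 1, 1, 2, 3, 4, 5)]

def member_with_l4(src_ip, dst_ip, src_port, dst_port):
    h = (int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
         ^ src_port ^ dst_port)
    return BUCKET_TO_MEMBER[h & 0b111]

# Same two hosts, three different applications (hypothetical ports):
for dport in (80, 554, 8443):
    print(dport, member_with_l4("10.0.99.61", "10.0.99.10", 40000, dport))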

 

In any case, for a given flow the port-channel will always choose one physical link.

 

Actually, to test this in reality you need heavy traffic between the same nodes but from different applications. If the hashing result does not put two different flows on the same port, you can track the interface statistics and get the approximate load on each physical port.
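If it helps, a rough sketch like the one below can turn two per-member output-counter snapshots (for example OutOctets from "show interfaces counters", or ifHCOutOctets over SNMP) into an approximate load share. The counter values shown are placeholders, not real readings:

# Rough helper for estimating per-member load share from two counter snapshots.
# Counter values are placeholders; collect real ones from
# "show interfaces counters" (OutOctets) or SNMP ifHCOutOctets.
INTERVAL_SECONDS = 300

snapshot_t0 = {"Gi0/43": 0, "Gi0/44": 0, "Gi0/45": 0,
               "Gi0/46": 0, "Gi0/47": 0, "Gi0/48": 0}
snapshot_t1 = {"Gi0/43": 16_000_000_000, "Gi0/44": 15_500_000_000,
               "Gi0/45": 11_300_000_000, "Gi0/46": 5_600_000_000,
               "Gi0/47": 6_100_000_000, "Gi0/48": 6_000_000_000}

deltas = {port: snapshot_t1[port] - snapshot_t0[port] for port in snapshot_t0}
total = sum(deltas.values())
for port, octets in sorted(deltas.items()):
    mbps = octets * 8 / INTERVAL_SECONDS / 1_000_000
    print(f"{port}: {mbps:8.1f} Mbps  ({octets / total:5.1%} of channel)")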

HTH,
Please rate and mark as an accepted solution if you have found any of the information provided useful.