Etherchannels

benolyndav
Level 4

Hi,

See the txload/rxload lines in the output below. These two ports are members of a two-port EtherChannel, and I am wondering why there is such a difference in the txload and rxload numbers. Any help would be appreciated.

 

Thanks

 

SWITCH-STACK#sh int Gi1/0/11

GigabitEthernet1/0/11 is up, line protocol is up (connected)

  Hardware is Gigabit Ethernet, address is 00cc.fc35.888b (bia 00cc.fc35.888b)

  Description: INSIDE

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 1/255, rxload 93/255

  Encapsulation ARPA, loopback not set

  Keepalive set (10 sec)

  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX

  input flow-control is off, output flow-control is unsupported

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input 00:00:11, output 00:00:02, output hang never

  Last clearing of "show interface" counters 1d00h

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  5 minute input rate 364765000 bits/sec, 75167 packets/sec

  5 minute output rate 0 bits/sec, 0 packets/sec

     3165755027 packets input, 2006993785633 bytes, 0 no buffer

     Received 3136 broadcasts (3136 multicasts)

     0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

     0 watchdog, 3136 multicast, 0 pause input

     0 input packets with dribble condition detected

 

SWITCH-STACK#sh int Gi2/0/11

GigabitEthernet2/0/11 is up, line protocol is up (connected)

  Hardware is Gigabit Ethernet, address is 00cc.fc2f.400b (bia 00cc.fc2f.400b)

  Description: OUTSIDE

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 193/255, rxload 101/255

  Encapsulation ARPA, loopback not set

  Keepalive set (10 sec)

  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX

  input flow-control is off, output flow-control is unsupported

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input never, output 00:00:29, output hang never

  Last clearing of "show interface" counters 1d00h

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 131296062

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  5 minute input rate 399390000 bits/sec, 72003 packets/sec

  5 minute output rate 758687000 bits/sec, 145784 packets/sec

     2655861862 packets input, 1844297527152 bytes, 0 no buffer

     Received 109900 broadcasts (3142 multicasts)

     0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

     0 watchdog, 3142 multicast, 0 pause input

     0 input packets with dribble condition detected

     5801364010 packets output, 3853250820809 bytes, 0 underruns

     0 output errors, 0 collisions, 0 interface resets

     0 unknown protocol drops

     0 babbles, 0 late collision, 0 deferred

     0 lost carrier, 0 no carrier, 0 pause output

     0 output buffer failures, 0 output buffers swapped out


Scott Hodgdon
Cisco Employee

MassB,

Not knowing which switches these are, I can only comment in a general sense.

All Cisco Catalyst switches (and probably Nexus and IE switches as well, but I never worked on those) do per-flow load balancing. The load balancing is determined by an algorithm that looks at some combination of MAC, IP, and L4 port information. It is very possible that multiple flows are hashed to the same link, depending on what that information is. Also, if one flow is 100 Mbps, the next is 1 Mbps, and the next is 100 Mbps, it is possible that the two 100 Mbps flows both use one link (based on the MAC, IP, and L4 port information) while the 1 Mbps flow uses the other link.

To get the most even distribution, it is best to use the most information possible in the hash function. If you want to change the information used in the hash, this can be done via the CLI. Also consider that the two ends of an EtherChannel can use different hash algorithms.
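For example, on many Catalyst switches the hash input is a single global setting. A minimal sketch, assuming an IOS-based Catalyst (the exact method names offered vary by platform and software release):

! list the methods this platform supports
SWITCH(config)# port-channel load-balance ?
! pick one, then verify the active method
SWITCH(config)# port-channel load-balance src-dst-ip
SWITCH# show etherchannel load-balance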

I hope this helps explain why you may be seeing this discrepancy in link utilization.

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking and Cloud Group

Richard Burts
Hall of Fame

I believe that @Scott Hodgdon makes several good points, especially that load balancing is done per flow. So it is quite possible that one member of the EtherChannel will have more traffic than the other, both because that member gets more flows and because some flows may carry much more traffic than others.

 

I would add two other observations:

- We frequently describe EtherChannel as load balancing, but it would be more accurate to describe it as load sharing. Balancing implies equalizing the load (and I believe that is the basis of the original post), but EtherChannel does not attempt to equalize the load; it shares the load and, quite importantly, provides redundancy, so that if one member of the EtherChannel fails the traffic will be carried on the other member and connectivity will be maintained.

- txload is the result of hashing on our side; rxload reflects hashing done by the peer at the other end of the EtherChannel. So differences between txload and rxload should not be surprising.

HTH

Rick

Leo Laohoo
Hall of Fame
GigabitEthernet1/0/11 is up, line protocol is up (connected)
     reliability 255/255, txload 1/255, rxload 93/255

That is Gi 1/0/11, right?

GigabitEthernet2/0/11 is up, line protocol is up (connected)
     reliability 255/255, txload 193/255, rxload 101/255

And this is Gi 2/0/11, correct? 

GigabitEthernet2/0/11 is up, line protocol is up (connected)
 Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 131296062

But do not forget this.  "Total output drops" is equally important.

The 24-hour Total output drops value is disconcerting. 

This is one reason why Gi 2/0/11 is transmitting while Gi 1/0/11 is not. Maybe the downstream client's EtherChannel is not configured to receive inbound traffic (from the switch to the client) on the other port (Gi 1/0/11)?
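One way to see whether those drops are still actively incrementing (a sketch; the interface name is taken from the output above) is to clear the counters and check again after a few minutes:

SWITCH-STACK# clear counters GigabitEthernet2/0/11
SWITCH-STACK# show interfaces GigabitEthernet2/0/11 | include output drops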

 

 

Hi Leo

These are two 2960-X switches in a stack, and the downstream device is an FTD.

This is the load-balancing info below:

SWITCH-STACK#show etherchannel load-balance

EtherChannel Load-Balancing Configuration:

        src-mac

 

EtherChannel Load-Balancing Addresses Used Per-Protocol:

Non-IP: Source MAC address

  IPv4: Source MAC address

  IPv6: Source MAC address

--------------------------------------------------------------------------------------

Could this be causing issues? We have been seeing some slowness of late.

Total output drops: 131296062

Well, you're in luck.  @Joseph W. Doherty is our in-house QoS specialist (outside Cisco TAC).  

Joseph W. Doherty
Hall of Fame

Not all Cisco switches offer the same EtherChannel hashing algorithms, nor do they all use the same default.

So it's a good idea to ensure the best hashing algorithm is being used for your traffic (NB: sometimes the switch doesn't offer a hashing algorithm suitable for your traffic).

If src-dst-ip is an available choice, it's often a good, or the best, choice.

Hi Joseph

These are two 2960-X switches in a stack, and the downstream device is an FTD.

This is the load-balancing info below:

SWITCH-STACK#show etherchannel load-balance

EtherChannel Load-Balancing Configuration:

        src-mac

 

EtherChannel Load-Balancing Addresses Used Per-Protocol:

Non-IP: Source MAC address

  IPv4: Source MAC address

  IPv6: Source MAC address

--------------------------------------------------------------------------------------

Could this be causing issues? We have been seeing some slowness of late.

Total output drops: 131296062

 

Possibly a contributing factor.

src-mac works for multiple hosts in the same L2 domain transmitting traffic. It's a bad choice for a single host transmitting traffic to multiple hosts. (It, and dst-mac, were, I recall, among the only EtherChannel options on some early low-end switches.)

Try configuring "port-channel load-balance src-dst-ip". (This is a good option except for single host to single host with multiple flows. In that case you want an option that uses port numbers too, but generally that is only found on higher-end switches.)
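On a 2960-X that would be a single global command. A minimal sketch, reusing the stack name from the output above (note the change applies to every EtherChannel on the switch):

SWITCH-STACK# configure terminal
SWITCH-STACK(config)# port-channel load-balance src-dst-ip
SWITCH-STACK(config)# end
! should now report src-dst-ip
SWITCH-STACK# show etherchannel load-balance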

If that doesn't help, or doesn't decrease your drop rate enough, we can try some buffer tuning, which requires activating QoS. Do you know whether QoS is enabled on your switch? (Look for "mls qos" in your config, or at the results of "show mls qos int X".)
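A quick way to check, as a sketch (Gi2/0/11 is just one example member port):

SWITCH-STACK# show mls qos
SWITCH-STACK# show running-config | include mls qos
SWITCH-STACK# show mls qos interface GigabitEthernet2/0/11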

BTW, QoS buffer tuning might be worth considering to reduce the drop rate even if we get a good load share on both links (which in turn may reduce drops). Why? Well, if you lose a link, you ideally don't want performance to tank needlessly while running on just one link.

Hi

No QoS on the 2960-X switches. I'm considering src-dst-ip; you say it's not a good choice for single host to single host. Would a 2960-X stack to an FTD HA pair be included in that? Also, the port channel on the switch stack goes to Port-channel 10 on the FTD, with subinterfaces .50, .60, and .70. If I change the load-balancing algorithm on the switch side, will it drop the interfaces on either device on each side of the channel? I have also noticed that the FTD seems to be load balancing to the switch stack very well; it looks like a pretty even split. Do the FTDs use src-dst-ip? I can't find it anywhere.

". . . i'm considering src-dst-ip, you say nota good choice for single host to single host . . ."

Correct, because you want the selected hash attributes to vary. If they don't vary, all the flows with the same values map to the same link. So, assuming each of those single hosts uses a single IP, nothing varies when using src-dst-ip.

". . . would 29060x stack to FTD HA be included in that, ??"

The switch, although it physically transmits across the EtherChannel, is not the actual source of the traffic, correct? It's the sending host(s) that are of interest.

". . . if i change the load balancing algorithm on the Switch side will it drop the interfaces on either devices each side of the Channel . . ."

I don't believe it will drop a link, but possibly a few in-flight packets might be lost.

Hi

Yes, these are all different hosts sending traffic from the LAN, which can be seen in the firewall logs. Just one thing, though: the switch has a connection to a router downstream, so it's distribution stack > router > 2960 stack > FTDs. Can you see any issue here with src-dst-ip?

If they're different hosts with different IPs, src-dst-ip should be fine. Where you can run into an issue is when using a -mac choice with a router involved, as traffic that transits the router will use the router's MAC as the src or dst.
