
Core - 6509 Etherchannel to Nexus 5020

Bilal Nawaz
VIP Alumni

Hi Everyone,

I'm not too familiar with Nexus 5Ks, so I'm seeking some advice. At the moment we have a port-channel (8 individual interfaces bundled) between a 6509 and the Nexus 5020. What is the best setup for maximum throughput, and how should I be configuring the Nexus 5020 and the 6509?

Before this I'd like to share my experience with the 6509 and a server/workstation forming a port-channel. These used two interfaces. The server/workstation has teamed NICs in SLB (load-balancing) mode. For the port-channel to come up I had to set the mode to 'on' ('enables the EtherChannel unconditionally'). The server wants to negotiate LACP but it wouldn't work when the EtherChannel mode was 'active'. So I'm not sure if I'm correct in saying I should do likewise on the Nexus 5020, since it tries to negotiate LACP? Its mode is currently 'active'.
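
For reference, a rough summary of the channel-group keywords as I understand them (exact behaviour may vary by platform and release):

! static EtherChannel, no negotiation protocol at all
channel-group 100 mode on
! LACP, actively initiates negotiation
channel-group 100 mode active
! LACP, only responds if the peer initiates
channel-group 100 mode passive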

Thanks for your help in advance.

Bilal

Please rate useful posts & remember to mark any solved questions as answered. Thank you.
10 Replies

Mathias Garcia
Level 1

Hi Bilal!

I'm not sure that I understand what you're asking for.

If it's regarding what type of load balancing to use, that depends on what your traffic patterns look like. If you have a single big talker, using src-mac won't be very useful. Neither would dst-mac if most of the traffic goes to the same default gateway. Both the Nexus and the 6500 support load balancing on source-destination port, which I would think gives the best distribution in most cases.
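
A rough sketch of how the hash could be changed globally to include the L4 ports (the exact keywords depend on supervisor and NX-OS release, so treat this as something to verify):

! Catalyst 6509 (IOS), global configuration
port-channel load-balance src-dst-port

! Nexus 5020 (NX-OS), global configuration
port-channel load-balance ethernet source-dest-port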

As to whether to use LACP active or not, two endpoints in active mode should negotiate up an etherchannel. Why this didn't work with your servers I can't say.

But I would just go ahead and configure the etherchannel as 'on' on both the Nexus 5020 and the Catalyst 6509.

HTH

Mathias

At the moment we have the following:

Nexus

Nexus03# show int po100
port-channel100 is up
  Hardware is Port-Channel, address is 000d.ecd9.0ac8 (bia 000d.ecd9.0ac8)
  Description: ## Uplink to 6509-01 - Data Fabric ##
  MTU 1500 bytes, BW 8000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 8/255, rxload 11/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 1000 Mb/s
  Input flow-control is off, output flow-control is off
  Switchport monitor is off
  Members in this channel: Eth1/1, Eth1/2, Eth1/3, Eth1/4, Eth1/5, Eth1/6, Eth1/7, Eth1/8
  Last clearing of "show interface" counters 41w5d
  5 minute input rate 44546843 bytes/sec, 44538 packets/sec
  5 minute output rate 34269495 bytes/sec, 40105 packets/sec
  Rx
    3080357366142 input packets 3078984521387 unicast packets 833262709 multicast packets
    539582046 broadcast packets 2321415825480 jumbo packets 0 storm suppression packets
    3679360157690117 bytes
    0 No buffer 0 runt 0 Overrun
    0 crc 0 Ignored 0 Bad etype drop
    0 Bad proto drop
  Tx
    2628843973362 output packets 557908510 multicast packets
    422533328 broadcast packets 1512255475455 jumbo packets
    4883593741115762 bytes
    12895928 output CRC 0 ecc
    0 underrun 0 if down drop     0 output error 0 collision 0 deferred
    0 late collision 0 lost carrier 0 no carrier
    0 babble
    0 Rx pause 0 Tx pause 0 reset

===============================================================================================
interface Ethernet1/1
  switchport mode trunk
  description ## Etherchannel to 6509-01 1/8 ##
  speed 1000
  switchport trunk native vlan 30
  switchport trunk allowed vlan 1,20-21,30,40,100,150-153,160,170
  switchport trunk allowed vlan add 222,670,680,690,800-901,1002-1005
  switchport trunk allowed vlan add 4050
  channel-group 100 mode active

And the 6509

6509-01#show int po100
Port-channel100 is up, line protocol is up (connected)
  Hardware is EtherChannel, address is 001b.d5fb.b4bd (bia 001b.d5fb.b4bd)
  Description: ## Uplink to Nexus03 - Data Fabric ##
  MTU 9216 bytes, BW 8000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 11/255, rxload 9/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is unknown
  input flow-control is off, output flow-control is off
  Members in this channel: Gi4/6 Gi4/7 Gi4/8 Gi4/9 Gi9/41 Gi9/42 Gi9/43 Gi9/44
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters 39w6d
  Input queue: 0/2000/12410040/0 (size/max/drops/flushes); Total output drops: 49669924
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 286043000 bits/sec, 41777 packets/sec
  5 minute output rate 353698000 bits/sec, 45209 packets/sec
     2617574787276 packets input, 2430925824629657 bytes, 0 no buffer
     Received 963222832 broadcasts (547672334 multicasts)
     0 runts, 0 giants, 0 throttles
     12410040 input errors, 12410483 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     3065187661213 packets output, 3662605403911370 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

===============================================================================================

description ## Etherchannel to Nexus03 1/8 ##
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 30
switchport trunk allowed vlan 1,20,21,30,40,100,150-153,160,170,171,222,670
switchport trunk allowed vlan add 680,690,800-901,4050
switchport mode trunk
mtu 9216
mls qos trust cos
channel-group 100 mode active

Note that the MTU is different... the 6509 is configured for jumbo and the Nexus isn't. Will this have an impact? What's the best way to monitor which link is being utilised the most in this etherchannel?

Thanks

Please rate useful posts & remember to mark any solved questions as answered. Thank you.

If you are configured for jumbo frames, you'd better make sure all connected devices support jumbo or you will have packet loss.

This could be the case if there are packets with the DF bit set to 'ON' (1).

Otherwise the packets will simply get fragmented and you shouldn't see any packet loss.
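
For what it's worth, on the Nexus 5000 jumbo frames are normally enabled through a network-qos policy rather than a per-interface mtu command. A minimal sketch, assuming the class-default approach (verify against your NX-OS release):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo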

all the best.

Ajaz Nawaz

What does the Load column mean, do you know? How am I meant to interpret this? And how can I tell which algorithm is being used, i.e. source/destination MAC, IP, etc.? Thank you.

6509-01#show int port-channel 100 etherchannel
Port-channel100   (Primary aggregator)

Age of the Port-channel   = 304d:20h:57m:04s
Logical slot/port   = 14/13          Number of ports = 8
HotStandBy port = null
Port state          = Port-channel Ag-Inuse
Protocol            =   LACP
Fast-switchover     = disabled

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     01     Gi4/6    Active    1
  4     02     Gi4/7    Active    1
  2     04     Gi4/8    Active    1
  6     08     Gi4/9    Active    1
  7     10     Gi9/41   Active    1
  5     20     Gi9/42   Active    1
  3     40     Gi9/43   Active    1
  1     80     Gi9/44   Active    1

Time since last port bundled:    280d:23h:09m:14s    Gi9/41
Time since last port Un-bundled: 294d:22h:31m:36s    Gi4/8

Please rate useful posts & remember to mark any solved questions as answered. Thank you.

You can use show etherchannel load-balance to check which load-balancing algorithm you're using.

I'm not certain, but I believe the numbers under Load are the link utilisation as a percentage.

It should be easy to verify.

Hi Mathias, Thanks for your help thus far!

80% link utilisation is pretty high, right? That's what I see in the output shown below.

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     01     Gi4/6    Active    1
  4     02     Gi4/7    Active    1
  2     04     Gi4/8    Active    1
  6     08     Gi4/9    Active    1
  7     10     Gi9/41   Active    1
  5     20     Gi9/42   Active    1
  3     40     Gi9/43   Active    1
  1     80     Gi9/44   Active    1

Is there not a way to see how the traffic is being distributed amongst the port-channel, i.e. the load adding up to 100% across the whole port-channel?

So with the load-balance settings as follows:

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address
  MPLS: Label or IP

Am I right in thinking that if there were traffic flows going to one IP address, they would only use one link in the port-channel? Likewise with MAC addresses? That way I have a bottleneck at 1 Gbps? How can we distribute this across all links in a 'round robin' manner or similar?

TIA

Please rate useful posts & remember to mark any solved questions as answered. Thank you.

bilal.nawaz wrote:


Am I right in thinking that if there were traffic flows going to one IP address, they would only use one link in the port-channel? Likewise with MAC addresses? That way I have a bottleneck at 1 Gbps? How can we distribute this across all links in a 'round robin' manner or similar?

TIA

Etherchannel does not do true load balancing such as round robin or balancing based on utilisation.

It follows a fixed pattern. For an easy example, let's take source-MAC load balancing.

It does this by comparing the last bits of the MAC address. For an 8-link etherchannel it would look like this:

000 = link 1
001 = link 2
010 = link 3
011 = link 4
100 = link 5
101 = link 6
110 = link 7
111 = link 8

So if all traffic was generated by servers whose MAC addresses end in 111, all of it would pass over link 8.

To get better distribution you could use an algorithm that uses both source and destination. But if the same sender always talked to the same destination, the traffic would still always go over the same link.

So now you have the option of using the actual layer 4 sockets/ports in the calculation, which should give you more dispersed usage across your links.
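
As a rough way to check where a given flow will land, I believe the 6500 has a test command for this (syntax may vary by IOS release); with made-up example addresses:

6509-01#test etherchannel load-balance interface port-channel 100 ip 10.1.1.10 10.2.2.20

It should report which member interface that source/destination pair hashes to.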

But with etherchannel you should always monitor this.
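
A simple way to monitor it is just to watch the 5-minute rates on each physical member (or graph the member counters via SNMP), for example:

6509-01#show interfaces gi4/6 | include rate
6509-01#show interfaces gi9/44 | include rate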

HTH

I have to chime in here.

I highly suggest not doing this.

"But I would just go ahead and configure etherchannel as 'on' in both nexus 5020 and the catalyst 6509."

Setting an etherchannel to mode 'on' is a very bad idea. Set the channel-group mode to active.

When you configure a channel group to mode "on" it forces the channel group up regardless of whether the connected peer is up. How is that bad?

Let's say you are connecting the devices via a fiber optic cable. As you may know, with fiber it is very possible to have a unidirectional link, meaning one strand of fiber is good while the other is bad. So one switch will see a link while the other will not. Now imagine the switch that sees the link is set to channel-group mode on. That switch is going to forward packets onto a unidirectional link where the link on the peer side is down, thus dropping packets, not to mention the impact on spanning-tree events. Sure, UDLD will protect from this, but it is not wise to force a channel group to on. If active is not working, especially between Cisco switches, troubleshoot why and resolve it. You will have spanning-tree problems if you use mode 'on'. On the server side it is not as big a deal, because an end host will not create any loops on your network.
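
If these are fiber members it is also worth confirming UDLD is actually enabled on them. A minimal sketch, assuming aggressive mode is wanted (command names differ slightly between IOS and NX-OS, so verify on your release):

! Catalyst 6509 (IOS), on the member interfaces
interface range GigabitEthernet4/6 - 9
 udld port aggressive

! Nexus 5020 (NX-OS)
feature udld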

Also, on the Nexus, be sure to set the port-channel interface to

spanning-tree port type network
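
i.e. on the port-channel interface itself, something like this (with the usual caveat that Bridge Assurance expects to see BPDUs from the peer on that link):

interface port-channel 100
  spanning-tree port type network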

As far as etherchannel load balancing goes: by default the 6500 and the Nexus 5000 use source-destination IP as the load-balancing algorithm. That should suffice just fine and you should not need to change it. Also, it is a global setting, so you cannot configure it for just one etherchannel link.
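
You can confirm what each side is actually using with the show commands below (the Nexus-side command is from memory, so double-check it):

6509-01#show etherchannel load-balance
Nexus03# show port-channel load-balance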

I will have to agree that I was a bit hasty in recommending configuring the etherchannel as on.

There are, as you say, serious risks with this.

As far as the load-balancing algorithm goes, yes, in most cases source-destination IP should work just fine.

But that's not to say it always will be the case.

The OP seems to have quite a high load on link 8 in particular, indicating that two IPs are talking a lot to each other, or that by a quirk several sender/receiver pairs that talk a lot have ended up on link 8.

Of course this needs to be investigated thoroughly, to make sure it was not some random occurrence, before starting to plan a change in a production environment.
