3075 Views · 8 Helpful · 7 Replies

EtherChannel Performance Issues with WS-X6704-10GE

dilipratna
Level 1

I wonder if anyone can assist with an issue we are experiencing.  We currently have two 6509E switches both running IOS version 12.2(17d)SXB10.

We currently have an EtherChannel between the switches made up of 3 x 1Gbps fibre links on a WS-X6748-SFP line card. The configuration for each respective switch is shown below:

como6509sw01

interface Port-channel1

description Link to como6509sw02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

end

interface GigabitEthernet2/1

description Trunk to como6509sw02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

interface GigabitEthernet2/2

description Trunk to como6509sw02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

interface GigabitEthernet2/3

description Trunk to como6509sw02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

como6509sw02

interface Port-channel1

description Trunk to como6509sw01

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

end

interface GigabitEthernet1/1

description Trunk to como6509sw01

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

interface GigabitEthernet1/2

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

interface GigabitEthernet1/3

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 1 mode desirable

end

This configuration has been working fine for years without any issues.  We have recently introduced a WS-X6704-10GE line card in each core switch.  The switches are connected via two CX4 cables (10m) and the configuration applied is shown below:

como6509sw01

interface range Te1/1 - 2

description TRUNK TO COMO6509SW02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 4 mode active

no shut

end

interface Port-channel4

description 20GBPS TRUNK TO COMO6509SW02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

no shut

end

como6509sw02

interface range Te2/1 - 2

description TRUNK TO COMO6509SW02

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

channel-group 4 mode active

no shut

end

interface Port-channel4

description 20GBPS TRUNK TO COMO6509SW01

no ip address

switchport

switchport trunk encapsulation dot1q

switchport mode trunk

no shut

end

When a ping is carried out to como6509sw02 using the Po1 EtherChannel, the responses come back within 1ms and there are no timeouts from my workstation.  But when the 10Gbps configuration is applied and Po4 is active, the ping responses to como6509sw02 are slow and timeouts occur from my workstation.  When the Po4 EtherChannel is shut down, performance returns to normal over Po1.  We did also shut down Po1 when Po4 was activated just in case there were issues, but this did not make any difference.
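For reference, commands along these lines should show whether the bundle and its members are actually up and passing traffic while the slow pings are occurring (a sketch; interface and channel-group numbers follow the configuration above):

```
show etherchannel 4 summary
! bundle state and member flags - members should show (P) for bundled

show lacp neighbor
! confirms LACP negotiation actually completed with the far end

show interfaces TenGigabitEthernet 1/1 counters errors
show interfaces Port-channel 4
! per-member error counters, plus rates/drops/giants on the bundle
```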

Are there any known issues relating to the IOS version I am running with these line cards, or is the problem due to the fact that I am trying to create an EtherChannel over the CX4 cables?

Any advice or assistance is much appreciated.

7 Replies

yang Lv
Level 1

Dear D,

For the performance issue, I think you should know that an EtherChannel can be established using one of three mechanisms:

  • PAgP - Cisco's proprietary negotiation protocol
  • LACP (IEEE 802.3ad) - Standards-based negotiation protocol
  • Static Persistence ("On") - No negotiation protocol is used

Any of these three mechanisms will suffice for most scenarios, however the choice does deserve some consideration. PAgP, while perfectly able, should probably be disqualified as a legacy proprietary protocol unless you have a specific need for it (such as ancient hardware). That leaves LACP and "on", both of which have a specific benefit.

LACP helps protect against switching loops caused by misconfiguration; when enabled, an EtherChannel will only be formed after successful negotiation between its two ends. However, this negotiation introduces an overhead and delay in initialization. Statically configuring an EtherChannel ("on") imposes no delay yet can cause serious problems if not properly configured at both ends.

To configure an EtherChannel using LACP negotiation, each side must be set to either active or passive; only interfaces configured in active mode will attempt to negotiate an EtherChannel. Passive interfaces merely respond to LACP requests. PAgP behaves the same way, but its two modes are referred to as desirable and auto.

In the Campus Network High Availability Design Guide, Cisco recommends forgoing the use of a negotiation protocol and configuring EtherChannels for static "on/on" operation; however, it also cautions that this approach offers no protection against the effects of misconfiguration.

So, you can try setting the mode to "on" at both ends.
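For example, a minimal sketch (interface and channel-group numbers follow your existing 10Gbps configuration; apply the same on both switches):

```
! "on" skips LACP/PAgP negotiation entirely, so both ends MUST match -
! a mismatch can cause loops or err-disabled ports
interface range TenGigabitEthernet 1/1 - 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 4 mode on
```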

Let me know what else I can do for you.

Alex Lv

Alex,

Thanks for the useful information.  I have gone down the path of implementing LACP across our infrastructure and am trying to move away from the proprietary PAgP.

As you can see from the configuration of the 10Gbps links, I have used active mode for the negotiation process to do away with misconfigurations.  I believe that the configuration applied is fine and correct, as the EtherChannels are up and operational.  I just have the performance issue to resolve.

I will look at the link you supplied and see if it can assist me in diagnosing the issue further.

Regards,

Dilip

While I have no relation to the poster, thank you for your response.  I am in the CCNP switch course, and your post solved a question I happened to ask tonight in class!

I am curious, though...

>>so, you can try to set the mode to "on" at the both ends.

Why would going from PAgP to LACP solve his problem if both of his switches are Cisco?

Thank you,

blacktrack
Level 1

Hi

Please send the output of "sh int po4".

regards

Hicham Azarou

Hicham,

See the output below, as requested:

como6509sw01#sh int po4
Port-channel4 is administratively down, line protocol is down (disabled)
  Hardware is EtherChannel, address is f866.f2f3.be11 (bia f866.f2f3.be11)
  Description: 20GBPS TRUNK TO COMO6509SW02
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 0/255, rxload 0/255
  Encapsulation ARPA, loopback not set
  Auto-duplex, Auto-speed
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/6/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     747077 packets input, 156600427 bytes, 0 no buffer
     Received 6064 broadcasts, 0 runts, 23293 giants, 0 throttles
     6 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 input packets with dribble condition detected
     941493 packets output, 848925880 bytes, 0 underruns
     0 output errors, 0 collisions, 5 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out

como6509sw02#sh int po4
Port-channel4 is down, line protocol is down (notconnect)
  Hardware is EtherChannel, address is f866.f2f3.bf51 (bia f866.f2f3.bf51)
  Description: 20GBPS TRUNK TO COMO6509SW01
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 0/255, rxload 0/255
  Encapsulation ARPA, loopback not set
  Auto-duplex, Auto-speed
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/7/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     946925 packets input, 851114459 bytes, 0 no buffer
     Received 52101 broadcasts, 0 runts, 185361 giants, 0 throttles
     7 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 input packets with dribble condition detected
     752333 packets output, 158266947 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out

I did notice that the giants counters were increasing, but after some reading, the articles indicated that it's nothing much to worry about?
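From what I can tell, giants usually mean received frames larger than the interface MTU (1500 bytes in the output above), so something like this should show whether that is the case here (a sketch; the 9216 value is only an example if jumbo frames turn out to be expected on the trunk):

```
show interfaces Port-channel 4 | include MTU
show interfaces TenGigabitEthernet 1/1 counters errors

! if jumbo frames are genuinely expected across this trunk, the member
! MTU could be raised to match at BOTH ends:
interface range TenGigabitEthernet 1/1 - 2
 mtu 9216
```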

Regards,

Dilip Ratna

Force the speed and duplex, and report back.

Hicham,

I don't think that will make any difference.  I have come across issues where the speed and duplex settings needed to be hard-coded, but those issues show up as collisions and CRC errors, and they tend to occur on switch ports that support multiple speeds.

The ports are 10Gbps only, and I would have assumed that's the only speed they would connect at.
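As a sanity check (a sketch), the port capabilities and negotiated state can be confirmed directly; 10GBASE-CX4 ports should report a fixed 10G full duplex with nothing to negotiate:

```
show interfaces TenGigabitEthernet 1/1 capabilities
! lists the speeds/duplex the port hardware supports

show interfaces TenGigabitEthernet 1/1 status
! shows the currently connected speed, duplex, and media type
```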

Regards,

Dilip Ratna
