
Cisco 6509 S2t54-adventerprisek9-mz-SYA-151-2 and NetXtreme BCM 5720 NIC

Eric R. Jones
Level 4

Hello, we have some VMware servers with NetXtreme BCM 5720 NICs connecting to our 6509 WS-X6848-GE-TX blade.

We are getting runts, giants, and CRC errors on some ports but not on others.

The configuration for the ports is:

 switchport
 switchport mode access
 switchport access vlan ##
 snmp trap mac-notification change added
 snmp trap mac-notification change removed
 spanning-tree portfast edge
 spanning-tree bpdufilter enable
 spanning-tree bpduguard enable

These were the same settings on the previous 6509, although on that switch some ports had spanning-tree portfast configured while others did not.
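
For reference, the spanning-tree state that is actually active on a given port can be confirmed with the commands below; this is only a sketch, and GigabitEthernet2/1 is a placeholder for one of the affected ports:

 show spanning-tree interface GigabitEthernet2/1 detail
 show spanning-tree summary

The detail output shows whether portfast edge, BPDU guard, and BPDU filter are in effect on the port.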

The output from sho int is:

 Hardware is C6k 1000Mb 802.3, address is 64f6.9d6e.0978 (bia 64f6.9d6e.0978)
  Description: yvmw003
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 253/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT
  input flow-control is off, output flow-control is off
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters 6d20h
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 22000 bits/sec, 16 packets/sec
  5 minute output rate 5000 bits/sec, 6 packets/sec
     10408244 packets input, 3064417434 bytes, 0 no buffer
     Received 1283 broadcasts (0 multicasts)
     3218 runts, 65 giants, 0 throttles
     183885 input errors, 145987 CRC, 31724 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     4172593 packets output, 526645344 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
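
To watch whether these counters keep climbing, one option is to clear them and then re-check just the error-related lines; GigabitEthernet2/1 below is a placeholder for the affected port:

 clear counters GigabitEthernet2/1
 show interfaces GigabitEthernet2/1 | include reliability|runts|giants|CRC
 show interfaces GigabitEthernet2/1 counters errors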

This is a new VSS pair and it has the same basic configuration as the previous non-VSS setup.

The only modification we made was to enable portfast, bpduguard, and bpdufilter on all ports connecting to the NetXtremes.

Does anyone know of an incompatibility between the NetXtreme BCM 5720s and the latest Cisco IOS, s2t54-adventerprisek9-mz.SPA.152-1.SY1a.bin, that could cause this?

Or could it be that the NICs don't work well with this port configuration, and something like portfast needs to be disabled or another feature enabled?

So far, no news on the Cisco TAC case that I opened.

ej


7 Replies

Leo Laohoo
Hall of Fame

reliability 253/255, txload 1/255, rxload 1/255

Faulty cable used.  The value should ALWAYS be 255/255.

We did go down that path and had the cables checked with a tester.

They appeared to be fine.  We have not yet strung a cable directly between the devices, bypassing the currently used cables and the patch panel; we also considered a bad patch panel.

It seems rather coincidental that only the 12 cables between the VMware servers and the switch, and not the rest of the cables, would all go bad.

ej

Hi Eric, 

The "reliability" counter doesn't lie.  If the cables were not faulty, it would read "255/255".  Supporting that theory, the counters were cleared 6d20h ago and the input errors are still incrementing.

What line card is installed in this slot?  We could try running a TDR on this port (full command forms are sketched after the steps below):

1.  Command:  test cable tdr interface <BLAH>; 

2.  Wait for 61 seconds; 

3.  Command:  sh cable tdr interface <BLAH>; and

4.  Post the output from step 3.
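
For reference, the full command forms on Catalyst platforms usually look like the ones below (the exact syntax can vary by IOS release, and GigabitEthernet2/1 is only a placeholder for the affected port); give the test about a minute to run before showing the results:

 test cable-diagnostics tdr interface GigabitEthernet2/1
 show cable-diagnostics tdr interface GigabitEthernet2/1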

Oh, no argument from me. I'm just trying to sort out why 6 connections on 2 different chassis have the same issue, but only these six. Our Layer 1 pro will be back tomorrow and we shall start from Layer 1 and move up the stack. Since the devices are physically close to each other, we may try placing a known-good piece of Cat 5e or 6 directly between them; if that run is clean, we can move forward with ruling out the patch panel and then the copper runs.

ej

Sorry, I forgot the line card information; it is a WS-X6848-GE-TX in the 6509.

ej

We figured it was Layer 1, but until we actually got a diagram we couldn't figure out why. A new link between a couple of new patch panels had been added, and we believe that's where the issue lies. We plan to bypass that entire run, since it is coming out anyway, and go directly between the servers and the core.

Thanks,

ej

Thanks for the feedback, Eric.
