We have an issue where some CSFPs, roughly every third, come up with transmit power disabled after bootup (-40 dBm in "show int transceiver") when a 4506E chassis is populated with four or more WS-X4640-CSFP-E line cards. "hw-module module 2 reset" brings the card up in a functioning state.
Swapping the line cards does not help; this was tested on several boxes and with different types of power supplies.
The ports can be completely unconfigured and it still happens. Every port has the same configuration.
Completely new hardware. Tested two IOS images:
14755 -rw- 126060564 Aug 7 2014 13:42:10 +02:00 cat4500e-universalk9.SPA.03.04.04.SG.151-2.SG4.bin
14756 -rw- 177993372 Sep 5 2014 05:11:57 +02:00 cat4500e-universalk9.SPA.03.06.00.E.152-2.E.bin
Is anyone using fully populated C4506E / SUP7LE / WS-X4640-CSFP-E with success?
See below for some command output.
sh int gi2/59
GigabitEthernet2/59 is down, line protocol is down (notconnect)
Hardware is Gigabit Ethernet Port, address is f40f.1ba7.2d0a (bia f40f.1ba7.2d0a)
MTU 1552 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, Auto-speed, link type is auto, media type is 1000Base-2BX10-D
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
0 packets output, 0 bytes, 0 underruns
0 output errors, 0 collisions, 5 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
show int tran
                                  Optical   Optical
            Temperature  Voltage  Tx Power  Rx Power
Port        (Celsius)    (Volts)  (dBm)     (dBm)
Gi2/1 36.1 3.24 -6.1 -40.0
Gi2/2 36.1 3.24 -6.6 -40.0
Gi2/3 32.7 3.25 -40.0 -40.0
Gi2/4 32.9 3.25 -6.7 -40.0
Gi2/5 37.8 3.24 -5.9 -40.0
Gi2/6 38.0 3.24 -6.9 -40.0
Gi2/7 34.7 3.26 -40.0 -40.0
Gi2/8 34.7 3.26 -6.0 -40.0
Gi2/9 36.1 3.24 -5.3 -40.0
Gi2/10 36.1 3.24 -5.9 -40.0
Gi2/11 34.7 3.26 -40.0 -40.0
Gi2/12 34.7 3.26 -5.5 -40.0
Gi2/13 36.9 3.23 -5.6 -40.0
Gi2/14 36.9 3.23 -5.9 -40.0
Gi2/15 35.2 3.25 -40.0 -40.0
Gi2/16 35.0 3.25 -5.5 -40.0
Gi2/17 34.7 3.21 -6.7 -40.0
Gi2/18 34.7 3.22 -5.7 -40.0
Gi2/19 36.6 3.26 -40.0 -40.0
Gi2/20 36.7 3.25 -6.5 -40.0
Gi2/21 34.6 3.24 -5.9 -40.0
Gi2/22 34.6 3.24 -6.3 -40.0
Gi2/23 35.6 3.26 -40.0 -40.0
Gi2/24 35.6 3.25 -6.1 -40.0
Gi2/25 37.4 3.22 -5.6 -40.0
Gi2/26 37.4 3.22 -6.8 -40.0
Gi2/27 37.1 3.26 -40.0 -40.0
Have you found a solution to this? I see the same issue on a WS-C4503-E chassis with a WS-X45-SUP8-E supervisor and WS-X4640-CSFP-E line cards.
When we move the line cards to a Sup6 device everything works as normal, so I don't suspect there is anything wrong with the hardware.
Do you have problems negotiating speed on the working Gig interfaces as well? I need to set 'speed nonegotiate' on the interfaces to get link, but 'sh int transceiver' shows both optical RX and TX signal.
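For reference, 'speed nonegotiate' is applied per interface; a minimal example (Gi2/1 is just an arbitrary port here):

switch#configure terminal
switch(config)#interface gigabitEthernet 2/1
switch(config-if)#speed nonegotiate
switch(config-if)#end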
We are currently waiting for Cisco TAC to verify this issue; they are waiting for hardware for their labs. Maybe in the beginning of December there will be an acknowledgement from Cisco on the issue...
I have confirmation from another service provider that they are having the same issues. They also have a TAC case going, but AFAIK without progress.
I can confirm that these cards/SFPs seem to have an issue when running "speed nonegotiate": a port seems to link up with itself, quite randomly. I have not followed up on this issue.
EDIT: The self-linkup happens when there is no cable connected; I suspect there is not enough isolation in the SFP between the TX laser and the RX sensor.
We are seeing the same thing on our 4506E chassis with SUP7-E supervisors and the 4640 line cards.
The problem always appears on port 3 and then on every fourth port, as seen in the post above.
We've had some success resolving this by either toggling the admin state of the ports or by resetting the whole line card. At the moment, though, we are in a situation where these tricks don't help at all.
I've opened a TAC case on the issue but have had no feedback yet.
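For anyone trying the same workaround, the admin-state toggle can be done on the whole card at once with an interface range (the 1 - 80 range assumes a fully CSFP-populated card; adjust to your port count):

switch#configure terminal
switch(config)#interface range gigabitEthernet 2/1 - 80
switch(config-if-range)#shutdown
switch(config-if-range)#no shutdown
switch(config-if-range)#end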
Has anyone got any explanation from Cisco regarding this behavior?
We are still waiting for Cisco to get their lab together.
But maybe you can do the following if you have a lab environment available.
switch#debug platform logging feature glmpluggable
switch#debug platform logging feature linecardinterrupts
switch#debug platform logging feature transceiver
We would have to reboot the device and, as soon as the supervisor comes up, enable the debugs (while the line cards are still booting), then stop the debugs once some of the ports go down. At that time it would be helpful to gather "show int transceiver" to see which ports are impacted.
Additionally, I would suggest increasing the logging buffer size to capture all debugs, and disabling debug output to both console and monitor.
Can you please advise if you can proceed with the above steps?
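The buffer and console/monitor suggestions above can be sketched as follows (512000 bytes is just an example buffer size; pick whatever fits your device's memory):

switch#configure terminal
switch(config)#logging buffered 512000 debugging
switch(config)#no logging console
switch(config)#no logging monitor
switch(config)#end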