Pair of Cisco 4507R+E, each with a single WS-X45-SUP7-E / 1x 1GbE uplink

vboyd
Level 1

I have a customer that has a pair of 4507R+E chassis, each with a single WS-X45-SUP7-E.  The issue is that the SUPs were connected via a single 1GbE uplink (ouch).  In the interim, to provide some redundancy, I installed another 1GbE GBIC and the pair is up now.  Even with 2x 1GbE uplinks they are getting hammered, and one of them is reporting massive input and CRC errors.  I need to upgrade to a pair of 10GbE GBICs or, if the switches will support it, 4x 10GbE uplinks.

I don't believe I can swap the 1GbE GBICs out one at a time, as the Po won't form with different-speed GBICs.

I don't believe I can pull them both out and then insert the 10GbE GBICs, as that would result in a dual-active situation = no fun.

I can't take down either switch, as many of the IDFs only have a single connection to one of the 4507s = no fun.

Since each 4507 is not operating in standalone redundancy mode, could I use the x/3/3 and x/3/4 interfaces to create an additional Po? Then, assuming it syncs up, pull the 1GbE GBICs and replace them with 10GbE GBICs? If I recall correctly, if the 4507s are running dual SUPs, x/3/3 and x/3/4 are disabled?
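
For reference, a sketch of the standard show commands for checking the current VSL and uplink state (the interface number is just one from my setup below):

show switch virtual
show switch virtual link
show etherchannel summary
show interfaces TenGigabitEthernet1/3/1 counters errors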

Any advice or help would be greatly appreciated!

 

interface TenGigabitEthernet1/3/1
 description link to FIN-DCSW2
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 no cdp enable
 channel-group 1 mode on
 service-policy output VSL-Queuing-Policy
 
interface TenGigabitEthernet1/3/2  <- this one was disconnected
 description link to FIN-DCSW2
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 no cdp enable
 channel-group 1 mode on
 service-policy output VSL-Queuing-Policy

 

interface TenGigabitEthernet2/3/1
 description link to FIN-DCSW1
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 no cdp enable
 channel-group 2 mode on
 service-policy output VSL-Queuing-Policy
 
interface TenGigabitEthernet2/3/2  <- this one was disconnected
 description link to FIN-DCSW1
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 no cdp enable
 channel-group 2 mode on
 service-policy output VSL-Queuing-Policy

FIN-DC01#sh redun
Redundant System Information :

------------------------------
Available system uptime = 23 weeks, 1 day, 1 hour, 11 minutes
Switchovers system experienced = 0
Standby failures = 0
Last switchover reason = none

Hardware Mode = Duplex
Configured Redundancy Mode = Stateful Switchover
Operating Redundancy Mode = Stateful Switchover
Maintenance Mode = Disabled
Communications = Up

Current Processor Information :
------------------------------
Active Location = slot 1/3
Current Software state = ACTIVE
Uptime in current state = 23 weeks, 1 day, 1 hour, 8 minutes
Image Version = Cisco IOS Software, IOS-XE Software, Catalyst 4500 L3 Switch Software (cat4500e-UNIVERSALK9-M), Version 03.11.04.E RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2021 by Cisco Systems, Inc.
Compiled Mon 08-Mar-21 15:37 by prod
BOOT = bootflash:cat4500e-universalk9.SPA.03.11.04.E.152-7.E4.bin,12;
Configuration register = 0x2102

Peer Processor Information :
------------------------------
Standby Location = slot 2/3
Current Software state = STANDBY HOT
Uptime in current state = 23 weeks, 1 day, 1 hour, 7 minutes
Image Version = Cisco IOS Software, IOS-XE Software, Catalyst 4500 L3 Switch Software (cat4500e-UNIVERSALK9-M), Version 03.11.04.E RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2021 by Cisco Systems, Inc.
Compiled Mon 08-Mar-21 15:37 by prod
BOOT = bootflash:cat4500e-universalk9.SPA.03.11.04.E.152-7.E4.bin,12;
Configuration register = 0x2102

 

FIN-DCSW1#sh cef state
CEF Status:
RP instance
common CEF enabled
IPv4 CEF Status:
CEF enabled/running
dCEF enabled/running
CEF switching enabled/running
universal per-destination load sharing algorithm, id 95764BE6
IPv6 CEF Status:
CEF disabled/not running
dCEF disabled/not running
universal per-destination load sharing algorithm, id 95764BE6
RRP state:
I am standby RRP: no
RF Peer Presence: yes
RF Peer Comm reached: yes
RF Peer Config done: yes
RF Progression blocked: never
Redundancy mode: sso(3)
CEF NSF sync: enabled/running

CEF ISSU Status:
FIBHWIDB broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
FIBIDB broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
FIBHWIDB Subblock broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
FIBIDB Subblock broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
Adjacency update
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
IPv4 table broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
IPv6 table broker
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.
CEF push
Slot(s): 13 (0x2000) (grp 0x691D8540) - Nego compatible.

29 Replies

I have already upgraded them to 3.11.4 and completed the ROMMON upgrade...and you are spot on with how long it took...and it went as expected...so all is good there. 

I'm only looking at upgrading the links between the two 4507s on their respective SUP7 interfaces (going from 1Gbps to 10Gbps).

What I am saying is that the upgrade from a 1 Gbps Layer 3 link to a 10 Gbps Layer 3 link is not going to require much downtime.

Ahh, OK... is the best approach to create the two additional port channels on the remaining two 10GbE interfaces on each SUP7?

So adding the following to the existing config (just adding Po31 and Po32 to keep it simple; a sketch of the matching port-channel interfaces follows the member config):

SWITCH 1

interface TenGigabitEthernet1/3/3
 description link to FIN-DCSW2
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 channel-group 31 mode on
 service-policy output VSL-Queuing-Policy
 
interface TenGigabitEthernet1/3/4
 description link to FIN-DCSW2
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 channel-group 31 mode on
 service-policy output VSL-Queuing-Policy
SWITCH 2
 
interface TenGigabitEthernet2/3/3
 description link to FIN-DCSW1
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 channel-group 32 mode on
 service-policy output VSL-Queuing-Policy
 
interface TenGigabitEthernet2/3/4
 description link to FIN-DCSW1
 switchport mode trunk
 switchport nonegotiate
 no lldp transmit
 no lldp receive
 channel-group 32 mode on
 service-policy output VSL-Queuing-Policy
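
For completeness, a sketch of the port-channel interfaces that would pair with the member config above (assuming plain Layer 2 trunks and the Po numbers used above; I'm not asserting any VSL-specific settings here):

interface Port-channel31
 switchport mode trunk
 switchport nonegotiate
!
interface Port-channel32
 switchport mode trunk
 switchport nonegotiate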

Looks good.

Based on your experience, do you think adding that config and bringing up those new port channels will cause downtime? I fully understand and appreciate that there are never any guarantees and that you couldn't possibly understand the entire environment based on this limited chat... but let's assume the environment is stable (aside from that GBIC with the errors), the IDFs are configured with redundant connections to the respective 4507s, and STP is stable/solid throughout the environment. Considering the aforementioned, do you believe those 4507s will stay online (no standby reboot) and won't go into a dual-active situation when I make those adjustments?

In my experience, if I were upgrading or moving a link based on the config provided, the downtime I have seen has been minimal.  That's a Layer 2 EtherChannel.

But what I would do, so as not to complicate the EtherChannel process: I'd disable all of the member links in the EtherChannel except one, move only that enabled link, and make sure everything is working fine, including the EtherChannel, before moving and enabling the rest.  A rough CLI sketch of that approach follows.
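
Using the interface numbers from this thread (which member you keep up is your call; this sketch keeps Te x/3/1 up in each channel):

! disable every member except one per channel (Te1/3/1 and Te2/3/1 stay up here)
interface TenGigabitEthernet1/3/2
 shutdown
interface TenGigabitEthernet2/3/2
 shutdown
!
! move/upgrade that one enabled link, then verify the channel is still fine:
! show etherchannel summary
! show interfaces port-channel 1 (and port-channel 2)
!
! then move the remaining members and re-enable them
interface TenGigabitEthernet1/3/2
 no shutdown
interface TenGigabitEthernet2/3/2
 no shutdown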

Got it... so confirming, based on the configuration I'm dealing with:

Disable the following interfaces: 1/3/1 and 2/3/1 (those are the interfaces taking all the errors) 

Leave interfaces 1/3/2 and 2/3/2 enabled and connected to keep the link between the 4507's alive and in the current active/standby state

Disable the following interfaces 1/3/3 and 2/3/3 (new 10GbE links)

Configure interfaces 1/3/3 and 2/3/3 in the new port channel

Enable interfaces 1/3/3 and 2/3/3 and confirm the port channel comes up and observe to make sure nothing implodes

If nothing implodes:

Disable interfaces 1/3/2 and 2/3/2 and replace the GBICs with 10GbE GBICs

Enable 1/3/2 and 2/3/2, confirm the port channel comes up, and observe to make sure nothing implodes

Replace the GBICs in interfaces 1/3/1 and 2/3/1 with 10GbE GBICs

Enable interfaces 1/3/1 and 2/3/1, confirm the port channel comes up, and observe to make sure nothing implodes

Question: will the 4507 support all four interfaces on the SUP7 in a Layer 2 EtherChannel configuration linking the two switches together?

If yes, then the next step would be to enable interfaces 1/3/4 and 2/3/4 and confirm the port channel comes up and nothing implodes (the checks I'd run after each step are sketched below).
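
The checks after each step would be the usual ones (a sketch, using the new Po numbers from above):

show etherchannel summary
show interfaces port-channel 31
show switch virtual link
show interfaces TenGigabitEthernet1/3/3 status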

If something implodes when I attempt to bring online the first 10GbE interfaces, I suspect your recommendation will be to run away as fast as possible…only to return with lots of Red Bull and a console cable. 

Thanks again for all your feedback on this...

(In a non-VSS, all 4 ports of the Sup7E can support 10 Gbps.)

1.  Create Port-Channel 10 (as in 10 Gbps).  
2.  Copy all the config from the 1 Gbps Po into Po10.  
3.  Enable Po 10
4.  Shut down all the 10 Gbps ports except one.  
5.  Assign that port into Po 10. 
6.  Light up the link.  
7.  Confirm data is passing through. 
8.  Disable the 1 Gbps Etherchannel. 
9.  Assign each remaining 10 Gbps port into Po 10 and enable the port (a CLI sketch of these steps follows).  
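
A rough CLI transcription of those steps (Po 10, the old Po numbers, and the trunk settings are assumptions carried over from this thread, not commands verified on the box):

! steps 1-3: create Po10 with the same config as the old 1 Gbps Po, enable it
interface Port-channel10
 switchport mode trunk
 switchport nonegotiate
 no shutdown
!
! steps 4-6: shut all 10 Gbps ports except one, assign that port, light it up
interface TenGigabitEthernet1/3/3
 switchport mode trunk
 switchport nonegotiate
 channel-group 10 mode on
 no shutdown
!
! step 7: confirm data is passing
! show etherchannel summary
! show interfaces port-channel 10
!
! step 8: disable the old 1 Gbps EtherChannel (Po1/Po2 in the configs above)
interface Port-channel1
 shutdown
!
! step 9: assign each remaining 10 Gbps port into Po10 and enable it
interface TenGigabitEthernet1/3/4
 switchport mode trunk
 switchport nonegotiate
 channel-group 10 mode on
 no shutdown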


@Leo Laohoo wrote:
3.11.4 is the "last" firmware for the Sup7E/LE.

@vboyd

The statement above is both wrong and correct:  I have just upgraded my test 4510R+E, with Sup7E, to 3.11.9 (release date 10 October 2023).

Let me explain how I did it (or managed to accomplish this):

The "heart" of a 4500X is a Sup7E.  This means that both the 4500X and the Sup7E can both run the same piece of code.  Officially, the "last" firmware that can support the Sup7E is 3.11.4 (release date 22 March 2021).  However, the "last" 4500X software is 3.11.9 (release date 10 October 2023).

So now, I have a 4510R+E chassis, Sup7E and a handful of WS-X4648-RJ45V+E line cards running on IOS-XE version 3.11.9!

I'm up on the 10GbE VSL links between the 4507s. I ended up scheduling a 1-hour maintenance window for the work. I also noticed on the switch that while dual-active fast-hello was "enabled", it was not configured (ouch), so I configured a pair of 1GbE copper interfaces as VSL fast-hello links and then pulled both 1GbE fiber connections out of each of the SUPs (knowing that the active primary would reload); since I had the fast-hello links set up, I didn't have to worry about a dual-active situation. I swapped out the 1GbE GBICs with Cisco 10GbE GBICs and reconnected the fiber... and bingo: 15 minutes of downtime, and now we have 2x 10GbE VSL links and a pair of fast-hello links. The VSL links are pushing just under 1.5 Gbps of traffic, and systems are behaving as one would expect when connecting to a pair of older workhorse chassis.
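
For anyone following along, a rough sketch of the fast-hello piece (the domain number and the copper port numbers below are placeholders, not the ones from this system):

switch virtual domain 100
 dual-active detection fast-hello
!
! one dedicated spare copper port per chassis as a fast-hello link
interface GigabitEthernet1/4/47
 dual-active fast-hello
!
interface GigabitEthernet2/4/47
 dual-active fast-hello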

vboyd
Level 1

SWITCH 1

TenGigabitEthernet1/3/3 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet Port, address is f872.eae0.ad42 (bia f872.eae0.ad42)
MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s, link type is auto, media type is 10GBase-SR
input flow-control is on, output flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:16, output never, output hang never
Last clearing of "show interface" counters 1d22h
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 2000 bits/sec, 2 packets/sec
13619 packets input, 2943020 bytes, 0 no buffer
Received 13619 broadcasts (13619 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
179395 packets output, 13813144 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out

SWITCH 2

TenGigabitEthernet2/3/3 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet Port, address is fc99.47ea.2342 (bia fc99.47ea.2342)
MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s, link type is auto, media type is 10GBase-SR
input flow-control is on, output flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters 1d22h
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1000 bits/sec, 1 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
179481 packets input, 13820830 bytes, 0 no buffer
Received 166813 broadcasts (96078 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
13628 packets output, 2945058 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out

vboyd
Level 1

I have OM5 MM, new.

vboyd
Level 1

I didn't dump them in a VLAN on the 4507... but I connected them to one of my Nexus 5548s and they came up fine...

vboyd
Level 1

Then I put them in the 4507 SUP7s and connected them... they both went UP/UP.

 

vboyd
Level 1

Both of those have been running in the 4507 for 48 hours...

 
