
Scheduled electrical maintenance, best practice and temporary core network

abtt-39
Level 1

Hello,
Next Saturday, electrical maintenance work will take place and the power will be cut for several hours in our main building.

The core network (a stack of 3850s) is in this building, along with our access switches, the ISP routers for the MPLS VPN, one of our firewalls, and several ESX servers.
In the neighboring building (building B), where the power will not be cut, we have our second firewall, ESX servers and access switches, but no secondary core switch.

We want to use this maintenance window to test different things.

All the routing (static and dynamic) is handled by the core switch (L3), along with the VLANs, SVIs, etc.

The core network is connected to building B (to two other stacks of access switches) by fiber port channels.
Since we don't have a "backup" core network, when we cut the power, routing and the MPLS links will no longer be available.

Just for this maintenance window, we would like to take a 3750 switch and put the configuration of the current core on it (at least enable routing, the static routes, the VLANs and the SVIs, and configure it as VTP server). Then, when the power outage takes place, we will disconnect the links between the production core and the site B access switches and reconnect those two links to the temporary core switch.
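
Roughly, the config I plan to put on the temporary 3750 looks like the sketch below; the VLAN ID, addresses, VTP domain and next hop are placeholders, not our real values:

! enable inter-VLAN routing on the temporary core
ip routing
!
! act as VTP server for the access stacks (domain name is a placeholder)
vtp mode server
vtp domain EXAMPLE-DOMAIN
!
! example VLAN and matching SVI (ID and addressing are placeholders)
vlan 10
 name USERS
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
 no shutdown
!
! example static default route toward the firewall (placeholder next hop)
ip route 0.0.0.0 0.0.0.0 192.168.10.254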

Before that, we will have put the ESX hosts in maintenance mode so that all the VMs of site A migrate to the ESX hosts of site B.

On site A, I will shut down all the switches and then cut their power; they are all connected to PDUs. Do I have to disconnect the power cable from each device, or is it enough to just trip the PDUs?

For the shutdowns, should I start with the access switches and finish, lastly, with the core network?

Internet link 1 arrives at the core network of site A, on the active ASA; the standby ASA is on site B, and vice versa for internet link 2.
Logically, there should still be internet access on site B if ASA failover works (a year ago, ISP link 1 went down and we automatically switched to link 2).
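
To make sure, I plan to check the failover pair on the active ASA before the outage, and again once site A is down, with the standard failover show commands, something like:

! before the outage, on the active ASA: confirm the pair is healthy
show failover state
show failover
! after site A is powered off, on the unit in building B: confirm it is now active
show failover state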

Did I forget something?

Then, when the electrical work is finished, we will disconnect the temporary core, turn all the equipment on site A back on, and reconnect the core network port channels to the two stacks on site B.

Can temporarily putting in another core network and then putting everything back as it was cause problems? Should I restart the access switch stacks? Any ARP problems to expect?

Our goal is to run these tests and, later on, to add a second core switch stack on site B and connect the two cores with HSRP or similar, so that each of our access switch stacks is connected to both core 1 and core 2. We do not have a distribution switch layer.
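
As an illustration of that target design, an HSRP pair of SVIs would look roughly like this (VLAN, addresses and group number below are placeholders):

! core 1 (site A) - intended HSRP active for VLAN 10
interface Vlan10
 ip address 192.168.10.2 255.255.255.0
 standby 10 ip 192.168.10.1
 standby 10 priority 110
 standby 10 preempt
!
! core 2 (site B) - HSRP standby for VLAN 10
interface Vlan10
 ip address 192.168.10.3 255.255.255.0
 standby 10 ip 192.168.10.1
 standby 10 priority 90
 standby 10 preempt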

 

 

 

7 Replies

Leo Laohoo
Hall of Fame

Before powering down the stack, do the following:

  1. Save the config.
  2. Start a continuous ping to the stack.
  3. Use the "reload" command to reboot the stack.
  4. Once the ping stops responding, quickly remove the power cables from the stack.
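
Command-wise, that sequence is roughly the following (the management IP used for the ping is a placeholder):

! on the stack: save the config
copy running-config startup-config
! from a workstation: continuous ping to the stack's management IP (placeholder address)
ping -t 192.168.1.10
! back on the stack: reboot it
reload
! when the continuous ping stops answering, pull the power cables from every stack member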

Because this is a stack of 3850s (IOS-XE), things will be very different: 

  • Performing a regular reboot (like every 12 to 18 months) on IOS-XE is good. 
  • For the 3850 on 16.X.X in particular, perform proactive reboots every 6 months.

Ok thanks.


@Leo Laohoo wrote:

    Before powering down the stack, do the following:

    1. Save the config.
    2. Start a continuous ping to the stack.
    3. Use the "reload" command to reboot the stack.
    4. Once the ping stops responding, quickly remove the power cables from the stack.

    Because this is a stack of 3850s (IOS-XE), things will be very different:

    • Performing a regular reboot (like every 12 to 18 months) on IOS-XE is good.
    • For the 3850 on 16.X.X in particular, perform proactive reboots every 6 months.

And thank you. Is there any particular reason to do it this way?

 

abtt-39
Level 1

I have another question.
On my core network, here is what I do:
#sh swi

Switch#  Role     Mac Address      Priority  H/W Version  Current State
------------------------------------------------------------------------
 1       Member   00b6.70c4.e100   13        V03          Ready
*2       Active   00b6.70c4.e500   10        V03          Ready
 3       Standby  70c9.c6bd.9100   7         V07          Ready
 4       Member   70c9.c6bd.0a80   4         V07          Ready


#sh running-config | i provision
switch 1 provision ws-c3850-24s
switch 2 provision ws-c3850-24s
switch 3 provision ws-c3850-24t
switch 4 provision ws-c3850-24t

We can see that the active switch is not the one with the highest priority.

I think a colleague rebooted one of the stack switches recently.

With the power cut tomorrow, after the maintenance window I would like the stack to come back up exactly as it is now.

I've had weird cases before after a reboot (port renumbering, etc.), and I'll run out of time tomorrow. So what do I need to do, when I plug my switches back in, for it to stay exactly like this?

For the power outage, do I have to unplug the two member switches first, then the standby, and lastly the master switch, and then do the reverse when I reconnect them?

Connect first:
Active 00b6.70c4.e500
Then Standby 70c9.c6bd.9100
Then Member 00b6.70c4.e100
Then Member 70c9.c6bd.0a80

How long after startup does the re-election of the master take place? Since switch Member 00b6.70c4.e100 has the highest priority, if I don't wait long enough, will it become the master?

 

 

abtt-39
Level 1

Also, about unplugging/replugging: the switches have dual power supplies, but there are also StackPower cables.

#sh stack-power DETAil
Power Stack    Stack   Stack     Total    Rsvd     Alloc    Sw_Avail  Num  Num
Name           Mode    Topolgy   Pwr(W)   Pwr(W)   Pwr(W)   Pwr(W)    SW   PS
-------------- ------- --------- -------- -------- -------- --------- ---- ----
Powerstack-2   SP-PS   Ring      2800     30       920      1850      4    8

Power stack name: Powerstack-2
Stack mode: Power sharing
Stack topology: Ring
Switch 3:
Power budget: 230
Power allocated: 230
Low port priority value: 22
High port priority value: 13
Switch priority value: 4
Port 1 status: Connected
Port 2 status: Connected
Neighbor on port 1: Switch 2 - 00b6.70c4.e500
Neighbor on port 2: Switch 4 - 70c9.c6bd.0a80

Switch 2:
Power budget: 230
Power allocated: 230
Low port priority value: 20
High port priority value: 11
Switch priority value: 2
Port 1 status: Connected
Port 2 status: Connected
Neighbor on port 1: Switch 3 - 70c9.c6bd.9100
Neighbor on port 2: Switch 1 - 00b6.70c4.e100

Switch 4:
Power budget: 230
Power allocated: 230
Low port priority value: 21
High port priority value: 12
Switch priority value: 3
Port 1 status: Connected
Port 2 status: Connected
Neighbor on port 1: Switch 3 - 70c9.c6bd.9100
Neighbor on port 2: Switch 1 - 00b6.70c4.e100

Switch 1:
Power budget: 230
Power allocated: 230
Low port priority value: 19
High port priority value: 10
Switch priority value: 1
Port 1 status: Connected
Port 2 status: Connected
Neighbor on port 1: Switch 2 - 00b6.70c4.e500
Neighbor on port 2: Switch 4 - 70c9.c6bd.0a80

So I will also have to disconnect these StackPower cables beforehand. Is it possible to "disconnect" them hot, without causing problems?

 

 


@abtt-39 wrote:
 1 Member 00b6.70c4.e100 13 V03 Ready
*2 Active 00b6.70c4.e500 10 V03 Ready
 3 Standby 70c9.c6bd.9100 7 V07 Ready 
 4 Member 70c9.c6bd.0a80 4 V07 Ready


This is a very easy fix. In enable mode, just do the following:

 

switch 1 priority 15
switch 2 priority 13
switch 3 priority 11
switch 4 priority 9

 

Save the config, and when the stack reboots it should boot a little faster because the switch master already has priority 15. 
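
For example, to double-check before the maintenance window:

! save, then confirm the new priorities took effect
write memory
show switch
! the Priority column should now read 15 / 13 / 11 / 9 for switches 1 to 4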

When it comes to IOS-XE, take every opportunity to do a cold reboot.  Before the maintenance, remove the redundant power supplies and make sure the remaining power supplies are being fed by raw power.  

(I always take that opportunity to upgrade the firmware of switches during maintenance.)

 

abtt-39
Level 1

Ok, thanks for your answers.

In our second room, I plugged in the temporary network core (once the power was cut, there was no longer any link with the main server room), but the routing only worked for a moment before it stopped working.
I didn't have time to see why. I had enabled dynamic routing and created the VLANs and SVIs.
I had put the same IP address on VLAN 1 as on the main core.
This IP served as the default gateway for the two other (L2) switches located in the secondary building. From the CLI I could ping the interfaces of the different switches, but two stations in two different VLANs could not ping each other.
As we were running out of time, we returned everything to its original state once power came back in the main building.
This morning I redid the same config in Packet Tracer as on my real test switch and everything worked, so I don't know where the problem came from.
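
Next time, before concluding anything, I will start with a few standard checks on the temporary core, something like:

! confirm routing is enabled and the SVIs are up
show ip interface brief | include Vlan
show ip route
! confirm the VLANs exist and the uplinks to the access stacks trunk them
show vlan brief
show interfaces trunk
! check whether the two test stations were learned at L2/L3
show mac address-table
show ip arp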
