Questions -- VLANs and vPC

bear21
Level 1

I'm supporting a UCS configuration I didn't build. It is functional, but a recent failover test (shutting off the primary Fabric Interconnect) went poorly: the B side never picked up, and all our VMs were powered off when vCenter lost connectivity. We had to power the primary back on, and once we brought the vCenter box back up, it started bringing the VMs back up. Has anyone seen anything like this before? Two things I noticed about our configuration:

Port channel 1 on the A Fabric Interconnect goes to vPC 3 on the 5010s.

Port channel 2 on the B Fabric Interconnect goes to vPC 4 on the 5010s.

Is it OK for the port channel numbers not to match the vPC numbers on the uplinks?

Also, VLANs 4042, 4043, 4044, and 4047, which are used for communication between the chassis and UCS, are not explicitly configured on the Fabric Interconnects. Should they be?

Thanks

Andy Wells

3 Replies

Robert Burns
Cisco Employee

Andrew,

When you have upstream switches that support VSS or vPC (in the case of Nexus 5K/7K), it's a best practice to utilize this.  Each FI will have a standard port channel (2 members or more) going up to the two N5Ks/N7Ks.  Both N5Ks connecting to the same Fabric Interconnect must use the same vPC #.  In your case you have vPC 3 and vPC 4, each connecting to a different Fabric Interconnect, which is fine.  See the topology below, which is what you have.  A practice I normally implement is making the port channel ID # on the Fabric Interconnect the same as the upstream vPC #.
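A minimal sketch of that layout on the N5K side, keeping your existing vPC 3/vPC 4 numbers (the member interfaces are placeholders, and the FI-side port channel IDs are set in UCS Manager's LAN Uplinks Manager rather than here):

! On each Nexus 5010: vPC 3 carries FI-A's uplinks, vPC 4 carries FI-B's
interface port-channel3
  switchport mode trunk
  vpc 3
interface port-channel4
  switchport mode trunk
  vpc 4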

As for the reserved VLANs, there is NO need to create these on the Fabric Interconnects or anywhere upstream.  These VLANs exist only internally to the UCS system and are used solely for intra-chassis communication.  In regards to your issue, ensure the vPC configuration includes Port Fast.  Each Interconnect operating in EHM (End Host Mode) behaves like a single host: there's no spanning tree running below your uplinks, so you can safely utilize Port Fast.
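On NX-OS the Port Fast equivalent for a trunking uplink is the edge trunk port type; a sketch against the vPC port channels above, assuming those IDs:

interface port-channel3
  spanning-tree port type edge trunk
interface port-channel4
  spanning-tree port type edge trunk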

A few more questions:

- How are the ESX host NICs configured in UCS?  Failover enabled?

- What version of UCS are you running?

- Describe your network connectivity (vSwitch etc.) for the host running your vCenter.  Assuming you have two NICs presented to the vSwitch/DVS, there should be seamless failover.

Regards,

Robert

Rob:

Good info -- thanks. Port Fast is a great idea, and we are going to implement it this afternoon. The 6120XPs are on 1.3.0 for the boot loader and 4.1(3)N2(1.3o) for the kernel and system. The 5010s are on 1.3.0 for the BIOS and 4.2(1)N1(1) for the kickstart and the system.

What screen in UCS Manager is best for looking at the ESX host NIC configuration? I poked around but didn't see anything explicitly about failover.

I did see that each card has 6 vNICs. Two use the VMware soft switch and exist mainly for vMotion and other VMware functions; we have two Nexus 1010 appliances, and the VMware virtual switch is used only for this, with one NIC going to each Fabric Interconnect. Two other vNICs use the 1000V and are for uploading VMs off the storage device, one NIC to each FI. The last two vNICs also use the 1000V and allow the VMs to communicate with each other and out to the world, again one NIC to each FI.
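For what it's worth, one way to check the failover flag per vNIC is from the UCS Manager CLI; a sketch, where the org path and service-profile name are placeholders for your own:

UCS-A# scope org /
UCS-A /org # scope service-profile esx-host-1
UCS-A /org/service-profile # show vnic

If fabric failover is enabled on a vNIC, the Fabric ID column should list both fabrics (e.g. "A B") rather than just one. In the GUI, the same setting appears as the "Enable Failover" checkbox on each vNIC under the service profile.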

Thanks

AndyW

DAVE GENTON
Level 2

Yep, I've seen that failure in the cluster many times.  I'll save a lot of time covering what I was about to, now that I've seen your code: you need to upgrade, and I'll leave it at that.  There are MANY bugs, meaning you need 1.4(3), and 1.4(3M) is out; I just used it with working failover in the cluster.  You can try rebooting the B side in the meantime and see if it syncs up, and check the HA status to see that it is ready and full and all that.  There's CLI stuff you can do, but at this point push new firmware and get off 1.3, my friend.
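If it helps, the HA status Dave mentions can be checked from either Fabric Interconnect's CLI with the standard cluster commands:

UCS-A# show cluster state
UCS-A# show cluster extended-state

A healthy pair should report HA READY; anything else points at the cluster sync problem he is describing.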

d-
