
Fabric Interconnect Connectivity to One Subnet

Hello all, 

 

I have two 6296UP Fabric Interconnects in a cluster controlling two blade chassis and 9 blades. I have recently been looking into setting up VM-FEX and the UCS/VMware integration plugin, but I have run into a roadblock. The roadblock appears to be the fabric interconnects' ability to communicate with the subnet that the vCenter server resides in. From what I can tell, neither interconnect is able to ping anything in that particular subnet; however, they are able to reach every other subnet without issue. I set up an ACL on the upstream device to catch the ICMP traffic when I ping, and the traffic never reaches that device. This implies to me that packets destined for that subnet are never finding their way out of the interconnect.
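For reference, this is roughly how I have been testing from the interconnects (the 192.168.100.10 address is just a placeholder for a host in the vCenter subnet):

UCS-A# connect local-mgmt
UCS-A(local-mgmt)# ping 192.168.100.10
UCS-A(local-mgmt)# traceroute 192.168.100.10

The ACL on the upstream switch was along these lines (the interface name is a placeholder for the port facing the FI management interface):

switch(config)# access-list 100 permit icmp any host 192.168.100.10 log
switch(config)# access-list 100 permit ip any any
switch(config)# interface GigabitEthernet1/0/1
switch(config-if)# ip access-group 100 in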

 

Does anybody have any ideas where to start to troubleshoot this issue? Any help would be appreciated. 


9 Replies

Qiese Dides
Cisco Employee

Hi Brandon,

 

If I'm following this correctly, as of now the Fabric Interconnect is unable to reach the vCenter subnet. A few questions for you:

 

- Can any blade in this UCS domain reach the vCenter subnet? Can you ping from vCenter to any blade in the specific UCS domain?

 

- Are you using Active/Active NIC redundancy or Active/Standby? Can you disable the NICs going to Fabric Interconnect B and see if the subnet is reachable going only through Fabric Interconnect A, and vice versa?

 

- Is vCenter currently sitting on this specific UCS domain, or does traffic have to go through an L3 switch to reach it?

 

- What is your topology? If traffic is traversing the L3 network, then you need to make sure the paths are open on those upstream switches. For example, if it was routing through Layer 3 and your topology was (UCS --> FI-A --> 7K --> FI-B --> UCS), check whether the blade can first ping its default gateway; if not, whether it can reach the 7K; if not, the issue is possibly on the Fabric Interconnect (is the specific VLAN being allowed?), and so forth.
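For instance, to confirm the VLAN is actually carried end to end, you could run something like this on each upstream switch (VLAN 100 is just a placeholder):

switch# show interface trunk
switch# show vlan id 100

If the VLAN is missing from a trunk's allowed list, that would explain a drop at that hop.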

 

If this is holding up an installation and the above did not help, I would recommend opening a Cisco TAC case so an engineer can join a live WebEx and troubleshoot the issue live (it makes it a lot easier). If you are unable to, send me the answers to the above questions and I will do my best to resolve this for you.

 

- Please mark all helpful solutions as correct so other members that run into the same issue can find them.

 

Qiese Dides

- Can any blade in this UCS domain reach the vCenter subnet? Can you ping from vCenter to any blade in the specific UCS domain?

All of the blades are running VMware ESX and have IP addresses in the same subnet as the vCenter server, and I verified they are able to ping the vCenter server.

 

- Are you using Active/Active NIC redundancy or Active/Standby? Can you disable the NICs going to Fabric Interconnect B and see if the subnet is reachable going only through Fabric Interconnect A, and vice versa?

The NICs are Active/Active. What is the best way to disable the NICs on a host? Remove all of the NICs connecting to fabric B from the vSwitch?
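For example, assuming standard vSwitches, would removing the fabric-B uplinks with esxcli be the right approach? Something like this (vmnic1 and vSwitch0 are placeholders for my environment):

esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0

And then to add it back afterwards:

esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0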

 

- Is vCenter currently sitting on this specific UCS domain, or does traffic have to go through an L3 switch to reach it?

The vCenter server is a virtual machine running on the blades. Since vCenter resides on a different subnet than the interconnects, the traffic has to pass through an L3 switch.

 

- What is your topology? If traffic is traversing the L3 network, then you need to make sure the paths are open on those upstream switches. For example, if it was routing through Layer 3 and your topology was (UCS --> FI-A --> 7K --> FI-B --> UCS), check whether the blade can first ping its default gateway; if not, whether it can reach the 7K; if not, the issue is possibly on the Fabric Interconnect (is the specific VLAN being allowed?), and so forth.

The management ports of the fabric interconnects are connected to a Catalyst 2960X stack. The 2960X stack uplinks to two Nexus 9Ks that are vPC peers, and the 9Ks connect to the fabric interconnects' uplink ports. So the path looks like this: FI-A/B (management port) --> 2960X stack --> 9Ks --> FI-A/B (uplinks) --> vCenter VM.

 

Hopefully that answered your questions; let me know if I need to clarify anything or provide more information. Thank you for the reply!

 

 

Thank you for the response, Brandon,

 

What is your UCS Firmware Version?

 

Also, what configuration guide are you using to set up VM-FEX?

 

I am running UCSM 3.2(2b).

 

I am using the following configuration guide: 

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vm_fex/vmware/gui/config_guide/b_GUI_VMware_VM-FEX_UCSM_Configuration_Guide/b_GUI_VMware_VM-FEX_UCSM_Configuration_Guide_chapter_01.html

 

I also referenced these older videos I found on YouTube to add more context to what I was doing:

https://www.youtube.com/watch?v=QYDn2WpU6p8

 

I can get as far as creating the distributed switch. That is the point where I receive an error in UCSM that the vCenter server is unreachable. This is what prompted me to check the connectivity between the two by pinging the vCenter server from the interconnects. After the ping from the interconnects failed to every host in that subnet, I figured I had located the root issue.

I'm assuming you imported the certificate already and did all those required steps.

 

Below are requirements that need to be followed exactly. First, two crucial points:

 

1) The Windows-based machine that you install VMware vCenter on must have network connectivity to the Cisco UCS management port and to the uplink Ethernet port(s) being used by the ESX host. The management port connectivity is used for management plane integration between VMware vCenter and Cisco UCS Manager; the uplink Ethernet port connectivity is used for communication between VMware vCenter and the ESX host.

 

2) Please verify the firewall ports are open. The HTTP and HTTPS ports (normally TCP 80 and 443) must not be blocked between vCenter and the Cisco UCS domain.
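A quick way to sanity-check those ports is from the Windows machine that vCenter runs on, for example (the address is a placeholder for your UCSM management IP):

PS C:\> Test-NetConnection -ComputerName 192.168.0.5 -Port 443
PS C:\> Test-NetConnection -ComputerName 192.168.0.5 -Port 80

If TcpTestSucceeded comes back False while a plain ping works, something in the path is filtering those ports.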

 

All requirements below:

VMware vCenter

You need VMware vCenter (vCenter Server and vSphere Client) for VM-FEX for VMware. The VMware vCenter must meet the following requirements:

* The Windows-based machine that you install VMware vCenter on must have network connectivity to the Cisco UCS management port and to the uplink Ethernet port(s) being used by the ESX host. The management port connectivity is used for management plane integration between VMware vCenter and Cisco UCS Manager; the uplink Ethernet port connectivity is used for communication between VMware vCenter and the ESX host.


NOTE
The HTTP and HTTPS ports (normally TCP 80 and 443) must not be blocked between vCenter and the Cisco UCS domain.

* A VMware vCenter extension key provided by Cisco UCS Manager must be registered with VMware vCenter before VMware vCenter acknowledges the Cisco UCS domain.

In addition, you must configure VMware vCenter with the following parameters:
* A datacenter.
* A distributed virtual switch (DVS).
* ESX hosts added to the DVS and configured to migrate to pass-through switching (PTS/DVS).
* For each Cisco VIC adapter, two static vNICs (one for each fabric) added to the DVS.
* Virtual machines (VMs) required for the VMs on the server.
* (For VMware vMotion) Hosts with common shared storage (datastore) that are properly configured for vMotion.
* (For VM-FEX in high-performance mode) All guest memory on the VMs must be reserved.
* (For VM-FEX in high-performance mode) The port profiles and VMwarePassThrough Ethernet adapter policy that you have previously configured in Cisco UCS Manager must be specified.

I am sure the issue is network related. There is no firewall in the topology and no ACL that is blocking port 80 or 443, or any port for that matter. From the management ports on the interconnects to the subnet where the vCenter server resides, there is no network connectivity. I am fairly certain the issue resides on the interconnects. If you do a traceroute from the vCenter server, you get to the 9K and then a drop, but if you traceroute from the interconnects (either one), the traffic never leaves the interconnect. This issue only exists from the interconnects to that one subnet. Every other subnet can connect to UCSM and to vCenter; the interconnects just cannot reach that one subnet. Is there perhaps something configured incorrectly on the interconnects that is causing the traffic to try to go inband instead of out-of-band through the management ports?
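In case it helps, this is what I was planning to look at next on the FIs. The NX-OS shell underneath should show whether the interconnect thinks that subnet is directly connected (read-only show commands, assuming the management interface sits in the management VRF as on regular NX-OS):

UCS-A# connect nxos
UCS-A(nxos)# show interface mgmt 0
UCS-A(nxos)# show ip route vrf management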

Brandon,

 

Thank you for that information (the traceroute). It is most likely some configuration issue (maybe a VLAN isn't being added correctly) that is not allowing traffic through.

 

I'd recommend, if possible, opening a case with Cisco TAC so they can look at the logs and host a WebEx session and knock it out quite fast; there would be a few things we would need to check live.

Okay. I will proceed with a TAC ticket. Thank you for your assistance. 

I was able to resolve the issue. 

 

It appears that when our reseller set up the IP pools initially, all of the blades pulled an address for out-of-band management from a pool of 192.168.100.x addresses (the same subnet the vCenter server resides in). The out-of-band addresses on the FIs are in the 192.168.0.x subnet, so for one thing this prevented out-of-band KVM from working. I made a new pool of addresses in the 192.168.0.x subnet (the same subnet the FIs' out-of-band management NICs belong to) and assigned a static address to all of the blades from this pool. Once I did this, I was able to ping vCenter without issue from either FI in either direction.
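For reference, the equivalent change in the UCSM CLI looks roughly like this (the address block and gateway are placeholders for my environment):

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 192.168.0.100 192.168.0.150 192.168.0.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer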

 

I guess the issue had something to do with there being 192.168.100.x addresses on the out-of-band ports. Since the blades' out-of-band management addresses sit behind the FIs' management interfaces, the FIs presumably treated 192.168.100.x as locally attached and never forwarded traffic for that subnet to their gateway.
