If you have ever wanted a sneak peek under the UCS kimono (the GUI), then this post is for you.
The goal of this post is to clarify the end-to-end path from a Cisco UCS vNIC through the UCS Infrastructure to the point we egress from the Cisco UCS Fabric Interconnects.
Having this information, and being able to check the utilization and statistics of all the virtual and physical interfaces within the Cisco UCS environment, will save you a lot of time and give you a much better understanding of how all the elements tie together.
This post builds on a previous post, “Understanding UCS VIF Paths”, where we used a combination of the GUI and CLI to establish the end-to-end traffic path used by a vNIC/vHBA. In this post we use the CLI exclusively, so if you haven’t done so already it may be worth checking that post out first.
Anyway, I was troubleshooting an intermittent performance issue the other day, from a Cisco UCS blade all the way back to the storage array, and thought this part of the process would make a useful post to document.
And certainly, if you ever get as far as needing to open a Service Request (SR) with Cisco, being able to provide the information below will save you and TAC a lot of time.
During this process I will be attaching directly to the ASICs within the IO Modules and these ASICs differ depending on whether you are using Generation 1 or Generation 2 Hardware.
As a nice “Cheat Sheet” I have provided the below table and graphic to show the relevant Cisco UCS ASIC and code names some of which we will need for this process.
In this example we will confirm the end-to-end path of a vNIC named vNIC_FabB1 of Service Profile DCN4PBKW001, which is in Chassis 3, Slot 2.
First, determine the Virtual Interface (VIF) and which fabric the vNIC/vHBA is currently using (if Fabric Failover is enabled).
Command “show service-profile circuit server <Chassis #>/<Slot #>”
Active Fabric and Assigned VIF
As we can see, vNIC_FabB1 is Active and Primary on Fabric B, and Passive and Standby on Fabric A. Therefore we can determine that the active fabric for this vNIC is Fabric B.
We can also see that the VIF associated with this vNIC is VIF 2024.
Side note) This vNIC will connect via a Virtual Network Link (VN-Link) to a vEth port of the same number, vEth2024, on Fabric Interconnect B, which can be viewed (and its statistics collected) via the connect nxos command at the UCSM CLI.
The next thing to determine is which internal IOM (FEX) port VIF 2024 is using.
From the UCSM CLI
connect iom <Chassis #>
show platform software woodside sts (use “redwood” for the IOM 2104)
I love the above command because it shows a representation of exactly how the FEX is being used. You can see all the internal blade-facing “satellite” ports, or Host Interfaces (HIFs), and all the external FEX network ports, or Network Interfaces (NIFs).
As can be seen, Blade 2 has access to internal FEX ports 3 and 4 but has only one active connection to the FEX, on FEX port 3, which maps to HIF 27 (outlined in red).
NB) The reason FEX port 4 is disabled is that the ports of a 220x FEX alternate between the mLOMs and the mezzanine slots of the blades, and the mezzanine slots in the above example are empty (hence all alternate (even) ports display as “–” for Disabled).
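The alternating layout described above is deterministic, so it can be sketched in a few lines. This is purely an illustration of the mapping, not a Cisco tool; the function name is my own.

```python
# Sketch of the 220x FEX backplane-port layout described above:
# odd ports serve each blade's mLOM, even ports the mezzanine slot.
# Illustrative only, assuming the alternating layout of a 220x IOM.

def fex_backplane_ports(blade_slot: int) -> tuple[int, int]:
    """Return (mLOM port, mezzanine port) for a given blade slot."""
    mlom = 2 * blade_slot - 1   # odd port -> mLOM
    mezz = 2 * blade_slot       # even port -> mezzanine ("--" if empty)
    return mlom, mezz

print(fex_backplane_ports(2))   # Blade 2 -> ports (3, 4)
```

This matches what we saw above: Blade 2 sits behind FEX ports 3 and 4, with port 4 disabled because the mezzanine slot is empty.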
Now we know which Host Interface (HIF) we are using, we next need to determine which FEX Network Interface (NIF) is being used.
If you are using a port-channel between the FEX and the FI, all servers are mapped to the port-channel and traffic is distributed over the members by the port-channel hashing algorithm (LACP negotiates the bundle; it does not balance the load).
In this case the FEX links have been left at the default setting, “Discrete Pinning” mode, in which case the relationship between server slot and FEX Network Interface is as follows:
HIF to NIF Mapping (4 FEX Links)
So, as can be seen above, FEX port 3 maps to Network Interface 2.
The HIF-to-NIF mapping differs depending on the IOM used and how many FEX cables are actually connected. The above shows all four links of a 2204XP connected; the below example shows how the HIF-to-NIF mapping occurs if two FEX cables are used:
HIF to NIF Mapping (2 FEX Ports used)
So Blade 2 (FEX ports 3 & 4) maps to Network Interface 2 (NIF 2).
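The two tables above follow a simple round-robin pattern, which can be generalized as a quick sanity-check. This is my own sketch of the discrete-pinning relationship, assuming the default round-robin behaviour shown in the tables (a port-channel would hash traffic across members instead):

```python
# Sketch of discrete-pinning HIF-to-NIF assignment, assuming the
# round-robin pattern shown in the tables above: with N fabric links,
# blade slot S pins to NIF ((S - 1) mod N) + 1. Illustrative only.

def pinned_nif(blade_slot: int, num_fex_links: int) -> int:
    """Return the NIF a blade slot pins to in discrete-pinning mode."""
    return (blade_slot - 1) % num_fex_links + 1

print(pinned_nif(2, 4))  # 4 links: Blade 2 -> NIF 2
print(pinned_nif(2, 2))  # 2 links: Blade 2 -> NIF 2
print(pinned_nif(5, 4))  # 4 links: Blade 5 -> NIF 1
```

Note that Blade 2 lands on NIF 2 whether four links or two links are cabled, which is exactly what the two tables show.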
OK, so next we establish which Server Interface (SIF) on the Fabric Interconnect we are using, which we do with the below command:
show fex <Chassis #> detail
FEX Port to Fabric Interconnect Server Port Mapping
So, as you can see, FEX port E3/1/3 is using FI fabric port Eth1/10.
The last thing we need to know is which FI uplink port we are pinned to.
show pinning server-interfaces | inc Veth2024
vEth to Uplink Port Pinning
NB) show pinning border-interfaces active can also be used to see the same information from another perspective.
As you can see, Veth2024 is pinned to FI uplink Port-Channel 11.
So, armed with all the above information, you can draw out all the ports in the Cisco UCS traffic path. This in itself will save a lot of time if you need to engage TAC.
End-to-End Traffic Path within the UCS Infrastructure
NB) For further information on advanced Cisco UCS troubleshooting at the CLI, I would strongly suggest checking out the recorded session by Robert Burns (first CCIE Data Center, TAC and Cisco Community legend), available for free at https://www.ciscolive.com (just set up an account).