This document describes the procedure and concepts used to configure UCS for networking on your LAN. There can be many different VLANs configured in a LAN environment for different applications and purposes: for example, one VLAN for VMware ESXi server management and access, and another for the server application that runs as a virtual machine on the VMware ESXi server.
In UCS, the VLAN implementation is influenced by the concept of End-host mode. The UCS system works in End-host mode by default. In End-host mode, the UCS 6100 Series Fabric Interconnect appears to the external LAN as an end station with many adapters.
Make sure you have read and understood the following document; it provides the foundation for the concepts discussed here.
IP networking between the virtual machines is done through the different types of adapters that are part of the blade server. Four types of network adapters are supported to provide the functionality required for the diverse workloads in a data center.
Follow the links above to take a look at the detailed description of each one of them.
The UCS 82598KR is a dual-port 10 Gigabit Ethernet card designed for efficient, high-performance Ethernet transport. The important thing to note is that this card does not provide a Fibre Channel port.
The UCS M71KR-E provides a dual-port connection to the midplane of the blade server chassis. This card combines a 10 Gbps Ethernet controller and a 4 Gbps Fibre Channel controller for Fibre Channel traffic on the same mezzanine card. The UCS M71KR-E presents two discrete Fibre Channel HBA ports and two 10 Gbps Ethernet network ports to the operating system.
The UCS M71KR-Q provides a dual-port connection to the midplane of the blade server chassis. This card combines a 10 Gbps Ethernet controller and a 4 Gbps Fibre Channel controller for Fibre Channel traffic on the same mezzanine card. The UCS M71KR-Q presents two discrete Fibre Channel HBA ports and two 10 Gbps Ethernet network ports to the operating system.
The only difference between the M71KR-E and the M71KR-Q is that the former has an Emulex Fibre Channel controller and the latter has a QLogic Fibre Channel controller.
The UCS M81KR is a dual-port 10 Gbps Ethernet mezzanine card that supports up to 128 dynamically configurable PCIe virtual interfaces.
This document and the concepts presented here are based on the UCS M71KR Converged Network Adapter.
In the Unified Computing System (UCS), each VLAN is identified by a unique ID, a number that represents that particular VLAN. A VLAN can be created so that it is visible on either one fabric interconnect or on both. It is also possible to create separate VLANs with the same name on the two fabric interconnects but with different VLAN IDs. When a VLAN is created in UCS it is also assigned a name, but that name is known only within the UCS environment; outside of UCS, the VLAN is represented solely by its unique ID.
Guidelines and Requirements
The configuration and options presented in this document assume that the following guidelines are met and understood.
The UCS-6120XP Fabric Interconnect should be in End Host mode. Switch mode is not discussed in this document.
This document presents only the UCS Manager GUI-based configuration option.
There can be multiple ways to achieve the same tasks in the UCS Manager GUI.
In End Host mode, all the blade servers and their interfaces attached to the 6120XP look to it like a set of NIC cards.
There is inherent pinning between the blade server and the uplink Ethernet switch (Catalyst 6500 or Nexus 5000).
Creating a VLAN in UCS means that the configured VLAN is automatically allowed on the uplinks as well.
A vNIC within the UCS system is a host-presented PCI device that is managed centrally by UCS Manager.
Uplink Switch Configuration
Make sure that the appropriate VLANs are created on the uplink switch (for example, a Catalyst 6500).
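As a minimal sketch, the uplink VLAN and trunk configuration on a Catalyst 6500 running Cisco IOS might look like the following; VLAN 433, the VLAN name, and the interface number are examples only and must be adapted to your environment:

```
! Create the VLAN used by the UCS blade servers (VLAN 433 is an example)
vlan 433
 name VM-Network
!
! Trunk the uplink toward the UCS 6120XP fabric interconnect
! and allow the new VLAN on it
interface TenGigabitEthernet1/1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan add 433
```

Using "allowed vlan add" (rather than "allowed vlan") preserves the VLANs already permitted on the trunk.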
UCS Fabric Interconnect Configuration using UCS Manager
It is highly recommended to create all the necessary VLANs before configuring the Service Profile.
Create a common/global VLAN under the LAN tab in UCS Manager, as shown in the following picture.
A common/global VLAN, as the name suggests, is visible to both fabric interconnect switches.
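The same VLAN can also be created from the UCS Manager CLI; a minimal sketch, assuming the VLAN is named VLAN433 with ID 433 (both values are examples):

```
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan VLAN433 433
UCS-A /eth-uplink/vlan* # commit-buffer
```

Because the VLAN is created under the eth-uplink scope (rather than under fabric a or fabric b), it is a common/global VLAN visible to both fabric interconnects.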
This VLAN (for example, VLAN 433) must be visible under the Servers tab. Also make sure the right VLAN ID is populated.
Further verification can be done under the Equipment tab in UCS Manager by making sure that the VLAN is visible under the appropriate vNIC, as shown in the following picture.
With the above configuration, the virtual machine should be able to ping its default gateway, and IP connectivity should be complete.
Creating VLANs After Creating Service Profile
As mentioned before, it is recommended to create all the VLANs before creating a service profile. However, UCS is flexible enough to accommodate and apply a new VLAN to existing Service Profiles as well. In this situation, a few considerations should be made.
If you configure a VLAN after the Service Profile has been created and you notice that the "new" VLAN is not bound to the vNIC on the blade server, the solution is to modify the vNIC configuration and add the new VLAN to it.
Under the Servers tab in UCS Manager, check whether the "new" VLAN is visible. The following picture shows that the "new" VLAN 433 is not visible.
Click "Modify" in the vNIC configuration option (shown in the above picture).
The following screen shows the Modify vNIC dialog.
Make sure the VLAN Trunking option is checked and the Native VLAN is correctly selected per your IP network policy. Then check the "new" VLAN (VLAN 433 in this case).
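The same change can also be sketched from the UCS Manager CLI; the service profile name (SP1), vNIC name (eth0), and VLAN name (VLAN433) below are hypothetical placeholders for your own object names:

```
UCS-A# scope org /
UCS-A /org # scope service-profile SP1
UCS-A /org/service-profile # scope vnic eth0
UCS-A /org/service-profile/vnic # create eth-if VLAN433
UCS-A /org/service-profile/vnic/eth-if* # commit-buffer
```

Creating the eth-if under the vNIC binds the named VLAN to that vNIC, which is the CLI equivalent of checking the VLAN in the Modify vNIC dialog.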