on 10-05-2009 12:47 PM - edited on 03-25-2019 01:23 PM by ciscomoderator
The information in this document is based on Cisco UCS.
The information in this document was created from devices in a specific lab environment. All of the devices used in this document started with a default configuration. If your network is live, make sure that you understand the potential impact of any command.
UCS-6120XP -- 20 fixed 10GE/FCoE ports, 1 expansion module
UCS-2104XP -- 4-port Fabric Extender
UCS-B200-M1 -- Half-width blade server
Catalyst 6506 switch for uplink Ethernet connectivity
Standard LC-to-SC cable to connect the 6120XP to the Cat6K switch
SFP-10G-SR transceiver in the 6120XP for the Ethernet uplink
X2-10GB-SR module in the Cat6K for Ethernet connectivity with the UCS-6120XP
Twinax cable (SFP-H10GB-CU5M) to connect the Fabric Extender to the UCS-6120XP
Cisco UCS Fabric Interconnects provide these port types:
Server Ports—Server Ports handle data traffic between the Fabric Interconnect and the adapter cards on the servers.
You can only configure Server Ports on the fixed port module. Expansion modules do not support Server Ports.
Uplink Ethernet Ports—Uplink Ethernet Ports connect to external LAN Switches. Network bound Ethernet traffic is pinned to one of these ports.
You can configure Uplink Ethernet Ports on either the fixed module or an expansion module. Uplink Ports are also known as border links.
Uplink Fibre Channel Ports—Uplink Fibre Channel Ports connect to external SAN switches. They can also connect directly to the SAN storage itself using FC protocols (typically in lab environments). Network-bound Fibre Channel traffic is pinned to one of these ports.
You can only configure Uplink Fibre Channel Ports on an expansion module. The fixed module does not include Uplink Fibre Channel Ports.
The Ethernet switching mode determines how the Fabric Interconnect behaves as a switching device between the servers and the network. The UCS fabric interconnect operates in either of the following Ethernet switching modes:
End-Host Mode
Switch Mode
End-host mode allows the Fabric Interconnect to act as an end host to the network, representing all servers (hosts) connected to it through vNICs. This is achieved by pinning vNICs (either dynamically or hard pinned) to uplink ports, which provides redundancy toward the network and makes the uplink ports appear as server ports to the rest of the fabric. In end-host mode, the Fabric Interconnect does not run the Spanning Tree Protocol (STP); it avoids loops by preventing uplink ports from forwarding traffic to each other and by preventing egress server traffic from leaving on more than one uplink port at a time.
For the purposes of this document, and for running applications on VMware on UCS, it is recommended to configure the Fabric Interconnect in End-Host mode.
Switch mode is the traditional Ethernet switching mode. In this mode the Fabric Interconnect runs STP to avoid loops, and broadcast and multicast packets are handled in the traditional way.
Switch mode is not the default Ethernet switching mode in UCS, and should be used only if the Fabric Interconnect is directly connected to a router, or if either of the following are used upstream:
Layer 3 aggregation
vLAN in a box
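The switching mode can also be set from the UCS Manager CLI instead of the LAN Uplink Manager GUI. A minimal sketch (the UCS-A prompt is illustrative; note that committing a mode change restarts the fabric interconnect, as described below):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # set mode end-host
UCS-A /eth-uplink* # commit-buffer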
For more information on the Ethernet switching modes with UCS, refer to the Cisco UCS documentation.
At this point it is a good idea to validate a few items before moving forward. If you intend to use two twinax cables between the UCS-6120XP and the UCS chassis, you should define that under the Equipment policy setting as shown below. Also notice that only 1, 2, or 4 links are possible between the UCS-6120XP and the chassis fabric extender. The purpose of these links is to distribute the blade server traffic load across the 10GE twinax cables. In our scenario we have only two half-width blades in a single-chassis system, so blade #1 uses link #1 and blade #2 uses link #2. This assignment is automatic and cannot be changed.
Also check the following configuration to make sure that, as admin, you can see all tasks; by default, only "Faults and Events" are selected in the filter.
The recommended and default Ethernet mode of operation for the UCS-6120XP is End-Host mode. When you change the Ethernet switching mode, UCS Manager logs you out and restarts the fabric interconnect (UCS-6120XP). In a cluster configuration, UCS Manager restarts both fabric interconnects sequentially. Launch the LAN Uplink Manager to verify and configure the mode.
In the following picture, the mode is "Switch" right now. Change it to End-Host mode.
Also create a VLAN that will be used for ESX server management and other management purposes. This should be a VLAN that does not carry other traffic.
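The same VLAN can also be created from the UCS Manager CLI. A sketch; the VLAN name "mgmt" and ID 100 are example values, substitute your own:

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan mgmt 100
UCS-A /eth-uplink/vlan* # commit-buffer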
On the CAT6K side, configure the following:
interface TenGigabitEthernet5/2
description UCS-6120-XP Fabric Interconnect Port 20
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast edge
When we reach the point where we configure the vNIC, we will have to make sure to select the Native VLAN option as follows; otherwise it will not work with the above CAT6K configuration.
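Because a vNIC marked with the Native VLAN option sends those frames untagged, the native VLAN on the CAT6K trunk must match it. A sketch, assuming the management VLAN is 100 (an example value, adjust to your own VLAN ID):

interface TenGigabitEthernet5/2
 switchport trunk native vlan 100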
There are three different types of ports that can be configured on the UCS-6120 switch
Now it is time to configure the Unified Fabric Interconnect Switch (UCS-6120) Port to be connected to the Chassis.
Assuming that the twinax cable is already connected between the chassis and the UCS-6120, configure the port as follows by selecting Configure as Server port.
And the overall connectivity, including the backplane port, looks like the following:
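For reference, the same server port configuration can be done from the UCS Manager CLI. A sketch, assuming fixed-module slot 1, port 4 (adjust to the port your twinax cable is plugged into):

UCS-A# scope eth-server
UCS-A /eth-server # scope fabric a
UCS-A /eth-server/fabric # create interface 1 4
UCS-A /eth-server/fabric/interface* # commit-buffer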
Each UCS Fabric Interconnect provides ports that can be configured as either Server Ports or Uplink Ports. Ports are not reserved for a specific use, but they must be configured, preferably using UCS Manager. You can add expansion modules to increase the number of Uplink Ports on the Fabric Interconnect. LAN Pin Groups are created to pin traffic from servers to a specific Uplink Port.
UCS-6120XP ports are 10GE ports, so a regular SFP transceiver cannot be used to connect them to the uplink Ethernet port. Unlike 10/100/1000 Ethernet ports, a 10GE Ethernet port cannot auto-negotiate speed and duplex, so the UCS-6120XP port must be connected to a 10GE uplink Ethernet port. A 10GE transceiver (e.g. SFP-10G-SR) is required on the UCS-6120XP switch. Make sure to order it as part of your UCS order, because the Netformix ordering tool will not warn you if you miss it.
On the CAT6K switch, an X2-10GB-SR transceiver (SC interface type) can be used together with a regular LC-to-SC fiber cable.
Alternatively, you can use a CVR-X2-SFP10G converter in the X2 slot to get an LC-type (SFP+) connection, and then use a regular fiber cable or a twinax (SFP-H10GB-CU5M) cable to connect the UCS-6120XP to the uplink Ethernet switch.
The port configuration on the UCS-6120XP is done at the Service Profile vNIC level. VLAN trunking can be set to either YES or NO; in a typical deployment it should be set to YES. On the CAT6K side, the port should then be configured as a trunk port. The following is an example CAT6K configuration:
interface TenGigabitEthernet5/2
description UCS-6120-XP Fabric Interconnect Port 20
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast edge
!
show interfaces tenGigabitEthernet 5/2
TenGigabitEthernet5/2 is up, line protocol is up (connected)
Hardware is C6k 10000Mb 802.3, address is 0023.04d7.1d5d (bia 0023.04d7.1d5d)
Description: UCS-6120-XP Fabric Interconnect Port 20
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
On the UCS-6120XP side, the steps are very similar to configuring a server port. In the UCS Manager GUI, right-click the port that you want to configure as an uplink Ethernet port and select the corresponding option.
Once the connection with the Ethernet switch is established, the port is moved under the Uplink Ethernet port group and shows an enabled/up state, as follows.
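The equivalent uplink port configuration from the UCS Manager CLI looks like the following sketch, assuming fixed-module slot 1, port 20 (the port connected to the CAT6K in our setup):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create interface 1 20
UCS-A /eth-uplink/fabric/interface* # commit-buffer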
Service Profiles are what define a blade server as a "complete server". A Service Profile contains all the information related to a server.
For example, the NIC card that it has and the MAC address that is going to be assigned to that NIC. Think of a Service Profile as "virtual hardware".
A Service Profile is the logical definition that gets applied to the actual physical hardware.
In UCS Manager, in the "Servers" tab, you will notice that the root organization is already created, with various default options already selected. The information that is available now is enough for us to create a service profile and apply it to the blade server.
There are many possible ways of doing these steps, and this document discusses only one of them. In a multi-tenant environment, or for large organizations that need to separate resources between functional groups (for example Engineering, Support, and Sales), you can also define sub-organizations under the root organization, but this document does not discuss those advanced options.
Under the Servers tab, select the "Create Service Profile (expert)" option. It is very important that you select this option; otherwise you will end up with error messages telling you to fix other steps. Those are not difficult to fix, but this approach is the fastest and easiest way to create a service profile and apply it to the physical blade.
Specify the unique identifier for the service profile. This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special characters, and you cannot change this name after the object has been saved.
The UUID drop-down list enables you to specify how you want the UUID to be set. There are multiple options available.
You can use the UUID assigned to the server by the manufacturer by choosing Hardware Default. If you choose this option, the UUID remains unassigned until the service profile is associated with a server; at that point, the UUID is set to the value assigned by the manufacturer. If the service profile is later moved to a different server, the UUID changes to match the new server. So, to keep the machine stateless, we picked the option to specify a UUID manually. Typically a customer would create a UUID pool, and the UUID would then be picked from the pool automatically. It is also important to verify that the UUID is available by clicking the corresponding link.
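If you later move to pool-based UUIDs, a pool can be created from the UCS Manager CLI. A sketch; the pool name and suffix range below are examples only:

UCS-A# scope org /
UCS-A /org # create uuid-suffix-pool esx-uuid-pool
UCS-A /org/uuid-suffix-pool # create block 0000-000000000001 0000-000000000020
UCS-A /org/uuid-suffix-pool/block* # commit-buffer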
Next step is to configure the storage options.
If you want to use the default local disk policy, leave this field set to Select Local Storage Policy to use. If you want to create a local disk policy that can only be accessed by this service profile, then choose Create a Specific Storage Policy and choose a mode from the Mode drop-down list.
We have selected Create Local Disk Configuration Policy link. It enables you to create a local disk configuration policy that all service profiles can access. After you create the policy, you can then choose it in the Local Storage drop-down list.
Select simple in the following screen.
Simple—You can create a maximum of two vHBAs for this service profile
Expert—You can create an unlimited number of vHBAs for this service profile
No vHBAs—You cannot create any vHBAs for this service profile
If you want to:
Use the default WWNN pool, leave this field set to Select (pool default used by default).
Use a WWNN derived from the first vHBA associated with this profile, choose Derived from vHBA.
Use a specific WWNN, choose 20:XX:XX:XX:XX:XX:XX:XX or 5X:XX:XX:XX:XX:XX:XX:XX and enter the WWNN in the World Wide Node Name field.
Use a WWNN from a pool, select the pool name from the list. Each pool name is followed by the number of available/total WWNNs in the pool.
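For the pool-based option, a WWNN pool can be created from the UCS Manager CLI. A sketch; the pool name and WWN range are examples (the 20:00:00:25:B5 range shown is the kind of block commonly used for UCS pools):

UCS-A# scope org /
UCS-A /org # create wwn-pool esx-wwnn-pool node-wwn-assignment
UCS-A /org/wwn-pool # create block 20:00:00:25:B5:00:00:01 20:00:00:25:B5:00:00:20
UCS-A /org/wwn-pool/block* # commit-buffer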
We randomly picked the following WWNN, because we did not want to use the WWNN from the physical HBA; that would tie the server to the hardware instead of keeping it stateless. Make sure to verify that it is available.
Ignore the warning in the following screen.
For now we have selected the virtual CD as the first boot device and the local hard drive as the second. It is also good practice to order them this way, so that if your local hard drive has issues you can always boot from the CD.
Now associate the physical blade with this service profile.
Each physical blade is capable of supporting remote KVM and remote media access. This is made possible by assigning IP addresses to the cut-through interfaces that correspond to each blade's BMC. Typically, these addresses are configured on the same subnet as the management IP address of UCS Manager. This pool is created from the Admin tab by selecting "Management IP Pool".
In our example, the IP address 10.194.40.208 is used to connect to UCS Manager, and 10.194.40.33 is assigned to the Management IP Pool so that it can be used to connect to the blade server via KVM and for remote media access.
Notice that in our example we have only one IP address, because we are configuring only one blade right now; one IP address is required per blade server.
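The same pool can be populated from the UCS Manager CLI. A sketch, assuming the default ext-mgmt pool and a gateway and netmask on our management subnet (all addresses below are from this example, substitute your own):

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 10.194.40.33 10.194.40.33 10.194.40.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer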
At this point, the VMware ESXi 4.0 hypervisor (VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso) is installed on the UCS blade server, and you can customize the hypervisor with its IP address information so that you can connect to it and manage it from the VMware vCenter client.
After that part is done, notice in the above picture that you will not be able to connect to or ping 172.19.246.187 yet; a few more steps are required before you can do that.
Why?
Because, remember, there are two different Ethernet networks here.
Management network: the 10.194.x.x network (used to connect to UCS Manager and for KVM-over-IP access to the blade. This is an out-of-band network and connects to the management port on the UCS Fabric Interconnect, the UCS-6120XP).
UCSManager----UCS-6120XP----->ManagementPort---->10.194.x.x network
VMware management network: the 172.19.246.x network (used to connect to the VMware hypervisor over the in-band network so that it can be managed by vCenter; this same network can be used for iSCSI traffic if needed). Physically, this traffic goes through the UCS chassis fabric extender to the UCS Fabric Interconnect and out the CAT6K Ethernet uplink port.
ESX4i----->UCS-Blade----->UCS-Chassis---->Fabric-Extender---->UCS-6120XP----->CAT6K--->172.19.246.x network
So the next task is to configure the vmware management network.
Now, before we move any further, it is time to understand some Utility Computing concepts.
This document will now focus on the following items, in order:
UCS provides the infrastructure to support stateless server environments. Utility Computing, in general, allows logical servers (OS/applications) to run on any physical server or blade, in a way that hides traditional hardware identifiers (WWN, MAC, UUID, etc.) as an integral part of the logical server identity. This logical server identity is called the "UCS Service Profile".
So we begin by creating the pools and policies first. We also create vNIC and vHBA templates and then assign them to a Service Profile template. To make rapid provisioning as easy as possible, service profile templates can then be used to quickly instantiate actual service profiles (possibly even automatically associating them with physical servers/blades).
So think of these UCS Service Profiles as something like a VMware virtual server configuration that is associated with a physical UCS blade server.
Pools are the base building block for provisioning unique identification of actual hardware resources. Define and use pools as a standard practice.
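As an example of pool-based identity, a MAC pool for vNICs can be created from the UCS Manager CLI. A sketch; the pool name and range are examples (00:25:B5 is the prefix commonly used for UCS MAC pools):

UCS-A# scope org /
UCS-A /org # create mac-pool esx-mac-pool
UCS-A /org/mac-pool # create block 00:25:B5:00:00:01 00:25:B5:00:00:40
UCS-A /org/mac-pool/block* # commit-buffer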
To connect SAN storage, you have multiple choices. First you have to decide on the protocol: you can use iSCSI or the Fibre Channel (FC) protocol to connect to your storage server. Cisco's recommendation is to use Fibre Channel with the UCS solution, and for the purposes of this document we use FC as well.
For physical connectivity, you can either connect the fabric interconnect directly to the SAN storage, or connect the fabric interconnect to another SAN switch (an MDS or Nexus switch) which then connects to the SAN storage. In a typical enterprise data center it is highly likely that you would use choice #2, and Cisco's recommendation is to use the multi-tier model to avoid a single point of failure.
For the scope of this document we will discuss choice#1.
Choice#1
UCS-Chassis----FCoE-------- ucs-6120xp ------FC---------SAN-Storage
|________Ether____CAT6K
Choice#2
UCS-Chassis----FCoE------ucs-6120xp----FC----MDS-9124----FC----------SAN-Storage
|___ether______CAT6K