
General UCS Questions

nicksousa
Level 1

Hello,

 

I have a few questions about UCS. Feel free to answer any and all of them, thanks!

 

1.    What are the main benefits of using FCoE vs FC for UCS?
2.    Can someone point me toward performance data of FCoE vs FC within UCS?
3.    What is the difference between the fixed and expansion modules on the FIs, other than being able to replace the expansion modules? One difference I noticed is that when you select "Configure Unified Ports" on the FI, it says that the entire FI will have to reboot if you use fixed ports, but the expansion port module can reboot independently. Are there any other differences between the fixed and expansion modules?
4.    What is the best practice for creating Organizations? Is the boundary usually the datacenter? What benefits are there for combining multiple datacenters (that might be within the same geographical region) into a single organization?
5.    Do most people configure UCS from "root", or do they always configure everything within the specific Organizational units that they create?
6.    The video here mentioned that most organizations set up separate VSANs for each fabric (A & B).
a.    How are the Fibre Channel switches set up to accommodate the VSANs on the FIs?
7.    Is there any benefit to having separate VSAN/VLAN IDs between Fabric A and B? Some examples I see use the same VSAN ID on both fabrics, whereas others do not.
8.    In this diagram (link), there are 4 connections going from the FIs to the SAN. Each FI is cabled to each Storage Processor. In this configuration, how should the SAN be configured - ALUA?
9.    (For ESXi hosts) What are the pros/cons of creating a vNIC template for each fabric, versus a vNIC template for each vmnic?
10.    Under what conditions should you not check "Enable Failover" on vNIC templates for ESXi hosts?

11.  What is the best practice for the FI-to-IOM (FEX) link configuration under Equipment > Policies > Global Policies > Chassis/FEX Discovery Policy? Do many companies configure it as a Port Channel or not?

 

Accepted Solution

1) Complexity, as best I can tell. For us, we decided to stay with native FC as we already have the native FC buildout. Our network group would like us to switch to FCoE (we use N5Ks for our datacenter fabric switches), but as the SAN group we are all comfortable with native FC right now and didn't want to complicate our UCS deployment with a switch to FCoE.

2) UCS is FCoE from the interconnect back. There is no performance difference. Once native FC lands at the interconnect, the interconnect re-encapsulates the frame with FCoE headers, places it on a VLAN you configure (it must be a unique VLAN; UCS will not let it overlap with production VLANs already defined in UCS) and sends it to the VIC. The VIC then strips the FCoE encapsulation and delivers the frame to the OS as if it were native Fibre Channel, so the performance difference between the two is negligible.
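
From the host's point of view, the vHBAs presented by the VIC really do look like ordinary FC adapters. A quick sanity check on an ESXi host (a sketch; the exact output fields vary by release) is:

   esxcli storage san fc list

which lists each vHBA with its WWNN/WWPN, speed and port state, just as it would for a physical HBA.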

3) The fixed module is controlled as one unit: when you change its unified ports, the entire interconnect reloads. With an expansion module, only the expansion module reboots if you change its unified ports setting. Another thing to note: when you purchase an expansion module you get additional port licenses that can be used anywhere (they do not have to license an expansion module port). Other than that, performance and functionality are identical to the fixed module.
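
If you want to see where those port licenses are actually being consumed, UCS Manager can report license usage per fabric interconnect. A minimal UCSM CLI sketch (assuming the license scope behaves the same in your release):

   UCS-A# scope license
   UCS-A /license # show usage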

4) We create organizations for each type of OS because different groups manage different OSes. For example, we have an org for VMware, one for Linux, one for Windows, and a special Oracle one for our DBAs. We then assign permissions via AD-backed locales so the groups get access to their servers with minimal effort from us.
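
A rough UCSM CLI sketch of that layout follows; the org and locale names are placeholders, and the org-ref syntax is from memory, so verify it against your release before using it:

   UCS-A# scope org /
   UCS-A /org # create org VMware
   UCS-A /org/org* # exit
   UCS-A /org* # create org Linux
   UCS-A /org* # commit-buffer
   UCS-A# scope security
   UCS-A /security # create locale VMwareAdmins
   UCS-A /security/locale* # create org-ref vmware-ref orgdn org-root/org-VMware
   UCS-A /security/locale* # commit-buffer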

5) Everything in root unless something needs to go below it (we haven't run into that yet).

6) As a "best practice" you should always have two separate fabrics. UCS needs to put the fabrics into FCoE VLANs, and to avoid confusion it is best to put each fabric in its own VSAN. For example, we run our production SAN fabric A in VSAN 10 everywhere, with FCoE VLAN 3210; SAN fabric B is in VSAN 11 with FCoE VLAN 3211.
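
On the UCS side, that scheme looks roughly like this in the UCSM CLI (a sketch; the arguments to create vsan are name, VSAN ID and FCoE VLAN, and the prompts are abbreviated here):

   UCS-A# scope fc-uplink
   UCS-A /fc-uplink # scope fabric a
   UCS-A /fc-uplink/fabric # create vsan Fabric-A 10 3210
   UCS-A /fc-uplink/fabric* # exit
   UCS-A /fc-uplink* # scope fabric b
   UCS-A /fc-uplink/fabric* # create vsan Fabric-B 11 3211
   UCS-A /fc-uplink/fabric* # commit-buffer
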
6a) From an MDS/Nexus perspective you set up an F-port trunk. UCS logs into the fabric as an N-port using NPIV (UCS logs in and then tells the fibre switch, "by the way, these additional WWPNs are also on this port I just logged in to, send their data to me"). You put the F-port channel you create on the MDS/Nexus device into the VSAN you want (in my example, VSAN 10 for A and VSAN 11 for B). Also note: you never cross fabrics from the interconnect up to your SAN fabric. Interconnect A will have a port channel to the A SAN fabric and Interconnect B to the B SAN fabric. Think of the interconnects as HBAs on a normal server: you don't mix the A and B fabrics. This is how you get SAN redundancy.
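
On the fabric A MDS/Nexus switch, the F-port channel side looks roughly like this (a sketch assuming two uplinks on fc1/1-2 and san-port-channel 10; pick IDs that fit your environment):

   feature npiv
   feature fport-channel-trunk
   !
   vsan database
     vsan 10 name FABRIC-A
   !
   interface san-port-channel 10
     switchport mode F
     switchport trunk mode on
     channel mode active
   !
   vsan database
     vsan 10 interface san-port-channel 10
   !
   interface fc1/1-2
     switchport mode F
     channel-group 10 force
     no shutdown

Fabric B is the mirror image on its own switch, using VSAN 11.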

7) See above.  I would highly recommend you create separate VSANs for each fabric.  This will ensure nothing ever gets crossed.

8) If you connect the FIs directly to the SAN storage processors you must enable FC switching mode on the UCS. You will not be able to connect the UCS to any Fibre Channel switches once you do this, and all zoning must be done in UCSM. It does work (we do this in our test environment); just be aware that it does not scale well past one domain. You would provision your SAN the same as before and enable ALUA. The first port on each storage processor would go to FI-A and the second port on each SP would go to FI-B. Then zone appropriately in UCSM.
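
For a typical two-controller array, the cabling from that diagram works out to something like the sketch below (port naming is array-specific, so treat the labels as placeholders); the FI ports involved are configured as FC Storage Ports in UCSM:

   SP-A port 0 -> FI-A (FC storage port)
   SP-A port 1 -> FI-B (FC storage port)
   SP-B port 0 -> FI-A (FC storage port)
   SP-B port 1 -> FI-B (FC storage port)

With ALUA enabled, the host multipathing software will prefer the paths through whichever SP currently owns a given LUN.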

9) Our ESXi hosts get a minimum of six vNIC templates. We create mgmt A, mgmt B, vMotion A, vMotion B, data A and data B. As you can guess, we pin each template to a fabric and do not let it fail over. This gives us a ton of flexibility inside of ESXi. Additionally, I believe it is "best practice" not to use fabric failover for VMware. (If you choose to, you will get alarms in VMware stating there is no NIC redundancy. There is an advanced setting to disable that; I don't have it offhand, so you would need to google it.) I recommend making a template for each fabric and keeping them updated as a pair.
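
On the ESXi side, each A/B pair then simply becomes two active uplinks on the relevant vSwitch, and ESXi handles failover between them. A minimal sketch, assuming a standard vSwitch named vSwitch0 whose uplinks vmnic0 and vmnic1 are the A-side and B-side data vNICs:

   esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic0,vmnic1
   esxcli network vswitch standard policy failover get -v vSwitch0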

10) I would not use fabric failover for anything VMware related; VMware does a great job of managing redundancy, so let it do its job. We only use fabric failover for Windows and Linux hosts running on UCS.

11) Yes, configure it as a port channel. Do this before you connect the chassis, as you cannot do it non-disruptively later. When a chassis gets auto-acknowledged, port channels or pin groups are set up behind the scenes, and you cannot modify them directly. Therefore, if you switch to port channels later, the only way to "rebuild" is to re-acknowledge the chassis, which will take the chassis and everything in it offline. Bad day. If you don't use port channels, the blade vEth interfaces get pinned to an individual uplink (following Cisco's defined process; google "UCS pinning" for more info). With port channels, servers are pinned to the port-channel interface instead of an individual uplink, which makes things much more stable and resilient to link failure. With pinning, if memory serves, if the link a server is pinned to goes down, the server's interface goes down; it does not re-pin to another link. So, for example, if you don't use a port channel and a server gets pinned to uplink 1/1 and that link goes down, the server will see its interface as down. With a port channel, as long as one link in the channel is up, the server's interface stays up.
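
A minimal UCSM CLI sketch of that global policy (the scope and keyword names here are from memory and may differ slightly by release, so verify before pasting):

   UCS-A# scope chassis-disc-policy
   UCS-A /chassis-disc-policy # set link-aggregation-pref port-channel
   UCS-A /chassis-disc-policy* # set action 2-link
   UCS-A /chassis-disc-policy* # commit-buffer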

I hope that helps. I typed this from my iPad, so feel free to reply and I'll try to clarify.

