First, I'm new to UCS, so excuse my lack of knowledge. I have some questions about iSCSI and NFS traffic over 10G Ethernet.
We are planning on 5548s and two 6248 Fabric Interconnects. When does it make sense to connect our iSCSI SANs to the 5548s versus using Appliance ports on the interconnects? It makes more sense to me to connect them to the 5548s.
Should this traffic be on a separate port channel, or on the same port channel as the rest of the traffic in an isolated VLAN?
Does it make sense to pin vNICs to interconnect ports, or is that unnecessary given all the bandwidth?
In regard to your questions: unless the iSCSI SANs have dedicated ports just for UCS, or exist solely to provide storage for UCS, I wouldn't do direct attach. It's a far simpler design to connect via your 5548s, especially if you're also going to serve storage to servers outside of the UCS system.
Unless you have extremely high iSCSI utilization, I wouldn't suggest you need a separate uplink Pin Group either. Pin Groups can be used here, but they add complexity, and with a multi-10G uplink port channel you'll likely never push it to capacity anyway. A single uplink port channel should be adequate and keeps things simple.
Along with this, allow the system to round-robin your vNICs across the uplinks. If you're not using uplink Pin Groups, you don't need vNIC pinning. Again, that's more complexity, and unless you're versed in troubleshooting and supporting it, I would stick with the straightforward design.
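For reference, the single uplink port channel on the 5548 side toward one Fabric Interconnect might be sketched like this (a rough NX-OS fragment; the interface numbers, port-channel ID, and VLAN list are placeholders, not from your environment):

```
! On the 5548: bundle two 10G links toward FI-A into one LACP port channel
interface port-channel10
  description Uplink-to-UCS-FI-A
  switchport mode trunk
  switchport trunk allowed vlan 10,20,100

! Member ports
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
```

The same pattern would be repeated for the links toward FI-B.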
What I would suggest is the following:
- Put all your iSCSI traffic on a separate L2 VLAN.
- Try to locate your target iSCSI SAN on this same VLAN (avoids routing and packet fragmentation).
- Enable jumbo frames on this VLAN end to end. (There are multiple whitepapers and posts on this topic.) iSCSI + jumbo frames @ 10G is the closest thing you'll get to Fibre Channel performance on a SAN.
- Set up one of the UCS Class of Service queues for jumbo frames (this will set your MTU accordingly). One thing that's often missed here is appropriately marking the CoS on the return traffic. You would do this on the interface and switch the iSCSI SAN directly connects to, which should be the 5548s in your case.
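On the 5548 side, those steps might look roughly like the following NX-OS fragment. This is a sketch under assumptions: VLAN 100, interface Ethernet1/10, and CoS 4 are placeholders, and on the Nexus 5500 jumbo MTU is applied system-wide via a network-qos policy rather than per interface. Verify the exact QoS commands and the CoS value against your UCS QoS system class configuration.

```
! Dedicated iSCSI VLAN
vlan 100
  name iSCSI

! Nexus 5500: jumbo MTU is set via a network-qos policy, not per interface
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

! Mark return traffic from the SAN with the CoS of your UCS jumbo class
interface Ethernet1/10
  description iSCSI-SAN-port
  switchport mode access
  switchport access vlan 100
  untagged cos 4
```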
Great, thanks for the advice. A couple of other questions regarding VMware environments.
When would I use dynamic vNICs, and when would I not? I assume I don't want my vNICs changing; I don't understand the concept behind dynamic vNICs.
When designing vNICs for vMotion, iSCSI, etc., would I use two of them, one to Fabric A and one to Fabric B, or do I just need one since UCS handles the redundancy? I think I read somewhere to create one per fabric but not to use teaming in VMware; instead, set one as active and one as passive/standby.
Dynamic vNICs (aka VM-FEX) are a 1000v-style assignment of UCS vNIC resources directly to VMs. Essentially, they give you network control (via port profiles) of VMs from UCS. There's a wealth of resources detailing dynamic vNICs. Here's a couple to review.
As for teaming: if you have an adapter that supports Fabric Failover (M71 or M81 series adapters), you can present a single vNIC to the host, and UCS will fail over if there is a fabric failure. Any failures are transparent to the host. If you'd prefer the host to be aware of a path failure and respond accordingly, you can go the traditional route and use two vNICs, one to each fabric, but there's no port channeling (teaming based on IP hash). You can still use both NICs simultaneously with the default vSwitch teaming method (route based on originating virtual port ID), or use active/passive teaming.
I prefer utilizing both adapters with the default teaming method. This provides 2 x 10G active uplinks, rather than the single 10G you'd get from one vNIC with Fabric Failover or from active/passive teaming.
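Assuming two vNICs that show up in ESXi as vmnic0 and vmnic1 on vSwitch0 (all placeholder names), the active/active default teaming could be set from the ESXi shell roughly like this (check the esxcli flag names against your ESXi version's documentation):

```
# Both uplinks active, default "route based on originating virtual port ID"
# policy (portid) -- no IP-hash port channeling toward UCS
esxcli network vswitch standard policy failover set \
  --vswitch-name vSwitch0 \
  --active-uplinks vmnic0,vmnic1 \
  --load-balancing portid

# Alternative, active/passive: one uplink active, one standby
# esxcli network vswitch standard policy failover set \
#   --vswitch-name vSwitch0 --active-uplinks vmnic0 --standby-uplinks vmnic1
```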