08-22-2012 01:56 PM - edited 03-01-2019 10:35 AM
Hello everyone,
I plan on deploying my first UCS system and I'm very excited about this! I have purchased two Cisco UCS 6200 Fabric Interconnects and two Cisco Nexus 5548UP switches with the routing card. I also have one 5100 series chassis with the new fabric extenders that have 8 x 10G ports. The chassis is fully populated with eight B200 M3 servers, all using the amazing Virtual Interface Card 1280. I plan on using VMware vSphere 5 Enterprise Plus. My design calls for using the distributed switch WITHOUT the Cisco Nexus 1000V switch. In addition, I'm using a NetApp FAS array with only IP-based storage (iSCSI/NFS).
So here is my issue: with all the flexibility provided by the UCS VIC 1280, I don't know how best to configure my ESXi host networking. Meaning, I don't know how many vNICs I should present to the host, and with what configuration. I can't seem to find a document that describes network design best practices for ESXi using the UCS VIC 1280. I was thinking that if I set up the UCS vNICs for automatic fabric failover, I should only need one vNIC for each type of traffic: Management, Storage, vMotion, FT, and production traffic. I can think of another 100 different ways to get this done... too many options. Can anyone please point me in the right direction? Thank you very much in advance.
08-26-2012 08:08 AM
Ben,
Great to see you excited about UCS. You have a great looking setup in the works!
First observation: you said you have B200 M3s with the VIC 1280. Did you order your B200 M3s with the mLOM (VIC 1240) also? If you're trying to get the best bang for your buck, we have to be sure you're using the correct configuration.
Let me try to explain.
The VIC 1280 itself is "capable" of handling 4 x 10G lanes per port (80G total for the adapter).
The 2208 FEX provides 4 x 10G backplane lanes to each blade.
The B200 M3 has two IO slots: one for the mLOM specifically, and a "mezz" slot for the VIC 1280 or a 3rd party adapter.
On the B200 M3 blades, these 4 x 10G backplane lanes from each IOM are divided between the mLOM and Mezz slots. This means if you ONLY have a VIC 1280 in your B200 M3 with a 2208 FEX, you will only get a max bandwidth of 20G per fabric. Though the VIC 1280 is "capable" of 4 x 10G lanes per fabric, the B200 M3 only presents 2 x 10G lanes per fabric to the Mezz slot. If you want to get up to that 40G per fabric max, you need to utilize the mLOM.
The B200 M3 has an optional modular LOM (mLOM), aka the VIC 1240, onboard. The VIC 1240 uses a dedicated slot purposed only for the mLOM; it does not populate the Mezz slot.
With an add-on Port Expander card in the Mezz slot, the VIC 1240 + Expander utilizes the remaining backplane lanes to get you up to that 40G per fabric mark. The VIC 1240 + Expander also aggregates the interfaces so they operate as 2 x 40G interfaces. The Port Expander is not visible to the PCI bus. All it really does is transparently enable the remaining backplane lanes to the VIC 1240.
So what if you have the mLOM VIC 1240 + VIC 1280? Well, in short, it's overkill. There are two main things to consider here:
1. The VIC 1280, though capable of 40G per fabric, will only give you a max of 20G per fabric when used in the B200 M3 Mezz slot. The appropriate configuration if you want to max out your bandwidth is the VIC 1240 + Port Expander.
2. As I said previously, the VIC 1240 + Expander aggregates the bandwidth into 2 x 40G lanes transparently. The VIC 1240 + VIC 1280 will render 4 distinct 10G lanes. Your host OS would see a minimum of 4 interfaces, whereas with the Port Expander option it would only see 2.
I know it's a bit confusing, but this is how the B200 M3 is designed.
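To put the lane math in one place, here is a rough arithmetic sketch (plain Python, nothing Cisco-specific; the numbers are just the lane split described above):

# Per-fabric bandwidth a B200 M3 gets from a 2208XP IOM, based on the
# lane split described above: 4 x 10G backplane lanes per fabric to each
# blade, split 2 lanes to the mLOM slot and 2 lanes to the Mezz slot.

LANE_GBPS = 10
LANES_TO_MLOM = 2
LANES_TO_MEZZ = 2

configs = {
    "VIC 1280 only (Mezz)":      LANES_TO_MEZZ * LANE_GBPS,                    # 20G per fabric
    "VIC 1240 only (mLOM)":      LANES_TO_MLOM * LANE_GBPS,                    # 20G per fabric
    "VIC 1240 + Port Expander":  (LANES_TO_MLOM + LANES_TO_MEZZ) * LANE_GBPS,  # 40G per fabric, aggregated
    "VIC 1240 + VIC 1280":       (LANES_TO_MLOM + LANES_TO_MEZZ) * LANE_GBPS,  # 40G per fabric, two separate adapters
}

for name, gbps in configs.items():
    print(f"{name}: {gbps}G per fabric, {gbps * 2}G total across both fabrics")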
********
Now that we've cleared that up let's look at your design question.
If you're looking for design guides or recommendations, there's plenty on this forum. Just search for the original VIC M81KR or "Palo". I know Jeremy Waldrop has posted countless examples of his VIC deployments.
As long as you understand the benefits of each capability you should be able to decide on a design. As you know, any Cisco VIC (M81KR, 1240, 1280, P81E) will allow you to virtualize the adapter and present distinct PCI devices to the host. Each vNIC/vHBA can have unique policies assigned to it such as QoS, CDP, VLANs, Fabric Failover, Adapter Policies, etc. Each virtual interface you create is one more point of management on the host, so keep that in mind. Also, if your model of VIC provides, say, 10G per physical port, then no matter how many virtual 10G vNICs you present to the host, the underlying 10G bandwidth limit still applies.
Where we see most people dividing up virtual interfaces with ESX is for Management, vMotion, FT, VM Data, and IP Storage. Basically these are "categories" of virtual traffic you may want different UCS QoS policies applied to. A good example is IP storage. Since you're only utilizing IP storage, you may want to utilize Jumbo Frames. In this case you could create one vNIC on each fabric with an MTU of 9000. This requires some additional configuration (detailed in numerous posts on here), but you can see why I might want different policies applied to certain vNICs. If you're not planning on applying different policies to each vNIC, then there's no point in dividing them up virtually. They will just become more vmnic/vSwitch/vDS uplinks to manage.
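If it helps to visualize that, here is one hypothetical layout written out as a small Python table. The vNIC names and QoS labels are purely illustrative (not actual UCSM object names); adjust the MTU and QoS policies to whatever you define in UCS Manager:

# Hypothetical vNIC plan for an ESXi host on a Cisco VIC: one vNIC per
# traffic category on each fabric. Names/policies below are examples only.
vnic_plan = [
    # (name,             fabric, mtu,  qos_policy,    purpose)
    ("vnic-mgmt-a",      "A",    1500, "best-effort", "ESXi management"),
    ("vnic-mgmt-b",      "B",    1500, "best-effort", "ESXi management"),
    ("vnic-vmotion-a",   "A",    9000, "bronze",      "vMotion"),
    ("vnic-vmotion-b",   "B",    9000, "bronze",      "vMotion"),
    ("vnic-storage-a",   "A",    9000, "platinum",    "iSCSI/NFS to the NetApp"),
    ("vnic-storage-b",   "B",    9000, "platinum",    "iSCSI/NFS to the NetApp"),
    ("vnic-vmdata-a",    "A",    1500, "silver",      "VM production traffic"),
    ("vnic-vmdata-b",    "B",    1500, "silver",      "VM production traffic"),
]

for name, fabric, mtu, qos, purpose in vnic_plan:
    print(f"{name}: fabric {fabric}, MTU {mtu}, QoS '{qos}' -> {purpose}")

Each pair becomes a vmnic pair on the host that you attach to the appropriate vSwitch/vDS port groups; whether you also enable UCS fabric failover instead of host-side teaming is a design choice layered on top of this.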
Regards,
Robert
09-07-2012 08:09 AM
Robert, how does this affect vCon placement going forward? I have yet to do an installation with more than one of the above pieces of hardware at a time. Does the mLOM show up as another available vCon? Since it's not technically a mezzanine slot, I am curious how this works. Do we have multiple vCons to spread vNICs across when the 1240 and Port Expander are installed, or in the scenario with the 1240 and 1280 both installed? And is the Port Expander card required to bond them when the 1240 and 1280 are both populating the blade?
thanks
dave
09-07-2012 08:24 AM
@Dave
The mLOM is a separate adapter. If you have the mLOM and another mezz card (except the Port Expander), then you will have two adapters on the blade and therefore can utilize two different vCons. This just gives you some control over which adapter you'd like your vNICs/vHBAs instantiated on. This also covers the scenario where you have an mLOM 1240 + VIC 1280: you would have two possible vCon targets.
The Port Expander is NOT a separate adapter and will not affect vCon placement. All vNICs/vHBAs created would be instantiated on the mLOM VIC 1240.
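Summarized as a quick sketch (illustrative only, just restating the behavior above):

# Number of vCon placement targets exposed by each B200 M3 adapter combo.
placement_targets = {
    "VIC 1240 (mLOM) only":        1,  # all vNICs/vHBAs land on the mLOM
    "VIC 1240 + Port Expander":    1,  # the expander is not a separate adapter
    "VIC 1240 + VIC 1280 (Mezz)":  2,  # two adapters, two possible vCon targets
    "VIC 1280 (Mezz) only":        1,
}

for combo, targets in placement_targets.items():
    print(f"{combo}: {targets} vCon target(s)")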
As for your last question, there are only two slots: one specifically for the mLOM and a Mezz slot for the VIC 1280 or a 3rd party adapter. If you use an mLOM and a VIC 1280, they are not bonded at the UCS layer and each will operate independently. If you want to bond them, you'll need to use host teaming software. For ESX and Linux you can natively bond adapters. For Windows we have a VIC teaming driver which allows you to create teaming bonds with members from the mLOM and the VIC 1280, to provide adapter-level redundancy.
Regards,
Robert
09-07-2012 08:26 AM
Got it, perfectly, thanks so much for your time.
Dave
03-11-2013 07:51 AM
Hi Robert,
I am designing a Cisco UCS blade environment for a customer demo.
I need to know:
1. In a Cisco UCS chassis, is it OK to mix B200 M3 servers with different types of mezzanine cards and mLOMs?
2. When using Cisco VM-FEX technology with VMware VMDirectPath, where does the VM-FEX IP network switching occur?
Thanks.
03-10-2015 06:55 AM
Hi,
I have a VIC 1240 + VIC 1280 with FEX 2208 combination.
The OS installed is ESXi with 8 vNICs.
I see 20 Gb per interface in ESXi. The max for a blade is 80 Gb.
Is it normal for the OS to see 20 Gb per interface, or is this a bug?
03-10-2015 10:11 AM
This is just how ESX presents the NIC. The underlying VIC 1240 has dual 10G backplane traces to each fabric (20G per fabric). Same with the VIC 1280 when added as a Mezz card to M3/M4 blades. It's just like creating 5 vNICs on the M81KR adapter: you would see 5 x 10G NICs, since the M81KR VIC only supported a single 10G backplane trace per fabric. Every vNIC you create on UCS with the VIC 1240 or VIC 1280 will appear as a 20G NIC on the host when used with the 2208XP.
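To put rough numbers on it (a sketch, assuming the B200 M3 lane split discussed earlier in the thread):

# Each vNIC on a VIC 1240/1280 behind a 2208XP is presented to ESXi as a
# 20G NIC, but vNICs on the same adapter and fabric share that 20G; the
# advertised link speeds are not additive.
TRACE_GBPS = 10
TRACES_PER_FABRIC_PER_ADAPTER = 2                                   # B200 M3 lane split
PER_VNIC_LINK_SPEED = TRACE_GBPS * TRACES_PER_FABRIC_PER_ADAPTER    # 20G

vnic_count = 8
adapters = 2                                                        # VIC 1240 + VIC 1280
fabrics = 2

print(f"ESXi shows each of the {vnic_count} vmnics as {PER_VNIC_LINK_SPEED}G")
print(f"Physical ceiling for the blade: "
      f"{PER_VNIC_LINK_SPEED * adapters * fabrics}G total "
      f"({PER_VNIC_LINK_SPEED}G per adapter per fabric)")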
Robert
10-04-2012 12:40 PM
Wow!! Thanks so much for the information! I only have the VIC 1280. I settled on a six-vNIC configuration: 2 for iSCSI, 2 for all other VMware and production traffic, and lastly 2 more for everything having to do with vCloud Director traffic.
12-22-2012 02:31 PM
I got the B200 M2 servers. I think each has only 1 VIC 1280. How much bandwidth per blade do I get with this VIC?
03-13-2013 04:00 PM
It depends which IOM you have. Please read above where this is all explained.
Robert