01-20-2013 09:41 PM - edited 03-01-2019 10:50 AM
I enabled jumbo frames on some of the vNICs I created under the vNIC templates. Is there anywhere else within UCS or the FIs that I need to enable jumbo frames?
How should I be setting up my vNICs for jumbo frames, specifically the iSCSI and vMotion NICs?
By enabling MTU 9000 on the vNIC template, do I also need to set up some QoS policies?
01-20-2013 10:05 PM
Hello,
Please update the MTU value for the appropriate QoS class (LAN tab > QoS System Class > MTU) used by those vNICs.
Padma
01-20-2013 11:03 PM
Even after I created the QoS policy and the system class, vMotion keeps failing. On the ESX side, the vMotion vmkernel ports and the vSwitch are all set to 9000.
But when I set them back to 1500, vMotion works. The upstream 7Ks are already enabled for jumbo frames.
01-21-2013 08:31 AM
I had similar issues with iSCSI traffic.
My issue was that the traffic coming back from the upstream switch into the FI uplinks wasn't carrying the same CoS/QoS value I had defined in UCSM, so the MTU wasn't being matched.
Once I set up CoS/QoS correctly on the upstream switch, the MTU value was matched.
Hope that helps.
Lewis
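For reference, here is a minimal sketch of what that kind of upstream matching might look like on a Nexus 5000, assuming the iSCSI traffic arrives marked CoS 3. The class-map/policy-map names, the CoS value, and the qos-group number are placeholders only, so adapt them to your own QoS scheme:

! Classify frames arriving with CoS 3 into a dedicated qos-group
class-map type qos match-any ISCSI-COS3
  match cos 3
policy-map type qos CLASSIFY-ISCSI
  class ISCSI-COS3
    set qos-group 2

! Give that qos-group a jumbo MTU system-wide
class-map type network-qos ISCSI-NQ
  match qos-group 2
policy-map type network-qos JUMBO-NQ
  class type network-qos ISCSI-NQ
    mtu 9216
  class type network-qos class-default
    mtu 1500

system qos
  service-policy type qos input CLASSIFY-ISCSI
  service-policy type network-qos JUMBO-NQ

The key point is that the MTU is tied to the class the return traffic actually lands in, which is why the CoS marking has to line up on both sides.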
01-21-2013 08:47 AM
Would you be able to tell me how the QoS system class should be configured for these traffic types?
FC
vMotion
iSCSI
VM traffic
How should the weight and CoS settings be configured for these?
thanks
01-21-2013 09:07 AM
Tony,
Though you configured the adapter/vNIC with a jumbo MTU, you also need to configure and assign a QoS class with a jumbo MTU.
The only traffic whose MTU you should really consider changing is your iSCSI traffic. Be careful, because if you don't configure jumbo frames correctly from source to target you'll do more harm than good. This includes configuring jumbo frames northbound, or wherever your target connects to your LAN.
The tricky thing that Lewis mentioned is handling the return traffic. UCS will mark on the way out of the system towards your iSCSI target, but it can't mark anything coming back on the return; you'd need to configure your northbound switches to do that.
You have two options to handle the return iSCSI traffic (assuming your iSCSI target is outside of UCS).
A) You can simply configure your "Best Effort" CoS with a 9216 MTU within UCS.
or
B) You can mark the iSCSI target's interface with the correct CoS value. This is usually done on the switch the target connects to. Then you need to set the 9216 MTU on the matching CoS value in UCS. For example, say you have your iSCSI target connecting to an N5K northbound of UCS: on the interface the iSCSI target connects to, you tag the CoS value to, say, 3; then in UCS you would configure CoS 3 with an MTU of 9216.
The far easier way is to go with option A. I wouldn't recommend you mess with QoS elsewhere in your network unless you're comfortable with it.
Also, don't forget to set the jumbo MTU on your iSCSI target's interface.
Regards,
Robert
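To illustrate option B, here is a minimal sketch of the tagging on the switch port where the iSCSI target connects, assuming a Nexus 5000 access port; the interface number, VLAN, and CoS value are placeholders, and the matching CoS would then need the 9216 MTU set in the UCS QoS System Class as described above:

interface Ethernet1/10
  description iSCSI target
  switchport mode access
  switchport access vlan 100
  untagged cos 3

The untagged cos command stamps CoS 3 on the untagged frames the target sends, so they land in the jumbo-enabled class on the way back into UCS.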
01-21-2013 09:38 AM
Right now, whenever I change my vmk NICs on the ESXi side to MTU 9000, vMotion fails.
When I change them back to 1500, it works.
But my vNIC templates in UCS are all set at 9000,
and my system class MTU is set at 9216.
I tried both at 9000 and it won't work either.
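One way to narrow down where the jumbo path breaks is a do-not-fragment vmkping from the ESXi shell; the address below is just a placeholder for the peer vmk or iSCSI target IP:

# 8972 = 9000 minus 28 bytes of IP + ICMP header; -d sets the do-not-fragment bit
vmkping -d -s 8972 10.0.0.20
# For comparison, a standard-size probe that should always get through
vmkping -d -s 1472 10.0.0.20

If the 1472-byte probe works but the 8972-byte one doesn't, some hop in between is still at 1500.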
01-21-2013 10:04 AM
Then start by providing us an exact topology, from vmk port to iSCSI target.
Include every switch in between.
Robert
01-21-2013 10:52 AM
Hi,
I took your recommendation to change the Best Effort MTU to 9216, and vMotion works now. This is my system class right now.
Do you have a recommendation on what the weight percentages for FC/iSCSI/Best Effort should be?
I will be moving my iSCSI appliance into the Nexus 7K core. Right now it is hanging off a 6509. Do you have a config for how the iSCSI port should be configured on the Nexus 7K?
thank you
01-21-2013 11:01 AM
Since you're using the Best Effort class for jumbo frames, you can revert the other classes back to the default "normal" MTU, unless you need jumbo frames on those classes as well.
As far as the class weighting goes, leave the defaults. Unless you have some seriously bandwidth-intensive apps, you should be fine with the default breakdown.
For the N7K config, there are plenty of guides detailing jumbo frame configuration if you search.
http://www.cisco.com/en/US/products/ps9670/products_configuration_example09186a0080b44116.shtml
Regards,
Robert
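As a rough sketch of what those guides cover, a layer 2 port on an N7K is typically set to jumbo per interface, and the port MTU must be either 1500 or the system jumbo MTU (9216 is already the default system jumbomtu on most releases); the interface number here is just an example:

system jumbomtu 9216
interface Ethernet1/10
  switchport
  mtu 9216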
01-22-2013 05:27 PM
Hi
Going back to what you said about CoS:
B) You can mark the iSCSI target's interface with the correct CoS value. This is usually done on the switch the target connects to. Then you need to set the 9216 MTU on the matching CoS value in UCS. For example, say you have your iSCSI target connecting to an N5K northbound of UCS: on the interface the iSCSI target connects to, you tag the CoS value to, say, 3; then in UCS you would configure CoS 3 with an MTU of 9216.
How would this apply to vMotion, which has a Gold CoS on the upstream switch (N7K)? Right now it's using Best Effort with the MTU set at 9216. Outbound it's using Gold. How do I set up the upstream switch to use the Gold CoS on the way back to the UCS fabric?
thanks
01-22-2013 06:19 PM
Tony,
vMotion doesn't need jumbo frames. Is there a particular reason you need to mark that traffic? I would highly recommend leaving vMotion at the regular MTU. You won't get any performance gain on bursty traffic like vMotion.
The N7K has no concept of "Gold", "Silver", etc. What the N7K recognizes are numeric CoS values, and your Gold class maps to whatever CoS number it is assigned in UCSM (by default, Gold is CoS 4).
You'll need to do some reading on Nexus 7000 QoS marking to mark the return traffic/interface with that CoS value.
Regards,
Robert
01-23-2013 03:57 PM
OK, now when I put the Best Effort MTU back to normal, vmkping to other vmk interfaces works.
I tried vMotioning and they still work. The behaviour is very strange and unpredictable.
01-27-2013 08:34 PM
I enabled CDP and found this on my iSCSI vNICs. Any idea why it's showing 1500 on the iSCSI vNIC when it is set at 9000 in UCSM?
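To cross-check what ESXi itself thinks the MTU is (as opposed to what CDP reports), the standard esxcli listings show it per vmnic, per vSwitch, and per vmkernel port; vSwitch1 and vmk1 below are only examples:

esxcli network nic list                  # MTU column shows the MTU on each vmnic
esxcli network vswitch standard list     # per-vSwitch MTU
esxcli network ip interface list         # per-vmk MTU

# And to (re)apply 9000 on the vSwitch and vmk port:
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000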