03-11-2018 11:18 PM - edited 03-01-2019 01:27 PM
Hi All,
Below is my setup:
I have UCS B200 blade servers, each with a VIC 1340 adapter, and 2304 IOMs (FEX) connected to 63xx Fabric Interconnects.
The blade servers will host ESXi, with virtual networking handled by a Distributed Virtual Switch (DVS).
Here is my question:
The VMware admin creates port groups and links each port group to a VLAN, so VLAN tagging happens on the DVS before traffic leaves the blade server. (Correct me if I'm wrong.)
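For example, the port group is defined along these lines (a rough pyVmomi sketch with placeholder names and addresses, just to show where the VLAN ID is set):

```python
# Rough sketch, not our exact script: create a DVS port group bound to VLAN 10.
# The vCenter host, credentials, and DVS name "dvs-prod" are placeholders;
# SSL certificate handling is elided.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

# Look up the DVS by name (assumes it already exists).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvs-prod")
view.Destroy()

# The port group is where the VLAN ID lives -- this is what makes the DVS
# tag guest traffic before it ever reaches the host's vmnic.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "PG-VLAN10"
pg_spec.type = "earlyBinding"
pg_spec.numPorts = 32
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=10, inherited=False)
pg_spec.defaultPortConfig = port_cfg

dvs.AddDVPortgroup_Task([pg_spec])
Disconnect(si)
```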
On the server side, I have created multiple vNICs and vHBAs and mapped them to specific VLANs. Each vNIC created in the service profile is presented as a vmnic in vSphere.
Now here is where my confusion lies: if the packet arriving at the vmnic (the vNIC in UCS) is already tagged by the DVS in the VM environment, why should I assign a VLAN to the vmnic (vNIC in UCS) in the service profile? Where and for what is the VLAN assigned in the service profile used?
I just need a clear understanding of the packet walk from the time a packet leaves the VM to the time it reaches the northbound switch connected to the FIs, and of which devices are involved in VLAN tagging and untagging.
03-12-2018 03:36 AM - edited 03-12-2018 03:40 AM
Greetings.
You need to think of the vNIC more as a trunk port.
By selecting the various 'allowed' VLANs for your vNIC in the service profile, you are allowing them to traverse the vNIC.
You do need to be mindful of assigning a VLAN as 'native', as this can cause issues if you are actually tagging that VLAN at the VMware level.
ESXi configurations generally don't need a 'native' VLAN on the service-profile vNIC; we usually only see that with Windows bare-metal configs.
For ESXi, you don't need to create a vNIC for each VLAN. Again, this is because the vNIC is essentially a trunk port.
Typical use cases for multiple vNICs (besides redundancy and load balancing for each side) would be different QoS class needs, disjoint Layer 2, different MTU needs, etc.
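If it helps, here is what 'allowing' VLANs on a vNIC amounts to, sketched with Cisco's ucsmsdk Python library. The UCSM address, credentials, service-profile DN, and VLAN names below are placeholders, not anything from your setup:

```python
# Hedged sketch: populate the allowed-VLAN list of an existing service-profile
# vNIC. Each VnicEtherIf child object = one VLAN permitted on this "trunk".
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

handle = UcsHandle("ucsm.example.com", "admin", "secret")
handle.login()

# DN of a vNIC named "vnic-a" in a service profile (placeholder path).
vnic_dn = "org-root/ls-ESX-SP-01/ether-vnic-a"

# default_net="no" means frames for this VLAN must arrive already tagged
# (e.g. by the DVS); the vNIC only evaluates and passes them, like a trunk.
for vlan_name in ["vlan10", "vlan20", "vlan30"]:
    handle.add_mo(VnicEtherIf(parent_mo_or_dn=vnic_dn,
                              name=vlan_name,
                              default_net="no"),
                  modify_present=True)

handle.commit()
handle.logout()
```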
Thanks,
Kirk...
03-12-2018 04:45 AM - edited 03-12-2018 04:56 AM
Your vmnic in ESXi is essentially a trunk port passing along tagged traffic that was tagged at the virtual port-group level. As the traffic is 'put on the wire' at the vmnic, then vNIC, level, the tagged traffic is simply allowed on the vNIC (trunk port) when you select the various VLANs. You are not adding VLAN tags at the vNIC level, but simply evaluating and allowing frames for the 'allowed' VLANs.
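As a toy model of that walk (illustration only; the MAC address, IP, and VLAN ID below are made up), here is what the frame looks like before and after the port group tags it, built with the scapy library:

```python
# Toy model of the packet walk; shows the one point where the 802.1Q tag is added.
from scapy.all import Ether, Dot1Q, IP

# 1. The guest VM emits a plain, untagged frame.
from_vm = Ether(src="00:50:56:aa:bb:cc") / IP(dst="10.1.10.20")

# 2. The DVS port group (mapped to VLAN 10) inserts the 802.1Q header.
on_wire = Ether(src="00:50:56:aa:bb:cc") / Dot1Q(vlan=10) / IP(dst="10.1.10.20")

# 3. vmnic -> UCS vNIC -> IOM -> FI: no hop re-tags the frame; each simply
#    forwards it because VLAN 10 is on the vNIC's allowed list, and the FI
#    uplink (a trunk) carries the same tag to the northbound switch.
print(from_vm.summary())   # Ether / IP           (no VLAN header)
print(on_wire.summary())   # Ether / 802.1Q / IP  (vlan=10)
```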
Marking a VLAN as 'native' at the vNIC level means its 'untagged' frames eventually get that VLAN's tag added further along the path, but this is not applicable to your situation.
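For completeness, a hedged ucsmsdk sketch of that bare-metal case (hypothetical DN and names again): setting default_net="yes" marks the VLAN as native, so untagged frames from the host are placed into it.

```python
# Hypothetical example: mark VLAN "vlan10" as native on a Windows bare-metal
# service profile's vNIC, so the host can send untagged frames.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

handle = UcsHandle("ucsm.example.com", "admin", "secret")
handle.login()
handle.add_mo(VnicEtherIf(parent_mo_or_dn="org-root/ls-Win-SP-01/ether-vnic-a",
                          name="vlan10",
                          default_net="yes"),  # native: untagged traffic lands here
              modify_present=True)
handle.commit()
handle.logout()
```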
As far as the number of vNICs goes, one per side will generally handle most network workloads you will have. Common situations for multiple vNICs typically involve backup, DMZ, or disjoint Layer 2 configurations, where there are different VLAN configurations on different ports going north from the FIs.
We typically see customers configure a vNIC pair that carries just ESXi management traffic, and another pair that carries guest-VM traffic.
If you are planning on utilizing NFS or iSCSI storage, those may have their own vSwitch design requirements, QoS, etc.
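A rough ucsmsdk sketch of that pairing pattern (the service-profile DN, vNIC names, and MTU are placeholders; in practice you would usually drive this through vNIC templates):

```python
# Hedged sketch: one management vNIC and one guest-VM vNIC per fabric.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEther import VnicEther

handle = UcsHandle("ucsm.example.com", "admin", "secret")
handle.login()

sp_dn = "org-root/ls-ESX-SP-01"
for name, fabric in [("eth-mgmt-a", "A"), ("eth-mgmt-b", "B"),
                     ("eth-vm-a", "A"), ("eth-vm-b", "B")]:
    handle.add_mo(VnicEther(parent_mo_or_dn=sp_dn,
                            name=name,
                            switch_id=fabric,  # pin to Fabric Interconnect A or B
                            mtu="1500"),
                  modify_present=True)

handle.commit()
handle.logout()
```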
Kirk...