12-22-2010 02:24 PM - edited 03-01-2019 09:46 AM
I had two customers looking for this. On other blade systems they can hook up to 8 NICs to a switch, and those switches can then sit in DMZ1, DMZ2, internal, etc. So they are able to physically segment their vhosts on ESX via physical NICs. Since UCS presents only one NIC path, we have to trunk multiple VLANs down and segment logically, or use Palo, but that is still not physical segmentation.

The only way I can think this would work is to use a UCS uplink port into, say, the DMZ as an access port. Then configure the vNICs on the server as access ports in the DMZ VLAN. Finally, pin the server to that uplink, so traffic enters the FI on the DMZ uplink, hits the DMZ VLAN inside UCS, and goes down through the FEX to the server. Obviously there are plenty of issues, such as ESX management traffic, vMotion traffic, and the fact that really only one link can be used, so scaling is a huge problem. For a small customer with a couple of servers in a DMZ, that could work.
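To make the idea concrete, the upstream side would just be a plain access port facing the FI uplink, something like this (sketch only, NX-OS style; the interface number and VLAN 100 are made up for the example):

! Upstream DMZ switch port facing the UCS FI uplink (placeholder interface/VLAN)
interface Ethernet1/10
  description Uplink to UCS FI-A - DMZ
  switchport mode access
  switchport access vlan 100
  no shutdown

The UCS side (vNIC as access port in the DMZ VLAN plus the pinning) would then be done in UCSM against that same VLAN.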
The other way is to use the Nexus 1000v in conjunction with Palo and VN-Link to tag traffic. Then you could use the 1000v to set up ACLs to segment traffic in a sort of SMT (secure multi-tenancy) fashion, or possibly use vShield. I don't have any hands-on experience with vShield or VN-Link, so I'm wondering if anyone else has tried a similar scenario.
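What I have in mind on the 1000v side is roughly a port profile per zone with a port ACL on it, along these lines (just a sketch; the VLAN, ACL name, and rules are placeholders):

ip access-list DMZ-WEB-ONLY
  permit tcp any any eq 443
  permit tcp any any eq 80
  deny ip any any

port-profile type vethernet DMZ-WEB
  vmware port-group
  switchport mode access
  switchport access vlan 100
  ip port access-group DMZ-WEB-ONLY in
  no shutdown
  state enabled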
12-24-2010 07:40 PM
Hi
With the M81KR (VIC) adapter you can create multiple vNICs and assign them to different vSwitches, uplink port profiles, etc., to provide segmentation.
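With standard vSwitches, for example, each vNIC shows up in ESX as its own vmnic, so you can split them across vSwitches; a rough sketch (vmnic numbers and vSwitch names are only illustrative):

# Each M81KR vNIC appears to ESX as its own vmnic; put them on separate vSwitches
esxcfg-vswitch -a vSwitch-Internal         # create internal vSwitch
esxcfg-vswitch -L vmnic2 vSwitch-Internal  # uplink = vNIC carrying internal VLANs
esxcfg-vswitch -a vSwitch-DMZ              # create DMZ vSwitch
esxcfg-vswitch -L vmnic3 vSwitch-DMZ       # uplink = vNIC carrying only the DMZ VLAN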
Going out of the UCS system, you could use pinning (as long as your upstream is not a disjoint Layer 2 topology in end-host mode) to route traffic deterministically.
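The pinning itself is usually set up from the UCSM GUI as a LAN pin group that the vNIC (or vNIC template) then references; from the CLI it looks roughly like this (names and port numbers are placeholders, and exact syntax may differ by release):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create pin-group DMZ-PIN
UCS-A /eth-uplink/pin-group* # create target a
UCS-A /eth-uplink/pin-group/target* # set fabricport 1/10
UCS-A /eth-uplink/pin-group/target* # commit-buffer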
When it comes to DMZ isolation, a lot depends on the environment you are looking at.
There is a Nexus 1000v guide (not UCS specific) at http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/dmz_virtualization_vsphere4_nexus1000V.pdf on how to achieve this using different port groups (essentially VLANs), uplink port profiles, ACLs, PVLANs, etc., and you could apply it to a UCS environment with the M81KR.
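On the uplink side, that guide essentially uses one ethernet-type port profile per security zone, something like this (sketch; the profile name and VLAN are placeholders):

port-profile type ethernet DMZ-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100
  no shutdown
  state enabled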
The various vNICs presented to the hypervisor or bare-metal OS are distinct PCI entities, but as you correctly mentioned, they are not physically segmented going out to the fabric. For example, if you create 4 vNICs on side A, they will all go out on the same IOM-to-FI link, because UCS pins the HIFs (the host-facing interfaces going down from the IOM) to the fabric links, not individual vNICs. The full-width blades (with 2 adapters) give you more choices, though, as they have more HIFs.
Hope it helps.
Thanks
--Manish