UCS 1.4 support for PVLAN

davidcheung
Level 1

Hi all,

Cisco advises that UCS 1.4 supports PVLANs, but I see the following comments about PVLANs in UCS 1.4:

"UCS extends PVLAN support for non virtualised deployments (without vSwitch ) . "

"UCS release 1.4(1) provides support for isolated PVLAN support for physical server access ports or for Palo CNA vNIC ports."

Does this mean PVLANs won't work for virtual machines if the VMs are connected to UCS through the Nexus 1000v or vDS, even though I am using the Palo (M81KR) card?

Could anybody confirm that?

Thank you very much!


7 Replies

cpaggen
Cisco Employee

Both the Nexus 1000V and the native VMware vDS provide support for PVLANs, independently of UCS. The UCS PVLAN feature is intended for environments that do not use the Nexus 1000V or the vDS. With UCS PVLANs you can place a vNIC directly into an isolated (secondary) VLAN. You can't trunk (802.1Q) a secondary VLAN and other VLANs on a UCS vNIC. You can have only one secondary VLAN per vNIC _or_ one or more regular VLANs, but not both at the same time. Therefore, using a UCS PVLAN on a vNIC going to an ESX/ESXi host doesn't make much sense.
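For reference, here is what "isolated access only" looks like in standard NX-OS terms, using assumed VLAN IDs 160 (primary) and 161 (isolated). This is not UCS syntax (UCSM configures it through the vNIC's VLAN settings); it is just the upstream-switch analogue of a vNIC placed in an isolated VLAN:

feature private-vlan
vlan 160
  private-vlan primary
  private-vlan association 161
vlan 161
  private-vlan isolated
interface Ethernet1/10
  description behaves like a UCS vNIC placed in the isolated VLAN
  switchport mode private-vlan host
  switchport private-vlan host-association 160 161

Like that host port, a UCS vNIC in a PVLAN sits in exactly one isolated VLAN and cannot also trunk other VLANs.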

Hi Christophe,

Thank you very much for your reply.

I have a question about your point "Therefore using a UCS PVLAN on a vnic going to an ESX/ESXi host doesn't make much sense."

But I do have a scenario that may require a PVLAN on a UCS vNIC going to an ESX/ESXi host:

One physical Windows DB server

Multiple VMs (e.g. Web and APP server) on a few blades, e.g. 2 ESX hosts as one VM cluster

The physical Windows server and the VMs are in different security zones.

To back up the VMs and the physical Windows server, we want to deploy a single backup VLAN (an isolated PVLAN) for all VMs and the physical server without compromising the security requirement (Web, App and DB can't talk to each other directly, even on the backup side).

To achieve this, the Nexus 1000v and the UCS FIs should have the same view of the PVLAN.

Could you please advise how the Nexus 1000v and the UCS FIs can be integrated for PVLANs?

Thank you very much!

Hi

Yes - PVLANs need to be end to end.

You can extend the PVLANs defined on the 1000v, and those used for bare-metal servers (connected to UCS), to a promiscuous port *outside* UCS, such as an L3 interface or a backup station.

Please keep in mind that promiscuous ports are not supported in UCS; only isolated access is available for now.

With the M81KR (Palo) adapter you would need to create a different interface for each isolated VLAN (as UCS cannot do isolated trunks yet).

In your case you seem to have one isolated VLAN, hence four vNICs in total to be given to the ESX host running the 1000v (a rough uplink sketch follows the list below):

2 vNICs for regular VLANs (load sharing and redundancy)

2 vNICs for the isolated VLAN (load sharing and redundancy).
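On the 1000v side, the uplinks could be split roughly like this (port-profile names and VLAN IDs are only placeholders, and the exact switchport mode for the isolated uplinks depends on whether the UCS vNICs present the isolated VLAN tagged or as the native VLAN):

port-profile type ethernet uplink-regular
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
port-profile type ethernet uplink-backup-iso
  vmware port-group
  switchport mode access
  switchport access vlan 161
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

The first port-profile would bind to the two vNICs carrying the regular VLANs and the second to the two vNICs carrying the isolated VLAN.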

See the attached doc, which talks about doing so and which I believe is what you are asking.

Thanks

--Manish

Thank you very much!!!

Looking for further clarification...

I have a large virtual infrastructure in which I'm looking to implement PVLANs for isolation to multiple tenant maintenance networks. We're using the Nexus 1000v -> 6120 -> 5020s -> 7000s. Does that document state that I can't trunk multiple PVLANs together with other standard VLANs out a single pair of 10GbE interfaces on the blades? If I have, say, 4 tenants for whom I want to use isolated PVLANs, do I have to carve up the Palo card so that each vNIC has only 1 isolated PVLAN segment? If I have 20 tenants with 20 maintenance networks, that's 20 vNICs? I'm hoping not... Add to that the need to carve everything up to avoid having vMotion saturate my 10Gb interfaces, and I've got a lot of vNICs to deal with.

Can't the 6120 just pass the PVLAN-tagged traffic (they're just single 802.1Q tags, right?) up to the 5020s or back down to the VEMs on each host and let them deal with the PVLAN isolation? Am I missing something?

Jim,

Since you are using the Nexus 1000V, you can create a single vNIC per fabric with Palo and trunk all the VLANs down to the VEM module. At that point the VEM module will take care of the PVLAN configuration. If you don't want to use the PVLAN feature on the Nexus 1000V, then yes, you would need one vNIC on Palo for each isolated VLAN you want to present to the VMs on the ESX host.

So the recommendation would be to use the Nexus 1000V or VMware DVS to host all PVLAN definitions and just trunk all the VLANs down to a single vNIC on the service profile. Just create normal VLANs on UCS and make sure to add them to the VLAN definition of the vNIC.
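For example (VLAN IDs are placeholders; the key point is that both the PVLAN primary and isolated IDs are created as ordinary VLANs in UCSM, added to the vNIC, and allowed on the 1000V uplink so that the VEM can enforce the isolation):

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,160,161
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled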

For vMotion saturation, if you are using Nexus 1000V 1.4 you can use QoS fair queuing to set a specific bandwidth policy for vMotion. It is very easy to set up and get running. The other solution is to create a QoS class in UCS and then create a vNIC off Palo assigned to that class just for vMotion.
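A rough sketch of the 1000V queuing approach (the class and policy names are made up, and the exact keywords should be verified against the 1.4 QoS configuration guide):

class-map type queuing match-any vmotion-q
  match protocol vmw_vmotion
policy-map type queuing uplink-queuing
  class type queuing vmotion-q
    bandwidth percent 20
port-profile type ethernet system-uplink
  service-policy type queuing output uplink-queuing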

louis

I have not gotten that working so far... how would that traffic flow work?

1000v -> 6120 -> 5020 -> 6500s

(2) 10GbE interfaces, one on each fabric, to the blades. All VLANs (including the PVLAN primary and secondary VLAN IDs) are defined and added to the server templates, so they are propagated to each ESX host.

At this point, nothing can do Layer 3 except for the 6500s. Let's say my primary VLAN ID for one PVLAN is 160 and the isolated VLAN ID is 161...

On the Nexus 1000v:

vlan 160
  name TEN1-Maint-PVLAN
  private-vlan primary
  private-vlan association 161
vlan 161
  name TEN1-Maint-Iso
  private-vlan isolated
port-profile type vethernet TEN1-Maint-PVLAN-Isolated
  description TEN1-Maint-PVLAN-Isolated
  vmware port-group
  switchport mode private-vlan host
  switchport private-vlan host-association 160 161
  no shutdown
  state enabled
port-profile type vethernet TEN1-Maint-PVLAN-Promiscuous
  description TEN1-Maint-PVLAN-Promiscuous
  vmware port-group
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 160 161
  no shutdown
  state enabled
port-profile type ethernet system-uplink
  description Physical uplink from N1Kv to physical switch
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 20,116-119,202,1408
  state enabled
This works fine to and from VMs on the same ESX host (the PVLAN port-profiles work as expected)... If I move a VM over to another host, nothing works; I'm pretty sure not even within the same promiscuous port-profile. How does the 6120 handle this traffic? What do the frames get tagged with when they leave the 1000v?
