
PVLAN Config with UCS and 1000v Question ??

eric_meisinger
Level 1

We are attempting to set up a multi-tenant environment where the customer would like each tenant to have a single subnet that is segmented with private VLANs. The design calls for three (3) UCS chassis. Two (2) of the chassis are fully populated with half-width blades, and the third chassis has two half-width blades. The blades in the two fully populated chassis will all be ESX hosts with Palo adapters and will run the Nexus 1000v.

  As their current VLAN plan calls for approximately 10 private VLANs per tenant/subnet, I'm concerned this design will not scale well within a UCS environment due to the limitations on the total number of vNICs/vHBAs per chassis. I saw a post indicating that we could bypass the vNIC limitation by simply trunking all VLANs down to the VEM and configuring all private VLANs on the 1000v only, which would allegedly alleviate the vNIC limitation in a larger multi-tenant environment. Is this a valid and supported design/configuration, and does it actually work? Or do we instead need to create a vNIC for every private VLAN we want to present to each ESX host, as recommended/required in the config guides?

5 Replies

Joseph Ristaino
Cisco Employee

Hey Eric,

I'm a bit confused as to why you think there would be a design constraint here.

As long as the vNICs on the service profile are carrying the primary and the isolated VLANs, you will be able to add those to the uplink port profile with the private VLAN config.

From there, any vEthernet port profile should have the appropriate config for the VLANs.

One thing to keep in mind, though, is that the config guide describes two different ways to configure this depending on whether or not your upstream switches support private VLANs.

I would always configure the private VLANs on the N1K, however, to keep the enforcement of these within the UCS chassis and the Nexus 1000v.

The config guide is here:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/port_profile/configuration/guide/n1000v_portprof_6pvlan.html

I don't think there should be a design limitation here, though, as you should only need to trunk down the isolated and primary VLANs, assuming these are the only VLANs in use and routing (where necessary) is configured on the upstream devices.
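Roughly, the relevant pieces on the 1000v would look something like this (untested sketch only; the VLAN IDs and port-profile name are placeholders, and this assumes the upstream switches are not PVLAN-aware, so the uplink is carried as a promiscuous PVLAN trunk):

vlan 101
  private-vlan isolated
vlan 100
  private-vlan primary
  private-vlan association 101

port-profile type ethernet uplink-pvlan
  vmware port-group
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan trunk allowed vlan 100-101
  switchport private-vlan mapping trunk 100 101
  no shutdown
  state enabled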

Hope this helps,

Joey

Hey Joey,

  Thanks for the response. I've added this same post to the TAC case, but I'll update this discussion with the same for anyone else who may be interested.

  Our customer is still debating their requirement for PVLAN use within this Pod. However, if they choose to move forward, my primary concerns are the UCS configuration related to PVLANs, how the UCS PVLAN configuration differs (if at all) with the integration of the 1000v, and whether this multi-tenant setup will cause the Fabric Interconnects to exceed their max VIF count. Related to the UCS configuration, it's my understanding that for every isolated private VLAN we would like to present to a blade, we need to create a separate vNIC in UCS. This customer is attempting to construct an environment based on the FlexPod model where multiple tenants would be present. Their idea was to create a single subnet for each tenant and then isolate tier/purpose traffic via layer-2 PVLANs within each tenant subnet. That is where the need for the PVLANs comes in; it is simply a customer request in their design.

  So, how this relates to my primary questions: if for every tenant we have to add 5+ vNICs as they are introduced into the Pod, my understanding is that this will easily push us past 120 VIFs per chassis in no time. It's my understanding that we have a total of (15 * [number of I/O Module uplinks] - 2) VIFs available. (I'm assuming this is 118 total VIFs for two fully populated 2104s per chassis?) Currently, the design calls for a combination of twelve (12) VIFs (10 vNICs and 2 vHBAs) as a standard on each of the eight blades in each chassis. On top of this would be the tenant-specific vNICs for each tenant's PVLANs (if that is the proper/required config). However, I have since read that if we integrate the 1000v into the environment, the need to create a new vNIC in UCS for every isolated PVLAN goes away: all that is required is to trunk all "parent" VLANs down to the VEMs and perform the PVLAN config at the 1000v level only. It has been recommended to configure this environment similar to the strategy where the upstream switch does not understand/perform PVLANs. Is this correct, or would we still need to add the vNICs to the service profiles in UCS even when integrating the Nexus 1000vs?
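  For reference, the math I'm working from (taking the formula above at face value and assuming both 2104s in a chassis use all four uplinks, so eight uplinks total per chassis):

  15 * 8 - 2                = 118 VIFs available per chassis
  8 blades * 12 VIFs each   =  96 VIFs used by the standard template
  118 - 96                  =  22 VIFs of headroom

  With 5+ PVLAN vNICs per tenant on each of the eight blades (40+ VIFs per tenant per chassis), that headroom is gone before even one additional tenant is fully onboarded, which is why I'm hoping the 1000v approach removes the per-PVLAN vNIC requirement.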

  I have yet to find a document that discusses the use of PVLANs within the UCS environment when implementing the Nexus 1000v, which would tie all of these questions together.

Thanks,

Eric

Hey Eric,

I was not sure about having more than one isolated VLAN on a vNIC, so I tested it out, and you are right! UCS will not allow you to have more than one isolated VLAN per primary VLAN, and it also will not let you have more than one primary VLAN on the vNIC.

I think this would be your best bet:

1) Pass all the VLANs on the UCS service profile as regular VLANs. This means you do not have to use multiple vNICs per isolated VLAN.

2) Configure the private VLAN mappings on the Nexus 1000v port profiles and then on the uplink switches you have connected to the fabric interconnects (a rough sketch of the host-side port profile follows below). This will let you control the private VLANs on the hosts while carrying all the VLANs through UCS as regular VLANs, without hitting any vNIC limitations.
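For the vEthernet side, something along these lines for the port profile the tenant VMs attach to (again just an untested sketch with placeholder VLAN IDs matching the earlier example, and a made-up profile name):

port-profile type vethernet tenant-isolated
  vmware port-group
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
  no shutdown
  state enabled

VMs in that port group should be able to reach the promiscuous side (your gateway) but not each other, while UCS just sees VLANs 100 and 101 as ordinary VLANs on the vNIC.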

Does this make sense? I would definitely test this in a QA environment before going into production. At first glance I don't see any limitations or issues with this, but it is still good to test. Put all this in the TAC case as well if it helps.

Joey

Hi,

I am trying to do a similar setup, but it is not working as expected. Here is the setup:

We have ESXi 5.1 configured on the UCS blades and a dvSwitch set up with multiple VLANs. We are trying to configure private VLANs (PVLANs) on the dvSwitch. The steps taken to create the PVLANs are as follows:

1. Created the primary/master (VLAN ID 583) and the secondary PVLAN (VLAN ID 515) on the physical switches.

2. Created both the primary and secondary VLANs in UCS as normal VLANs.

3. On the distributed vSwitch, created the primary VLAN (583) and a secondary VLAN (515); the type for the secondary VLAN is Isolated.

4. Created a port group with the VLAN type set to Private VLAN and the private VLAN entry set to Isolated (583, 515). Deployed a VM in the port group, and we are not able to ping the gateway.

The gateway is on the promiscuous port. If I change the VLAN type in the port group to Promiscuous, then I can ping the gateway, which is expected.

Is there any mapping that needs to be done on the upstream switches for this configuration to work?

Here is the configuration on the upstream switch side:

vlan 515
  name testpvlan-isolated
  private-vlan isolated

vlan 583
  name testpvlan-master
  private-vlan primary
  private-vlan association 515

interface TenGigabitEthernet1/1
  description UCSFI01
  switchport mode trunk
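If a mapping is what's missing, I assume it would be something along these lines on the gateway side (illustrative only; this is not currently configured, and the interface names are placeholders):

interface Vlan583
  private-vlan mapping 515

or, if the gateway sits behind a physical promiscuous port:

interface GigabitEthernet1/0/1
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 583 515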

Thanks for the assistance.

-Vijay

Were you able to get this resolved? I too was wondering: if the upstream switch (e.g. a 3750) were handling the PVLANs, and the PVLANs were trunked to the UCS as regular VLANs with the UCS configured for disjoint L2, then the vNICs would just carry the VLANs up to a vDS where the PVLANs would be configured.

 

I have heard of this as a solution, rather than configuring multiple vNICs for private VLANs, but I haven't tried it myself.
