Common Tenant, Shared Services, VzAny

pascalfr0
Level 1

Hi,

I'm working on an ACI design for our new datacenters.

We will have customer tenants, plus a dedicated tenant offering shared services to the customer tenants (DNS, NTP, backup, VTOM scheduling...).

At first I planned to use a standard tenant for the shared resources, relying on vzAny:

- contracts provided by the shared-services EPG, consumed by all the customer EPGs [vzAny]. vzAny allows us to drastically reduce TCAM consumption (see the sketch after these two points).

=> It works.

 

- contracts provided by vzAny in the customer tenants, to allow the shared-services EPG to reach agents running on every system in the customer tenants (master => agent connections, VTOM for instance).

=> This does *not* work, as ACI doesn't support vzAny as the provider of a contract that is exported to other VRFs/tenants.
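
For reference, here is roughly how I push the first (working) case through the APIC REST API. This is only a sketch: the APIC address, credentials and all tenant/EPG/contract/filter names are invented, and it assumes the contract has already been exported from the shared tenant into the customer tenant (Export Contract), so the customer side sees it as a contract interface.

```python
import requests

APIC = "https://apic.example.net"             # hypothetical APIC address
session = requests.Session()
session.verify = False                        # lab only (self-signed certificate)

# Authenticate; the token comes back as the APIC-cookie session cookie.
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Global-scope contract in the shared-services tenant, provided by the services EPG
# (the "dns-ntp" filter is assumed to exist already).
session.post(f"{APIC}/api/mo/uni/tn-Shared.json", json={
    "vzBrCP": {"attributes": {"name": "shared-services", "scope": "global"},
               "children": [{"vzSubj": {
                   "attributes": {"name": "services"},
                   "children": [{"vzRsSubjFiltAtt": {
                       "attributes": {"tnVzFilterName": "dns-ntp"}}}]}}]}})
session.post(f"{APIC}/api/mo/uni/tn-Shared/ap-services/epg-infra.json", json={
    "fvRsProv": {"attributes": {"tnVzBrCPName": "shared-services"}}})

# Customer side: vzAny on the customer VRF consumes the exported contract interface.
session.post(f"{APIC}/api/mo/uni/tn-CustomerA/ctx-vrf1/any.json", json={
    "vzRsAnyToConsIf": {"attributes": {"tnVzCPIfName": "shared-services"}}})
```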

 

Without vzAny this model doesn't scale: TCAM consumption is far too high if we have to define one contract provided by many customer EPGs and consumed by the shared-services masters. With 300 EPGs per customer tenant, it is simply impossible to implement without exhausting TCAM resources.
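
To put rough numbers on it, here is the back-of-the-envelope model I'm using (simplified, assuming one zoning rule per source class / destination class / filter entry / direction; real counts depend on the hardware and on rule compression):

```python
# Simplified policy-TCAM model: rules ~= src classes x dst classes x entries x directions.
customer_epgs = 300      # EPGs per customer tenant (from the design above)
masters = 1              # shared-services "masters" EPG
filter_entries = 5       # assumed number of entries per contract
directions = 2           # consumer->provider and provider->consumer

per_epg_rules = customer_epgs * masters * filter_entries * directions
vzany_rules = 1 * masters * filter_entries * directions  # vzAny = one class per VRF

print(f"one contract per EPG : ~{per_epg_rules} rules per tenant")  # ~3000
print(f"vzAny                : ~{vzany_rules} rules per tenant")    # ~10
```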

 

I've seen that the common tenant could be the answer to my problem, as objects in the common tenant seem to be reachable from any other tenant without import/export.

In my design I don't want the customer tenants and the common tenant to share the same VRF; each tenant must have its own separate VRF.

 

1/ Can anybody confirm that it would be possible to configure the following (sketched below):

- contracts provided by an EPG in the common tenant, consumed by vzAny in any other tenant;

- contracts provided by vzAny in any other tenant, consumed by an EPG in the common tenant, without hitting the "shared service restriction for contracts provided by vzAny", thanks to the common tenant's special status?
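
To be concrete, the configuration I have in mind looks roughly like this (again a sketch with invented names; it relies on contract name resolution falling back to the common tenant, so no export would be needed):

```python
import requests

APIC = "https://apic.example.net"             # hypothetical
session = requests.Session()
session.verify = False                        # lab only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Case 1: contract and provider EPG live in tenant common
# (the built-in "default" filter permits any traffic)...
session.post(f"{APIC}/api/mo/uni/tn-common.json", json={
    "vzBrCP": {"attributes": {"name": "common-services", "scope": "global"},
               "children": [{"vzSubj": {
                   "attributes": {"name": "all"},
                   "children": [{"vzRsSubjFiltAtt": {
                       "attributes": {"tnVzFilterName": "default"}}}]}}]}})
session.post(f"{APIC}/api/mo/uni/tn-common/ap-services/epg-shared.json", json={
    "fvRsProv": {"attributes": {"tnVzBrCPName": "common-services"}}})

# ...and vzAny in a customer VRF consumes it by name (lookup falls back to common).
session.post(f"{APIC}/api/mo/uni/tn-CustomerA/ctx-vrf1/any.json", json={
    "vzRsAnyToCons": {"attributes": {"tnVzBrCPName": "common-services"}}})

# Case 2 would be the mirror image (vzRsAnyToProv on the customer vzAny,
# fvRsCons on the EPG in common) -- this is exactly what I'd like confirmed.
```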

 

Every tenant VRF should have its own default route; I don't want the customer tenants to learn 0/0 from the common tenant. Is that OK?

 

I suspect there's a tradeoff somewhere in using the common tenant... is there any big thing I should know about it? Obviously subnets in the common tenant are leaked into every other tenant's VRF, but is there anything else important I'm missing?

 

 

Last question: Cisco gives a limit of 16 contracts consumed and provided per vzAny. Does that limit also apply to contracts from/towards the common tenant?

 

Pascal

5 Replies

iwearing
Level 1

Hi Pascal,

 

Did you ever get an answer to this question?

 

- contracts provided by an EPG in the common tenant, consumed by vzAny in any other tenant;

- contracts provided by vzAny in any other tenant, consumed by an EPG in the common tenant, without hitting the "shared service restriction for contracts provided by vzAny", thanks to the common tenant's special status.

 

I have the same scenario and would like to deploy what you mention above.

 

regards

 

Ian

Yes, I was given a workaround.

As contract filters are stateless L4 filters (i.e. simple ACLs), we can define the "Resources" tenant as the provider of all contracts, even those covering connections initiated from the Resources side towards the other tenants/vzAny.

We only need to write the filters in the opposite direction for it to work (remember: filters are stateless, so we can write filters that allow a shared-resources EPG in one tenant to reach vzAny in other tenants, simply by inverting the filter terms).

Hence we overcome the limitation that a standard tenant can't have its vzAny provide contracts to other tenants (with this workaround, vzAny is always the consumer, never the provider).
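
To illustrate, here is how one of those reversed contracts looks via the REST API (a sketch: the agent port 30002 and all names are invented, and the contract itself is assumed to already exist, provided by the Resources EPG and consumed by vzAny in the customer tenants). The trick is that the agent port appears as the *source* port of the filter entry, and the subject reverses the ports for the return direction:

```python
import requests

APIC = "https://apic.example.net"             # hypothetical
session = requests.Session()
session.verify = False                        # lab only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Filter written "backwards": the agent port is the SOURCE port, so in the
# consumer->provider direction it matches the agents' return traffic.
session.post(f"{APIC}/api/mo/uni/tn-Resources.json", json={
    "vzFilter": {"attributes": {"name": "vtom-agent-reversed"},
                 "children": [{"vzEntry": {"attributes": {
                     "name": "tcp-src-30002", "etherT": "ip", "prot": "tcp",
                     "sFromPort": "30002", "sToPort": "30002"}}}]}})

# Subject with "reverse filter ports": the provider->consumer direction then
# matches destination port 30002, i.e. the masters initiating towards the agents.
session.post(f"{APIC}/api/mo/uni/tn-Resources/brc-masters-to-agents.json", json={
    "vzSubj": {"attributes": {"name": "reversed", "revFltPorts": "yes"},
               "children": [{"vzRsSubjFiltAtt": {
                   "attributes": {"tnVzFilterName": "vtom-agent-reversed"}}}]}})
```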

 

But using the common tenant could be easier if you've not gone too far into your deployment (I got no answer about that).

 

Another issue we're facing at the moment: for the provider tenant/EPG to export its routes into the consumer tenants, we have to explicitly list, under the provider EPG, the subnets or /32 hosts dedicated to that EPG. Without this configuration, inter-tenant route leaking and filtering don't work.

It looks weird to have to do so, as it goes against the elasticity we expected from ACI... especially as we didn't plan to reserve address subnets for specific EPGs. Anyway, this is only needed for tenants that have to provide contracts / leak routes to other tenants.
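
For anyone hitting the same thing, that per-EPG subnet configuration amounts to this (sketch; the address and names are invented). The point is the "shared" scope flag on a subnet defined under the EPG itself, rather than only under the BD:

```python
import requests

APIC = "https://apic.example.net"             # hypothetical
session = requests.Session()
session.verify = False                        # lab only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Subnet declared under the provider EPG with the "shared" flag, so it is
# leaked into the consumer VRFs ("public" instead of "private" if it must
# also be advertised out of an L3Out).
session.post(f"{APIC}/api/mo/uni/tn-Resources/ap-services/epg-masters.json", json={
    "fvSubnet": {"attributes": {"ip": "10.1.1.1/24", "scope": "private,shared"}}})
```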

 

 

Pascal

Hi all, I saw this thread and wanted to pitch in.

 

I have a similar requirement whereby we have multiple tenants, A-G.

Each tenant has its own L3Out to the data centre above.

In each tenant I have one VRF with one BD. Eight subnets are applied to the BD, and many EPGs in the BD use those subnets.

 

I have a requirement where Tenant G has a "backup EPG", and I want all the other tenants (A-F), each with its single VRF and multiple subnets, to be able to reach Tenant G's backup EPG directly, without having to go via the L3Out.

 

I am able to dedicate a subnet for sharing in Tenant G if needed, but what I'm trying to achieve is 1:many and many:1 VRF leaking or sharing. It really shouldn't be this complicated.

 

I've been talking to TAC, who advise that you have to do it per EPG/individual shared subnet, per tenant, mapped to the shared resource one at a time. I think this is nuts: not scalable at all, aside from being really inefficient from a TCAM perspective.
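
For what it's worth, TAC's one-at-a-time approach can at least be scripted. A sketch, assuming Tenant G's backup contract has already been exported to each tenant as a contract interface named "backup-svc", that each tenant's VRF is called "vrf1", and that the backup EPG's subnet is flagged as shared (all of these names are invented):

```python
import requests

APIC = "https://apic.example.net"             # hypothetical
session = requests.Session()
session.verify = False                        # lab only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# One consume relation per customer tenant: vzAny on each VRF consumes the
# contract interface exported from Tenant G.
for tenant in ["TenantA", "TenantB", "TenantC", "TenantD", "TenantE", "TenantF"]:
    session.post(f"{APIC}/api/mo/uni/tn-{tenant}/ctx-vrf1/any.json", json={
        "vzRsAnyToConsIf": {"attributes": {"tnVzCPIfName": "backup-svc"}}})
```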

 

I thought vzAny would be able to help me here, but I seem to get more and more confused.

 

Any advice would be most cool...

jamie-nicol
Level 1

I've come up against this problem a few times, most recently when contracting between VRFs (in the same Tenant).

I want vzAny in tenant-x:vrf1 to provide a shared contract (e.g. admin access to servers on port 22).

An EPG in tenant-x:vrf2 will then consume the shared contract.

 

Whilst this configuration does produce faults, it does actually work! I can access my servers from the EPG in vrf2.

The faults appear when I add the contract to the consumer side, not the provider side.
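
For completeness, this is the setup expressed as REST calls (a sketch; all names are invented, the "tcp-22" filter is assumed to exist, and this is exactly the configuration that raises the faults):

```python
import requests

APIC = "https://apic.example.net"             # hypothetical
session = requests.Session()
session.verify = False                        # lab only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Tenant-scope SSH contract...
session.post(f"{APIC}/api/mo/uni/tn-tenant-x.json", json={
    "vzBrCP": {"attributes": {"name": "ssh-admin", "scope": "tenant"},
               "children": [{"vzSubj": {
                   "attributes": {"name": "ssh"},
                   "children": [{"vzRsSubjFiltAtt": {
                       "attributes": {"tnVzFilterName": "tcp-22"}}}]}}]}})

# ...provided by vzAny in vrf1 (the side Cisco say is unsupported)...
session.post(f"{APIC}/api/mo/uni/tn-tenant-x/ctx-vrf1/any.json", json={
    "vzRsAnyToProv": {"attributes": {"tnVzBrCPName": "ssh-admin"}}})

# ...and consumed by an EPG whose BD sits in vrf2 (where the faults appear).
session.post(f"{APIC}/api/mo/uni/tn-tenant-x/ap-admin/epg-jumphosts.json", json={
    "fvRsCons": {"attributes": {"tnVzBrCPName": "ssh-admin"}}})
```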

 

So I have 2 questions:

  1. Does the shared service restriction apply to vrfs (within the same tenant)?
  2. When Cisco say it's "not supported", do they mean it shouldn't work or that it might work but TAC won't support it?

PS. I'm on ACI v3.2(3n)

Hi Jamie, how did this end up for you? Any issues? I have a similar requirement in a single tenant, with one main VRF containing many EPGs that need to speak to each other, plus a number of L3Outs, each in its own VRF. I'm concerned vzAny may not work properly across VRFs in the same tenant.

You said you had a fault but it worked; I wondered if it stayed that way?

thanks
