I previously posted this question to the Application Networking sub-forum under a different account, but I haven't received any replies, so I thought I'd repost it here.
My ACI multipod deployment consists of two pods, with each pod having its own L3Out. Each L3Out contains a 0.0.0.0/0 external EPG.
I wish to create a second external EPG (e.g. "Admin") containing the /32 IP addresses of certain systems and workstations that require additional access beyond what the 0.0.0.0/0 external EPG provides. For example, this external EPG might consume a contract allowing SSH/RDP to select EPGs.
This second external EPG would be created under both L3Outs, as we would wish to maintain the same access for the external systems in the event of an L3Out failure, if the L3Out association of an EPG's BD were changed, or if the external system were routed via the second L3Out.
However, from what I understand, this configuration - i.e. creating the same /32 IP address/subnet in an external EPG under different L3Outs within the same multipod fabric - is not supported and results in a "Prefix Entry Already Used in Another EPG" fault.
With this in mind, is there any way in which such access (consistent policy applied to a /32 address in an external EPG) can be configured for a fabric with multiple L3Outs, or is this a fundamental limitation?
This is a limitation: external EPG classification is longest-match wins, and the only external EPG prefix you can configure multiple times per VRF is 0.0.0.0/0. So in your case you must configure a stretched L3Out spanning both pods.
If I configure a stretched L3Out over both pods, is there any way to control which BDs/networks are advertised to which L3Out peers?
For example, I would prefer to advertise a network "local" to pod 1 only to L3Out peers attached directly to pod 1, and have client traffic at the pod 2 site destined for a pod 1 network route via the WAN rather than the IPN.
Similarly, I'd like the option to "migrate" the network and selectively advertise it to L3Out peers in pod 2 instead.
I would like to know as well; I have been desperately trying to find time to look into this. I have a common L3Out applied to both pods, and the prefixes are redistributed into OSPF as E2 routes. Hot-potato routing: just get traffic to the fabric as quickly as possible, and if it has to traverse the IPN, so be it.
That's fine for the most part. But I am seeing a trend that I like: server teams are starting to ask for a separate subnet at each DC. I would rather see those prefixes as E1 routes, where I could adjust costs accordingly, but I don't think that is possible.
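For readers less familiar with the E1/E2 distinction: with E2, the redistribution seed metric is carried unchanged end-to-end, while with E1 the internal OSPF path cost is added to it, which is what would let per-site link costs pull traffic toward the local DC. In generic IOS-style syntax (process ID and ASN are hypothetical, shown purely for illustration; in ACI this is driven by route profiles rather than typed directly):

```
! E2 (the default): fixed metric, internal cost ignored.
router ospf 10
 redistribute bgp 65001 metric 20 metric-type 2 subnets

! E1: internal path cost is added to the seed metric,
! so the topologically closer DC wins.
router ospf 10
 redistribute bgp 65001 metric 20 metric-type 1 subnets
```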
What I was thinking is that it may be possible, using a route map, to advertise those prefixes only out of pod 2 (or only out of pod 1), except when the OSPF adjacency in that pod is lost. I just haven't found the time to dig in and do some testing and/or research.
Not an answer, I know, but something to consider maybe?
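The filtering half of that route-map idea is straightforward in generic IOS-style terms (prefix and names below are hypothetical):

```
! Match the subnet that is "local" to pod 1:
ip prefix-list POD1-NETS seq 5 permit 10.1.0.0/24

! Suppress it toward pod-2 peers, permit everything else:
route-map SUPPRESS-POD1-NETS deny 10
 match ip address prefix-list POD1-NETS
route-map SUPPRESS-POD1-NETS permit 20
```

The "except when the adjacency in that pod is lost" part is the hard bit: OSPF has no conditional-advertisement mechanism, so the failover behavior would have to come from somewhere else. BGP does offer conditional advertisement (advertise-map/non-exist-map), but that only helps if the L3Out peering is BGP.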
In current releases you can only attach route profiles at the L3Out or Bridge Domain level, so any modifications you make apply on all nodes and to all peers.
Because route profiles apply at the L3Out level, you could create a second L3Out and have each one advertise subnets with different metrics by using a different route profile on each. The problem with this approach, as has been outlined in this thread, is that you can't configure the same IP subnet as the external subnet prefix on two different L3Outs, which breaks your security policy if the two L3Outs effectively reach the same external network.
To overcome this limitation in ACI route control, you could use BGP and manipulate path attributes on the router to which ACI peers.
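As a sketch of that approach, assuming an eBGP peering between the fabric and the external routers (ASNs, addresses and prefixes below are hypothetical): on the pod-2 external router, lower the local preference of pod-1-local prefixes learned from the fabric, so routers in the WAN AS prefer the path via pod 1 instead.

```
! Pod-1-local subnet (hypothetical):
ip prefix-list POD1-NETS seq 5 permit 10.1.0.0/24

! Make pod-1 prefixes learned via the pod-2 fabric peering
! less attractive (default local-preference is 100):
route-map FROM-ACI-POD2 permit 10
 match ip address prefix-list POD1-NETS
 set local-preference 50
route-map FROM-ACI-POD2 permit 20

router bgp 64900
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 route-map FROM-ACI-POD2 in
```

If the pod-1 peering fails, the pod-2 path remains as the only candidate, so reachability is preserved; only the preference changes.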
Finer-grained route control, for example per node for OSPF or per peer for BGP, would solve this problem. Hopefully that appears in 4.0.
Thank you. I think you just answered some questions and confirmed some suspicions I had. Every time I start thinking this through, I realize that I am probably looking at using GOLF anyway, and that I'd be doing my future self more favors by investing my limited lab/research time in GOLF instead.
GOLF has its own caveats, mainly limited support on the upstream routing platforms and a lack of multicast support.
A future release will support host-route (/32) advertisement from regular L3Outs, which is the main advantage of GOLF in non-large-scale environments. Once that lands, the driver to use GOLF is largely gone in enterprise environments.
Great points. That last comment, re: the driver to use GOLF being mostly moot... ouch. Kinda hit home. :)
Time to do some real thinking on this.