From what I understand, I can use the same subnet across multiple EPGs that reside in a single BD, as long as that subnet's gateway resides in ACI. I am not sure if sharing a subnet across multiple EPGs is common practice or not, but wasting IPs is not an option in an environment where we don't have enough IP space as it is.
So if I have a subnet of 10.0.0.0/28:
EPG-Web = WebSrv01 (10.0.0.10/28)
EPG-App = AppSrv01 (10.0.0.12/28)
EPG-Database = DbSrv01 (10.0.0.13/28) DbSrv02 (10.0.0.14/28)
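Since the gateway for that shared /28 would live in ACI, it would be defined as a subnet under the Bridge Domain. A minimal sketch of the APIC REST payload for that, assuming a gateway address of 10.0.0.1 and made-up tenant/BD names:

```python
import json

def bd_subnet_payload(gateway_cidr, scope="private"):
    """Build the JSON body for adding a pervasive gateway (fvSubnet)
    under a Bridge Domain via the APIC REST API."""
    return {
        "fvSubnet": {
            "attributes": {
                "ip": gateway_cidr,  # gateway address + mask, e.g. 10.0.0.1/28
                "scope": scope,      # "private" keeps the subnet inside the VRF
            }
        }
    }

# One BD, one subnet, shared by EPG-Web, EPG-App and EPG-Database.
# Target DN below uses hypothetical tenant/BD names for illustration:
url = "https://apic/api/mo/uni/tn-Prod/BD-BD1/subnet-[10.0.0.1/28].json"
body = json.dumps(bd_subnet_payload("10.0.0.1/28"))
```

All three EPGs then attach to this one BD and draw from the same 10.0.0.0/28.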
So assuming the use of a single subnet for multiple EPGs in the same BD: I have an EPG with two database servers that cannot communicate with each other for compliance reasons. Rather than burning another EPG on the secondary DB server, I was going to enable the EPG isolation option on the Database EPG so these servers cannot talk within the same EPG, even being on the same subnet. I think this is like PVLANs in the classic Cisco switch world.
This being said, contracts would normally be needed for any communication going in and out of the EPG, e.g. from web server to database. But with EPG isolation, I would need contracts for anything talking to DbSrv01 and, separately, anything talking to DbSrv02; essentially twice the number of contracts for the same EPG.
Is my thinking correct on this scenario?
When deciding how to group endpoints (such as your Database server endpoints) into EPGs, just ask yourself: "Should all endpoints be allowed to communicate with exactly the same other resources?" If the answer is "No", then you're best served putting the endpoints into different EPGs. If the answer is "Yes", then go ahead and group them in the same EPG. The next question becomes: should these endpoints be able to freely communicate with each other? If "Yes", no further action is required; if "No" (as in your case), then you can enable intra-EPG isolation on the DB_EPG. This requires only a single contract between the DB_EPG and any other EPG it needs to communicate with, while prohibiting the DB endpoints from talking to each other. No need to "double" the contracts as mentioned above.
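If the answer lands on intra-EPG isolation, the switch is a single attribute on the EPG object (`pcEnfPref`). A minimal sketch of the APIC REST payload, with hypothetical tenant/AP/EPG names:

```python
import json

def epg_payload(name, isolated):
    """Build an fvAEPg body; pcEnfPref='enforced' turns on
    intra-EPG isolation, 'unenforced' is the default."""
    return {
        "fvAEPg": {
            "attributes": {
                "name": name,
                "pcEnfPref": "enforced" if isolated else "unenforced",
            }
        }
    }

# DB endpoints share one EPG but may not talk to each other:
db_epg = epg_payload("EPG-Database", isolated=True)
url = "https://apic/api/mo/uni/tn-Prod/ap-3Tier/epg-EPG-Database.json"
body = json.dumps(db_epg)
```

Contracts to/from EPG-Database are unaffected; only the intra-EPG permit goes away.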
Another common use case for Intra-EPG isolation is for Backup Interfaces on Endpoints. There will be the need for Backup_Endpoints to communicate with a Backup_Target, but the clients themselves should really be prevented from seeing each other. This makes for an easy security design leveraging a single contract between the BackupClients_EPG and BackupTarget_EPG.
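The backup pattern above needs only one contract: the target EPG provides it and the (isolated) client EPG consumes it. A rough sketch of the two EPG-to-contract relation objects, with an illustrative contract name:

```python
def contract_relations(contract_name):
    """Build the relation bodies tying EPGs to a contract (vzBrCP):
    the provider side (fvRsProv) and the consumer side (fvRsCons)."""
    provide = {"fvRsProv": {"attributes": {"tnVzBrCPName": contract_name}}}
    consume = {"fvRsCons": {"attributes": {"tnVzBrCPName": contract_name}}}
    return provide, consume

# "Backup-Svcs" is a made-up contract name for illustration.
prov, cons = contract_relations("Backup-Svcs")
# prov would be posted under BackupTarget_EPG,
# cons under the isolated BackupClients_EPG.
```

With intra-EPG isolation on BackupClients_EPG, this single contract covers the whole design.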
Ok, so my thinking was on track with most of this. I always worry about creating too many EPGs from an organizational standpoint; I don't want so many that it's hard to navigate. So it sounds like two DBs in one EPG with intra-EPG isolation is OK as long as they are talking to the same endpoints... but what if they are not? Is it then easier to put the second DB in its own EPG, and the endpoints it talks to (the ones different from those talking to DB server 1) in separate EPGs, or maybe even a different BD? There are so many ways to slice a pizza when it comes to ACI. I have read about people who create a BD for every application and build EPGs with the three-tier application flow in mind, while others use one BD with multiple EPGs for different applications. I want to make sure I am following "best practice".
If you are looking for real life microsegmentation examples and EPG best practices, I would recommend watching this:
BRKACI 2301 - ACI Micro Segmentation - https://www.ciscolive.com/global/on-demand-library.html?#/session/1564527368333001crvi
Yes, you can certainly chop things up countless ways.
From a "one or many BDs" perspective, this usually comes down to another simple question. If you've set up your fabric in an application-centric way (where EPGs are organized by function, such as Web, App, DB), then a single BD per Application Profile should suffice. If you're going for the network-centric approach (VLAN == BD == EPG), then you're going to have one BD per VLAN. Assuming you're going application-centric based on the details above, the only time you might want more than one BD per Application Profile is when you need to change the forwarding behavior. BDs default to hardware-proxy forwarding, which fits the bill for most use cases. There are applications and use cases where you may prefer traditional flooding behavior, such as an externally hosted gateway, or flooding required by the application itself. So my advice is to incorporate more than one BD into a single Application Profile only if you need to tweak/tune the forwarding behavior (e.g. some DB/clustering applications rely on legacy networking semantics like ARP flooding in order to function correctly).
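Where a clustered application does need those legacy flooding semantics, the relevant knobs sit on the BD itself. A sketch of the two forwarding profiles discussed above (the attribute values are the fvBD options; the BD names are made up):

```python
def bd_forwarding(name, legacy_flood=False):
    """Build an fvBD body: hardware-proxy (the default) vs. a
    flood profile for apps that rely on ARP/unknown-unicast flooding."""
    attrs = {"name": name}
    if legacy_flood:
        attrs["unkMacUcastAct"] = "flood"  # flood unknown unicast in the BD
        attrs["arpFlood"] = "yes"          # flood ARP instead of proxying it
    else:
        attrs["unkMacUcastAct"] = "proxy"  # hardware-proxy via the spines
        attrs["arpFlood"] = "no"
    return {"fvBD": {"attributes": attrs}}

# Default BD for the 3-tier app, plus a tuned BD for a flood-hungry cluster:
app_bd = bd_forwarding("BD-3Tier")
cluster_bd = bd_forwarding("BD-DB-Cluster", legacy_flood=True)
```

This keeps the "second BD only when forwarding must differ" rule concrete: same Application Profile, two BDs, two forwarding profiles.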
I wouldn't overthink it unless you're pushing the scale limits of your fabric. The more granular you get, the more control you have, but you don't always want to have to manage policy sprawl. The trick is to know your limits and plan within them :)
I wanted to come back to this. I am creating an EPG with EPG isolation enabled, which will stop all my intra-EPG communication. That means contracts will be required for these database servers to talk to the app/web servers. So isn't this like a poor man's microsegmentation?
What is the real difference between the EPG isolation scenario and microseg?
Yes, in a sense it is.
You have 3 EPGs here, DB, App, Web. Let's assume this is the classic 3-tier application deployment where Web talks to App, App talks to DB. And in your case you don't want endpoints in DB to communicate with each other.
Your security policy will look something like this (implemented purely as ACLs keyed on pcTags):
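As an illustration of how those pcTag ACLs shake out for this 3-tier case (the pcTag numbers below are purely hypothetical; the APIC assigns them per EPG):

```python
# Illustrative zoning-rule view of the policy, one dict per ACL entry.
WEB, APP, DB = 32770, 32771, 32772  # hypothetical per-EPG pcTags

rules = [
    {"src": WEB, "dst": APP, "action": "permit"},  # Web -> App contract
    {"src": APP, "dst": WEB, "action": "permit"},  # return traffic
    {"src": APP, "dst": DB,  "action": "permit"},  # App -> DB contract
    {"src": DB,  "dst": APP, "action": "permit"},  # return traffic
    {"src": DB,  "dst": DB,  "action": "deny"},    # intra-EPG isolation
]

def allowed(src, dst):
    """First-match lookup; anything unmatched falls to the implicit deny."""
    for r in rules:
        if r["src"] == src and r["dst"] == dst:
            return r["action"] == "permit"
    return False
```

Note Web-to-DB needs no explicit deny: with no contract between those EPGs, it hits the implicit deny, while the DB-to-DB entry is what the isolation flag adds.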
Correct. This is still leveraging the "Isolation" flag on the EPG; I'm just illustrating how it would be programmed as ACLs on the switch. There's nothing additional you need to configure in terms of filters/contracts.
Without the 'Isolation' flag, there's an implicit "allow" entry applied to the EPG (Src EPG_A, Dest EPG_A = Permit)