Thank you for looking into that question, Tuan. Appreciate it!
I'll do some research to find out what policy model objects are involved.
You can find the answer to your question through the link below, as it's based on the Syslog Definition for Contract.
We are about to migrate from a traditional DC to an ACI fabric (network-centric model first), so I'm glad to get some advice from the experts.
There will be an L2 trunk carrying all VLANs (a vPC/Port-channel, to be precise) connecting the greenfield fabric to the brownfield Core switches, assigned as a static port in every possible EPG. Those legacy Core switches will eventually be replaced by ACI as they approach End of Support. For Day-1 preparation, how would you go about:
1. Migrating hosts with EtherChannel/MLAG connections from the brownfield switches (non-Nexus) to the ACI leaf switches? How much downtime should we expect for each LAG/MLAG migration?
2. Can the migration vPC interface (the L2 trunk vPC) carry both an EPG static port and an L3Out SVI? This is for migration purposes, since devices that currently have OSPF neighborships with our Core switches will eventually be migrated to ACI.
--- I read in a Cisco document that if we use the same encap for interfaces within an L3Out interface profile, it essentially creates an external bridge domain, allowing OSPF adjacencies to form between ACI, the Core switches, and the other devices in the same segment. We don't really care which device becomes DR/BDR, as this is only a temporary setup.
3. In a network-centric model, the current gateway MAC (on our firewall) would also be used by the BD as we migrate the default gateway to the ACI fabric. How much downtime should we expect? (The firewall will eventually be inserted via PBR in unmanaged mode.)
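To make question 2 concrete, here is roughly the REST (JSON) payload I would expect to push for the temporary L3Out interface profile, with every vPC path sharing one SVI encap so they land in the same external bridge domain. All names, the vPC path, VLAN 300, and the addresses are made up for illustration:

```python
# Hypothetical sketch of an ACI L3Out interface-profile payload.
# Every path attached with the same encap ("vlan-300") joins one external
# bridge domain, so OSPF adjacencies can form across ACI and the legacy Core.

def svi_path(path_dn: str, side_a_addr: str, side_b_addr: str, encap: str) -> dict:
    """One vPC SVI attachment; side A/B addresses are the two vPC peers."""
    return {
        "l3extRsPathL3OutAtt": {
            "attributes": {
                "tDn": path_dn,
                "ifInstT": "ext-svi",   # SVI interface type
                "encap": encap,         # same encap on each path -> one external BD
                "mode": "regular",      # trunked (802.1Q tagged)
            },
            "children": [
                {"l3extMember": {"attributes": {"side": "A", "addr": side_a_addr}}},
                {"l3extMember": {"attributes": {"side": "B", "addr": side_b_addr}}},
            ],
        }
    }

interface_profile = {
    "l3extLIfP": {
        "attributes": {"name": "MIGRATION-SVI-PROFILE"},  # made-up name
        "children": [
            svi_path(
                "topology/pod-1/protpaths-101-102/pathep-[vPC-to-Core]",
                "10.10.30.2/24", "10.10.30.3/24", "vlan-300",
            ),
        ],
    }
}

print(interface_profile["l3extLIfP"]["attributes"]["name"])  # -> MIGRATION-SVI-PROFILE
```

Any additional path added to the profile with encap vlan-300 would extend that same OSPF segment.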
In version 4.2 (or anything later than 3.2), what's your opinion on a vzAny-to-vzAny contract whose subject uses the common/default filter (with PBR redirecting to the firewall)? Would it work, or do we have to use more specific filters (e.g., IP traffic redirected, non-IP traffic bypassing the firewall)?
Thanks for your assistance.
I'm glad to get a response from you.
The reason I was asking about the vzAny-to-vzAny contract is:
1. There are too many EPGs, since we use a network-centric approach.
2. In this white paper, under "Contracts and filtering rule priorities," it says a vzAny-to-vzAny contract with the common/default filter is programmed with a priority value of 21, the same as the implicit deny. That would mean the implicit deny actually takes precedence, since deny wins over permit and redirect at the same priority. Our hardware shipment hasn't arrived, and I couldn't test this on dCloud (the configuration is accepted, but there are no actual endpoints to test the use case).
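To show my reading of that section, here is a toy model of the tie-breaking I'm worried about. This is not actual APIC logic, just my understanding that at equal priority a deny entry wins; the priority value 21 comes from the white paper:

```python
# Toy model (not Cisco code) of zoning-rule resolution as I understand it:
# lowest numeric priority wins, and at equal priority, deny beats
# permit/redirect.

def resolve(rules):
    """Pick the winning action: best (lowest) priority first; deny wins ties."""
    best = min(r["priority"] for r in rules)
    tied = [r for r in rules if r["priority"] == best]
    if any(r["action"] == "deny" for r in tied):
        return "deny"
    return tied[0]["action"]

rules = [
    {"name": "vzany-vzany-default", "priority": 21, "action": "redirect"},
    {"name": "implicit-deny",       "priority": 21, "action": "deny"},
]

print(resolve(rules))  # -> deny: the redirect would never take effect
```

If that model is right, the common/default filter alone cannot win against the implicit deny, which is exactly why I asked whether more specific filters are required.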
Also, for the Port-channel migration, what would happen if you keep one cable on the brownfield switch and move the other to one of the ACI vPC leaf pairs? Would the existing Port-channel flap?
Thank you very much.
We are planning to move from a Nexus 7000 network to a new ACI network (network-centric).
Are there any best practices for DHCP and PXE boot inside an ACI fabric?
I'm planning to migrate our F5 load balancers to ACI. It's a simple one-arm design with SNAT. My only concern is that the VIP subnet and the interconnection subnet (between ACI and the F5) are not the same. I found a few solutions on the Internet:
2. an L3Out EPG for the F5
3. configuring the VIP subnet and the interconnection subnet under the same BD
but none of them are very detailed.
Could you please advise what the recommended solution is for this migration? Thanks a lot.
Thank you so much for your quick reply. I think I understand the scenario you explained, where the VIPs are part of the LB BD subnet. But what if they are not? Let me take an example from the PBR white paper (picture below): what should I do if the VIP is not 172.16.1.100 but 172.16.2.100? I suppose the client will not know how to reach that IP. Can I simply add another subnet to the BD (for example, one in 172.16.2.0/24) to make it work? I hope my question is clear; thank you for your advice.
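To make my concern concrete, here is a quick standard-library check. The idea: if the VIP falls outside every subnet configured on the BD, the client (and the fabric) has no gateway in that range, so the traffic is never attracted to the BD. The 172.16.2.254/24 gateway address is just my guess for the extra subnet:

```python
# Quick sanity check of VIP reachability against the BD's configured subnets.
# Addresses mirror the PBR white-paper example; 172.16.2.254/24 is assumed.
import ipaddress

def vip_covered(vip, subnets):
    """True if the VIP falls inside any of the BD's subnets."""
    return any(vip in s.network for s in subnets)

vip = ipaddress.ip_address("172.16.2.100")
bd_subnets = [ipaddress.ip_interface("172.16.1.254/24")]  # existing LB BD subnet

print(vip_covered(vip, bd_subnets))   # -> False: BD has no subnet covering the VIP

bd_subnets.append(ipaddress.ip_interface("172.16.2.254/24"))  # add a VIP-range subnet
print(vip_covered(vip, bd_subnets))   # -> True: BD now covers the VIP
```

So my question boils down to whether adding that second subnet under the same BD is the supported way to do this, or whether an L3Out EPG for the F5 is preferred.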
How do you define the "application-centric" approach that ACI evangelizes, in terms of VRF/BD/EPG configuration? How is it different from and beneficial to modern applications, and why would a network-centric approach not suffice in that scenario? A follow-up question from our developers: what are the potential drawbacks of having several networks in the same BD, which goes in the opposite direction of the traditional network model?