Hi guys, we decided to deploy ACI for our dev/qa/stg environment and possibly roll it out to prod. I'm new to the concepts but I'm trying to get up to speed with it.
Here's an overview of what we currently have.
3 /16 subnets, one for each of dev, qa, and stg, plus a DB subnet.
The dev/qa/stg L3 SVIs are on a virtual domain (VDOM) on a FortiGate, and the DBs are on a different VDOM on the same firewall, so pretty much all access and policy is controlled on the firewall.
Security policy allows dev-to-dev, qa-to-qa, and stg-to-stg communication, with no inter-communication between the three.
The problem that arose was the number of policies we had to create on the firewall: unlike an ASA, a FortiGate doesn't allow communication between VLAN interfaces by default, and a policy is needed every time a connection is needed. This became a nightmare, so we got the ACI gear:
- 2x 9336 for spines
- 2x 9372 for leaves
- and of course 3 APIC controllers
What I have in my head is to create 3 tenants, one for each of dev/qa/stg,
then create EPGs within the dev tenant for, let's say, the web servers, as well as for the apps and DBs, and set contracts between them.
And in case layer 4-7 inspection or load balancing is needed, we can accomplish that using a service graph.
Not sure how to use bridge domains: should I use one per VLAN per EPG, or put multiple VLANs in one?
Note that I will be grouping VLANs in the EPGs, and a UCS chassis will be hosting the VMs.
I don't know how clear I made this, but please let me know if you need clarifications.
Welcome to ACI!
Sounds like ACI is a perfect fit to help relieve some of the challenges you're detailing. Hopefully you've read the ACI Fundamentals & Design Guides, but I think you're on the right track.
Each Business Unit/Entity gets their own tenant. This is for nothing more than RBAC: you could give sub-admin access within each group to manage/control their own tenant.
Depending on your needs and granularity, each tenant would have the respective app-tier EPGs. This allows you to control which tiers communicate with other tiers within that tenant (i.e., allowing Dev web servers to communicate with Dev DB servers, etc.).
A single VRF & BD for each tenant should suffice, unless you want to change the forwarding behavior (flooding vs. proxy) for each. Since you'll have separate tenants anyway, this is the minimum you'd need.
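To make the shape of that concrete, here's a minimal sketch of the per-tenant object tree expressed as the JSON you would POST to the APIC REST API. All tenant/VRF/BD/EPG names below are hypothetical placeholders, not anything from your environment:

```python
def build_tenant(name):
    """Return an fvTenant tree: one VRF, one BD, and web/app/db EPGs.

    Tier names and the "-vrf"/"-bd" suffixes are made-up conventions
    for this sketch; adapt them to your own naming standard.
    """
    epgs = [
        {"fvAEPg": {
            "attributes": {"name": f"{name}-{tier}"},
            "children": [
                # bind each EPG to the tenant's single bridge domain
                {"fvRsBd": {"attributes": {"tnFvBDName": f"{name}-bd"}}},
            ],
        }}
        for tier in ("web", "app", "db")
    ]
    return {"fvTenant": {
        "attributes": {"name": name},
        "children": [
            # one VRF (fvCtx) per tenant, as discussed above
            {"fvCtx": {"attributes": {"name": f"{name}-vrf"}}},
            # one BD, associated to that VRF
            {"fvBD": {
                "attributes": {"name": f"{name}-bd"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": f"{name}-vrf"}}},
                ],
            }},
            # application profile holding the three tier EPGs
            {"fvAp": {"attributes": {"name": f"{name}-app-profile"},
                      "children": epgs}},
        ],
    }}

payload = build_tenant("dev")
```

You would POST the resulting dict (serialized as JSON) to /api/mo/uni.json on the APIC after authenticating; the same tree can of course be built through the GUI instead.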
You will probably create a shared L3 Out to connect the fabric to your external network. This would be best suited to the common tenant, unless you'd prefer to create an L3 Out per tenant and really keep things separated. I would also highly recommend you look at VMM integration if you're not already (not mentioned above); it will make deploying the policy model into your virtual environment very easy. VMM integration can use either the native VMware vDS or Cisco AVS (assuming the VMs are VMware).
For your Fortinet integration, you have the option of a fully managed device (assuming your appliance is supported: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-734587.html) or unmanaged mode, which allows ACI to provision the VLANs while certain policy is done separately and manually on the L4-7 device. With fully managed mode, you have the added bonus of keeping all your policies within ACI.
I'll stop there, get your input and we can discuss further.
Thanks for the detailed explanation. Sounds like we're on the same page; the implementation part will be more challenging, I'm sure. As for the VMM domain, we're trying to draw the line between NetEng and Sys Infra on who will be managing ACI. We want that to be handled by NetEng, but we should probably work tightly with Sys Infra. Now, for VLANs created on the fabric with policy enforced on the firewall: how would you create the L2 on the fabric and the L3 on the firewall? Is that accomplished using a service graph, and without a BD?
Would you say that you want to have your tenants behind the Fortinet VDOMs to secure traffic from/to each tenant, and use contracts within each tenant?
If yes, then you would have L3 active on the fabric for each tenant and an L3 Out for each tenant to reach its own VDOM, creating a sort of cloud above each VDOM.
You should probably:
- extend the EPGs to allow workloads on the ACI fabric
- create an L3 Out between the VDOMs and each ACI tenant
- during a maintenance window, disable the DMZs on the VDOMs and enable L3 forwarding on the fabric (and attach the L3 Out / advertise externally)
I don't know if you can prepare the contracts between the EPGs before activating L3 forwarding; I haven't tried it yet.
You would also need to create the contract for the outside world to reach each tenant.
Would this match what you are thinking about?
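As a rough sketch of what those contracts look like under the hood, again in the APIC REST JSON form (the filter/contract names and the port below are made up for illustration):

```python
def build_filter(name, port):
    """vzFilter with a single TCP entry matching one destination port."""
    return {"vzFilter": {
        "attributes": {"name": name},
        "children": [
            {"vzEntry": {"attributes": {
                "name": f"tcp-{port}",
                "etherT": "ip",        # match IP traffic
                "prot": "tcp",         # TCP only
                "dFromPort": port,     # destination port range start
                "dToPort": port,       # destination port range end
            }}},
        ],
    }}

def build_contract(name, filter_name):
    """vzBrCP (contract) with one subject referencing an existing filter.

    The contract is then provided by one EPG (fvRsProv) and consumed by
    another (fvRsCons), e.g. the L3 Out's external EPG for outside access.
    """
    return {"vzBrCP": {
        "attributes": {"name": name, "scope": "tenant"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": f"{name}-subj"},
                "children": [
                    {"vzRsSubjFiltAtt": {
                        "attributes": {"tnVzFilterName": filter_name}}},
                ],
            }},
        ],
    }}
```

Whether these can be staged before L3 forwarding is live is the open question above; the objects themselves can certainly be created in advance.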
Using the built-in ACI roles, you can give your NetEng & Sys Infra teams different access, allowing each of them to do their respective jobs. The last thing we want is for any internal group to feel distanced from their role. There's even a plug-in for vCenter which allows the creation of simple policies (EPGs, filters, contracts, etc.) from within vCenter. Personally, I prefer getting all teams comfortable using the APIC, as there's much more control, but it remains an option.
If you want to maintain your policy on the firewall (FW), you still need to create bridge domains for your EPGs to use; however, you would not need to create BD subnets. You can leave the gateway on the FW instead and let routing be done by the FW. For this you would need to enable flooding within the respective BD. Here's a good guide that details "unmanaged" mode for L4-7 devices, which is what you might use to let ACI provision only the VLANs while policy is managed separately on the FW: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/L4-L7_Services_Deployment/guide/b_L4L7_Deploy_ver121x/b_L4L7_Deploy_ver121x_chapter_010000.pdf
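For that FW-as-gateway design, the knobs in question live on the fvBD object. Here's a hedged sketch of such an L2-only BD as an APIC REST payload (the BD/VRF names are placeholders):

```python
def build_l2_bd(name, vrf):
    """Bridge domain with routing left on the firewall, not the fabric:
    no subnet/gateway is configured under the BD, unicast routing is off,
    and flooding is enabled so the FW's interface can ARP for hosts
    sitting behind the fabric."""
    return {"fvBD": {
        "attributes": {
            "name": name,
            "unicastRoute": "no",        # leave the default gateway on the FW
            "arpFlood": "yes",           # flood ARP instead of proxying it
            "unkMacUcastAct": "flood",   # flood unknown unicast (classic L2)
        },
        "children": [
            # the BD still needs a VRF association even in L2-only mode
            {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
        ],
    }}
```

Note the deliberate absence of any fvSubnet child: that is what keeps the L3 gateway on the firewall rather than on the fabric.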
The default password is "password". http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-c200-m1-high-density-rack-mount-server/111455-setup-cimc-c-series.html
For the initial setup dialogue, I'd suggest just doing it manually; it's usually only done on 3 controllers, so it's not that big a deal. That being said, you can script it, but this requires the CIMC to have Serial over LAN configured.
Thanks again. Is there an installation guide for the ACI Simulator? I downloaded the ISO file off the Cisco site and am trying to install it on VMware Workstation, but I'm not successful. I keep getting "error 15: file not found" for the Insieme fabric controller. Do you know how to install it?
The APIC Simulator image we make available publicly is only for our hardware-based simulator appliance (a modified C220 M3). That ISO image will not work running as a VM in a virtual (VMware) environment.
There is an installation guide, but again, it's for the hardware simulator appliance.
I work with Sleiman.
Per your request:
HQ_APIC01# acidiag verifyapic
openssl_check: certificate details
subject= CN=FCH2017V01S,serialNumber=PID:APIC-SERVER-M2 SN:FCH2017V01S
issuer= CN=Cisco Manufacturing CA,O=Cisco Systems
notBefore=May 6 05:44:48 2016 GMT
notAfter=May 6 05:54:48 2026 GMT
Also, we're currently running 1.2(1i). Which code rev is Cisco recommending, please? Thanks!
Hi Roberts, I did the scripted install. For some reason I can't reach the APIC unless I'm on the same subnet; maybe something is wrong with the default gateway. I'm trying to log in through the console of the APIC, but it's not letting me, and I can't even SSH to it. How do I log in through the console CLI?