03-22-2015 11:58 PM - edited 03-07-2019 11:12 PM
Hi NG,
I am currently replacing the combination of an FWSM and an ACE-30 in a Catalyst 6500 chassis with a number of F5 AFM/LTMs.
Now, due to the company's history, the current implementation looks similar to this Layer 2 configuration of ACEs.
The FWSMs serve as the gateway. The same subnet / IP range is spread across two vlan IDs: the client-facing vlan ID or "shared vlan", and the server-facing vlan ID, which extends out of the 6500 chassis.
The VIP addresses live on the ACE-30s, while the rServers are located in the server-facing vlans.
Existing:
Now I am particularly interested in the shared vlans and vlan group configuration:
The question really is whether I can extend the vlan IDs used as shared vlans between the FWSM and the ACE-30 (both chassis-based modules within the same 6500) out of the Catalyst 6500 chassis, as shown below:
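For reference, the existing shared-vlan binding on the 6500 would typically be expressed with vlan-group commands along these lines (slot numbers and vlan IDs here are made up for illustration, not taken from the actual setup):

```
! FWSM in slot 3: client vlan 10 plus shared vlan 20
firewall vlan-group 1 10,20
firewall module 3 vlan-group 1
! ACE-30 in slot 4: shared vlan 20 (client side) plus server-facing vlan 30
svclc multiple-vlan-interfaces
svclc vlan-group 2 20,30
svclc module 4 vlan-group 2
```

The shared vlan (20 in this sketch) appears in both vlan-groups, which is what stitches the FWSM and the ACE together inside the chassis.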
Questions are:
Is this a valid design? Or basically, will this work?
How do I reflect that design in IOS?
The overall idea is to first replace the FWSM with the F5 AFM functionality on a per-FWSM-context level, and then, once that has been migrated, replace the ACE with the F5 LTM on a per-vlan basis within a given FWSM context.
Now the only part I really need to know is, can I extend the shared Vlan ID out of the 6500 without walking into some fun?
Also, this state may remain for a period of 1-2 months, depending on the customer's preferences on speed/delivery.
Thanks
Colin
03-23-2015 07:20 AM
Colin
I haven't used ACE modules, but I did have a similar setup with FWSM in routed mode and CSM-S in L2 mode, which is almost the same except the CSM-S didn't support contexts.
So I can't give any guarantees, but here are some points/questions.
To extend the front facing vlan out to the F5s is simply a matter of configuring the ports the F5 connects to on the 6500 to be in that vlan.
That's the simple part.
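To make that concrete, putting the F5-facing ports into the front vlan would look something like this (interface and vlan ID are hypothetical):

```
interface GigabitEthernet1/1
 description Link to F5 - client-facing / shared vlan
 switchport
 switchport mode access
 switchport access vlan 20
```

If the link to the F5 carries several vlans, the same idea applies with a trunk (`switchport mode trunk` and `switchport trunk allowed vlan ...`) instead of an access port.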
What isn't clear is how you are going to do the VIPs, because the same VIPs can't be on both the ACE and the F5 at the same time.
In addition, you don't show a connection from the F5s to the vlan behind the ACE, i.e. where the servers live.
So how are you proposing to run the F5s? I haven't used them, but are you proposing to run them in L2 mode, as with the ACE modules, or in L3 mode?
Either way I would have thought you need the F5 to have a connection to the server vlan because traffic returning from the servers has to go back through the load balancer.
Could you clarify as to how you see it working ?
Jon
03-23-2015 09:53 PM
Hi Jon,
OK, sorry, I did not give you all the background / details, so let me quickly brief you on what I intend to do and then where my question fits in again.
As is:
Step 1:
Replacing the FWSM part, while keeping the ACE-30 still performing the load balancing.
(Converting a couple of lines of ACLs into F5 AFM language is "relatively" easy: it will require F5's professional services, a conversion tool and a couple of days' work, and should be done... sounds easy. I will tell you later on how that went once it's done. :)
But that's what I consider the easier part.
Step 2:
This is the part where we remove the ACE-30's.
Here I have basically de-associated the server-facing vlan from the ACE-30 and pulled it back onto the Catalyst 6500 chassis, while extending it out of the Cat 6500 towards the server switches, as the rServers usually live on VMs or wherever.
The original question I had is actually on Step 1, where I have the AFM part done and I am bridging the shared vlan to the ACE-30; instead of pointing the front towards the FWSM, I intend to bridge it out of the chassis towards the F5, as shown below in this diagram:
Step 1 Detail:
Step 1 - view when completed:
STEP 2.1 Detail (replacing the ACE-30's on a per vlan case):
Disabling the ACE-30 on a per-vlan basis, basically picking the vlans on which we will do the ACE/LTM changes one at a time:
Details steps:
Step 2.1 : De-Associate Vlan B from ACE-30 / Catalyst 6500 binding.
Step 2.2 : De-commission Vlan A; the shared AFM / ACE or client-facing vlan ID is no longer required.
(Basically we are pulling the server-facing vlan towards the AFM part, as many trunks facing the servers would otherwise have to be edited, and it's less of a pain to change the client or shared vlan ID.)
Step 2.3: View once migrated:
Finally "ta daaaaaa":
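On the 6500 side, steps 2.1 and 2.2 might come down to something like the following (vlan IDs, group and interface numbers are purely illustrative, and the exact `no svclc` form should be checked against the running IOS version):

```
! Step 2.1 - remove server-facing vlan B (e.g. 30) from the ACE vlan-group
no svclc vlan-group 2 30
! ...and carry it on the trunk towards the F5 / server switches instead
interface GigabitEthernet1/2
 switchport trunk allowed vlan add 30
! Step 2.2 - once nothing is left in shared vlan A (e.g. 20), retire it
no vlan 20
```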
I am not too worried about the server facing Vlan ID's as I know it works already today.
What I wanted to clarify is whether I could bridge the shared / client-facing vlan ID out of the chassis without getting into trouble.
I believe you answered that part, and this should work:
----------------------------------------------------------------------------------------------------------------------------------------
Quoting your previous answer:
To extend the front facing vlan out to the F5s is simply a matter of configuring the ports the F5 connects to on the 6500 to be in that vlan.
That's the simple part.
------------------------------------------------------------------------------------------------------------------------------------------
So therefore I know someone did this already somewhere, which gives me a little more confidence that this actually could work and I can follow up on it.
Please correct me if I am misinterpreting your answer.
PS: I am going through all this trouble because all "legacy" applications/servers have hard-coded IP addressing within the applications/databases etc.
Changing the IP addressing of the VIPs or rServers, or pretty much anything else, is NOT an option... :)
Thanks
Colin
03-24-2015 06:46 AM
Colin
So therefore I know someone did this already somewhere, which gives me a little more confidence that this actually could work and I can follow up on it.
I didn't say I had done it as I have never used F5s.
What I did say was that as far as the 6500 is concerned it is just a vlan so you should be able to extend that vlan anywhere you like. There is nothing in that vlan (or shouldn't be) except a firewall interface and a load balancer interface.
So for step 1 the F5 will connect back to the 6500 on a routed link and you have a route on the 6500 pointing to the F5 for the IP subnet shared between the two vlans used for load balancing.
Is that correct ?
If that is correct then I can see no reason why that wouldn't work, as long as you remove vlan A from the FWSM, which your diagram suggests is what you intend to do.
And the server switch simply connects back into the client facing vlans for your servers.
Obviously in addition to the route being repointed to the F5 and the client facing vlans being decommissioned from the FWSM you will also need to migrate the client facing IPs that are currently on the FWSM to the F5 at the same time.
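The step-1 state Jon describes could be sketched on the 6500 roughly as follows (all addresses, subnets and vlan numbers are hypothetical):

```
! Routed transit towards the F5 AFM
interface Vlan20
 description Transit to F5 AFM
 ip address 192.0.2.1 255.255.255.252
! Point the subnet shared between the two load-balancing vlans at the F5
ip route 10.1.1.0 255.255.255.0 192.0.2.2
```

The F5 then owns the client-facing IPs previously held by the FWSM and forwards towards the ACE-side vlan.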
In terms of your other steps it looks like the idea is simply to remove the client facing vlans altogether.
I don't have experience with F5 as I say but does the AFM and LTM functionality live in the same chassis ?
I just ask because your step 2 diagram shows one server vlan with the LTM in the middle of it, which I don't fully understand, i.e. you can't have the same vlan on either side of an L2 device, otherwise you get an STP loop.
But that is probably down to my lack of knowledge of F5s.
Jon