Say I have a 6509 core with an FWSM.
- 5 VLANs are protected by the FWSM.
- These 5 VLANs should have free access to each other.
- The 5 VLANs use the FWSM as their gateway.
- 20+ VLANs are not protected by the FWSM.
Does it make sense to give the 5 VLANs their own switch core instead of the FWSM as the routing gateway?
Has anyone done that or recommended it for performance?
The FWSM is not being heavily utilized, but these 5 VLANs are critical for performance.
Future growth is also a consideration.
Well there are pros and cons to both.
If you really want free access between the 5 VLANs, then yes, there is little point in routing between them via the FWSM, because all you are doing is "permit ip any any" between the VLANs.
But if they are that critical, do you really want free access? What if one of the machines becomes infected? If you separate them with the FWSM and use access-lists that only allow the relevant ports through, you may well limit the effect of an infected server. It has to be said that you may not, too; it depends on which ports you need to allow and which ports the virus uses.
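For illustration (the addresses, access-list name, and ports below are all hypothetical, not from the original post), a locked-down FWSM access-list for one protected VLAN might look something like:

```
! Hypothetical example: permit only the application ports between
! two of the protected server subnets, instead of permit ip any any
access-list VLAN10_IN extended permit tcp 10.1.10.0 255.255.255.0 10.1.20.0 255.255.255.0 eq 1433
access-list VLAN10_IN extended permit icmp 10.1.10.0 255.255.255.0 10.1.20.0 255.255.255.0
access-list VLAN10_IN extended deny ip any any log
access-group VLAN10_IN in interface vlan10
```

Anything not explicitly permitted, such as a worm scanning on a random port, is then dropped and logged by the final deny line.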
Let's say for argument's sake that you do want free access: would there be any benefit in having a separate switch core as you suggest? Assuming the new switch(es) are responsible for the inter-VLAN routing, this immediately rules out using the FWSM in transparent mode. That may or may not be an issue, but you are limiting your future options.
You really have two options:
1) Use the existing switch with the FWSM in it and place the FWSM in front of the MSFC for these 5 VLANs, so the inside of your FWSM connects to the MSFC, which routes between your 5 VLANs. This may or may not be feasible with your topology, because you would need to direct traffic for these 5 VLANs to the outside interface of the FWSM.
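A rough sketch of option 1 on the MSFC side (all addressing here is made up for illustration): the MSFC holds the SVIs for the protected VLANs, so inter-VLAN routing between them stays local, and everything else is default-routed at the FWSM's inside interface.

```
! MSFC: SVIs for the protected VLANs (hypothetical addressing)
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
interface Vlan20
 ip address 10.1.20.1 255.255.255.0
! ...Vlan30/40/50 similar...
!
! Anything leaving the protected block goes via the FWSM inside address
ip route 0.0.0.0 0.0.0.0 10.1.99.2
```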
2) Have a new pair of switches, as you suggest, that are responsible for inter-VLAN routing, and have the FWSM control access to those 5 VLANs while they remain free to communicate amongst themselves.
Option 2, as already noted, rules out using the FWSM in transparent mode for those 5 VLANs. It also rules out using bridge mode for CSM/ACE modules, in case you were thinking of doing any load-balancing.
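Under option 2 the FWSM would run in routed mode, with static routes pointing the protected subnets at the new core's uplink. A minimal sketch, again with made-up VLAN numbers and addressing:

```
! FWSM (routed mode, hypothetical addressing): single inside link
! toward the new server core, which does the inter-VLAN routing
interface Vlan100
 nameif inside
 security-level 100
 ip address 10.1.100.1 255.255.255.0
!
! One route per protected subnet, next hop = the new core's uplink SVI
route inside 10.1.10.0 255.255.255.0 10.1.100.2
route inside 10.1.20.0 255.255.255.0 10.1.100.2
```

The FWSM then firewalls traffic in and out of the protected block as a whole, while traffic between the 5 VLANs never touches it.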
Option 2 costs more but that may be justified by the criticality of the servers.
If the servers are standalone (i.e. not blade servers), option 2 is more in line with the Cisco hierarchical model, because you should not really be connecting standalone servers directly into the distribution layer. I have done both in the past, and the choice is usually dictated by cost.
If you decide to go for option 2, I would look at the 4948 switches, which have very low latency and are designed for data centre environments, although this does depend on the number of servers you have, i.e. a 6500 solution may be more applicable.
Those are just a few of the things to consider. Key points are:
1) Do you really want free access if the VLANs are so critical?
2) Are you comfortable ruling out configuration options (e.g. transparent FWSM, bridged ACE/CSM) by having a separate pair of L3 switches?
3) The 6500 with Sup720s and FWSMs can shift a lot of traffic, so there is really going to have to be a lot of inter-VLAN traffic between the 5 VLANs to justify the cost of additional switches.
It may seem that I am arguing in favour of option 1, but I'm not particularly. I actually think option 2 is a perfectly sensible solution, but it does need extra justification.
Thanks, some excellent points. I will take more time to consider this. I think the traffic flows between VLANs could use some redesign; typically on these factory networks, latency becomes an issue between PLCs and HMIs.
I'll look at IP flows and future designs before I seriously look at another core.
Good point, I may want added security between VLANs in the future, which I can do now.