This question is in the context of servers sitting in a colocation environment behind an ASA5510 with security plus license.
Our colo provider is going to be statically routing a /28 public subnet (say 203.0.113.0/28) to our ASA5510. We will also be getting a single IP (say 198.51.100.2/30) on a small router-to-router subnet (198.51.100.0/30), to which the 203.0.113.0/28 subnet will be statically routed to our ASA5510 by our colo provider.
I will obviously set the outside interface of the ASA to 198.51.100.2/30 so that the colo provider can route the 203.0.113.0/28 subnet to it. I will also set a default route to 198.51.100.1, which is the IP of our colo provider's gateway (the router that will be statically routing the 203.0.113.0/28 subnet to us).
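In ASA terms I'm expecting the outside side to look roughly like this (the physical interface name is just an example, and I'm using the example addressing from above):

```
! Outside interface on the router-to-router /30
interface Ethernet0/0
 nameif outside
 security-level 0
 ip address 198.51.100.2 255.255.255.252
!
! Default route to the colo provider's gateway
route outside 0.0.0.0 0.0.0.0 198.51.100.1
```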
We have various servers in the same rack as the ASA (connected via a 3750G switch). Some of these servers need to be exposed to the internet (web, email, etc servers) and some do not (database servers).
I'm considering 2 different ways of designing the network, but I have questions about both and am not sure which way to go:
1) Scenario #1: Using NAT and private IP's for all servers.
In this scenario, where/how do I assign the internal network (say 10.1.1.0/24) and the public routed subnet (203.0.113.0/28)? I assume the internal 10.1.1.0/24 is an inside network assigned to the interface connected to the 3750G (to which all the servers connect). However, where do I assign the public routed subnet (203.0.113.0/28)? It is somewhat "nebulous" in that it has to reside somewhere on the ASA so that it can then be NATed to the internal (10.1.1.0/24) IPs. Also, is it considered an outside or inside network, and on which interface? My confusion is that if it's added to the outside interface, won't that conflict with the 198.51.100.2 IP to which the colo provider is routing our 203.0.113.0/28 subnet? And if it's on the inside interface connected to the 3750G, won't that conflict with the 10.1.1.0/24 private IP range of the servers? I'm missing something here... (please help :)
2) Scenario #2: Using public IP's for all servers
This scenario seems more straightforward to me: I would assign IPs from the statically routed subnet (203.0.113.0/28) to my servers, so that range would be configured as an inside network on the interface connected to the 3750G (to which all the servers connect). This would be configured on a specific VLAN (say VLAN 50). I would then have another VLAN (say VLAN 100) on the 3750G with a private IP range (say 10.1.1.0/24) where the non-public servers (database, etc.) would reside. All public servers that need access to private servers would have a NIC on both VLANs (50 + 100). My question is: is this the correct way of approaching this? I also like this because I don't have to worry about NAT; the ASA can act as a router/firewall and things are clear in terms of what's happening.
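On the ASA and switch side I picture scenario 2 roughly like this (port numbers and VLAN tags are just examples, reusing the example addressing from above):

```
! ASA inside interface carrying the public /28 directly
interface Ethernet0/1
 nameif inside
 security-level 100
 ip address 203.0.113.1 255.255.255.240
```

```
! 3750G port facing the ASA inside interface, on VLAN 50
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 50
```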
Ultimately I'm not sure which is the best way to go in terms of having all servers on a private IP range and just NAT to them (as per scenario 1), or implement scenario 2 where servers have two interfaces.
Any advice or suggestions would be great. The main thing that's bugging me from scenario 1 is that I'm not sure where/how to assign the statically routed subnet (203.0.113.0/28) on the ASA. (Inside? Outside? Which interface?)
Thanks very much
You would configure the setup in the following way (I will list points that you already mentioned also):

- Configure the outside interface of the ASA with the link IP, 198.51.100.2/30.
- Configure a default route pointing to the provider gateway, 198.51.100.1.
- Configure the inside interface with the private network (for example 10.1.1.1/24).
- The routed public subnet 203.0.113.0/28 is NOT configured on any interface; it is only referenced in your NAT configurations.

So even though it might seem weird, this is how you do it. There is really no other configuration related to the additional public subnet. It's simply used in the NAT configurations and will work as usual.
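As a minimal sketch of that NAT side (ASA 8.3+ object NAT syntax; the object names and host IPs are invented examples):

```
! Hypothetical web server at 10.1.1.10, published as 203.0.113.5
! out of the routed /28 -- note 203.0.113.5 is never configured
! on any interface; the provider routes the whole /28 to the ASA
object network WEB-SERVER
 host 10.1.1.10
 nat (inside,outside) static 203.0.113.5
!
! Dynamic PAT for the rest of the inside network going out
object network INSIDE-NET
 subnet 10.1.1.0 255.255.255.0
 nat (inside,outside) dynamic interface
```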
Now, if the situation were such that the ISP didn't actually route the public subnet towards your ASA "outside" interface, and instead used it as a "secondary" IP address on their router interface facing the ASA, then you might run into some problems depending on your software. In some ASA software versions (I think it was around 8.4(2) to 8.4(3)) there were problems with ARP in that situation.
With regards to the other VLAN for the "non-public" servers, I would avoid connecting a server to multiple networks; I would let the different networks communicate through the ASA. Though I have to admit the server/IT side isn't as familiar to me as I would like it to be, I personally have had bad experiences with servers connected to 2 different networks. You might run into routing problems on the actual server itself if its own routing table isn't configured correctly: you might end up in a situation where traffic comes to the server on one interface but gets forwarded out the other, and the ASA will block the connection.
Those are some of my thoughts. I will add more if something else comes to mind, but the above points are the most common things I have run into with similar cases. There might of course be other points that could be made that I haven't thought about.
Hope the information was helpful. If so, please do rate the answer.
Naturally, ask more if needed.
I'm with Jouni on recommending against servers with dual NICs as a general rule. What I do in a fairly similar situation is scenario 3:
1) configure the ASA with outside, dmz, and pci interfaces
2) use the public, routable IP's on the DMZ servers which face outward for DNS, SMTP, HTTP, etc.
3) use private RFC 1918 addresses on the inside PCI servers hosting applications and databases
The ASA itself serves as the router between the DMZ and PCI servers. If the ASA doesn't have enough throughput for that, then you either need a bigger ASA or you are back to the multiple interfaces on the servers scenario.
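As a rough sketch, the interface side of scenario 3 might look like this (port numbers, interface names, and security levels are my assumptions, reusing the example addressing from the question):

```
interface Ethernet0/0
 nameif outside
 security-level 0
 ip address 198.51.100.2 255.255.255.252
!
! DMZ carries the routed public /28 directly
interface Ethernet0/1
 nameif dmz
 security-level 50
 ip address 203.0.113.1 255.255.255.240
!
! PCI segment uses private addressing
interface Ethernet0/2
 nameif pci
 security-level 100
 ip address 10.1.1.1 255.255.255.0
!
route outside 0.0.0.0 0.0.0.0 198.51.100.1
```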
In scenario 3 you do potentially need 3 kinds of NAT rules:
a) identity NAT between the DMZ and PCI subnets
b) PAT for the PCI servers going out the general internet for patching
c) if you don't have a dual-interface management staging host on-link with the PCI servers, or an IPsec tunnel to the ASA, then you might need some NAT rules to allow limited inbound connections to the PCI servers for maintenance.
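A sketch of those three kinds of NAT rules in 8.3+ syntax (all object names are invented, and the jump-host example in (c) is hypothetical):

```
object network DMZ-NET
 subnet 203.0.113.0 255.255.255.240
object network PCI-NET
 subnet 10.1.1.0 255.255.255.0
!
! a) identity NAT so dmz <-> pci traffic keeps its real addresses
nat (dmz,pci) source static DMZ-NET DMZ-NET destination static PCI-NET PCI-NET
!
! b) PAT for PCI servers going out to the internet for patching
object network PCI-NET-PAT
 subnet 10.1.1.0 255.255.255.0
 nat (pci,outside) dynamic interface
!
! c) hypothetical maintenance host 10.1.1.20 published as
!    203.0.113.14; restrict the inbound access with an access-list
object network PCI-MGMT
 host 10.1.1.20
 nat (pci,outside) static 203.0.113.14
```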
A new option, scenario 4, is to dual-stack the DMZ servers with IPv4 and IPv6 (which you should be doing anyway), put the PCI servers on v6-only, and just have public addresses all around.
-- Jim Leinweber, WI State Lab of Hygiene
Thanks for your reply, and sorry for the delay in response; I was out of town. A few follow-up questions:
1) Why are you also against multi-homed servers? Please see my response to Jouni in terms of how I'd set up the routing tables; it seems simple to me?
2) Some of the DMZ servers would need access to private servers (e.g. a web server needs to talk to a DB server). How would this be handled? Also, for nightly backups each server would be pushing a full gigabit of traffic to a backup server/SAN on the private network, and this would crush the ASA. If you don't like the idea of multi-homed servers (which would talk on a private network and not bother the ASA), then the only other option I see is scenario #1, where I have an inside and outside network with the routed public subnet statically NATed to inside servers. Thoughts?
Wearing my security hat, I dislike dual-homed servers in general, because they potentially provide alternate paths to evade firewall rules. Wearing my network engineering hat, I agree that running backups through the firewalls tends to be a bottleneck, so my tape servers are dual-homed, just like yours. And my app servers tend to be on the same subnet as the DB servers, to keep that traffic from going through the firewall too. PCI fanatics will cringe at that, and recommend buying more and higher performance firewall hardware. I'd need a bigger budget.
Using public DMZ and private PCI addresses without dual-homing, since both subnets are directly attached to the ASA, you simply set up identity NAT between the DMZ subnet and the PCI subnet and let them talk using whatever access-list rules seem appropriate. The ASA will automatically route between directly attached subnets; you only need route statements for stuff that isn't directly attached.
On the site-to-site IPsec VPN tunnel, once again, just have an identity NAT rule covering the two ends and they'll talk to each other fine. I usually seem to need egress access-list rules permitting the tunnel traffic too.
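The identity NAT (NAT exemption) for the tunnel might look like this sketch, assuming the interface names from scenario 3 and an invented remote subnet:

```
object network LOCAL-PCI-LAN
 subnet 10.1.1.0 255.255.255.0
object network REMOTE-LAN
 subnet 10.2.2.0 255.255.255.0
!
! Keep real addresses on traffic entering the IPsec tunnel
nat (pci,outside) source static LOCAL-PCI-LAN LOCAL-PCI-LAN destination static REMOTE-LAN REMOTE-LAN
```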
-- Jim L